Jan 28 01:15:31.511188 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Jan 27 23:02:38 -00 2026
Jan 28 01:15:31.511222 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f534874bafefe5138b6229cc8580e4eb92fdd31d412450780cdc90e6631acdd2
Jan 28 01:15:31.511349 kernel: BIOS-provided physical RAM map:
Jan 28 01:15:31.511361 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 28 01:15:31.511371 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Jan 28 01:15:31.511381 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Jan 28 01:15:31.511440 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Jan 28 01:15:31.511449 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Jan 28 01:15:31.511458 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable
Jan 28 01:15:31.511467 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Jan 28 01:15:31.511480 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable
Jan 28 01:15:31.511489 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved
Jan 28 01:15:31.511498 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20
Jan 28 01:15:31.511507 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved
Jan 28 01:15:31.511518 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Jan 28 01:15:31.511528 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Jan 28 01:15:31.511541 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Jan 28 01:15:31.511550 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Jan 28 01:15:31.511560 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Jan 28 01:15:31.511569 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 28 01:15:31.511579 kernel: NX (Execute Disable) protection: active
Jan 28 01:15:31.511589 kernel: APIC: Static calls initialized
Jan 28 01:15:31.511599 kernel: efi: EFI v2.7 by EDK II
Jan 28 01:15:31.511609 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b675198
Jan 28 01:15:31.511621 kernel: SMBIOS 2.8 present.
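The e820 entries above follow a fixed textual shape, so the firmware memory map can be totaled mechanically. A minimal Python sketch (not part of the log; the helper name is illustrative) that sums the "usable" ranges from dmesg-style text:

import re

# Illustrative helper: total the bytes the firmware marked "usable" in
# e820 lines like the ones above. Ranges are inclusive on both ends.
E820_RE = re.compile(r"BIOS-e820: \[mem 0x([0-9a-f]+)-0x([0-9a-f]+)\] (.+)")

def usable_bytes(log_text: str) -> int:
    total = 0
    for start, end, kind in E820_RE.findall(log_text):
        if kind.strip() == "usable":
            total += int(end, 16) - int(start, 16) + 1
    return total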
Jan 28 01:15:31.511632 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
Jan 28 01:15:31.511643 kernel: Hypervisor detected: KVM
Jan 28 01:15:31.511660 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 28 01:15:31.511671 kernel: kvm-clock: using sched offset of 16723278539 cycles
Jan 28 01:15:31.511683 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 28 01:15:31.511695 kernel: tsc: Detected 2445.426 MHz processor
Jan 28 01:15:31.511706 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 28 01:15:31.511718 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 28 01:15:31.511729 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000
Jan 28 01:15:31.511741 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jan 28 01:15:31.511753 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 28 01:15:31.511769 kernel: Using GB pages for direct mapping
Jan 28 01:15:31.512044 kernel: Secure boot disabled
Jan 28 01:15:31.512056 kernel: ACPI: Early table checksum verification disabled
Jan 28 01:15:31.512068 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Jan 28 01:15:31.512085 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Jan 28 01:15:31.512094 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 28 01:15:31.512110 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 28 01:15:31.512121 kernel: ACPI: FACS 0x000000009CBDD000 000040
Jan 28 01:15:31.512130 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 28 01:15:31.512142 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 28 01:15:31.512154 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 28 01:15:31.512166 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 28 01:15:31.512177 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Jan 28 01:15:31.512186 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Jan 28 01:15:31.512204 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Jan 28 01:15:31.512215 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Jan 28 01:15:31.512227 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Jan 28 01:15:31.512332 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Jan 28 01:15:31.512344 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Jan 28 01:15:31.512354 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Jan 28 01:15:31.512363 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Jan 28 01:15:31.512375 kernel: No NUMA configuration found
Jan 28 01:15:31.512385 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff]
Jan 28 01:15:31.512402 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff]
Jan 28 01:15:31.512413 kernel: Zone ranges:
Jan 28 01:15:31.512422 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 28 01:15:31.512434 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff]
Jan 28 01:15:31.512445 kernel: Normal empty
Jan 28 01:15:31.512456 kernel: Movable zone start for each node
Jan 28 01:15:31.512468 kernel: Early memory node ranges
Jan 28 01:15:31.512478 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jan 28 01:15:31.512489 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Jan 28 01:15:31.512506 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Jan 28 01:15:31.512518 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff]
Jan 28 01:15:31.512527 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff]
Jan 28 01:15:31.512538 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff]
Jan 28 01:15:31.512549 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff]
Jan 28 01:15:31.512561 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 28 01:15:31.512571 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jan 28 01:15:31.512581 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Jan 28 01:15:31.512593 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 28 01:15:31.512608 kernel: On node 0, zone DMA: 240 pages in unavailable ranges
Jan 28 01:15:31.512619 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Jan 28 01:15:31.512629 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges
Jan 28 01:15:31.512640 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 28 01:15:31.512652 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 28 01:15:31.512663 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 28 01:15:31.512673 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 28 01:15:31.512683 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 28 01:15:31.512695 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 28 01:15:31.512711 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 28 01:15:31.512721 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 28 01:15:31.512732 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 28 01:15:31.512743 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 28 01:15:31.512755 kernel: TSC deadline timer available
Jan 28 01:15:31.512765 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Jan 28 01:15:31.512826 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 28 01:15:31.512840 kernel: kvm-guest: KVM setup pv remote TLB flush
Jan 28 01:15:31.512851 kernel: kvm-guest: setup PV sched yield
Jan 28 01:15:31.512867 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Jan 28 01:15:31.512877 kernel: Booting paravirtualized kernel on KVM
Jan 28 01:15:31.513120 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 28 01:15:31.513134 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jan 28 01:15:31.513146 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288
Jan 28 01:15:31.513155 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152
Jan 28 01:15:31.513165 kernel: pcpu-alloc: [0] 0 1 2 3
Jan 28 01:15:31.513176 kernel: kvm-guest: PV spinlocks enabled
Jan 28 01:15:31.513187 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 28 01:15:31.513206 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f534874bafefe5138b6229cc8580e4eb92fdd31d412450780cdc90e6631acdd2
Jan 28 01:15:31.513216 kernel: random: crng init done
Jan 28 01:15:31.513227 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 28 01:15:31.513337 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 28 01:15:31.513349 kernel: Fallback order for Node 0: 0
Jan 28 01:15:31.513359 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759
Jan 28 01:15:31.513371 kernel: Policy zone: DMA32
Jan 28 01:15:31.513383 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 28 01:15:31.513396 kernel: Memory: 2400616K/2567000K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42884K init, 2312K bss, 166124K reserved, 0K cma-reserved)
Jan 28 01:15:31.513412 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 28 01:15:31.513423 kernel: ftrace: allocating 37989 entries in 149 pages
Jan 28 01:15:31.513434 kernel: ftrace: allocated 149 pages with 4 groups
Jan 28 01:15:31.513446 kernel: Dynamic Preempt: voluntary
Jan 28 01:15:31.513458 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 28 01:15:31.513486 kernel: rcu: RCU event tracing is enabled.
Jan 28 01:15:31.513504 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 28 01:15:31.513517 kernel: Trampoline variant of Tasks RCU enabled.
Jan 28 01:15:31.513530 kernel: Rude variant of Tasks RCU enabled.
Jan 28 01:15:31.513543 kernel: Tracing variant of Tasks RCU enabled.
Jan 28 01:15:31.513556 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 28 01:15:31.513573 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 28 01:15:31.513584 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jan 28 01:15:31.513594 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 28 01:15:31.513604 kernel: Console: colour dummy device 80x25
Jan 28 01:15:31.513614 kernel: printk: console [ttyS0] enabled
Jan 28 01:15:31.513629 kernel: ACPI: Core revision 20230628
Jan 28 01:15:31.513640 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 28 01:15:31.513650 kernel: APIC: Switch to symmetric I/O mode setup
Jan 28 01:15:31.513660 kernel: x2apic enabled
Jan 28 01:15:31.513670 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 28 01:15:31.513681 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jan 28 01:15:31.513693 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jan 28 01:15:31.513705 kernel: kvm-guest: setup PV IPIs
Jan 28 01:15:31.513718 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 28 01:15:31.513734 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 28 01:15:31.513746 kernel: Calibrating delay loop (skipped) preset value.. 4890.85 BogoMIPS (lpj=2445426)
Jan 28 01:15:31.513757 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 28 01:15:31.513768 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 28 01:15:31.513850 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 28 01:15:31.513863 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 28 01:15:31.513873 kernel: Spectre V2 : Mitigation: Retpolines
Jan 28 01:15:31.513884 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 28 01:15:31.513897 kernel: Speculative Store Bypass: Vulnerable
Jan 28 01:15:31.513915 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 28 01:15:31.513929 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 28 01:15:31.513942 kernel: active return thunk: srso_alias_return_thunk
Jan 28 01:15:31.513952 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 28 01:15:31.513961 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Jan 28 01:15:31.513975 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 28 01:15:31.513987 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 28 01:15:31.514327 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 28 01:15:31.514345 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 28 01:15:31.514358 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 28 01:15:31.514371 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 28 01:15:31.514385 kernel: Freeing SMP alternatives memory: 32K
Jan 28 01:15:31.514396 kernel: pid_max: default: 32768 minimum: 301
Jan 28 01:15:31.514408 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 28 01:15:31.514418 kernel: landlock: Up and running.
Jan 28 01:15:31.514430 kernel: SELinux: Initializing.
Jan 28 01:15:31.514441 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 28 01:15:31.514456 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 28 01:15:31.514468 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Jan 28 01:15:31.514479 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 28 01:15:31.514491 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 28 01:15:31.514502 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 28 01:15:31.514513 kernel: Performance Events: PMU not available due to virtualization, using software events only.
Jan 28 01:15:31.514524 kernel: signal: max sigframe size: 1776
Jan 28 01:15:31.514535 kernel: rcu: Hierarchical SRCU implementation.
Jan 28 01:15:31.514547 kernel: rcu: Max phase no-delay instances is 400.
Jan 28 01:15:31.514561 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 28 01:15:31.514571 kernel: smp: Bringing up secondary CPUs ...
Jan 28 01:15:31.514583 kernel: smpboot: x86: Booting SMP configuration:
Jan 28 01:15:31.514595 kernel: .... node #0, CPUs: #1 #2 #3
Jan 28 01:15:31.514607 kernel: smp: Brought up 1 node, 4 CPUs
Jan 28 01:15:31.514618 kernel: smpboot: Max logical packages: 1
Jan 28 01:15:31.514631 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS)
Jan 28 01:15:31.514643 kernel: devtmpfs: initialized
Jan 28 01:15:31.514656 kernel: x86/mm: Memory block size: 128MB
Jan 28 01:15:31.514672 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Jan 28 01:15:31.514682 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Jan 28 01:15:31.514694 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes)
Jan 28 01:15:31.514706 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Jan 28 01:15:31.514717 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Jan 28 01:15:31.514729 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 28 01:15:31.514741 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 28 01:15:31.514751 kernel: pinctrl core: initialized pinctrl subsystem
Jan 28 01:15:31.514764 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 28 01:15:31.514849 kernel: audit: initializing netlink subsys (disabled)
Jan 28 01:15:31.514862 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 28 01:15:31.514874 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 28 01:15:31.514886 kernel: audit: type=2000 audit(1769562910.969:1): state=initialized audit_enabled=0 res=1
Jan 28 01:15:31.514896 kernel: cpuidle: using governor menu
Jan 28 01:15:31.514908 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 28 01:15:31.514920 kernel: dca service started, version 1.12.1
Jan 28 01:15:31.514933 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Jan 28 01:15:31.514943 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jan 28 01:15:31.514960 kernel: PCI: Using configuration type 1 for base access
Jan 28 01:15:31.514972 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
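The Spectre/SRSO/TSA status lines printed during bring-up above are also exported by the running kernel under /sys/devices/system/cpu/vulnerabilities. A small sketch (assuming a modern kernel that provides that sysfs directory) printing the same information:

from pathlib import Path

# Print each vulnerability file and its current mitigation status, e.g.
# "spectre_v2: Mitigation: Retpolines ..." matching the boot lines above.
for entry in sorted(Path("/sys/devices/system/cpu/vulnerabilities").iterdir()):
    print(f"{entry.name}: {entry.read_text().strip()}")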
Jan 28 01:15:31.514985 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 28 01:15:31.514995 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 28 01:15:31.515007 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 28 01:15:31.515019 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 28 01:15:31.515031 kernel: ACPI: Added _OSI(Module Device)
Jan 28 01:15:31.515041 kernel: ACPI: Added _OSI(Processor Device)
Jan 28 01:15:31.515052 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 28 01:15:31.515069 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 28 01:15:31.515081 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 28 01:15:31.515091 kernel: ACPI: Interpreter enabled
Jan 28 01:15:31.515451 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 28 01:15:31.515463 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 28 01:15:31.515475 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 28 01:15:31.515486 kernel: PCI: Using E820 reservations for host bridge windows
Jan 28 01:15:31.515533 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 28 01:15:31.515543 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 28 01:15:31.516040 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 28 01:15:31.516576 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 28 01:15:31.516716 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 28 01:15:31.516727 kernel: PCI host bridge to bus 0000:00
Jan 28 01:15:31.517535 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 28 01:15:31.517708 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 28 01:15:31.520765 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 28 01:15:31.521439 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Jan 28 01:15:31.521634 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 28 01:15:31.521883 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window]
Jan 28 01:15:31.523750 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 28 01:15:31.524429 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jan 28 01:15:31.524643 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Jan 28 01:15:31.527020 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Jan 28 01:15:31.527224 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Jan 28 01:15:31.527526 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Jan 28 01:15:31.527707 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Jan 28 01:15:31.527944 kernel: pci 0000:00:01.0: efifb_fixup_resources+0x0/0x140 took 20507 usecs
Jan 28 01:15:31.528136 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 28 01:15:31.528452 kernel: pci 0000:00:01.0: pci_fixup_video+0x0/0x110 took 20507 usecs
Jan 28 01:15:31.528671 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Jan 28 01:15:31.538163 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Jan 28 01:15:31.538462 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Jan 28 01:15:31.538645 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref]
Jan 28 01:15:31.544421 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Jan 28 01:15:31.544626 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Jan 28 01:15:31.546377 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Jan 28 01:15:31.546554 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref]
Jan 28 01:15:31.546726 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jan 28 01:15:31.551097 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Jan 28 01:15:31.551391 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Jan 28 01:15:31.551591 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref]
Jan 28 01:15:31.551841 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Jan 28 01:15:31.552030 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jan 28 01:15:31.552218 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 28 01:15:31.552581 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jan 28 01:15:31.552764 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Jan 28 01:15:31.560574 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Jan 28 01:15:31.561018 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jan 28 01:15:31.561207 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Jan 28 01:15:31.561336 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 28 01:15:31.561351 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 28 01:15:31.561364 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 28 01:15:31.561375 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 28 01:15:31.561384 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 28 01:15:31.561394 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 28 01:15:31.561403 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 28 01:15:31.561413 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 28 01:15:31.561423 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 28 01:15:31.561443 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 28 01:15:31.561452 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 28 01:15:31.561462 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 28 01:15:31.561471 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 28 01:15:31.561481 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 28 01:15:31.561491 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 28 01:15:31.561502 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 28 01:15:31.561513 kernel: iommu: Default domain type: Translated
Jan 28 01:15:31.561525 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 28 01:15:31.561541 kernel: efivars: Registered efivars operations
Jan 28 01:15:31.561551 kernel: PCI: Using ACPI for IRQ routing
Jan 28 01:15:31.561560 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 28 01:15:31.561570 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Jan 28 01:15:31.561579 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff]
Jan 28 01:15:31.561589 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff]
Jan 28 01:15:31.561601 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff]
Jan 28 01:15:31.567977 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 28 01:15:31.568175 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 28 01:15:31.568468 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 28 01:15:31.568485 kernel: vgaarb: loaded
Jan 28 01:15:31.568495 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 28 01:15:31.568506 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 28 01:15:31.568518 kernel: clocksource: Switched to clocksource kvm-clock
Jan 28 01:15:31.568530 kernel: VFS: Disk quotas dquot_6.6.0
Jan 28 01:15:31.568540 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 28 01:15:31.568550 kernel: pnp: PnP ACPI init
Jan 28 01:15:31.568845 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 28 01:15:31.568869 kernel: pnp: PnP ACPI: found 6 devices
Jan 28 01:15:31.568879 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 28 01:15:31.568892 kernel: NET: Registered PF_INET protocol family
Jan 28 01:15:31.568905 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 28 01:15:31.568915 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 28 01:15:31.568925 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 28 01:15:31.568935 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 28 01:15:31.568944 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 28 01:15:31.568959 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 28 01:15:31.568971 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 28 01:15:31.568983 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 28 01:15:31.568995 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 28 01:15:31.569007 kernel: NET: Registered PF_XDP protocol family
Jan 28 01:15:31.569189 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Jan 28 01:15:31.569475 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Jan 28 01:15:31.569646 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 28 01:15:31.571076 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 28 01:15:31.572878 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 28 01:15:31.573894 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Jan 28 01:15:31.576401 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 28 01:15:31.576892 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window]
Jan 28 01:15:31.576914 kernel: PCI: CLS 0 bytes, default 64
Jan 28 01:15:31.576980 kernel: Initialise system trusted keyrings
Jan 28 01:15:31.576993 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 28 01:15:31.577009 kernel: Key type asymmetric registered
Jan 28 01:15:31.577544 kernel: Asymmetric key parser 'x509' registered
Jan 28 01:15:31.577560 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 28 01:15:31.577573 kernel: io scheduler mq-deadline registered
Jan 28 01:15:31.577585 kernel: io scheduler kyber registered
Jan 28 01:15:31.577598 kernel: io scheduler bfq registered
Jan 28 01:15:31.577609 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 28 01:15:31.577624 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 28 01:15:31.577636 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 28 01:15:31.577650 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jan 28 01:15:31.578599 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 28 01:15:31.578615 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 28 01:15:31.578627 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 28 01:15:31.578640 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 28 01:15:31.578652 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 28 01:15:31.578665 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 28 01:15:31.579697 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 28 01:15:31.580742 kernel: rtc_cmos 00:04: registered as rtc0
Jan 28 01:15:31.581961 kernel: rtc_cmos 00:04: setting system clock to 2026-01-28T01:15:28 UTC (1769562928)
Jan 28 01:15:31.582199 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jan 28 01:15:31.582221 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 28 01:15:31.582326 kernel: efifb: probing for efifb
Jan 28 01:15:31.582338 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k
Jan 28 01:15:31.582347 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1
Jan 28 01:15:31.582357 kernel: efifb: scrolling: redraw
Jan 28 01:15:31.582367 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0
Jan 28 01:15:31.582377 kernel: Console: switching to colour frame buffer device 100x37
Jan 28 01:15:31.582393 kernel: fb0: EFI VGA frame buffer device
Jan 28 01:15:31.582403 kernel: pstore: Using crash dump compression: deflate
Jan 28 01:15:31.582413 kernel: pstore: Registered efi_pstore as persistent store backend
Jan 28 01:15:31.582424 kernel: NET: Registered PF_INET6 protocol family
Jan 28 01:15:31.582434 kernel: Segment Routing with IPv6
Jan 28 01:15:31.582444 kernel: In-situ OAM (IOAM) with IPv6
Jan 28 01:15:31.582455 kernel: NET: Registered PF_PACKET protocol family
Jan 28 01:15:31.582488 kernel: Key type dns_resolver registered
Jan 28 01:15:31.582502 kernel: IPI shorthand broadcast: enabled
Jan 28 01:15:31.582517 kernel: sched_clock: Marking stable (8488170969, 5930344725)->(18102788937, -3684273243)
Jan 28 01:15:31.582528 kernel: registered taskstats version 1
Jan 28 01:15:31.583640 kernel: Loading compiled-in X.509 certificates
Jan 28 01:15:31.583656 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: 828aa81885d7116cb1bcfd05d35b5b0a881d685d'
Jan 28 01:15:31.583667 kernel: Key type .fscrypt registered
Jan 28 01:15:31.583681 kernel: Key type fscrypt-provisioning registered
Jan 28 01:15:31.583691 kernel: ima: No TPM chip found, activating TPM-bypass!
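The vendor:device pairs the kernel logged while probing bus 0000:00 (for example 1af4:1001, the virtio block device at 0000:00:03.0) can be re-read from sysfs once the system is up. A minimal sketch, assuming the standard /sys/bus/pci layout:

from pathlib import Path

# List every PCI function with its vendor:device IDs, mirroring lines like
# "pci 0000:00:03.0: [1af4:1001]" above. The sysfs files hold "0x1af4"-style text.
for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
    vendor = (dev / "vendor").read_text().strip()[2:]
    device = (dev / "device").read_text().strip()[2:]
    print(f"{dev.name} [{vendor}:{device}]")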
Jan 28 01:15:31.583701 kernel: ima: Allocated hash algorithm: sha1
Jan 28 01:15:31.583711 kernel: ima: No architecture policies found
Jan 28 01:15:31.583730 kernel: clk: Disabling unused clocks
Jan 28 01:15:31.583742 kernel: Freeing unused kernel image (initmem) memory: 42884K
Jan 28 01:15:31.583755 kernel: Write protecting the kernel read-only data: 36864k
Jan 28 01:15:31.583766 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Jan 28 01:15:31.583846 kernel: Run /init as init process
Jan 28 01:15:31.583857 kernel: with arguments:
Jan 28 01:15:31.583868 kernel: /init
Jan 28 01:15:31.583879 kernel: with environment:
Jan 28 01:15:31.583890 kernel: HOME=/
Jan 28 01:15:31.583910 kernel: TERM=linux
Jan 28 01:15:31.583923 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 28 01:15:31.583936 systemd[1]: Detected virtualization kvm.
Jan 28 01:15:31.583947 systemd[1]: Detected architecture x86-64.
Jan 28 01:15:31.583961 systemd[1]: Running in initrd.
Jan 28 01:15:31.583974 systemd[1]: No hostname configured, using default hostname.
Jan 28 01:15:31.583984 systemd[1]: Hostname set to .
Jan 28 01:15:31.583999 systemd[1]: Initializing machine ID from VM UUID.
Jan 28 01:15:31.584010 systemd[1]: Queued start job for default target initrd.target.
Jan 28 01:15:31.584023 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 28 01:15:31.584037 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 28 01:15:31.584049 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 28 01:15:31.584067 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 28 01:15:31.584080 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 28 01:15:31.584094 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 28 01:15:31.584107 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 28 01:15:31.584115 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 28 01:15:31.584123 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 28 01:15:31.584133 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 28 01:15:31.584141 systemd[1]: Reached target paths.target - Path Units.
Jan 28 01:15:31.584149 systemd[1]: Reached target slices.target - Slice Units.
Jan 28 01:15:31.584156 systemd[1]: Reached target swap.target - Swaps.
Jan 28 01:15:31.584163 systemd[1]: Reached target timers.target - Timer Units.
Jan 28 01:15:31.584171 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 28 01:15:31.584178 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 28 01:15:31.584185 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 28 01:15:31.584193 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 28 01:15:31.584202 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 28 01:15:31.584210 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 28 01:15:31.584217 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 28 01:15:31.584224 systemd[1]: Reached target sockets.target - Socket Units.
Jan 28 01:15:31.584617 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 28 01:15:31.584634 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 28 01:15:31.584648 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 28 01:15:31.584659 systemd[1]: Starting systemd-fsck-usr.service...
Jan 28 01:15:31.584670 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 28 01:15:31.584687 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 28 01:15:31.584734 systemd-journald[194]: Collecting audit messages is disabled.
Jan 28 01:15:31.584764 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 28 01:15:31.586103 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 28 01:15:31.586126 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 28 01:15:31.586137 systemd[1]: Finished systemd-fsck-usr.service.
Jan 28 01:15:31.586152 systemd-journald[194]: Journal started
Jan 28 01:15:31.586183 systemd-journald[194]: Runtime Journal (/run/log/journal/0dbe037142e24d008b920bf4f5c768b5) is 6.0M, max 48.3M, 42.2M free.
Jan 28 01:15:31.615401 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 28 01:15:31.620359 systemd-modules-load[195]: Inserted module 'overlay'
Jan 28 01:15:31.637578 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 28 01:15:31.661495 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 28 01:15:31.694505 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 28 01:15:31.713503 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 28 01:15:31.831982 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 28 01:15:31.833500 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 28 01:15:31.843018 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 28 01:15:31.854942 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 28 01:15:31.901309 kernel: Bridge firewalling registered
Jan 28 01:15:31.908206 systemd-modules-load[195]: Inserted module 'br_netfilter'
Jan 28 01:15:31.913871 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 28 01:15:31.927930 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 28 01:15:31.956626 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 28 01:15:31.993989 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 28 01:15:32.054403 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 28 01:15:32.069375 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
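Once systemd-journald is up, entries like these live in the runtime journal directory named above (/run/log/journal/<machine-id>). A sketch of pulling them back out with journalctl (standard flags; output framing will differ slightly from a serial-console capture):

import subprocess

# Read the current boot's entries from the runtime journal directory.
out = subprocess.run(
    ["journalctl", "-D", "/run/log/journal", "-b", "--no-pager"],
    capture_output=True, text=True, check=True,
)
print(out.stdout[:1000])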
Jan 28 01:15:32.107527 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 28 01:15:32.169697 dracut-cmdline[228]: dracut-dracut-053
Jan 28 01:15:32.215663 dracut-cmdline[228]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f534874bafefe5138b6229cc8580e4eb92fdd31d412450780cdc90e6631acdd2
Jan 28 01:15:32.257563 systemd-resolved[231]: Positive Trust Anchors:
Jan 28 01:15:32.257575 systemd-resolved[231]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 28 01:15:32.257617 systemd-resolved[231]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 28 01:15:32.266704 systemd-resolved[231]: Defaulting to hostname 'linux'.
Jan 28 01:15:32.269203 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 28 01:15:32.436198 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 28 01:15:32.611999 kernel: SCSI subsystem initialized
Jan 28 01:15:32.640614 kernel: Loading iSCSI transport class v2.0-870.
Jan 28 01:15:32.721402 kernel: iscsi: registered transport (tcp)
Jan 28 01:15:32.780702 kernel: iscsi: registered transport (qla4xxx)
Jan 28 01:15:32.780840 kernel: QLogic iSCSI HBA Driver
Jan 28 01:15:33.011878 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 28 01:15:33.050694 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 28 01:15:33.182666 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 28 01:15:33.182732 kernel: device-mapper: uevent: version 1.0.3
Jan 28 01:15:33.188759 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 28 01:15:33.311411 kernel: raid6: avx2x4 gen() 8605 MB/s
Jan 28 01:15:33.337704 kernel: raid6: avx2x2 gen() 14069 MB/s
Jan 28 01:15:33.365953 kernel: raid6: avx2x1 gen() 10770 MB/s
Jan 28 01:15:33.366036 kernel: raid6: using algorithm avx2x2 gen() 14069 MB/s
Jan 28 01:15:33.394485 kernel: raid6: .... xor() 10354 MB/s, rmw enabled
Jan 28 01:15:33.394564 kernel: raid6: using avx2x2 recovery algorithm
Jan 28 01:15:33.459191 kernel: xor: automatically using best checksumming function avx
Jan 28 01:15:34.235489 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 28 01:15:34.295747 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 28 01:15:34.335617 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 28 01:15:34.426733 systemd-udevd[415]: Using default interface naming scheme 'v255'.
Jan 28 01:15:34.439342 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
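dracut echoes the full kernel command line here, including duplicated switches (rootflags=rw and mount.usrflags=ro appear twice). A small sketch of splitting such a line into key/value pairs; keeping the last occurrence of a repeated key matches the usual "last setting wins" behavior for most kernel parameters:

# Split a kernel command line like the one above into a dict; bare flags
# (no '=') map to True, and repeated keys keep their last value.
def parse_cmdline(cmdline: str) -> dict:
    args = {}
    for token in cmdline.split():
        key, sep, value = token.partition("=")
        args[key] = value if sep else True
    return args

with open("/proc/cmdline") as f:
    args = parse_cmdline(f.read())
print(args.get("root"), args.get("verity.usrhash"))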
Jan 28 01:15:34.480210 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 28 01:15:34.575972 dracut-pre-trigger[418]: rd.md=0: removing MD RAID activation
Jan 28 01:15:34.722476 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 28 01:15:34.773710 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 28 01:15:35.058335 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 28 01:15:35.111109 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 28 01:15:35.165946 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 28 01:15:35.188138 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 28 01:15:35.196339 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 28 01:15:35.210187 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 28 01:15:35.256647 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 28 01:15:35.304912 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 28 01:15:35.307033 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 28 01:15:35.326617 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 28 01:15:35.385623 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 28 01:15:35.433933 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Jan 28 01:15:35.434423 kernel: cryptd: max_cpu_qlen set to 1000
Jan 28 01:15:35.387735 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 28 01:15:35.480013 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jan 28 01:15:35.480403 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 28 01:15:35.480429 kernel: GPT:9289727 != 19775487
Jan 28 01:15:35.480446 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 28 01:15:35.480475 kernel: GPT:9289727 != 19775487
Jan 28 01:15:35.480491 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 28 01:15:35.480508 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 28 01:15:35.408775 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 28 01:15:35.572986 kernel: libata version 3.00 loaded.
Jan 28 01:15:35.575502 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 28 01:15:35.587193 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 28 01:15:35.733331 kernel: BTRFS: device fsid 2a6822f0-63ba-4278-91a8-3fe9ed12ab22 devid 1 transid 35 /dev/vda3 scanned by (udev-worker) (470)
Jan 28 01:15:35.746971 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (463)
Jan 28 01:15:35.754479 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 28 01:15:35.820318 kernel: ahci 0000:00:1f.2: version 3.0
Jan 28 01:15:35.820601 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jan 28 01:15:35.826740 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 28 01:15:35.834156 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
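The GPT complaints above ("GPT:9289727 != 19775487") mean the primary header's AlternateLBA field still points at the end of the original, smaller disk image rather than at the last sector of this 19775488-sector disk; disk-uuid rewrites the headers shortly afterwards. A sketch of reading that field directly (primary GPT header at LBA 1, 512-byte sectors assumed; safest run against an image file):

import struct

def gpt_alternate_lba(path: str, sector_size: int = 512) -> int:
    # The primary GPT header sits at LBA 1; it starts with the 8-byte
    # signature "EFI PART", and AlternateLBA is the u64 at byte offset 32.
    with open(path, "rb") as f:
        f.seek(sector_size)
        header = f.read(92)
    if header[:8] != b"EFI PART":
        raise ValueError("no GPT signature")
    (alt_lba,) = struct.unpack_from("<Q", header, 32)
    return alt_lba  # the kernel warns when this != the disk's last LBA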
Jan 28 01:15:35.868552 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Jan 28 01:15:35.868974 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jan 28 01:15:35.869193 kernel: AES CTR mode by8 optimization enabled
Jan 28 01:15:35.875779 kernel: scsi host0: ahci
Jan 28 01:15:35.876216 kernel: scsi host1: ahci
Jan 28 01:15:35.883733 kernel: scsi host2: ahci
Jan 28 01:15:35.887329 kernel: scsi host3: ahci
Jan 28 01:15:35.895562 kernel: scsi host4: ahci
Jan 28 01:15:35.913539 kernel: scsi host5: ahci
Jan 28 01:15:35.895500 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 28 01:15:35.975071 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34
Jan 28 01:15:35.975101 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34
Jan 28 01:15:35.975117 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34
Jan 28 01:15:35.975131 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34
Jan 28 01:15:35.975145 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34
Jan 28 01:15:35.987523 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34
Jan 28 01:15:35.989010 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 28 01:15:36.054122 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 28 01:15:36.113165 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 28 01:15:36.168194 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 28 01:15:36.176599 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 28 01:15:36.176688 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 28 01:15:36.384914 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jan 28 01:15:36.384953 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jan 28 01:15:36.384968 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 28 01:15:36.384992 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jan 28 01:15:36.202590 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 28 01:15:36.390919 disk-uuid[557]: Primary Header is updated.
Jan 28 01:15:36.390919 disk-uuid[557]: Secondary Entries is updated.
Jan 28 01:15:36.390919 disk-uuid[557]: Secondary Header is updated.
Jan 28 01:15:36.548886 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jan 28 01:15:36.548922 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jan 28 01:15:36.548949 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jan 28 01:15:36.548966 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 28 01:15:36.548981 kernel: ata3.00: applying bridge limits
Jan 28 01:15:36.548996 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 28 01:15:36.549010 kernel: ata3.00: configured for UDMA/100
Jan 28 01:15:36.549024 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jan 28 01:15:36.248787 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 28 01:15:36.642576 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 28 01:15:36.736130 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 28 01:15:36.864157 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 28 01:15:37.022534 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 28 01:15:37.022941 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 28 01:15:37.061326 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Jan 28 01:15:37.485337 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 28 01:15:37.497219 disk-uuid[558]: The operation has completed successfully.
Jan 28 01:15:37.719630 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 28 01:15:37.719903 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 28 01:15:37.772855 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 28 01:15:37.824786 sh[598]: Success
Jan 28 01:15:38.035012 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Jan 28 01:15:38.282008 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 28 01:15:38.292903 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 28 01:15:38.369749 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 28 01:15:38.460945 kernel: BTRFS info (device dm-0): first mount of filesystem 2a6822f0-63ba-4278-91a8-3fe9ed12ab22
Jan 28 01:15:38.461022 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 28 01:15:38.474503 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 28 01:15:38.474578 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 28 01:15:38.480018 kernel: BTRFS info (device dm-0): using free space tree
Jan 28 01:15:38.604890 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 28 01:15:38.610790 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 28 01:15:38.690697 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 28 01:15:38.719517 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 28 01:15:38.815941 kernel: BTRFS info (device vda6): first mount of filesystem 5195d4b2-9d51-430f-afba-abd4fdaa4f68
Jan 28 01:15:38.816019 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 28 01:15:38.843108 kernel: BTRFS info (device vda6): using free space tree
Jan 28 01:15:38.893003 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 28 01:15:38.990657 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 28 01:15:39.010855 kernel: BTRFS info (device vda6): last unmount of filesystem 5195d4b2-9d51-430f-afba-abd4fdaa4f68
Jan 28 01:15:39.088698 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 28 01:15:39.165695 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 28 01:15:39.743697 ignition[712]: Ignition 2.19.0
Jan 28 01:15:39.743715 ignition[712]: Stage: fetch-offline
Jan 28 01:15:39.743771 ignition[712]: no configs at "/usr/lib/ignition/base.d"
Jan 28 01:15:39.743785 ignition[712]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 28 01:15:39.770100 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 28 01:15:39.748706 ignition[712]: parsed url from cmdline: ""
Jan 28 01:15:39.748713 ignition[712]: no config URL provided
Jan 28 01:15:39.748722 ignition[712]: reading system config file "/usr/lib/ignition/user.ign"
Jan 28 01:15:39.748739 ignition[712]: no config at "/usr/lib/ignition/user.ign"
Jan 28 01:15:39.748904 ignition[712]: op(1): [started] loading QEMU firmware config module
Jan 28 01:15:39.748912 ignition[712]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jan 28 01:15:39.808694 ignition[712]: op(1): [finished] loading QEMU firmware config module
Jan 28 01:15:40.024003 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 28 01:15:40.174441 systemd-networkd[787]: lo: Link UP
Jan 28 01:15:40.174488 systemd-networkd[787]: lo: Gained carrier
Jan 28 01:15:40.238609 systemd-networkd[787]: Enumeration completed
Jan 28 01:15:40.243734 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 28 01:15:40.252693 systemd-networkd[787]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 28 01:15:40.252699 systemd-networkd[787]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 28 01:15:40.314405 systemd-networkd[787]: eth0: Link UP
Jan 28 01:15:40.314414 systemd-networkd[787]: eth0: Gained carrier
Jan 28 01:15:40.314431 systemd-networkd[787]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 28 01:15:40.360143 systemd[1]: Reached target network.target - Network.
Jan 28 01:15:40.454469 systemd-networkd[787]: eth0: DHCPv4 address 10.0.0.61/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 28 01:15:40.837145 ignition[712]: parsing config with SHA512: cd2874d84743bef64370a2f76e95db666a2955227fb8fd42df9a953662ef6d754d4ba49a40ef786e58c20b2bc54c22ebf169257101108a113c24ae3180bd209a
Jan 28 01:15:40.850911 unknown[712]: fetched base config from "system"
Jan 28 01:15:40.851542 ignition[712]: fetch-offline: fetch-offline passed
Jan 28 01:15:40.850936 unknown[712]: fetched user config from "qemu"
Jan 28 01:15:40.851636 ignition[712]: Ignition finished successfully
Jan 28 01:15:40.871466 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 28 01:15:40.907565 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jan 28 01:15:40.937137 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 28 01:15:41.043703 ignition[791]: Ignition 2.19.0
Jan 28 01:15:41.043721 ignition[791]: Stage: kargs
Jan 28 01:15:41.054077 ignition[791]: no configs at "/usr/lib/ignition/base.d"
Jan 28 01:15:41.054095 ignition[791]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 28 01:15:41.059303 ignition[791]: kargs: kargs passed
Jan 28 01:15:41.059375 ignition[791]: Ignition finished successfully
Jan 28 01:15:41.097663 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 28 01:15:41.137340 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 28 01:15:41.229432 ignition[799]: Ignition 2.19.0
Jan 28 01:15:41.229480 ignition[799]: Stage: disks
Jan 28 01:15:41.241752 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 28 01:15:41.229709 ignition[799]: no configs at "/usr/lib/ignition/base.d"
Jan 28 01:15:41.253375 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
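Ignition logs the SHA512 of the rendered config before applying it (the "parsing config with SHA512: ..." line above). That digest is simply a hash over the raw config bytes, so it can be reproduced by hand; the local file path below is illustrative, not taken from the log:

import hashlib

# Recompute the digest Ignition reports, given a local copy of the config.
with open("config.ign", "rb") as f:
    print(hashlib.sha512(f.read()).hexdigest())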
Jan 28 01:15:41.229725 ignition[799]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 28 01:15:41.258071 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 28 01:15:41.234811 ignition[799]: disks: disks passed
Jan 28 01:15:41.258136 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 28 01:15:41.234947 ignition[799]: Ignition finished successfully
Jan 28 01:15:41.258189 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 28 01:15:41.258227 systemd[1]: Reached target basic.target - Basic System.
Jan 28 01:15:41.429693 systemd-networkd[787]: eth0: Gained IPv6LL
Jan 28 01:15:41.478209 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 28 01:15:41.645720 systemd-fsck[809]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 28 01:15:41.695729 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 28 01:15:41.726383 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 28 01:15:42.628661 kernel: EXT4-fs (vda9): mounted filesystem 9c67117c-3c4f-4d47-a63c-8955eb7dbc8a r/w with ordered data mode. Quota mode: none.
Jan 28 01:15:42.631681 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 28 01:15:42.647486 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 28 01:15:42.683074 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 28 01:15:42.705589 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 28 01:15:42.725007 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 28 01:15:42.725076 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 28 01:15:42.725105 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 28 01:15:42.753651 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (817)
Jan 28 01:15:42.753685 kernel: BTRFS info (device vda6): first mount of filesystem 5195d4b2-9d51-430f-afba-abd4fdaa4f68
Jan 28 01:15:42.753699 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 28 01:15:42.753714 kernel: BTRFS info (device vda6): using free space tree
Jan 28 01:15:42.809127 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 28 01:15:42.837728 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 28 01:15:42.848981 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 28 01:15:42.857610 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 28 01:15:43.120931 initrd-setup-root[841]: cut: /sysroot/etc/passwd: No such file or directory
Jan 28 01:15:43.151721 initrd-setup-root[848]: cut: /sysroot/etc/group: No such file or directory
Jan 28 01:15:43.185450 initrd-setup-root[855]: cut: /sysroot/etc/shadow: No such file or directory
Jan 28 01:15:43.205798 initrd-setup-root[862]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 28 01:15:43.859200 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 28 01:15:43.881202 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 28 01:15:43.908656 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 28 01:15:43.920956 kernel: BTRFS info (device vda6): last unmount of filesystem 5195d4b2-9d51-430f-afba-abd4fdaa4f68 Jan 28 01:15:43.929751 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 28 01:15:44.044790 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 28 01:15:44.049811 ignition[930]: INFO : Ignition 2.19.0 Jan 28 01:15:44.049811 ignition[930]: INFO : Stage: mount Jan 28 01:15:44.049811 ignition[930]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 28 01:15:44.049811 ignition[930]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 28 01:15:44.049811 ignition[930]: INFO : mount: mount passed Jan 28 01:15:44.049811 ignition[930]: INFO : Ignition finished successfully Jan 28 01:15:44.070813 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 28 01:15:44.118433 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 28 01:15:44.166030 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 28 01:15:44.215571 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (944) Jan 28 01:15:44.230072 kernel: BTRFS info (device vda6): first mount of filesystem 5195d4b2-9d51-430f-afba-abd4fdaa4f68 Jan 28 01:15:44.230125 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 28 01:15:44.251491 kernel: BTRFS info (device vda6): using free space tree Jan 28 01:15:44.284761 kernel: BTRFS info (device vda6): auto enabling async discard Jan 28 01:15:44.301588 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 28 01:15:44.471160 ignition[961]: INFO : Ignition 2.19.0 Jan 28 01:15:44.471160 ignition[961]: INFO : Stage: files Jan 28 01:15:44.490677 ignition[961]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 28 01:15:44.490677 ignition[961]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 28 01:15:44.490677 ignition[961]: DEBUG : files: compiled without relabeling support, skipping Jan 28 01:15:44.524089 ignition[961]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 28 01:15:44.524089 ignition[961]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 28 01:15:44.546557 ignition[961]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 28 01:15:44.546557 ignition[961]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 28 01:15:44.562225 ignition[961]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 28 01:15:44.548312 unknown[961]: wrote ssh authorized keys file for user: core Jan 28 01:15:44.577911 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 28 01:15:44.577911 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 28 01:15:44.577911 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 28 01:15:44.577911 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jan 28 01:15:44.712436 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 28 01:15:45.354451 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Jan 28 01:15:45.375706 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 28 01:15:45.375706 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jan 28 01:15:45.580023 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Jan 28 01:15:46.517992 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 28 01:15:46.537131 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh" Jan 28 01:15:46.537131 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh" Jan 28 01:15:46.537131 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 28 01:15:46.537131 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 28 01:15:46.537131 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 28 01:15:46.537131 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 28 01:15:46.537131 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 28 01:15:46.537131 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 28 01:15:46.537131 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 28 01:15:46.537131 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 28 01:15:46.537131 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 28 01:15:46.537131 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 28 01:15:46.537131 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 28 01:15:46.537131 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Jan 28 01:15:46.965081 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK Jan 28 01:15:50.728606 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 28 01:15:50.728606 ignition[961]: INFO : files: op(d): [started] processing unit "containerd.service" Jan 28 01:15:50.764210 ignition[961]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jan 28 01:15:50.764210 ignition[961]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 28 01:15:50.764210 ignition[961]: INFO : files: op(d): [finished] processing unit "containerd.service" Jan 28 01:15:50.764210 ignition[961]: INFO : files: op(f): [started] processing unit "prepare-helm.service" Jan 28 01:15:50.764210 ignition[961]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 28 01:15:50.764210 ignition[961]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 28 01:15:50.764210 ignition[961]: INFO : files: op(f): [finished] processing unit "prepare-helm.service" Jan 28 01:15:50.764210 ignition[961]: INFO : files: op(11): [started] processing unit "coreos-metadata.service" Jan 28 01:15:50.764210 ignition[961]: INFO : files: op(11): op(12): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 28 01:15:50.764210 ignition[961]: INFO : files: op(11): op(12): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 28 01:15:50.764210 ignition[961]: INFO : files: op(11): [finished] processing unit "coreos-metadata.service" Jan 28 01:15:50.764210 ignition[961]: INFO : files: op(13): [started] setting preset to disabled for "coreos-metadata.service" Jan 28 01:15:51.270437 ignition[961]: INFO : files: op(13): op(14): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 28 01:15:51.341514 ignition[961]: INFO : files: op(13): op(14): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 28 01:15:51.365159 ignition[961]: INFO : files: op(13): [finished] setting preset to disabled for "coreos-metadata.service" Jan 28 01:15:51.365159 ignition[961]: INFO : files: op(15): [started] setting preset to enabled for "prepare-helm.service" Jan 28 01:15:51.365159 ignition[961]: INFO : files: op(15): [finished] setting preset to enabled for "prepare-helm.service" Jan 28 01:15:51.365159 ignition[961]: INFO : files: createResultFile: createFiles: op(16): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 28 01:15:51.365159 ignition[961]: INFO : files: createResultFile: createFiles: op(16): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 28 01:15:51.365159 ignition[961]: INFO : files: files passed Jan 28 01:15:51.365159 ignition[961]: INFO : Ignition finished successfully Jan 28 01:15:51.489848 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 28 01:15:51.537607 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 28 01:15:51.573561 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 28 01:15:51.592765 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 28 01:15:51.627188 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 28 01:15:51.721394 initrd-setup-root-after-ignition[989]: grep: /sysroot/oem/oem-release: No such file or directory Jan 28 01:15:51.754644 initrd-setup-root-after-ignition[996]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 28 01:15:51.792223 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 28 01:15:51.792223 initrd-setup-root-after-ignition[991]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 28 01:15:51.854501 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 28 01:15:51.905035 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 28 01:15:51.951762 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 28 01:15:52.175158 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 28 01:15:52.182773 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 28 01:15:52.217354 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 28 01:15:52.246650 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 28 01:15:52.255787 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 28 01:15:52.314442 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 28 01:15:52.461927 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 28 01:15:52.533687 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 28 01:15:52.613696 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 28 01:15:52.614150 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 28 01:15:52.655816 systemd[1]: Stopped target timers.target - Timer Units. Jan 28 01:15:52.666396 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 28 01:15:52.675719 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 28 01:15:52.676510 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 28 01:15:52.676656 systemd[1]: Stopped target basic.target - Basic System. Jan 28 01:15:52.676782 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 28 01:15:52.677669 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 28 01:15:52.677855 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 28 01:15:52.678056 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 28 01:15:52.678182 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 28 01:15:52.678435 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 28 01:15:52.700470 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 28 01:15:52.706477 systemd[1]: Stopped target swap.target - Swaps. 
Jan 28 01:15:53.048632 ignition[1016]: INFO : Ignition 2.19.0 Jan 28 01:15:53.048632 ignition[1016]: INFO : Stage: umount Jan 28 01:15:53.048632 ignition[1016]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 28 01:15:53.048632 ignition[1016]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 28 01:15:53.048632 ignition[1016]: INFO : umount: umount passed Jan 28 01:15:53.048632 ignition[1016]: INFO : Ignition finished successfully Jan 28 01:15:53.136480 kernel: hrtimer: interrupt took 5837458 ns Jan 28 01:15:52.709670 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 28 01:15:52.711516 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 28 01:15:52.714971 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 28 01:15:52.722625 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 28 01:15:52.722970 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 28 01:15:52.723534 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 28 01:15:52.723764 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 28 01:15:52.724009 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 28 01:15:52.728365 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 28 01:15:52.728525 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 28 01:15:52.736640 systemd[1]: Stopped target paths.target - Path Units. Jan 28 01:15:52.737338 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 28 01:15:52.743104 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 28 01:15:52.743378 systemd[1]: Stopped target slices.target - Slice Units. Jan 28 01:15:52.748415 systemd[1]: Stopped target sockets.target - Socket Units. Jan 28 01:15:52.749402 systemd[1]: iscsid.socket: Deactivated successfully. Jan 28 01:15:52.749561 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 28 01:15:52.759026 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 28 01:15:52.760978 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 28 01:15:52.769571 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 28 01:15:52.770979 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 28 01:15:52.771776 systemd[1]: ignition-files.service: Deactivated successfully. Jan 28 01:15:52.775488 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 28 01:15:52.848449 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 28 01:15:52.868749 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 28 01:15:52.869135 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 28 01:15:52.893603 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 28 01:15:52.906545 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 28 01:15:52.907582 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 28 01:15:52.911709 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 28 01:15:52.912075 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 28 01:15:53.013470 systemd[1]: initrd-cleanup.service: Deactivated successfully. 
Jan 28 01:15:53.013685 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 28 01:15:53.066658 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 28 01:15:53.067047 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 28 01:15:53.101450 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 28 01:15:53.109715 systemd[1]: Stopped target network.target - Network. Jan 28 01:15:53.121756 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 28 01:15:53.121854 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 28 01:15:53.136746 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 28 01:15:53.136961 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 28 01:15:53.147049 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 28 01:15:53.147132 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 28 01:15:53.152676 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 28 01:15:53.152752 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 28 01:15:53.170829 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 28 01:15:53.178203 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 28 01:15:53.193692 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 28 01:15:53.196189 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 28 01:15:53.210831 systemd-networkd[787]: eth0: DHCPv6 lease lost Jan 28 01:15:53.223696 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 28 01:15:53.223961 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 28 01:15:53.231183 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 28 01:15:53.231485 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 28 01:15:53.237669 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 28 01:15:53.237754 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 28 01:15:53.240996 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 28 01:15:53.241071 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 28 01:15:53.286489 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 28 01:15:53.313375 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 28 01:15:53.313496 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 28 01:15:53.335507 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 28 01:15:53.335603 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 28 01:15:53.343643 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 28 01:15:53.343730 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 28 01:15:53.356222 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 28 01:15:53.356822 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 28 01:15:53.367458 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 28 01:15:53.440960 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 28 01:15:53.448701 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Jan 28 01:15:53.541676 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 28 01:15:53.541921 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 28 01:15:53.584696 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 28 01:15:53.584814 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 28 01:15:53.593194 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 28 01:15:53.593372 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 28 01:15:53.875008 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 28 01:15:53.883845 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 28 01:15:53.929206 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 28 01:15:53.929533 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 28 01:15:54.140984 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 28 01:15:54.141119 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 28 01:15:54.240741 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 28 01:15:54.248787 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 28 01:15:54.248951 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 28 01:15:54.263072 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 28 01:15:54.263166 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 28 01:15:54.313800 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 28 01:15:54.314067 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 28 01:15:54.371930 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 28 01:15:54.489210 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 28 01:15:54.544403 systemd[1]: Switching root. Jan 28 01:15:54.641928 systemd-journald[194]: Journal stopped Jan 28 01:16:01.289414 systemd-journald[194]: Received SIGTERM from PID 1 (systemd). Jan 28 01:16:01.289502 kernel: SELinux: policy capability network_peer_controls=1 Jan 28 01:16:01.289527 kernel: SELinux: policy capability open_perms=1 Jan 28 01:16:01.289542 kernel: SELinux: policy capability extended_socket_class=1 Jan 28 01:16:01.289557 kernel: SELinux: policy capability always_check_network=0 Jan 28 01:16:01.289572 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 28 01:16:01.289587 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 28 01:16:01.289608 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 28 01:16:01.289626 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 28 01:16:01.289643 kernel: audit: type=1403 audit(1769562955.494:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 28 01:16:01.289659 systemd[1]: Successfully loaded SELinux policy in 183.861ms. Jan 28 01:16:01.289688 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 30.561ms. Jan 28 01:16:01.289705 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 28 01:16:01.289721 systemd[1]: Detected virtualization kvm. 
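After the pivot the journal restarts, the SELinux policy loads, and systemd re-probes its environment; the "Detected virtualization kvm" line above comes from the same detection logic exposed by the systemd-detect-virt tool:

    systemd-detect-virt        # prints "kvm" on this machine
    systemd-detect-virt --vm   # restrict the check to VM (not container) virtualization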
Jan 28 01:16:01.289737 systemd[1]: Detected architecture x86-64. Jan 28 01:16:01.289753 systemd[1]: Detected first boot. Jan 28 01:16:01.289772 systemd[1]: Initializing machine ID from VM UUID. Jan 28 01:16:01.289788 zram_generator::config[1077]: No configuration found. Jan 28 01:16:01.289805 systemd[1]: Populated /etc with preset unit settings. Jan 28 01:16:01.289821 systemd[1]: Queued start job for default target multi-user.target. Jan 28 01:16:01.289837 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 28 01:16:01.289853 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 28 01:16:01.289869 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 28 01:16:01.289885 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 28 01:16:01.289961 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 28 01:16:01.289986 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 28 01:16:01.290002 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 28 01:16:01.290018 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 28 01:16:01.290034 systemd[1]: Created slice user.slice - User and Session Slice. Jan 28 01:16:01.290050 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 28 01:16:01.290071 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 28 01:16:01.290087 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 28 01:16:01.290102 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 28 01:16:01.290122 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 28 01:16:01.290141 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 28 01:16:01.290156 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 28 01:16:01.290173 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 28 01:16:01.290192 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 28 01:16:01.290209 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 28 01:16:01.290226 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 28 01:16:01.290332 systemd[1]: Reached target slices.target - Slice Units. Jan 28 01:16:01.290620 systemd[1]: Reached target swap.target - Swaps. Jan 28 01:16:01.290643 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 28 01:16:01.290660 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 28 01:16:01.290677 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 28 01:16:01.290693 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 28 01:16:01.290709 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 28 01:16:01.290724 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 28 01:16:01.290740 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 28 01:16:01.290756 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. 
Jan 28 01:16:01.290772 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 28 01:16:01.290795 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 28 01:16:01.290811 systemd[1]: Mounting media.mount - External Media Directory... Jan 28 01:16:01.290827 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 28 01:16:01.290844 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 28 01:16:01.290860 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 28 01:16:01.290875 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 28 01:16:01.290892 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 28 01:16:01.290964 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 28 01:16:01.290987 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 28 01:16:01.291003 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 28 01:16:01.291019 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 28 01:16:01.291041 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 28 01:16:01.291058 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 28 01:16:01.291077 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 28 01:16:01.291092 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 28 01:16:01.291107 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 28 01:16:01.291124 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Jan 28 01:16:01.291147 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Jan 28 01:16:01.291162 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 28 01:16:01.291177 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 28 01:16:01.291196 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 28 01:16:01.291215 kernel: loop: module loaded Jan 28 01:16:01.291312 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 28 01:16:01.291570 systemd-journald[1176]: Collecting audit messages is disabled. Jan 28 01:16:01.291612 systemd-journald[1176]: Journal started Jan 28 01:16:01.291642 systemd-journald[1176]: Runtime Journal (/run/log/journal/0dbe037142e24d008b920bf4f5c768b5) is 6.0M, max 48.3M, 42.2M free. Jan 28 01:16:01.338361 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 28 01:16:01.503985 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 28 01:16:01.553452 systemd[1]: Started systemd-journald.service - Journal Service. Jan 28 01:16:01.561972 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 28 01:16:01.563337 kernel: fuse: init (API version 7.39) Jan 28 01:16:01.572815 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. 
Jan 28 01:16:01.581031 systemd[1]: Mounted media.mount - External Media Directory. Jan 28 01:16:01.588437 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 28 01:16:01.596029 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 28 01:16:01.612758 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 28 01:16:01.620832 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 28 01:16:01.629109 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 28 01:16:01.639626 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 28 01:16:01.640648 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 28 01:16:01.653139 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 28 01:16:01.653825 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 28 01:16:01.665613 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 28 01:16:01.665992 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 28 01:16:01.682047 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 28 01:16:01.682808 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 28 01:16:01.693646 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 28 01:16:01.694150 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 28 01:16:01.711654 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 28 01:16:01.726725 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 28 01:16:01.748956 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 28 01:16:01.813616 kernel: ACPI: bus type drm_connector registered Jan 28 01:16:01.817113 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 28 01:16:01.817616 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 28 01:16:01.856583 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 28 01:16:01.886479 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 28 01:16:01.927419 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 28 01:16:01.942415 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 28 01:16:01.949825 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 28 01:16:01.978136 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 28 01:16:01.994846 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 28 01:16:02.012544 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 28 01:16:02.020552 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 28 01:16:02.063702 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 28 01:16:02.064744 systemd-journald[1176]: Time spent on flushing to /var/log/journal/0dbe037142e24d008b920bf4f5c768b5 is 147.395ms for 980 entries. 
Jan 28 01:16:02.064744 systemd-journald[1176]: System Journal (/var/log/journal/0dbe037142e24d008b920bf4f5c768b5) is 8.0M, max 195.6M, 187.6M free. Jan 28 01:16:02.330773 systemd-journald[1176]: Received client request to flush runtime journal. Jan 28 01:16:02.121567 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 28 01:16:02.220564 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 28 01:16:02.253092 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 28 01:16:02.269143 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 28 01:16:02.293650 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 28 01:16:02.337558 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 28 01:16:02.376781 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 28 01:16:02.423055 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 28 01:16:02.445129 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 28 01:16:02.466753 udevadm[1227]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 28 01:16:02.560747 systemd-tmpfiles[1216]: ACLs are not supported, ignoring. Jan 28 01:16:02.560769 systemd-tmpfiles[1216]: ACLs are not supported, ignoring. Jan 28 01:16:02.586402 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 28 01:16:02.632591 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 28 01:16:03.057798 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 28 01:16:03.141680 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 28 01:16:03.264163 systemd-tmpfiles[1238]: ACLs are not supported, ignoring. Jan 28 01:16:03.264469 systemd-tmpfiles[1238]: ACLs are not supported, ignoring. Jan 28 01:16:03.339857 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 28 01:16:04.759983 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 28 01:16:04.790745 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 28 01:16:04.937796 systemd-udevd[1244]: Using default interface naming scheme 'v255'. Jan 28 01:16:05.077437 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 28 01:16:05.177683 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 28 01:16:05.311612 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 28 01:16:05.481572 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Jan 28 01:16:07.120983 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1255) Jan 28 01:16:08.144500 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Jan 28 01:16:08.145332 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 28 01:16:08.156664 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jan 28 01:16:08.157140 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 28 01:16:08.158134 systemd[1]: Started systemd-userdbd.service - User Database Manager. 
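The flush above moves the volatile runtime journal (6.0M, capped at 48.3M) into the persistent system journal under /var/log/journal (8.0M, capped at 195.6M). Those caps are derived from filesystem size but can be pinned in journald.conf; the values below are illustrative, not this machine's:

    # /etc/systemd/journald.conf
    [Journal]
    Storage=persistent
    RuntimeMaxUse=48M
    SystemMaxUse=200M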
Jan 28 01:16:08.175047 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jan 28 01:16:08.281716 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jan 28 01:16:08.320015 kernel: ACPI: button: Power Button [PWRF] Jan 28 01:16:08.300639 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 28 01:16:08.372410 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 28 01:16:08.473463 systemd-networkd[1254]: lo: Link UP Jan 28 01:16:08.476013 systemd-networkd[1254]: lo: Gained carrier Jan 28 01:16:08.481817 systemd-networkd[1254]: Enumeration completed Jan 28 01:16:08.482901 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 28 01:16:08.499754 systemd-networkd[1254]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 28 01:16:08.499850 systemd-networkd[1254]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 28 01:16:08.507791 systemd-networkd[1254]: eth0: Link UP Jan 28 01:16:08.507802 systemd-networkd[1254]: eth0: Gained carrier Jan 28 01:16:08.507824 systemd-networkd[1254]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 28 01:16:08.545754 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 28 01:16:08.707432 systemd-networkd[1254]: eth0: DHCPv4 address 10.0.0.61/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 28 01:16:08.971859 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 28 01:16:09.818597 kernel: mousedev: PS/2 mouse device common for all mice Jan 28 01:16:10.074690 kernel: kvm_amd: TSC scaling supported Jan 28 01:16:10.074832 kernel: kvm_amd: Nested Virtualization enabled Jan 28 01:16:10.085076 kernel: kvm_amd: Nested Paging enabled Jan 28 01:16:10.085135 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jan 28 01:16:10.093747 kernel: kvm_amd: PMU virtualization is disabled Jan 28 01:16:10.554482 systemd-networkd[1254]: eth0: Gained IPv6LL Jan 28 01:16:10.589063 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 28 01:16:11.061493 kernel: EDAC MC: Ver: 3.0.0 Jan 28 01:16:11.151122 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 28 01:16:11.225718 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 28 01:16:11.332132 lvm[1294]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 28 01:16:11.411033 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 28 01:16:11.440490 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 28 01:16:11.486721 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 28 01:16:11.558399 lvm[1297]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 28 01:16:11.657201 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 28 01:16:11.670757 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 28 01:16:11.681760 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). 
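The networkd run above is the second enumeration (the first ran in the initrd), and eth0 again matches the catch-all /usr/lib/systemd/network/zz-default.network shipped with Flatcar; the "potentially unpredictable interface name" warning is emitted because that unit matches interfaces by name pattern rather than by stable hardware properties. A rough sketch of such a catch-all unit (a reconstruction, not copied from this system):

    [Match]
    Name=*

    [Network]
    DHCP=yes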
Jan 28 01:16:11.682049 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 28 01:16:11.694860 systemd[1]: Reached target machines.target - Containers. Jan 28 01:16:11.721400 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 28 01:16:11.761391 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 28 01:16:11.798900 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 28 01:16:11.827149 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 28 01:16:11.837458 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 28 01:16:11.858915 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 28 01:16:11.885520 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 28 01:16:11.890390 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 28 01:16:11.950384 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 28 01:16:12.011456 kernel: loop0: detected capacity change from 0 to 142488 Jan 28 01:16:12.047220 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 28 01:16:12.049006 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 28 01:16:12.141786 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 28 01:16:12.227553 kernel: loop1: detected capacity change from 0 to 224512 Jan 28 01:16:12.622664 kernel: loop2: detected capacity change from 0 to 140768 Jan 28 01:16:12.910139 kernel: loop3: detected capacity change from 0 to 142488 Jan 28 01:16:13.209920 kernel: loop4: detected capacity change from 0 to 224512 Jan 28 01:16:13.377817 kernel: loop5: detected capacity change from 0 to 140768 Jan 28 01:16:13.568352 (sd-merge)[1317]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jan 28 01:16:13.569707 (sd-merge)[1317]: Merged extensions into '/usr'. Jan 28 01:16:14.186936 systemd[1]: Reloading requested from client PID 1306 ('systemd-sysext') (unit systemd-sysext.service)... Jan 28 01:16:14.189141 systemd[1]: Reloading... Jan 28 01:16:14.475090 zram_generator::config[1340]: No configuration found. Jan 28 01:16:15.226479 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 28 01:16:15.437848 systemd[1]: Reloading finished in 1239 ms. Jan 28 01:16:15.553721 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 28 01:16:15.632167 systemd[1]: Starting ensure-sysext.service... Jan 28 01:16:15.693943 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 28 01:16:15.728653 systemd[1]: Reloading requested from client PID 1386 ('systemctl') (unit ensure-sysext.service)... Jan 28 01:16:15.728816 systemd[1]: Reloading... Jan 28 01:16:15.813292 ldconfig[1301]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 28 01:16:16.271685 zram_generator::config[1416]: No configuration found. 
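The loop0 through loop5 capacity changes and the (sd-merge) lines above show systemd-sysext attaching the three extension images (containerd-flatcar, docker-flatcar, kubernetes) and overlaying them onto /usr; the kubernetes image is the one Ignition linked into /etc/extensions during the files stage. Once booted, the merge can be inspected and redone with the standard sysext verbs:

    systemd-sysext status     # list hierarchies and the extensions merged into them
    systemd-sysext refresh    # unmerge and re-merge after images change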
Jan 28 01:16:16.453059 systemd-tmpfiles[1387]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 28 01:16:16.457188 systemd-tmpfiles[1387]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 28 01:16:16.463695 systemd-tmpfiles[1387]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 28 01:16:16.464915 systemd-tmpfiles[1387]: ACLs are not supported, ignoring. Jan 28 01:16:16.466098 systemd-tmpfiles[1387]: ACLs are not supported, ignoring. Jan 28 01:16:16.536881 systemd-tmpfiles[1387]: Detected autofs mount point /boot during canonicalization of boot. Jan 28 01:16:16.536961 systemd-tmpfiles[1387]: Skipping /boot Jan 28 01:16:16.581166 systemd-tmpfiles[1387]: Detected autofs mount point /boot during canonicalization of boot. Jan 28 01:16:16.586883 systemd-tmpfiles[1387]: Skipping /boot Jan 28 01:16:17.108392 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 28 01:16:17.246752 systemd[1]: Reloading finished in 1515 ms. Jan 28 01:16:17.294810 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 28 01:16:17.322355 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 28 01:16:17.480939 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 28 01:16:17.502862 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 28 01:16:17.547686 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 28 01:16:17.617353 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 28 01:16:17.648896 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 28 01:16:17.677340 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 28 01:16:17.677848 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 28 01:16:17.682853 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 28 01:16:17.691603 augenrules[1482]: No rules Jan 28 01:16:17.711712 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 28 01:16:17.743841 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 28 01:16:17.756795 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 28 01:16:17.758018 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 28 01:16:17.768500 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 28 01:16:17.793733 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 28 01:16:17.831361 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 28 01:16:17.831661 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 28 01:16:17.858619 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 28 01:16:17.859597 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Jan 28 01:16:17.906166 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 28 01:16:17.907203 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 28 01:16:17.952338 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 28 01:16:17.953206 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 28 01:16:17.998763 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 28 01:16:18.026917 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 28 01:16:18.059163 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 28 01:16:18.086024 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 28 01:16:18.104615 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 28 01:16:18.111682 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 28 01:16:18.119031 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 28 01:16:18.124051 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 28 01:16:18.135874 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 28 01:16:18.147046 systemd-resolved[1477]: Positive Trust Anchors: Jan 28 01:16:18.147136 systemd-resolved[1477]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 28 01:16:18.147176 systemd-resolved[1477]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 28 01:16:18.147535 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 28 01:16:18.147894 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 28 01:16:18.154642 systemd-resolved[1477]: Defaulting to hostname 'linux'. Jan 28 01:16:18.158409 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 28 01:16:18.158748 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 28 01:16:18.166953 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 28 01:16:18.177123 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 28 01:16:18.177620 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 28 01:16:18.192954 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 28 01:16:18.195801 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 28 01:16:18.212643 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 28 01:16:18.242135 systemd[1]: Finished ensure-sysext.service. Jan 28 01:16:18.267865 systemd[1]: Reached target network.target - Network. Jan 28 01:16:18.276998 systemd[1]: Reached target network-online.target - Network is Online. 
Jan 28 01:16:18.303749 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 28 01:16:18.336382 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 28 01:16:18.336689 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 28 01:16:18.384148 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 28 01:16:18.407343 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 28 01:16:18.696492 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 28 01:16:18.720889 systemd[1]: Reached target sysinit.target - System Initialization. Jan 28 01:16:18.736598 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 28 01:16:18.771594 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 28 01:16:18.790863 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 28 01:16:18.825877 systemd-timesyncd[1525]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 28 01:16:18.826055 systemd-timesyncd[1525]: Initial clock synchronization to Wed 2026-01-28 01:16:18.461266 UTC. Jan 28 01:16:18.830782 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 28 01:16:18.831179 systemd[1]: Reached target paths.target - Path Units. Jan 28 01:16:18.844484 systemd[1]: Reached target time-set.target - System Time Set. Jan 28 01:16:18.859915 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 28 01:16:18.882716 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 28 01:16:18.917972 systemd[1]: Reached target timers.target - Timer Units. Jan 28 01:16:18.935128 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 28 01:16:18.968942 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 28 01:16:18.990569 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 28 01:16:19.013692 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 28 01:16:19.039198 systemd[1]: Reached target sockets.target - Socket Units. Jan 28 01:16:19.050093 systemd[1]: Reached target basic.target - Basic System. Jan 28 01:16:19.078807 systemd[1]: System is tainted: cgroupsv1 Jan 28 01:16:19.078923 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 28 01:16:19.078960 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 28 01:16:19.092083 systemd[1]: Starting containerd.service - containerd container runtime... Jan 28 01:16:19.122665 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 28 01:16:19.159997 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 28 01:16:19.320179 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 28 01:16:19.387579 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
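The timesyncd messages above show an NTP server reached at the gateway (10.0.0.1:123) and the clock set to 01:16:18.461266, roughly 0.36 s earlier than the journal stamp on that very message, i.e. the initial sync stepped the clock backwards. The server choice can be pinned in timesyncd.conf; these values are placeholders:

    # /etc/systemd/timesyncd.conf
    [Time]
    NTP=10.0.0.1
    FallbackNTP=pool.ntp.org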
Jan 28 01:16:19.391024 jq[1534]: false Jan 28 01:16:19.399844 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 28 01:16:19.404675 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:16:19.449658 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 28 01:16:19.477313 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 28 01:16:19.496369 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 28 01:16:19.510471 dbus-daemon[1532]: [system] SELinux support is enabled Jan 28 01:16:19.526590 extend-filesystems[1535]: Found loop3 Jan 28 01:16:19.526590 extend-filesystems[1535]: Found loop4 Jan 28 01:16:19.561335 extend-filesystems[1535]: Found loop5 Jan 28 01:16:19.561335 extend-filesystems[1535]: Found sr0 Jan 28 01:16:19.561335 extend-filesystems[1535]: Found vda Jan 28 01:16:19.561335 extend-filesystems[1535]: Found vda1 Jan 28 01:16:19.561335 extend-filesystems[1535]: Found vda2 Jan 28 01:16:19.561335 extend-filesystems[1535]: Found vda3 Jan 28 01:16:19.561335 extend-filesystems[1535]: Found usr Jan 28 01:16:19.561335 extend-filesystems[1535]: Found vda4 Jan 28 01:16:19.561335 extend-filesystems[1535]: Found vda6 Jan 28 01:16:19.561335 extend-filesystems[1535]: Found vda7 Jan 28 01:16:19.561335 extend-filesystems[1535]: Found vda9 Jan 28 01:16:19.561335 extend-filesystems[1535]: Checking size of /dev/vda9 Jan 28 01:16:19.737035 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 28 01:16:19.539005 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 28 01:16:19.737437 extend-filesystems[1535]: Resized partition /dev/vda9 Jan 28 01:16:19.595649 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 28 01:16:19.760657 extend-filesystems[1558]: resize2fs 1.47.1 (20-May-2024) Jan 28 01:16:19.664941 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 28 01:16:19.756171 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 28 01:16:19.889414 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1570) Jan 28 01:16:19.925743 systemd[1]: Starting update-engine.service - Update Engine... Jan 28 01:16:19.995499 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 28 01:16:20.056422 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 28 01:16:20.103658 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 28 01:16:20.147408 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 28 01:16:20.150919 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 28 01:16:20.162372 systemd[1]: motdgen.service: Deactivated successfully. Jan 28 01:16:20.162737 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 28 01:16:20.195124 extend-filesystems[1558]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 28 01:16:20.195124 extend-filesystems[1558]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 28 01:16:20.195124 extend-filesystems[1558]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. 
Jan 28 01:16:20.312690 update_engine[1568]: I20260128 01:16:20.195995 1568 main.cc:92] Flatcar Update Engine starting Jan 28 01:16:20.312690 update_engine[1568]: I20260128 01:16:20.258698 1568 update_check_scheduler.cc:74] Next update check in 6m23s Jan 28 01:16:20.313119 jq[1577]: true Jan 28 01:16:20.313471 extend-filesystems[1535]: Resized filesystem in /dev/vda9 Jan 28 01:16:20.316439 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 28 01:16:20.336862 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 28 01:16:20.344539 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 28 01:16:20.391921 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 28 01:16:20.392480 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 28 01:16:20.477346 jq[1587]: true Jan 28 01:16:20.514106 (ntainerd)[1588]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 28 01:16:20.619859 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 28 01:16:20.622505 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 28 01:16:20.677992 tar[1586]: linux-amd64/LICENSE Jan 28 01:16:20.687578 tar[1586]: linux-amd64/helm Jan 28 01:16:20.688144 systemd-logind[1562]: Watching system buttons on /dev/input/event2 (Power Button) Jan 28 01:16:20.691075 systemd-logind[1562]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 28 01:16:20.737536 systemd-logind[1562]: New seat seat0. Jan 28 01:16:20.749491 systemd[1]: Started systemd-logind.service - User Login Management. Jan 28 01:16:20.777085 systemd[1]: Started update-engine.service - Update Engine. Jan 28 01:16:20.839297 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 28 01:16:20.844544 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 28 01:16:20.844792 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 28 01:16:20.871877 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 28 01:16:20.872079 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 28 01:16:20.889131 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 28 01:16:20.928754 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 28 01:16:21.084780 bash[1623]: Updated "/home/core/.ssh/authorized_keys" Jan 28 01:16:21.089396 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 28 01:16:21.161644 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 28 01:16:21.360310 sshd_keygen[1574]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 28 01:16:21.511102 locksmithd[1624]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 28 01:16:21.567164 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. 
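update_engine logs its next update poll as a Go-style duration ("6m23s"), so the absolute time of the next check can be recovered from the line's own timestamp. A sketch of that computation; it assumes the logged delay is the exact scheduled one, whereas update_engine normally randomizes the interval between checks:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Timestamp hard-coded from the update_check_scheduler line above.
        logged, err := time.Parse(time.RFC3339Nano, "2026-01-28T01:16:20.258698000Z")
        if err != nil {
            panic(err)
        }
        wait, err := time.ParseDuration("6m23s") // interval as printed in the log
        if err != nil {
            panic(err)
        }
        fmt.Println("next update check around:", logged.Add(wait).Format(time.RFC3339))
        // next update check around: 2026-01-28T01:22:43Z
    }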
Jan 28 01:16:22.091160 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 28 01:16:22.196161 systemd[1]: issuegen.service: Deactivated successfully. Jan 28 01:16:22.196699 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 28 01:16:22.253970 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 28 01:16:22.439950 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 28 01:16:22.486661 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 28 01:16:22.510678 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 28 01:16:22.544832 systemd[1]: Reached target getty.target - Login Prompts. Jan 28 01:16:23.317322 containerd[1588]: time="2026-01-28T01:16:23.313003200Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 28 01:16:23.659765 containerd[1588]: time="2026-01-28T01:16:23.659426370Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 28 01:16:23.689923 containerd[1588]: time="2026-01-28T01:16:23.689718134Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.119-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 28 01:16:23.692099 containerd[1588]: time="2026-01-28T01:16:23.690113856Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 28 01:16:23.692099 containerd[1588]: time="2026-01-28T01:16:23.691568743Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 28 01:16:23.692099 containerd[1588]: time="2026-01-28T01:16:23.692085931Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 28 01:16:23.692099 containerd[1588]: time="2026-01-28T01:16:23.692111686Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 28 01:16:23.693818 containerd[1588]: time="2026-01-28T01:16:23.693620539Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 28 01:16:23.693897 containerd[1588]: time="2026-01-28T01:16:23.693879591Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 28 01:16:23.695729 containerd[1588]: time="2026-01-28T01:16:23.694579789Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 28 01:16:23.695812 containerd[1588]: time="2026-01-28T01:16:23.695788447Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 28 01:16:23.699156 containerd[1588]: time="2026-01-28T01:16:23.696473304Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 28 01:16:23.699156 containerd[1588]: time="2026-01-28T01:16:23.696495205Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." 
type=io.containerd.snapshotter.v1 Jan 28 01:16:23.699156 containerd[1588]: time="2026-01-28T01:16:23.696756628Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 28 01:16:23.699156 containerd[1588]: time="2026-01-28T01:16:23.697333682Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 28 01:16:23.699156 containerd[1588]: time="2026-01-28T01:16:23.697619551Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 28 01:16:23.699156 containerd[1588]: time="2026-01-28T01:16:23.697641307Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 28 01:16:23.699156 containerd[1588]: time="2026-01-28T01:16:23.697765377Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 28 01:16:23.699156 containerd[1588]: time="2026-01-28T01:16:23.697937621Z" level=info msg="metadata content store policy set" policy=shared Jan 28 01:16:23.749833 containerd[1588]: time="2026-01-28T01:16:23.745180871Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 28 01:16:23.749833 containerd[1588]: time="2026-01-28T01:16:23.745383696Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 28 01:16:23.749833 containerd[1588]: time="2026-01-28T01:16:23.745409089Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 28 01:16:23.749833 containerd[1588]: time="2026-01-28T01:16:23.745444537Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 28 01:16:23.749833 containerd[1588]: time="2026-01-28T01:16:23.745556281Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 28 01:16:23.749833 containerd[1588]: time="2026-01-28T01:16:23.745831531Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 28 01:16:23.749833 containerd[1588]: time="2026-01-28T01:16:23.746678598Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 28 01:16:23.749833 containerd[1588]: time="2026-01-28T01:16:23.746892900Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 28 01:16:23.749833 containerd[1588]: time="2026-01-28T01:16:23.746916041Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 28 01:16:23.749833 containerd[1588]: time="2026-01-28T01:16:23.746934014Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 28 01:16:23.749833 containerd[1588]: time="2026-01-28T01:16:23.746956345Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 28 01:16:23.749833 containerd[1588]: time="2026-01-28T01:16:23.746972834Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 Jan 28 01:16:23.749833 containerd[1588]: time="2026-01-28T01:16:23.746988574Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 28 01:16:23.749833 containerd[1588]: time="2026-01-28T01:16:23.747007199Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 28 01:16:23.752620 containerd[1588]: time="2026-01-28T01:16:23.747025192Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 28 01:16:23.752620 containerd[1588]: time="2026-01-28T01:16:23.747044832Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 28 01:16:23.752620 containerd[1588]: time="2026-01-28T01:16:23.747064364Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 28 01:16:23.752620 containerd[1588]: time="2026-01-28T01:16:23.747079821Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 28 01:16:23.752620 containerd[1588]: time="2026-01-28T01:16:23.747329795Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 28 01:16:23.752620 containerd[1588]: time="2026-01-28T01:16:23.747354076Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 28 01:16:23.752620 containerd[1588]: time="2026-01-28T01:16:23.747373560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 28 01:16:23.752620 containerd[1588]: time="2026-01-28T01:16:23.747399041Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 28 01:16:23.752620 containerd[1588]: time="2026-01-28T01:16:23.747415170Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 28 01:16:23.752620 containerd[1588]: time="2026-01-28T01:16:23.747499677Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 28 01:16:23.752620 containerd[1588]: time="2026-01-28T01:16:23.747515533Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 28 01:16:23.752620 containerd[1588]: time="2026-01-28T01:16:23.747530132Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 28 01:16:23.752620 containerd[1588]: time="2026-01-28T01:16:23.747546768Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 28 01:16:23.752620 containerd[1588]: time="2026-01-28T01:16:23.747568047Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 28 01:16:23.753070 containerd[1588]: time="2026-01-28T01:16:23.747585853Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 28 01:16:23.753070 containerd[1588]: time="2026-01-28T01:16:23.747603036Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 28 01:16:23.753070 containerd[1588]: time="2026-01-28T01:16:23.747618297Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Jan 28 01:16:23.753070 containerd[1588]: time="2026-01-28T01:16:23.747637839Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 28 01:16:23.753070 containerd[1588]: time="2026-01-28T01:16:23.747663359Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 28 01:16:23.753070 containerd[1588]: time="2026-01-28T01:16:23.747685047Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 28 01:16:23.755639 containerd[1588]: time="2026-01-28T01:16:23.753556837Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 28 01:16:23.758353 containerd[1588]: time="2026-01-28T01:16:23.758165574Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 28 01:16:23.758353 containerd[1588]: time="2026-01-28T01:16:23.758203370Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 28 01:16:23.758353 containerd[1588]: time="2026-01-28T01:16:23.758326262Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 28 01:16:23.758353 containerd[1588]: time="2026-01-28T01:16:23.758346535Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 28 01:16:23.758487 containerd[1588]: time="2026-01-28T01:16:23.758360021Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 28 01:16:23.758487 containerd[1588]: time="2026-01-28T01:16:23.758417742Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 28 01:16:23.758487 containerd[1588]: time="2026-01-28T01:16:23.758439361Z" level=info msg="NRI interface is disabled by configuration." Jan 28 01:16:23.758487 containerd[1588]: time="2026-01-28T01:16:23.758453970Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 28 01:16:23.763036 containerd[1588]: time="2026-01-28T01:16:23.762047413Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 28 01:16:23.763036 containerd[1588]: time="2026-01-28T01:16:23.762563685Z" level=info msg="Connect containerd service" Jan 28 01:16:23.763036 containerd[1588]: time="2026-01-28T01:16:23.762875835Z" level=info msg="using legacy CRI server" Jan 28 01:16:23.763036 containerd[1588]: time="2026-01-28T01:16:23.762890317Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 28 01:16:23.871693 containerd[1588]: time="2026-01-28T01:16:23.763483343Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 28 01:16:23.871693 containerd[1588]: time="2026-01-28T01:16:23.770802550Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 28 
01:16:23.974163 containerd[1588]: time="2026-01-28T01:16:23.956348023Z" level=info msg="Start subscribing containerd event" Jan 28 01:16:23.974163 containerd[1588]: time="2026-01-28T01:16:23.966143067Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 28 01:16:23.974921 containerd[1588]: time="2026-01-28T01:16:23.974697653Z" level=info msg="Start recovering state" Jan 28 01:16:23.980010 containerd[1588]: time="2026-01-28T01:16:23.978764578Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 28 01:16:23.981689 containerd[1588]: time="2026-01-28T01:16:23.980095942Z" level=info msg="Start event monitor" Jan 28 01:16:23.981689 containerd[1588]: time="2026-01-28T01:16:23.980133583Z" level=info msg="Start snapshots syncer" Jan 28 01:16:23.981689 containerd[1588]: time="2026-01-28T01:16:23.980362942Z" level=info msg="Start cni network conf syncer for default" Jan 28 01:16:23.981689 containerd[1588]: time="2026-01-28T01:16:23.980378613Z" level=info msg="Start streaming server" Jan 28 01:16:23.981471 systemd[1]: Started containerd.service - containerd container runtime. Jan 28 01:16:23.984630 containerd[1588]: time="2026-01-28T01:16:23.984068169Z" level=info msg="containerd successfully booted in 0.676880s" Jan 28 01:16:25.278978 tar[1586]: linux-amd64/README.md Jan 28 01:16:25.391416 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 28 01:16:28.576154 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 28 01:16:28.608950 systemd[1]: Started sshd@0-10.0.0.61:22-10.0.0.1:48306.service - OpenSSH per-connection server daemon (10.0.0.1:48306). Jan 28 01:16:29.312717 sshd[1673]: Accepted publickey for core from 10.0.0.1 port 48306 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:16:29.348334 sshd[1673]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:16:29.685380 systemd-logind[1562]: New session 1 of user core. Jan 28 01:16:29.690783 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 28 01:16:29.702497 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 28 01:16:29.744557 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:16:29.745398 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 28 01:16:29.753009 (kubelet)[1684]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 01:16:30.088842 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 28 01:16:30.147759 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 28 01:16:30.188388 (systemd)[1688]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 28 01:16:30.771856 systemd[1688]: Queued start job for default target default.target. Jan 28 01:16:30.773182 systemd[1688]: Created slice app.slice - User Application Slice. Jan 28 01:16:30.804447 systemd[1688]: Reached target paths.target - Paths. Jan 28 01:16:30.804535 systemd[1688]: Reached target timers.target - Timers. Jan 28 01:16:30.830499 systemd[1688]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 28 01:16:30.918096 systemd[1688]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 28 01:16:30.918181 systemd[1688]: Reached target sockets.target - Sockets. Jan 28 01:16:30.918205 systemd[1688]: Reached target basic.target - Basic System. 
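With containerd booted and serving on /run/containerd/containerd.sock (gRPC plus the ttrpc endpoint, per the "serving..." lines above), a client can connect and confirm the daemon version. A minimal sketch using the official Go client from module github.com/containerd/containerd; running it requires read access to the socket:

    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // "k8s.io" is the namespace the CRI plugin uses for Kubernetes images.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
        ver, err := client.Version(ctx)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("containerd version:", ver.Version) // expect v1.7.21 per the log
    }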
Jan 28 01:16:30.918381 systemd[1688]: Reached target default.target - Main User Target. Jan 28 01:16:30.918436 systemd[1688]: Startup finished in 672ms. Jan 28 01:16:30.919382 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 28 01:16:30.932944 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 28 01:16:30.939456 systemd[1]: Startup finished in 35.059s (kernel) + 35.610s (userspace) = 1min 10.670s. Jan 28 01:16:31.034499 systemd[1]: Started sshd@1-10.0.0.61:22-10.0.0.1:48316.service - OpenSSH per-connection server daemon (10.0.0.1:48316). Jan 28 01:16:31.265526 sshd[1705]: Accepted publickey for core from 10.0.0.1 port 48316 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:16:31.276596 sshd[1705]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:16:31.372703 systemd-logind[1562]: New session 2 of user core. Jan 28 01:16:31.398692 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 28 01:16:31.550518 sshd[1705]: pam_unix(sshd:session): session closed for user core Jan 28 01:16:31.807008 systemd[1]: Started sshd@2-10.0.0.61:22-10.0.0.1:48324.service - OpenSSH per-connection server daemon (10.0.0.1:48324). Jan 28 01:16:31.809583 systemd[1]: sshd@1-10.0.0.61:22-10.0.0.1:48316.service: Deactivated successfully. Jan 28 01:16:31.830873 systemd[1]: session-2.scope: Deactivated successfully. Jan 28 01:16:31.833557 systemd-logind[1562]: Session 2 logged out. Waiting for processes to exit. Jan 28 01:16:31.838399 systemd-logind[1562]: Removed session 2. Jan 28 01:16:31.940551 sshd[1710]: Accepted publickey for core from 10.0.0.1 port 48324 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:16:31.943164 sshd[1710]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:16:31.977082 systemd-logind[1562]: New session 3 of user core. Jan 28 01:16:31.998517 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 28 01:16:32.161632 sshd[1710]: pam_unix(sshd:session): session closed for user core Jan 28 01:16:32.183823 systemd[1]: Started sshd@3-10.0.0.61:22-10.0.0.1:48334.service - OpenSSH per-connection server daemon (10.0.0.1:48334). Jan 28 01:16:32.184881 systemd[1]: sshd@2-10.0.0.61:22-10.0.0.1:48324.service: Deactivated successfully. Jan 28 01:16:32.207995 systemd[1]: session-3.scope: Deactivated successfully. Jan 28 01:16:32.213600 systemd-logind[1562]: Session 3 logged out. Waiting for processes to exit. Jan 28 01:16:32.235203 systemd-logind[1562]: Removed session 3. Jan 28 01:16:32.446558 sshd[1719]: Accepted publickey for core from 10.0.0.1 port 48334 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:16:32.466694 sshd[1719]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:16:32.503719 systemd-logind[1562]: New session 4 of user core. Jan 28 01:16:32.510157 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 28 01:16:32.665571 sshd[1719]: pam_unix(sshd:session): session closed for user core Jan 28 01:16:32.683038 systemd[1]: sshd@3-10.0.0.61:22-10.0.0.1:48334.service: Deactivated successfully. Jan 28 01:16:32.707612 systemd[1]: session-4.scope: Deactivated successfully. Jan 28 01:16:32.742941 systemd-logind[1562]: Session 4 logged out. Waiting for processes to exit. Jan 28 01:16:32.786892 systemd[1]: Started sshd@4-10.0.0.61:22-10.0.0.1:53040.service - OpenSSH per-connection server daemon (10.0.0.1:53040). 
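The boot summary above adds up: 35.059 s (kernel) plus 35.610 s (userspace) is 70.669 s, which systemd prints as 1min 10.670s; the final digit differs only because the two components are rounded for display while the total comes from the unrounded internal values. Checked:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        kernel, _ := time.ParseDuration("35.059s")
        userspace, _ := time.ParseDuration("35.610s")
        fmt.Println(kernel + userspace) // 1m10.669s; the log shows 1min 10.670s (display rounding)
    }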
Jan 28 01:16:32.800651 systemd-logind[1562]: Removed session 4. Jan 28 01:16:32.978536 sshd[1730]: Accepted publickey for core from 10.0.0.1 port 53040 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:16:32.997117 sshd[1730]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:16:33.026951 systemd-logind[1562]: New session 5 of user core. Jan 28 01:16:33.047819 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 28 01:16:33.277979 sudo[1734]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 28 01:16:33.278812 sudo[1734]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 28 01:16:33.340591 sudo[1734]: pam_unix(sudo:session): session closed for user root Jan 28 01:16:33.355061 sshd[1730]: pam_unix(sshd:session): session closed for user core Jan 28 01:16:33.373690 systemd[1]: Started sshd@5-10.0.0.61:22-10.0.0.1:53046.service - OpenSSH per-connection server daemon (10.0.0.1:53046). Jan 28 01:16:33.377896 systemd[1]: sshd@4-10.0.0.61:22-10.0.0.1:53040.service: Deactivated successfully. Jan 28 01:16:33.478512 systemd[1]: session-5.scope: Deactivated successfully. Jan 28 01:16:33.486906 systemd-logind[1562]: Session 5 logged out. Waiting for processes to exit. Jan 28 01:16:33.502374 systemd-logind[1562]: Removed session 5. Jan 28 01:16:33.593563 sshd[1736]: Accepted publickey for core from 10.0.0.1 port 53046 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:16:33.598168 sshd[1736]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:16:33.618437 systemd-logind[1562]: New session 6 of user core. Jan 28 01:16:33.626110 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 28 01:16:33.847922 sudo[1744]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 28 01:16:33.848924 sudo[1744]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 28 01:16:33.878810 sudo[1744]: pam_unix(sudo:session): session closed for user root Jan 28 01:16:33.925572 sudo[1743]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 28 01:16:33.926552 sudo[1743]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 28 01:16:34.045634 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 28 01:16:34.148983 auditctl[1747]: No rules Jan 28 01:16:34.153076 systemd[1]: audit-rules.service: Deactivated successfully. Jan 28 01:16:34.153696 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 28 01:16:34.193928 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 28 01:16:34.529930 kubelet[1684]: E0128 01:16:34.529053 1684 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 01:16:34.531804 augenrules[1767]: No rules Jan 28 01:16:34.538931 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 01:16:34.540059 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 01:16:34.545439 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. 
Jan 28 01:16:34.548896 sudo[1743]: pam_unix(sudo:session): session closed for user root Jan 28 01:16:34.558747 sshd[1736]: pam_unix(sshd:session): session closed for user core Jan 28 01:16:34.579074 systemd[1]: Started sshd@6-10.0.0.61:22-10.0.0.1:53052.service - OpenSSH per-connection server daemon (10.0.0.1:53052). Jan 28 01:16:34.579919 systemd[1]: sshd@5-10.0.0.61:22-10.0.0.1:53046.service: Deactivated successfully. Jan 28 01:16:34.589903 systemd-logind[1562]: Session 6 logged out. Waiting for processes to exit. Jan 28 01:16:34.592889 systemd[1]: session-6.scope: Deactivated successfully. Jan 28 01:16:34.598552 systemd-logind[1562]: Removed session 6. Jan 28 01:16:34.740399 sshd[1775]: Accepted publickey for core from 10.0.0.1 port 53052 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:16:34.749061 sshd[1775]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:16:34.781868 systemd-logind[1562]: New session 7 of user core. Jan 28 01:16:34.810059 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 28 01:16:34.913766 sudo[1782]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 28 01:16:34.918853 sudo[1782]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 28 01:16:36.785006 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 28 01:16:36.796506 (dockerd)[1801]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 28 01:16:37.903057 dockerd[1801]: time="2026-01-28T01:16:37.902992320Z" level=info msg="Starting up" Jan 28 01:16:38.803846 dockerd[1801]: time="2026-01-28T01:16:38.800730162Z" level=info msg="Loading containers: start." Jan 28 01:16:40.393731 kernel: Initializing XFRM netlink socket Jan 28 01:16:41.759427 systemd-networkd[1254]: docker0: Link UP Jan 28 01:16:42.101830 dockerd[1801]: time="2026-01-28T01:16:42.071190098Z" level=info msg="Loading containers: done." Jan 28 01:16:42.613058 dockerd[1801]: time="2026-01-28T01:16:42.612663644Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 28 01:16:42.613058 dockerd[1801]: time="2026-01-28T01:16:42.612879462Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 28 01:16:42.613058 dockerd[1801]: time="2026-01-28T01:16:42.613115796Z" level=info msg="Daemon has completed initialization" Jan 28 01:16:44.702426 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 28 01:16:44.777927 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:16:44.802433 dockerd[1801]: time="2026-01-28T01:16:44.800412893Z" level=info msg="API listen on /run/docker.sock" Jan 28 01:16:44.826384 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 28 01:16:48.825734 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
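Once dockerd reports "API listen on /run/docker.sock", the daemon accepts API clients on the default socket. A sketch pinging it with the official Go SDK (github.com/docker/docker/client); it assumes the default socket location and sufficient permissions:

    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/docker/docker/client"
    )

    func main() {
        cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
        if err != nil {
            log.Fatal(err)
        }
        defer cli.Close()

        ping, err := cli.Ping(context.Background())
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("docker API version:", ping.APIVersion)
    }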
Jan 28 01:16:49.051568 (kubelet)[1958]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 01:16:49.821040 kubelet[1958]: E0128 01:16:49.817687 1958 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 01:16:49.832735 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 01:16:49.833040 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 01:16:51.442682 containerd[1588]: time="2026-01-28T01:16:51.426997107Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\"" Jan 28 01:16:53.555964 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount408347227.mount: Deactivated successfully. Jan 28 01:17:00.079166 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 28 01:17:00.124444 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:17:04.053487 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:17:04.098980 (kubelet)[2038]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 01:17:05.369202 update_engine[1568]: I20260128 01:17:05.327517 1568 update_attempter.cc:509] Updating boot flags... Jan 28 01:17:06.624097 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2055) Jan 28 01:17:07.043977 kubelet[2038]: E0128 01:17:07.041615 2038 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 01:17:07.066156 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 01:17:07.067114 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
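The kubelet failures here and through the rest of the log are a single crash loop, not distinct faults: kubelet.service starts before anything has written /var/lib/kubelet/config.yaml (normally produced by kubeadm init/join), exits with status 1, and systemd's restart logic reschedules it, with the restart counter climbing from 1 to 8 over the next two minutes until a start at 01:18:33 finally comes up with a full configuration. A sketch of the precondition being violated; the check itself is illustrative, not kubelet's code:

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        const path = "/var/lib/kubelet/config.yaml" // written by kubeadm init/join
        if _, err := os.Stat(path); os.IsNotExist(err) {
            fmt.Printf("kubelet would exit 1: %s missing until kubeadm runs\n", path)
            os.Exit(1)
        }
        fmt.Println("kubelet config present; startup can proceed")
    }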
Jan 28 01:17:08.196428 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2057) Jan 28 01:17:12.652557 containerd[1588]: time="2026-01-28T01:17:12.651657222Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:17:12.657612 containerd[1588]: time="2026-01-28T01:17:12.657415912Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=29070647" Jan 28 01:17:12.670377 containerd[1588]: time="2026-01-28T01:17:12.665726743Z" level=info msg="ImageCreate event name:\"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:17:12.681128 containerd[1588]: time="2026-01-28T01:17:12.680897296Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:17:12.688449 containerd[1588]: time="2026-01-28T01:17:12.685489775Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"29067246\" in 21.258361389s" Jan 28 01:17:12.688449 containerd[1588]: time="2026-01-28T01:17:12.685672151Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\"" Jan 28 01:17:12.742912 containerd[1588]: time="2026-01-28T01:17:12.742421666Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\"" Jan 28 01:17:17.167001 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 28 01:17:17.221695 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:17:20.240850 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:17:20.294821 (kubelet)[2079]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 01:17:22.220764 kubelet[2079]: E0128 01:17:22.219854 2079 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 01:17:22.231100 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 01:17:22.236099 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
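Each "PullImage ..." / "returns image reference ..." pair above brackets a complete pull: resolve the tag, fetch the layers, and unpack them into the overlayfs snapshotter. The equivalent pull driven through the containerd Go client might look like the sketch below (same module and socket as before; the tag is taken from the log):

    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
        // WithPullUnpack unpacks layers into the snapshotter as part of the pull.
        img, err := client.Pull(ctx, "registry.k8s.io/kube-apiserver:v1.32.11", containerd.WithPullUnpack)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("pulled:", img.Name())
    }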
Jan 28 01:17:26.397223 containerd[1588]: time="2026-01-28T01:17:26.396425396Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:17:26.406141 containerd[1588]: time="2026-01-28T01:17:26.404633821Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=24993354" Jan 28 01:17:26.411766 containerd[1588]: time="2026-01-28T01:17:26.410743192Z" level=info msg="ImageCreate event name:\"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:17:26.425557 containerd[1588]: time="2026-01-28T01:17:26.423761153Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:17:26.446005 containerd[1588]: time="2026-01-28T01:17:26.445661488Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"26650388\" in 13.702997562s" Jan 28 01:17:26.446005 containerd[1588]: time="2026-01-28T01:17:26.445784691Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns image reference \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\"" Jan 28 01:17:26.461707 containerd[1588]: time="2026-01-28T01:17:26.460904774Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\"" Jan 28 01:17:32.428326 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 28 01:17:32.469074 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:17:33.472157 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:17:33.548764 (kubelet)[2105]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 01:17:34.458428 kubelet[2105]: E0128 01:17:34.457681 2105 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 01:17:34.466088 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 01:17:34.466677 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 28 01:17:37.514072 containerd[1588]: time="2026-01-28T01:17:37.510320475Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:17:37.514072 containerd[1588]: time="2026-01-28T01:17:37.513192358Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=19405076" Jan 28 01:17:37.584635 containerd[1588]: time="2026-01-28T01:17:37.575218642Z" level=info msg="ImageCreate event name:\"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:17:37.680666 containerd[1588]: time="2026-01-28T01:17:37.680194303Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:17:37.690013 containerd[1588]: time="2026-01-28T01:17:37.689868795Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.11\" with image id \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"21062128\" in 11.228877568s" Jan 28 01:17:37.690013 containerd[1588]: time="2026-01-28T01:17:37.689974434Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image reference \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\"" Jan 28 01:17:37.812936 containerd[1588]: time="2026-01-28T01:17:37.811412976Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\"" Jan 28 01:17:44.652753 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jan 28 01:17:44.679827 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:17:45.353595 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2981760862.mount: Deactivated successfully. Jan 28 01:17:46.830372 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:17:46.874687 (kubelet)[2135]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 01:17:47.425376 kubelet[2135]: E0128 01:17:47.423561 2135 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 01:17:47.450870 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 01:17:47.451511 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 28 01:17:51.569409 containerd[1588]: time="2026-01-28T01:17:51.568828521Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:17:51.572891 containerd[1588]: time="2026-01-28T01:17:51.572756934Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=31161899" Jan 28 01:17:51.578217 containerd[1588]: time="2026-01-28T01:17:51.577412893Z" level=info msg="ImageCreate event name:\"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:17:51.587571 containerd[1588]: time="2026-01-28T01:17:51.587403009Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:17:51.589463 containerd[1588]: time="2026-01-28T01:17:51.589348950Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"31160918\" in 13.777726469s" Jan 28 01:17:51.589463 containerd[1588]: time="2026-01-28T01:17:51.589387412Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\"" Jan 28 01:17:51.601306 containerd[1588]: time="2026-01-28T01:17:51.601196589Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 28 01:17:52.789469 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3843135129.mount: Deactivated successfully. Jan 28 01:17:57.706665 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Jan 28 01:17:57.815923 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:17:58.550746 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:17:58.600708 (kubelet)[2210]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 01:17:59.148932 kubelet[2210]: E0128 01:17:59.142020 2210 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 01:17:59.155721 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 01:17:59.158789 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 28 01:18:00.984482 containerd[1588]: time="2026-01-28T01:18:00.982878627Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:18:00.997109 containerd[1588]: time="2026-01-28T01:18:00.997049244Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Jan 28 01:18:01.006604 containerd[1588]: time="2026-01-28T01:18:01.004511431Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:18:01.028377 containerd[1588]: time="2026-01-28T01:18:01.025901024Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:18:01.050302 containerd[1588]: time="2026-01-28T01:18:01.049652575Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 9.448004957s" Jan 28 01:18:01.050302 containerd[1588]: time="2026-01-28T01:18:01.049696895Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jan 28 01:18:01.073616 containerd[1588]: time="2026-01-28T01:18:01.072879152Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 28 01:18:02.476061 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3035230779.mount: Deactivated successfully. 
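The reported pull durations can be cross-checked against the log's own timestamps: the coredns pull is requested at 01:17:51.601196589 and reported complete at 01:18:01.049652575, about 9.45 s, consistent with the logged 9.448004957s (the internal measurement brackets a slightly narrower window than the two log lines). A sketch of the check:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        start, _ := time.Parse(time.RFC3339Nano, "2026-01-28T01:17:51.601196589Z") // PullImage logged
        done, _ := time.Parse(time.RFC3339Nano, "2026-01-28T01:18:01.049652575Z")  // Pulled logged
        fmt.Println(done.Sub(start)) // ≈9.448455986s vs the reported 9.448004957s
    }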
Jan 28 01:18:02.514771 containerd[1588]: time="2026-01-28T01:18:02.512595450Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:18:02.527634 containerd[1588]: time="2026-01-28T01:18:02.527032731Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jan 28 01:18:02.533709 containerd[1588]: time="2026-01-28T01:18:02.532859845Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:18:02.544736 containerd[1588]: time="2026-01-28T01:18:02.544063438Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:18:02.549836 containerd[1588]: time="2026-01-28T01:18:02.549118035Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.47618847s" Jan 28 01:18:02.549836 containerd[1588]: time="2026-01-28T01:18:02.549164571Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 28 01:18:02.567710 containerd[1588]: time="2026-01-28T01:18:02.567113672Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 28 01:18:03.708880 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2746063700.mount: Deactivated successfully. Jan 28 01:18:09.406620 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Jan 28 01:18:09.451477 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:18:10.727880 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:18:10.773833 (kubelet)[2287]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 01:18:11.445502 kubelet[2287]: E0128 01:18:11.444815 2287 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 01:18:11.463337 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 01:18:11.463845 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 01:18:21.696077 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Jan 28 01:18:21.823181 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 28 01:18:23.064798 containerd[1588]: time="2026-01-28T01:18:23.060462697Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:18:23.074538 containerd[1588]: time="2026-01-28T01:18:23.074426712Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682056" Jan 28 01:18:23.099350 containerd[1588]: time="2026-01-28T01:18:23.098840865Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:18:23.143336 containerd[1588]: time="2026-01-28T01:18:23.136991054Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:18:23.154810 containerd[1588]: time="2026-01-28T01:18:23.153423809Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 20.586255476s" Jan 28 01:18:23.154810 containerd[1588]: time="2026-01-28T01:18:23.153507343Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jan 28 01:18:24.009131 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:18:24.046000 (kubelet)[2320]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 01:18:24.677120 kubelet[2320]: E0128 01:18:24.676464 2320 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 01:18:24.694808 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 01:18:24.695179 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 01:18:30.984150 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:18:31.007904 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:18:31.138317 systemd[1]: Reloading requested from client PID 2354 ('systemctl') (unit session-7.scope)... Jan 28 01:18:31.138336 systemd[1]: Reloading... Jan 28 01:18:31.367712 zram_generator::config[2396]: No configuration found. Jan 28 01:18:31.916072 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 28 01:18:33.031150 systemd[1]: Reloading finished in 1892 ms. Jan 28 01:18:33.511769 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:18:33.588463 (kubelet)[2441]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 28 01:18:33.609631 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... 
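etcd is the largest image in the batch: 57,680,541 bytes pulled in 20.586255476 s, roughly 2.8 MB/s from registry.k8s.io. A quick computation:

    package main

    import "fmt"

    func main() {
        const sizeBytes = 57680541   // image size reported for etcd:3.5.16-0
        const seconds = 20.586255476 // reported pull duration
        fmt.Printf("throughput: %.2f MB/s\n", sizeBytes/seconds/1e6)
        // throughput: 2.80 MB/s
    }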
Jan 28 01:18:33.613386 systemd[1]: kubelet.service: Deactivated successfully. Jan 28 01:18:33.626638 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:18:33.711030 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:18:35.095567 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:18:35.120877 (kubelet)[2460]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 28 01:18:35.782752 kubelet[2460]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 28 01:18:35.782752 kubelet[2460]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 28 01:18:35.782752 kubelet[2460]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 28 01:18:35.804637 kubelet[2460]: I0128 01:18:35.783322 2460 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 28 01:18:37.886139 kubelet[2460]: I0128 01:18:37.875472 2460 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 28 01:18:37.886139 kubelet[2460]: I0128 01:18:37.875575 2460 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 28 01:18:37.890996 kubelet[2460]: I0128 01:18:37.890959 2460 server.go:954] "Client rotation is on, will bootstrap in background" Jan 28 01:18:38.162415 kubelet[2460]: E0128 01:18:38.159845 2460 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.61:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.61:6443: connect: connection refused" logger="UnhandledError" Jan 28 01:18:38.171343 kubelet[2460]: I0128 01:18:38.171139 2460 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 28 01:18:38.416590 kubelet[2460]: E0128 01:18:38.413547 2460 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 28 01:18:38.416590 kubelet[2460]: I0128 01:18:38.413825 2460 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 28 01:18:38.461341 kubelet[2460]: I0128 01:18:38.460464 2460 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 28 01:18:38.461804 kubelet[2460]: I0128 01:18:38.461751 2460 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 28 01:18:38.462369 kubelet[2460]: I0128 01:18:38.461920 2460 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Jan 28 01:18:38.462867 kubelet[2460]: I0128 01:18:38.462848 2460 topology_manager.go:138] "Creating topology manager with none policy" Jan 28 01:18:38.462959 kubelet[2460]: I0128 01:18:38.462947 2460 container_manager_linux.go:304] "Creating device plugin manager" Jan 28 01:18:38.463575 kubelet[2460]: I0128 01:18:38.463556 2460 state_mem.go:36] "Initialized new in-memory state store" Jan 28 01:18:38.595986 kubelet[2460]: I0128 01:18:38.580006 2460 kubelet.go:446] "Attempting to sync node with API server" Jan 28 01:18:38.605838 kubelet[2460]: I0128 01:18:38.601406 2460 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 28 01:18:38.605838 kubelet[2460]: I0128 01:18:38.602095 2460 kubelet.go:352] "Adding apiserver pod source" Jan 28 01:18:38.605838 kubelet[2460]: I0128 01:18:38.602209 2460 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 28 01:18:38.615072 kubelet[2460]: W0128 01:18:38.613955 2460 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.61:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.61:6443: connect: connection refused Jan 28 01:18:38.615072 kubelet[2460]: E0128 01:18:38.614198 2460 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.61:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.61:6443: connect: connection refused" logger="UnhandledError" Jan 28 01:18:38.615072 kubelet[2460]: W0128 01:18:38.614849 2460 
reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.61:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.61:6443: connect: connection refused Jan 28 01:18:38.615072 kubelet[2460]: E0128 01:18:38.614904 2460 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.61:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.61:6443: connect: connection refused" logger="UnhandledError" Jan 28 01:18:38.670645 kubelet[2460]: I0128 01:18:38.666303 2460 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 28 01:18:38.670645 kubelet[2460]: I0128 01:18:38.667841 2460 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 28 01:18:38.670645 kubelet[2460]: W0128 01:18:38.668173 2460 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 28 01:18:38.696791 kubelet[2460]: I0128 01:18:38.695462 2460 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 28 01:18:38.698480 kubelet[2460]: I0128 01:18:38.697068 2460 server.go:1287] "Started kubelet" Jan 28 01:18:38.700183 kubelet[2460]: I0128 01:18:38.698016 2460 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 28 01:18:38.703402 kubelet[2460]: I0128 01:18:38.702356 2460 server.go:479] "Adding debug handlers to kubelet server" Jan 28 01:18:38.711746 kubelet[2460]: I0128 01:18:38.699326 2460 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 28 01:18:38.711746 kubelet[2460]: I0128 01:18:38.711595 2460 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 28 01:18:38.718644 kubelet[2460]: I0128 01:18:38.718177 2460 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 28 01:18:38.729203 kubelet[2460]: E0128 01:18:38.720774 2460 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.61:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.61:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188ec047e987be28 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-28 01:18:38.696381992 +0000 UTC m=+3.485802795,LastTimestamp:2026-01-28 01:18:38.696381992 +0000 UTC m=+3.485802795,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 28 01:18:38.730937 kubelet[2460]: I0128 01:18:38.730823 2460 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 28 01:18:38.735468 kubelet[2460]: E0128 01:18:38.734583 2460 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:18:38.735468 kubelet[2460]: E0128 01:18:38.734725 2460 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.61:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.61:6443: connect: connection refused" interval="200ms" Jan 28 01:18:38.735468 kubelet[2460]: W0128 01:18:38.735117 2460 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.61:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.61:6443: connect: connection refused Jan 28 01:18:38.735468 kubelet[2460]: E0128 01:18:38.735175 2460 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.61:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.61:6443: connect: connection refused" logger="UnhandledError" Jan 28 01:18:38.736035 kubelet[2460]: I0128 01:18:38.736015 2460 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 28 01:18:38.740109 kubelet[2460]: I0128 01:18:38.739321 2460 reconciler.go:26] "Reconciler: start to sync state" Jan 28 01:18:38.749699 kubelet[2460]: I0128 01:18:38.740187 2460 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 28 01:18:38.749699 kubelet[2460]: I0128 01:18:38.746579 2460 factory.go:221] Registration of the systemd container factory successfully Jan 28 01:18:38.749699 kubelet[2460]: I0128 01:18:38.746681 2460 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 28 01:18:38.753952 kubelet[2460]: I0128 01:18:38.753864 2460 factory.go:221] Registration of the containerd container factory successfully Jan 28 01:18:38.757360 kubelet[2460]: E0128 01:18:38.753873 2460 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 28 01:18:38.839172 kubelet[2460]: E0128 01:18:38.838862 2460 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:18:38.851910 kubelet[2460]: I0128 01:18:38.851184 2460 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 28 01:18:38.858170 kubelet[2460]: I0128 01:18:38.858132 2460 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 28 01:18:38.861154 kubelet[2460]: I0128 01:18:38.858884 2460 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 28 01:18:38.861154 kubelet[2460]: I0128 01:18:38.859648 2460 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 28 01:18:38.861154 kubelet[2460]: I0128 01:18:38.859704 2460 kubelet.go:2382] "Starting kubelet main sync loop" Jan 28 01:18:38.861154 kubelet[2460]: E0128 01:18:38.859774 2460 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 28 01:18:38.862841 kubelet[2460]: W0128 01:18:38.862795 2460 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.61:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.61:6443: connect: connection refused Jan 28 01:18:38.862964 kubelet[2460]: E0128 01:18:38.862939 2460 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.61:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.61:6443: connect: connection refused" logger="UnhandledError" Jan 28 01:18:38.921833 kubelet[2460]: I0128 01:18:38.920623 2460 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 28 01:18:38.927217 kubelet[2460]: I0128 01:18:38.923943 2460 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 28 01:18:38.927217 kubelet[2460]: I0128 01:18:38.924014 2460 state_mem.go:36] "Initialized new in-memory state store" Jan 28 01:18:38.942795 kubelet[2460]: E0128 01:18:38.942749 2460 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:18:38.944776 kubelet[2460]: I0128 01:18:38.943429 2460 policy_none.go:49] "None policy: Start" Jan 28 01:18:38.944776 kubelet[2460]: I0128 01:18:38.943463 2460 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 28 01:18:38.944776 kubelet[2460]: I0128 01:18:38.943482 2460 state_mem.go:35] "Initializing new in-memory state store" Jan 28 01:18:38.944776 kubelet[2460]: E0128 01:18:38.944657 2460 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.61:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.61:6443: connect: connection refused" interval="400ms" Jan 28 01:18:38.961034 kubelet[2460]: E0128 01:18:38.960821 2460 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 28 01:18:38.974684 kubelet[2460]: I0128 01:18:38.973712 2460 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 28 01:18:38.974684 kubelet[2460]: I0128 01:18:38.974045 2460 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 28 01:18:38.974684 kubelet[2460]: I0128 01:18:38.974062 2460 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 28 01:18:38.982796 kubelet[2460]: I0128 01:18:38.981573 2460 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 28 01:18:38.987927 kubelet[2460]: E0128 01:18:38.987896 2460 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 28 01:18:38.990199 kubelet[2460]: E0128 01:18:38.990135 2460 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 28 01:18:39.197998 kubelet[2460]: I0128 01:18:39.196991 2460 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 28 01:18:39.205766 kubelet[2460]: E0128 01:18:39.200732 2460 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.61:6443/api/v1/nodes\": dial tcp 10.0.0.61:6443: connect: connection refused" node="localhost" Jan 28 01:18:39.266679 kubelet[2460]: E0128 01:18:39.265723 2460 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 01:18:39.266679 kubelet[2460]: E0128 01:18:39.265724 2460 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 01:18:39.284540 kubelet[2460]: E0128 01:18:39.284458 2460 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 01:18:39.346083 kubelet[2460]: E0128 01:18:39.345904 2460 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.61:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.61:6443: connect: connection refused" interval="800ms" Jan 28 01:18:39.348103 kubelet[2460]: I0128 01:18:39.347442 2460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0c65f8f8b7f58bf58f90afd1ea6340cd-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"0c65f8f8b7f58bf58f90afd1ea6340cd\") " pod="kube-system/kube-apiserver-localhost" Jan 28 01:18:39.348103 kubelet[2460]: I0128 01:18:39.347593 2460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 28 01:18:39.348103 kubelet[2460]: I0128 01:18:39.347631 2460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 28 01:18:39.348103 kubelet[2460]: I0128 01:18:39.347664 2460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 28 01:18:39.348103 kubelet[2460]: I0128 01:18:39.347695 2460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0b8273f45c576ca70f8db6fe540c065c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: 
\"0b8273f45c576ca70f8db6fe540c065c\") " pod="kube-system/kube-scheduler-localhost" Jan 28 01:18:39.348885 kubelet[2460]: I0128 01:18:39.347718 2460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0c65f8f8b7f58bf58f90afd1ea6340cd-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"0c65f8f8b7f58bf58f90afd1ea6340cd\") " pod="kube-system/kube-apiserver-localhost" Jan 28 01:18:39.348885 kubelet[2460]: I0128 01:18:39.347753 2460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0c65f8f8b7f58bf58f90afd1ea6340cd-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"0c65f8f8b7f58bf58f90afd1ea6340cd\") " pod="kube-system/kube-apiserver-localhost" Jan 28 01:18:39.348885 kubelet[2460]: I0128 01:18:39.347783 2460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 28 01:18:39.348885 kubelet[2460]: I0128 01:18:39.347807 2460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 28 01:18:39.406704 kubelet[2460]: I0128 01:18:39.406214 2460 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 28 01:18:39.406941 kubelet[2460]: E0128 01:18:39.406853 2460 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.61:6443/api/v1/nodes\": dial tcp 10.0.0.61:6443: connect: connection refused" node="localhost" Jan 28 01:18:39.578094 kubelet[2460]: W0128 01:18:39.566946 2460 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.61:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.61:6443: connect: connection refused Jan 28 01:18:39.578094 kubelet[2460]: E0128 01:18:39.567892 2460 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.61:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.61:6443: connect: connection refused" logger="UnhandledError" Jan 28 01:18:39.578094 kubelet[2460]: E0128 01:18:39.568667 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:18:39.578094 kubelet[2460]: E0128 01:18:39.569727 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:18:39.585756 kubelet[2460]: E0128 01:18:39.585725 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:18:39.600821 containerd[1588]: 
time="2026-01-28T01:18:39.600026970Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0b8273f45c576ca70f8db6fe540c065c,Namespace:kube-system,Attempt:0,}" Jan 28 01:18:39.618408 containerd[1588]: time="2026-01-28T01:18:39.616658321Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:0c65f8f8b7f58bf58f90afd1ea6340cd,Namespace:kube-system,Attempt:0,}" Jan 28 01:18:39.620764 containerd[1588]: time="2026-01-28T01:18:39.620418698Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:73f4d0ebfe2f50199eb060021cc3bcbf,Namespace:kube-system,Attempt:0,}" Jan 28 01:18:39.808383 kubelet[2460]: W0128 01:18:39.804170 2460 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.61:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.61:6443: connect: connection refused Jan 28 01:18:39.808383 kubelet[2460]: E0128 01:18:39.806785 2460 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.61:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.61:6443: connect: connection refused" logger="UnhandledError" Jan 28 01:18:39.821634 kubelet[2460]: I0128 01:18:39.821412 2460 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 28 01:18:39.824144 kubelet[2460]: E0128 01:18:39.824016 2460 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.61:6443/api/v1/nodes\": dial tcp 10.0.0.61:6443: connect: connection refused" node="localhost" Jan 28 01:18:39.945309 kubelet[2460]: W0128 01:18:39.846436 2460 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.61:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.61:6443: connect: connection refused Jan 28 01:18:39.945309 kubelet[2460]: E0128 01:18:39.846641 2460 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.61:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.61:6443: connect: connection refused" logger="UnhandledError" Jan 28 01:18:40.205358 kubelet[2460]: E0128 01:18:40.203850 2460 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.61:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.61:6443: connect: connection refused" interval="1.6s" Jan 28 01:18:40.372875 kubelet[2460]: W0128 01:18:40.371829 2460 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.61:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.61:6443: connect: connection refused Jan 28 01:18:40.372875 kubelet[2460]: E0128 01:18:40.372187 2460 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.61:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.61:6443: connect: connection refused" logger="UnhandledError" Jan 28 01:18:40.387211 kubelet[2460]: E0128 01:18:40.375434 2460 
certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.61:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.61:6443: connect: connection refused" logger="UnhandledError" Jan 28 01:18:40.665842 kubelet[2460]: I0128 01:18:40.664224 2460 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 28 01:18:40.669359 kubelet[2460]: E0128 01:18:40.668784 2460 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.61:6443/api/v1/nodes\": dial tcp 10.0.0.61:6443: connect: connection refused" node="localhost" Jan 28 01:18:41.460448 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1559268684.mount: Deactivated successfully. Jan 28 01:18:41.505912 containerd[1588]: time="2026-01-28T01:18:41.502443851Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 28 01:18:41.534773 containerd[1588]: time="2026-01-28T01:18:41.534165356Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 28 01:18:41.543885 containerd[1588]: time="2026-01-28T01:18:41.541194263Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 28 01:18:41.551723 containerd[1588]: time="2026-01-28T01:18:41.548371487Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 28 01:18:41.559640 containerd[1588]: time="2026-01-28T01:18:41.559402183Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 28 01:18:41.606571 containerd[1588]: time="2026-01-28T01:18:41.600753755Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 28 01:18:41.662118 containerd[1588]: time="2026-01-28T01:18:41.660191848Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 28 01:18:41.691750 containerd[1588]: time="2026-01-28T01:18:41.688773779Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 28 01:18:41.701718 containerd[1588]: time="2026-01-28T01:18:41.701169769Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 2.083712201s" Jan 28 01:18:41.713681 containerd[1588]: time="2026-01-28T01:18:41.713624427Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", 
repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 2.092974548s" Jan 28 01:18:41.717383 containerd[1588]: time="2026-01-28T01:18:41.716994732Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 2.116283789s" Jan 28 01:18:41.810843 kubelet[2460]: E0128 01:18:41.810686 2460 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.61:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.61:6443: connect: connection refused" interval="3.2s" Jan 28 01:18:41.832022 kubelet[2460]: W0128 01:18:41.831909 2460 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.61:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.61:6443: connect: connection refused Jan 28 01:18:41.832022 kubelet[2460]: E0128 01:18:41.831973 2460 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.61:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.61:6443: connect: connection refused" logger="UnhandledError" Jan 28 01:18:42.232062 kubelet[2460]: W0128 01:18:42.231898 2460 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.61:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.61:6443: connect: connection refused Jan 28 01:18:42.232062 kubelet[2460]: E0128 01:18:42.232027 2460 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.61:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.61:6443: connect: connection refused" logger="UnhandledError" Jan 28 01:18:42.279069 kubelet[2460]: I0128 01:18:42.277955 2460 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 28 01:18:42.279069 kubelet[2460]: E0128 01:18:42.278868 2460 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.61:6443/api/v1/nodes\": dial tcp 10.0.0.61:6443: connect: connection refused" node="localhost" Jan 28 01:18:42.784678 kubelet[2460]: W0128 01:18:42.729939 2460 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.61:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.61:6443: connect: connection refused Jan 28 01:18:42.784678 kubelet[2460]: E0128 01:18:42.730173 2460 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.61:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.61:6443: connect: connection refused" logger="UnhandledError" Jan 28 01:18:44.656362 kubelet[2460]: W0128 01:18:44.629043 2460 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get 
"https://10.0.0.61:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.61:6443: connect: connection refused Jan 28 01:18:44.669009 kubelet[2460]: E0128 01:18:44.668387 2460 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.61:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.61:6443: connect: connection refused" logger="UnhandledError" Jan 28 01:18:44.676141 kubelet[2460]: E0128 01:18:44.675413 2460 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.61:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.61:6443: connect: connection refused" logger="UnhandledError" Jan 28 01:18:44.723042 containerd[1588]: time="2026-01-28T01:18:44.709684870Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 01:18:44.723042 containerd[1588]: time="2026-01-28T01:18:44.709849406Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 01:18:44.723042 containerd[1588]: time="2026-01-28T01:18:44.709875074Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:18:44.723042 containerd[1588]: time="2026-01-28T01:18:44.710966407Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:18:44.723042 containerd[1588]: time="2026-01-28T01:18:44.709124015Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 01:18:44.723042 containerd[1588]: time="2026-01-28T01:18:44.717818142Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 01:18:44.723042 containerd[1588]: time="2026-01-28T01:18:44.717849490Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:18:44.723042 containerd[1588]: time="2026-01-28T01:18:44.718090649Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:18:44.816420 containerd[1588]: time="2026-01-28T01:18:44.813194335Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 01:18:44.816420 containerd[1588]: time="2026-01-28T01:18:44.813754479Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 01:18:44.816420 containerd[1588]: time="2026-01-28T01:18:44.813904238Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:18:44.816420 containerd[1588]: time="2026-01-28T01:18:44.814335212Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:18:45.159808 kubelet[2460]: E0128 01:18:45.159392 2460 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.61:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.61:6443: connect: connection refused" interval="6.4s" Jan 28 01:18:45.291151 systemd[1]: run-containerd-runc-k8s.io-2780857d3404839fbd96b42c5468a2a0fe78d4a55ec6c5617f8c24168d977224-runc.QI8Gra.mount: Deactivated successfully. Jan 28 01:18:46.023417 kubelet[2460]: W0128 01:18:46.019046 2460 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.61:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.61:6443: connect: connection refused Jan 28 01:18:46.062195 kubelet[2460]: E0128 01:18:46.024812 2460 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.61:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.61:6443: connect: connection refused" logger="UnhandledError" Jan 28 01:18:46.481759 kubelet[2460]: E0128 01:18:46.477089 2460 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.61:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.61:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188ec047e987be28 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-28 01:18:38.696381992 +0000 UTC m=+3.485802795,LastTimestamp:2026-01-28 01:18:38.696381992 +0000 UTC m=+3.485802795,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 28 01:18:46.483034 kubelet[2460]: I0128 01:18:46.480340 2460 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 28 01:18:46.493498 kubelet[2460]: E0128 01:18:46.491850 2460 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.61:6443/api/v1/nodes\": dial tcp 10.0.0.61:6443: connect: connection refused" node="localhost" Jan 28 01:18:46.525138 kubelet[2460]: W0128 01:18:46.490407 2460 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.61:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.61:6443: connect: connection refused Jan 28 01:18:46.596955 kubelet[2460]: E0128 01:18:46.589701 2460 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.61:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.61:6443: connect: connection refused" logger="UnhandledError" Jan 28 01:18:46.799822 containerd[1588]: time="2026-01-28T01:18:46.799665419Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:73f4d0ebfe2f50199eb060021cc3bcbf,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"34cfd18852f093d8bb9bdfef2aa1948ca099a1b833913c79e7575f3833d449c8\"" Jan 28 01:18:46.824995 kubelet[2460]: E0128 01:18:46.824960 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:18:46.829743 containerd[1588]: time="2026-01-28T01:18:46.827516597Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0b8273f45c576ca70f8db6fe540c065c,Namespace:kube-system,Attempt:0,} returns sandbox id \"2780857d3404839fbd96b42c5468a2a0fe78d4a55ec6c5617f8c24168d977224\"" Jan 28 01:18:46.849371 kubelet[2460]: E0128 01:18:46.847921 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:18:47.706023 containerd[1588]: time="2026-01-28T01:18:47.703892037Z" level=info msg="CreateContainer within sandbox \"34cfd18852f093d8bb9bdfef2aa1948ca099a1b833913c79e7575f3833d449c8\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 28 01:18:47.706023 containerd[1588]: time="2026-01-28T01:18:47.705196783Z" level=info msg="CreateContainer within sandbox \"2780857d3404839fbd96b42c5468a2a0fe78d4a55ec6c5617f8c24168d977224\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 28 01:18:47.725216 containerd[1588]: time="2026-01-28T01:18:47.725167145Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:0c65f8f8b7f58bf58f90afd1ea6340cd,Namespace:kube-system,Attempt:0,} returns sandbox id \"e5640d1b894688b5db63d51b9021cede353f26373e863b6f780448354a05088e\"" Jan 28 01:18:47.727809 kubelet[2460]: E0128 01:18:47.727772 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:18:47.813364 containerd[1588]: time="2026-01-28T01:18:47.812644868Z" level=info msg="CreateContainer within sandbox \"e5640d1b894688b5db63d51b9021cede353f26373e863b6f780448354a05088e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 28 01:18:47.882024 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3322071930.mount: Deactivated successfully. 
Jan 28 01:18:47.935765 containerd[1588]: time="2026-01-28T01:18:47.932623676Z" level=info msg="CreateContainer within sandbox \"34cfd18852f093d8bb9bdfef2aa1948ca099a1b833913c79e7575f3833d449c8\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"49b2d627de86cdf89e831c16195b4009d871e228c1f8e5fdb9f8a49bfa762e82\"" Jan 28 01:18:47.947401 containerd[1588]: time="2026-01-28T01:18:47.944659037Z" level=info msg="StartContainer for \"49b2d627de86cdf89e831c16195b4009d871e228c1f8e5fdb9f8a49bfa762e82\"" Jan 28 01:18:47.981534 containerd[1588]: time="2026-01-28T01:18:47.980766323Z" level=info msg="CreateContainer within sandbox \"2780857d3404839fbd96b42c5468a2a0fe78d4a55ec6c5617f8c24168d977224\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"9c14cf4eb002a30940720d308dcb34e7211b246c9dc145d2bf3b260bf483af3c\"" Jan 28 01:18:47.985181 containerd[1588]: time="2026-01-28T01:18:47.982628937Z" level=info msg="StartContainer for \"9c14cf4eb002a30940720d308dcb34e7211b246c9dc145d2bf3b260bf483af3c\"" Jan 28 01:18:48.015762 containerd[1588]: time="2026-01-28T01:18:48.012717970Z" level=info msg="CreateContainer within sandbox \"e5640d1b894688b5db63d51b9021cede353f26373e863b6f780448354a05088e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"6f9532e70e68026809b53fe916cf51e94a50d8b33ff9db275e29d2de932f4451\"" Jan 28 01:18:48.015762 containerd[1588]: time="2026-01-28T01:18:48.014057858Z" level=info msg="StartContainer for \"6f9532e70e68026809b53fe916cf51e94a50d8b33ff9db275e29d2de932f4451\"" Jan 28 01:18:48.978813 kubelet[2460]: W0128 01:18:48.978345 2460 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.61:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.61:6443: connect: connection refused Jan 28 01:18:48.978813 kubelet[2460]: E0128 01:18:48.978505 2460 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.61:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.61:6443: connect: connection refused" logger="UnhandledError" Jan 28 01:18:49.001716 kubelet[2460]: E0128 01:18:49.001163 2460 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 28 01:18:50.014934 containerd[1588]: time="2026-01-28T01:18:50.012418635Z" level=info msg="StartContainer for \"6f9532e70e68026809b53fe916cf51e94a50d8b33ff9db275e29d2de932f4451\" returns successfully" Jan 28 01:18:50.110423 containerd[1588]: time="2026-01-28T01:18:50.109923575Z" level=info msg="StartContainer for \"49b2d627de86cdf89e831c16195b4009d871e228c1f8e5fdb9f8a49bfa762e82\" returns successfully" Jan 28 01:18:50.281702 kubelet[2460]: E0128 01:18:50.267503 2460 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 01:18:50.281702 kubelet[2460]: E0128 01:18:50.267945 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:18:50.286993 kubelet[2460]: E0128 01:18:50.285645 2460 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 
01:18:50.286993 kubelet[2460]: E0128 01:18:50.285895 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:18:50.345315 containerd[1588]: time="2026-01-28T01:18:50.338759612Z" level=info msg="StartContainer for \"9c14cf4eb002a30940720d308dcb34e7211b246c9dc145d2bf3b260bf483af3c\" returns successfully" Jan 28 01:18:50.501733 kubelet[2460]: W0128 01:18:50.496658 2460 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.61:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.61:6443: connect: connection refused Jan 28 01:18:50.501733 kubelet[2460]: E0128 01:18:50.497122 2460 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.61:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.61:6443: connect: connection refused" logger="UnhandledError" Jan 28 01:18:51.326750 kubelet[2460]: E0128 01:18:51.323707 2460 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 01:18:51.362850 kubelet[2460]: E0128 01:18:51.355037 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:18:51.450154 kubelet[2460]: E0128 01:18:51.448199 2460 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 01:18:51.450154 kubelet[2460]: E0128 01:18:51.445088 2460 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 01:18:51.450154 kubelet[2460]: E0128 01:18:51.449678 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:18:51.450154 kubelet[2460]: E0128 01:18:51.449826 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:18:52.380851 kubelet[2460]: E0128 01:18:52.378013 2460 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 01:18:52.380851 kubelet[2460]: E0128 01:18:52.379519 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:18:52.384092 kubelet[2460]: E0128 01:18:52.382906 2460 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 01:18:52.384092 kubelet[2460]: E0128 01:18:52.383165 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:18:52.966936 kubelet[2460]: I0128 01:18:52.966608 2460 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 28 01:18:53.449448 
kubelet[2460]: E0128 01:18:53.448353 2460 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 01:18:53.449448 kubelet[2460]: E0128 01:18:53.448676 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:18:57.127573 kubelet[2460]: E0128 01:18:57.126005 2460 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 01:18:57.127573 kubelet[2460]: E0128 01:18:57.126401 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:18:58.089659 kubelet[2460]: E0128 01:18:58.088988 2460 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 01:18:58.090962 kubelet[2460]: E0128 01:18:58.090930 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:18:58.802917 kubelet[2460]: E0128 01:18:58.801771 2460 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 01:18:58.802917 kubelet[2460]: E0128 01:18:58.802040 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:18:59.002465 kubelet[2460]: E0128 01:18:59.002158 2460 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 28 01:19:01.593562 kubelet[2460]: E0128 01:19:01.583423 2460 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.61:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s" Jan 28 01:19:02.729047 kubelet[2460]: I0128 01:19:02.724694 2460 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 28 01:19:02.729047 kubelet[2460]: E0128 01:19:02.725001 2460 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jan 28 01:19:02.744688 kubelet[2460]: I0128 01:19:02.743592 2460 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 28 01:19:02.787988 kubelet[2460]: E0128 01:19:02.787696 2460 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.188ec047e987be28 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-28 01:18:38.696381992 +0000 UTC m=+3.485802795,LastTimestamp:2026-01-28 01:18:38.696381992 +0000 UTC m=+3.485802795,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 28 01:19:03.047775 kubelet[2460]: E0128 01:19:03.045492 2460 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jan 28 01:19:03.052791 kubelet[2460]: I0128 01:19:03.050659 2460 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 28 01:19:03.053584 kubelet[2460]: E0128 01:19:03.047704 2460 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.188ec047ead4aeed default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:CgroupV1,Message:cgroup v1 support is in maintenance mode, please migrate to cgroup v2,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-28 01:18:38.718201581 +0000 UTC m=+3.507622415,LastTimestamp:2026-01-28 01:18:38.718201581 +0000 UTC m=+3.507622415,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 28 01:19:03.060352 kubelet[2460]: E0128 01:19:03.058226 2460 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jan 28 01:19:03.060352 kubelet[2460]: I0128 01:19:03.058408 2460 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 28 01:19:03.061896 kubelet[2460]: E0128 01:19:03.061874 2460 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jan 28 01:19:03.287876 kubelet[2460]: I0128 01:19:03.287042 2460 apiserver.go:52] "Watching apiserver" Jan 28 01:19:03.353773 kubelet[2460]: I0128 01:19:03.343217 2460 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 28 01:19:07.990813 kubelet[2460]: E0128 01:19:07.990033 2460 kubelet.go:2573] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.734s" Jan 28 01:19:09.221398 kubelet[2460]: I0128 01:19:09.217041 2460 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 28 01:19:09.221398 kubelet[2460]: I0128 01:19:09.218218 2460 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 28 01:19:09.406955 kubelet[2460]: E0128 01:19:09.402412 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:19:09.422428 kubelet[2460]: E0128 01:19:09.410371 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:19:10.193480 kubelet[2460]: E0128 01:19:10.193182 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver 
line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:19:10.204138 kubelet[2460]: E0128 01:19:10.195163 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:19:15.279803 systemd[1]: Reloading requested from client PID 2749 ('systemctl') (unit session-7.scope)... Jan 28 01:19:15.280988 systemd[1]: Reloading... Jan 28 01:19:15.926209 zram_generator::config[2786]: No configuration found. Jan 28 01:19:16.717093 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 28 01:19:17.428015 systemd[1]: Reloading finished in 2145 ms. Jan 28 01:19:17.607352 kubelet[2460]: I0128 01:19:17.601204 2460 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 28 01:19:17.741458 kubelet[2460]: E0128 01:19:17.725380 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:19:17.854895 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:19:17.926565 systemd[1]: kubelet.service: Deactivated successfully. Jan 28 01:19:17.927423 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:19:17.959600 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:19:19.167425 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:19:19.213422 (kubelet)[2843]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 28 01:19:20.819631 kubelet[2843]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 28 01:19:20.819631 kubelet[2843]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 28 01:19:20.819631 kubelet[2843]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 28 01:19:20.819631 kubelet[2843]: I0128 01:19:20.817954 2843 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 28 01:19:21.068357 kubelet[2843]: I0128 01:19:21.047463 2843 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 28 01:19:21.068357 kubelet[2843]: I0128 01:19:21.047525 2843 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 28 01:19:21.068357 kubelet[2843]: I0128 01:19:21.048455 2843 server.go:954] "Client rotation is on, will bootstrap in background" Jan 28 01:19:21.082640 kubelet[2843]: I0128 01:19:21.076836 2843 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
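On this restart the kubelet finds the client credential it bootstrapped earlier, so it skips the certificate-signing-request path that kept failing above. The kubelet-client-current.pem named in the certificate_store.go entry holds certificate and key in a single PEM file, so it loads as one pair; a short sketch of inspecting it (illustrative only; the kubelet's certificate store handles rotation and the current-symlink update itself):

    package main

    import (
        "crypto/tls"
        "crypto/x509"
        "fmt"
        "os"
    )

    func main() {
        const pemPath = "/var/lib/kubelet/pki/kubelet-client-current.pem" // path from the log
        pair, err := tls.LoadX509KeyPair(pemPath, pemPath) // cert and key share one file
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        leaf, err := x509.ParseCertificate(pair.Certificate[0])
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Printf("client cert subject=%s notAfter=%s\n", leaf.Subject, leaf.NotAfter)
    }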
Jan 28 01:19:21.099351 kubelet[2843]: I0128 01:19:21.096366 2843 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 28 01:19:21.128057 kubelet[2843]: E0128 01:19:21.127936 2843 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 28 01:19:21.128057 kubelet[2843]: I0128 01:19:21.128044 2843 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 28 01:19:21.156596 kubelet[2843]: I0128 01:19:21.154119 2843 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 28 01:19:21.156596 kubelet[2843]: I0128 01:19:21.155374 2843 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 28 01:19:21.156596 kubelet[2843]: I0128 01:19:21.155428 2843 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Jan 28 01:19:21.156596 kubelet[2843]: I0128 01:19:21.155754 2843 topology_manager.go:138] "Creating topology manager with none policy" Jan 28 01:19:21.157433 kubelet[2843]: I0128 01:19:21.155776 2843 container_manager_linux.go:304] "Creating device plugin manager" Jan 28 01:19:21.157433 kubelet[2843]: I0128 01:19:21.155852 2843 state_mem.go:36] "Initialized new in-memory state store" Jan 28 01:19:21.157433 kubelet[2843]: I0128 01:19:21.156056 2843 kubelet.go:446] "Attempting to sync node with API server" Jan 28 01:19:21.157433 kubelet[2843]: I0128 01:19:21.156081 2843 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 28 01:19:21.157433 kubelet[2843]: I0128 01:19:21.156109 2843 kubelet.go:352] "Adding apiserver pod source" Jan 28 01:19:21.157433 kubelet[2843]: I0128 01:19:21.156123 2843 apiserver.go:42] "Waiting for node sync before 
watching apiserver pods" Jan 28 01:19:21.181433 kubelet[2843]: I0128 01:19:21.178493 2843 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 28 01:19:21.181433 kubelet[2843]: I0128 01:19:21.180016 2843 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 28 01:19:21.181433 kubelet[2843]: I0128 01:19:21.181034 2843 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 28 01:19:21.181433 kubelet[2843]: I0128 01:19:21.181072 2843 server.go:1287] "Started kubelet" Jan 28 01:19:21.381906 kubelet[2843]: I0128 01:19:21.374939 2843 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 28 01:19:21.381906 kubelet[2843]: I0128 01:19:21.379092 2843 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 28 01:19:21.412398 kubelet[2843]: I0128 01:19:21.385384 2843 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 28 01:19:21.412398 kubelet[2843]: I0128 01:19:21.391871 2843 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 28 01:19:21.412398 kubelet[2843]: I0128 01:19:21.392807 2843 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 28 01:19:21.412398 kubelet[2843]: I0128 01:19:21.399944 2843 server.go:479] "Adding debug handlers to kubelet server" Jan 28 01:19:21.412398 kubelet[2843]: I0128 01:19:21.402387 2843 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 28 01:19:21.412398 kubelet[2843]: E0128 01:19:21.402621 2843 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:19:21.444648 kubelet[2843]: I0128 01:19:21.426446 2843 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 28 01:19:21.444648 kubelet[2843]: I0128 01:19:21.426833 2843 reconciler.go:26] "Reconciler: start to sync state" Jan 28 01:19:21.444648 kubelet[2843]: I0128 01:19:21.444584 2843 factory.go:221] Registration of the systemd container factory successfully Jan 28 01:19:21.724992 kubelet[2843]: I0128 01:19:21.718383 2843 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 28 01:19:21.976966 sudo[2863]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 28 01:19:21.978115 sudo[2863]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 28 01:19:22.122493 kubelet[2843]: I0128 01:19:22.113859 2843 factory.go:221] Registration of the containerd container factory successfully Jan 28 01:19:22.168782 kubelet[2843]: E0128 01:19:22.153621 2843 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 28 01:19:22.176206 kubelet[2843]: I0128 01:19:22.172481 2843 apiserver.go:52] "Watching apiserver" Jan 28 01:19:22.244434 kubelet[2843]: I0128 01:19:22.241664 2843 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 28 01:19:22.245609 kubelet[2843]: I0128 01:19:22.245586 2843 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 28 01:19:22.245953 kubelet[2843]: I0128 01:19:22.245937 2843 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 28 01:19:22.246598 kubelet[2843]: I0128 01:19:22.246439 2843 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 28 01:19:22.264676 kubelet[2843]: I0128 01:19:22.264585 2843 kubelet.go:2382] "Starting kubelet main sync loop" Jan 28 01:19:22.274453 kubelet[2843]: E0128 01:19:22.273013 2843 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 28 01:19:22.379749 kubelet[2843]: E0128 01:19:22.379458 2843 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 28 01:19:22.580817 kubelet[2843]: E0128 01:19:22.580498 2843 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 28 01:19:22.662117 kubelet[2843]: I0128 01:19:22.659124 2843 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 28 01:19:22.662117 kubelet[2843]: I0128 01:19:22.659151 2843 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 28 01:19:22.662117 kubelet[2843]: I0128 01:19:22.659181 2843 state_mem.go:36] "Initialized new in-memory state store" Jan 28 01:19:22.662117 kubelet[2843]: I0128 01:19:22.659488 2843 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 28 01:19:22.662117 kubelet[2843]: I0128 01:19:22.659508 2843 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 28 01:19:22.662117 kubelet[2843]: I0128 01:19:22.659538 2843 policy_none.go:49] "None policy: Start" Jan 28 01:19:22.662117 kubelet[2843]: I0128 01:19:22.659555 2843 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 28 01:19:22.662117 kubelet[2843]: I0128 01:19:22.659572 2843 state_mem.go:35] "Initializing new in-memory state store" Jan 28 01:19:22.669675 kubelet[2843]: I0128 01:19:22.662956 2843 state_mem.go:75] "Updated machine memory state" Jan 28 01:19:22.677064 kubelet[2843]: I0128 01:19:22.675123 2843 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 28 01:19:22.685612 kubelet[2843]: I0128 01:19:22.683752 2843 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 28 01:19:22.685612 kubelet[2843]: I0128 01:19:22.683788 2843 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 28 01:19:22.685612 kubelet[2843]: I0128 01:19:22.684820 2843 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 28 01:19:22.726945 kubelet[2843]: I0128 01:19:22.700401 2843 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 28 01:19:22.726945 kubelet[2843]: E0128 01:19:22.700657 2843 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 28 01:19:22.726945 kubelet[2843]: I0128 01:19:22.705061 2843 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 28 01:19:22.738441 containerd[1588]: time="2026-01-28T01:19:22.703227389Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jan 28 01:19:22.967736 kubelet[2843]: I0128 01:19:22.967646 2843 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 28 01:19:23.056538 kubelet[2843]: I0128 01:19:23.055981 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0c65f8f8b7f58bf58f90afd1ea6340cd-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"0c65f8f8b7f58bf58f90afd1ea6340cd\") " pod="kube-system/kube-apiserver-localhost" Jan 28 01:19:23.072976 kubelet[2843]: I0128 01:19:23.056188 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0c65f8f8b7f58bf58f90afd1ea6340cd-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"0c65f8f8b7f58bf58f90afd1ea6340cd\") " pod="kube-system/kube-apiserver-localhost" Jan 28 01:19:23.073488 kubelet[2843]: I0128 01:19:23.073183 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0c65f8f8b7f58bf58f90afd1ea6340cd-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"0c65f8f8b7f58bf58f90afd1ea6340cd\") " pod="kube-system/kube-apiserver-localhost" Jan 28 01:19:23.074547 kubelet[2843]: I0128 01:19:23.074376 2843 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 28 01:19:23.102198 kubelet[2843]: I0128 01:19:23.099599 2843 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jan 28 01:19:23.102198 kubelet[2843]: I0128 01:19:23.099851 2843 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 28 01:19:23.178018 kubelet[2843]: I0128 01:19:23.176457 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 28 01:19:23.178018 kubelet[2843]: I0128 01:19:23.176518 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 28 01:19:23.178018 kubelet[2843]: I0128 01:19:23.176578 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 28 01:19:23.178018 kubelet[2843]: I0128 01:19:23.176603 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0b8273f45c576ca70f8db6fe540c065c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0b8273f45c576ca70f8db6fe540c065c\") " pod="kube-system/kube-scheduler-localhost" Jan 28 01:19:23.178018 kubelet[2843]: I0128 01:19:23.176626 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" 
(UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 28 01:19:23.181540 kubelet[2843]: I0128 01:19:23.176648 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 28 01:19:23.266181 kubelet[2843]: I0128 01:19:23.255609 2843 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=14.255557783 podStartE2EDuration="14.255557783s" podCreationTimestamp="2026-01-28 01:19:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 01:19:23.216369319 +0000 UTC m=+3.938943405" watchObservedRunningTime="2026-01-28 01:19:23.255557783 +0000 UTC m=+3.978131859" Jan 28 01:19:23.289819 kubelet[2843]: E0128 01:19:23.289579 2843 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:19:23.295500 kubelet[2843]: E0128 01:19:23.295467 2843 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:19:23.310022 kubelet[2843]: E0128 01:19:23.309987 2843 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:19:23.327782 kubelet[2843]: I0128 01:19:23.327652 2843 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=6.327632735 podStartE2EDuration="6.327632735s" podCreationTimestamp="2026-01-28 01:19:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 01:19:23.327381715 +0000 UTC m=+4.049955791" watchObservedRunningTime="2026-01-28 01:19:23.327632735 +0000 UTC m=+4.050206810" Jan 28 01:19:23.571342 kubelet[2843]: E0128 01:19:23.569389 2843 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:19:23.573050 kubelet[2843]: E0128 01:19:23.570656 2843 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:19:23.697199 kubelet[2843]: I0128 01:19:23.686076 2843 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=14.686055632 podStartE2EDuration="14.686055632s" podCreationTimestamp="2026-01-28 01:19:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 01:19:23.600442841 +0000 UTC m=+4.323016917" watchObservedRunningTime="2026-01-28 01:19:23.686055632 +0000 UTC m=+4.408629708" Jan 28 01:19:23.852211 kubelet[2843]: I0128 
01:19:23.852094 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e979328c-ecda-45a7-ad11-9bd3e128fa5b-xtables-lock\") pod \"kube-proxy-5jphr\" (UID: \"e979328c-ecda-45a7-ad11-9bd3e128fa5b\") " pod="kube-system/kube-proxy-5jphr" Jan 28 01:19:23.852949 kubelet[2843]: I0128 01:19:23.852594 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e979328c-ecda-45a7-ad11-9bd3e128fa5b-kube-proxy\") pod \"kube-proxy-5jphr\" (UID: \"e979328c-ecda-45a7-ad11-9bd3e128fa5b\") " pod="kube-system/kube-proxy-5jphr" Jan 28 01:19:23.854527 kubelet[2843]: I0128 01:19:23.852774 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e979328c-ecda-45a7-ad11-9bd3e128fa5b-lib-modules\") pod \"kube-proxy-5jphr\" (UID: \"e979328c-ecda-45a7-ad11-9bd3e128fa5b\") " pod="kube-system/kube-proxy-5jphr" Jan 28 01:19:23.854527 kubelet[2843]: I0128 01:19:23.854462 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6f27\" (UniqueName: \"kubernetes.io/projected/e979328c-ecda-45a7-ad11-9bd3e128fa5b-kube-api-access-z6f27\") pod \"kube-proxy-5jphr\" (UID: \"e979328c-ecda-45a7-ad11-9bd3e128fa5b\") " pod="kube-system/kube-proxy-5jphr" Jan 28 01:19:24.433457 kubelet[2843]: E0128 01:19:24.423477 2843 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:19:24.622566 containerd[1588]: time="2026-01-28T01:19:24.613593428Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5jphr,Uid:e979328c-ecda-45a7-ad11-9bd3e128fa5b,Namespace:kube-system,Attempt:0,}" Jan 28 01:19:24.651540 kubelet[2843]: E0128 01:19:24.651512 2843 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:19:24.904091 containerd[1588]: time="2026-01-28T01:19:24.900009710Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 01:19:24.904091 containerd[1588]: time="2026-01-28T01:19:24.901380334Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 01:19:24.904091 containerd[1588]: time="2026-01-28T01:19:24.901405080Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:19:24.904091 containerd[1588]: time="2026-01-28T01:19:24.901681587Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:19:25.378485 containerd[1588]: time="2026-01-28T01:19:25.374129889Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5jphr,Uid:e979328c-ecda-45a7-ad11-9bd3e128fa5b,Namespace:kube-system,Attempt:0,} returns sandbox id \"899c562e05b721acd2b7aedba419e77347543eca4941a9738daa7fc965397d8a\"" Jan 28 01:19:25.378762 kubelet[2843]: E0128 01:19:25.375862 2843 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:19:25.403847 containerd[1588]: time="2026-01-28T01:19:25.402384339Z" level=info msg="CreateContainer within sandbox \"899c562e05b721acd2b7aedba419e77347543eca4941a9738daa7fc965397d8a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 28 01:19:25.664588 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2385445684.mount: Deactivated successfully. Jan 28 01:19:25.682506 kubelet[2843]: E0128 01:19:25.682225 2843 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:19:25.758004 containerd[1588]: time="2026-01-28T01:19:25.757533609Z" level=info msg="CreateContainer within sandbox \"899c562e05b721acd2b7aedba419e77347543eca4941a9738daa7fc965397d8a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"9832663d73a9991ef148db5bb3e81a3e6f89c7adf6f16b9d93311bb68bcd43e3\"" Jan 28 01:19:25.816038 containerd[1588]: time="2026-01-28T01:19:25.815995532Z" level=info msg="StartContainer for \"9832663d73a9991ef148db5bb3e81a3e6f89c7adf6f16b9d93311bb68bcd43e3\"" Jan 28 01:19:26.396119 sudo[2863]: pam_unix(sudo:session): session closed for user root Jan 28 01:19:26.409857 containerd[1588]: time="2026-01-28T01:19:26.404397096Z" level=info msg="StartContainer for \"9832663d73a9991ef148db5bb3e81a3e6f89c7adf6f16b9d93311bb68bcd43e3\" returns successfully" Jan 28 01:19:26.761649 kubelet[2843]: E0128 01:19:26.756629 2843 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:19:26.764436 kubelet[2843]: E0128 01:19:26.763020 2843 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:19:27.760696 kubelet[2843]: E0128 01:19:27.760334 2843 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:19:29.194151 kubelet[2843]: I0128 01:19:29.191913 2843 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-5jphr" podStartSLOduration=8.191884285 podStartE2EDuration="8.191884285s" podCreationTimestamp="2026-01-28 01:19:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 01:19:27.002386372 +0000 UTC m=+7.724960478" watchObservedRunningTime="2026-01-28 01:19:29.191884285 +0000 UTC m=+9.914458381" Jan 28 01:19:29.208295 kubelet[2843]: I0128 01:19:29.200085 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/31ca98c6-7a43-4faf-b3a4-959c7403a471-xtables-lock\") pod \"cilium-74rkn\" (UID: \"31ca98c6-7a43-4faf-b3a4-959c7403a471\") " pod="kube-system/cilium-74rkn" Jan 28 01:19:29.208295 kubelet[2843]: I0128 01:19:29.200446 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/31ca98c6-7a43-4faf-b3a4-959c7403a471-cilium-cgroup\") pod \"cilium-74rkn\" (UID: \"31ca98c6-7a43-4faf-b3a4-959c7403a471\") " pod="kube-system/cilium-74rkn" Jan 28 01:19:29.208295 kubelet[2843]: I0128 01:19:29.200483 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/31ca98c6-7a43-4faf-b3a4-959c7403a471-lib-modules\") pod \"cilium-74rkn\" (UID: \"31ca98c6-7a43-4faf-b3a4-959c7403a471\") " pod="kube-system/cilium-74rkn" Jan 28 01:19:29.208295 kubelet[2843]: I0128 01:19:29.200513 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/31ca98c6-7a43-4faf-b3a4-959c7403a471-host-proc-sys-net\") pod \"cilium-74rkn\" (UID: \"31ca98c6-7a43-4faf-b3a4-959c7403a471\") " pod="kube-system/cilium-74rkn" Jan 28 01:19:29.208295 kubelet[2843]: I0128 01:19:29.200536 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/31ca98c6-7a43-4faf-b3a4-959c7403a471-cilium-run\") pod \"cilium-74rkn\" (UID: \"31ca98c6-7a43-4faf-b3a4-959c7403a471\") " pod="kube-system/cilium-74rkn" Jan 28 01:19:29.208295 kubelet[2843]: I0128 01:19:29.200564 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/31ca98c6-7a43-4faf-b3a4-959c7403a471-cni-path\") pod \"cilium-74rkn\" (UID: \"31ca98c6-7a43-4faf-b3a4-959c7403a471\") " pod="kube-system/cilium-74rkn" Jan 28 01:19:29.210349 kubelet[2843]: I0128 01:19:29.200593 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/31ca98c6-7a43-4faf-b3a4-959c7403a471-host-proc-sys-kernel\") pod \"cilium-74rkn\" (UID: \"31ca98c6-7a43-4faf-b3a4-959c7403a471\") " pod="kube-system/cilium-74rkn" Jan 28 01:19:29.210349 kubelet[2843]: I0128 01:19:29.200617 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/31ca98c6-7a43-4faf-b3a4-959c7403a471-bpf-maps\") pod \"cilium-74rkn\" (UID: \"31ca98c6-7a43-4faf-b3a4-959c7403a471\") " pod="kube-system/cilium-74rkn" Jan 28 01:19:29.210349 kubelet[2843]: I0128 01:19:29.200642 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/31ca98c6-7a43-4faf-b3a4-959c7403a471-clustermesh-secrets\") pod \"cilium-74rkn\" (UID: \"31ca98c6-7a43-4faf-b3a4-959c7403a471\") " pod="kube-system/cilium-74rkn" Jan 28 01:19:29.210349 kubelet[2843]: I0128 01:19:29.200665 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wsh7h\" (UniqueName: \"kubernetes.io/projected/31ca98c6-7a43-4faf-b3a4-959c7403a471-kube-api-access-wsh7h\") pod \"cilium-74rkn\" (UID: \"31ca98c6-7a43-4faf-b3a4-959c7403a471\") " pod="kube-system/cilium-74rkn" 
Jan 28 01:19:29.210349 kubelet[2843]: I0128 01:19:29.200878 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/31ca98c6-7a43-4faf-b3a4-959c7403a471-etc-cni-netd\") pod \"cilium-74rkn\" (UID: \"31ca98c6-7a43-4faf-b3a4-959c7403a471\") " pod="kube-system/cilium-74rkn" Jan 28 01:19:29.210578 kubelet[2843]: I0128 01:19:29.201051 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/31ca98c6-7a43-4faf-b3a4-959c7403a471-cilium-config-path\") pod \"cilium-74rkn\" (UID: \"31ca98c6-7a43-4faf-b3a4-959c7403a471\") " pod="kube-system/cilium-74rkn" Jan 28 01:19:29.210578 kubelet[2843]: I0128 01:19:29.201087 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/31ca98c6-7a43-4faf-b3a4-959c7403a471-hubble-tls\") pod \"cilium-74rkn\" (UID: \"31ca98c6-7a43-4faf-b3a4-959c7403a471\") " pod="kube-system/cilium-74rkn" Jan 28 01:19:29.210578 kubelet[2843]: I0128 01:19:29.201114 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/31ca98c6-7a43-4faf-b3a4-959c7403a471-hostproc\") pod \"cilium-74rkn\" (UID: \"31ca98c6-7a43-4faf-b3a4-959c7403a471\") " pod="kube-system/cilium-74rkn" Jan 28 01:19:29.318424 kubelet[2843]: I0128 01:19:29.306176 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxlmf\" (UniqueName: \"kubernetes.io/projected/e781658d-a892-42dd-85d0-ba5cd2e8e187-kube-api-access-mxlmf\") pod \"cilium-operator-6c4d7847fc-dh96j\" (UID: \"e781658d-a892-42dd-85d0-ba5cd2e8e187\") " pod="kube-system/cilium-operator-6c4d7847fc-dh96j" Jan 28 01:19:29.318424 kubelet[2843]: I0128 01:19:29.306387 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e781658d-a892-42dd-85d0-ba5cd2e8e187-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-dh96j\" (UID: \"e781658d-a892-42dd-85d0-ba5cd2e8e187\") " pod="kube-system/cilium-operator-6c4d7847fc-dh96j" Jan 28 01:19:29.537990 kubelet[2843]: E0128 01:19:29.528415 2843 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:19:29.544455 containerd[1588]: time="2026-01-28T01:19:29.543170230Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-74rkn,Uid:31ca98c6-7a43-4faf-b3a4-959c7403a471,Namespace:kube-system,Attempt:0,}" Jan 28 01:19:29.701082 containerd[1588]: time="2026-01-28T01:19:29.699153888Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 01:19:29.701082 containerd[1588]: time="2026-01-28T01:19:29.699477184Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 01:19:29.709786 containerd[1588]: time="2026-01-28T01:19:29.699716081Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:19:29.709972 containerd[1588]: time="2026-01-28T01:19:29.709896383Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:19:29.852198 kubelet[2843]: E0128 01:19:29.851127 2843 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:19:29.858663 containerd[1588]: time="2026-01-28T01:19:29.855474415Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-dh96j,Uid:e781658d-a892-42dd-85d0-ba5cd2e8e187,Namespace:kube-system,Attempt:0,}" Jan 28 01:19:29.974842 containerd[1588]: time="2026-01-28T01:19:29.973716724Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 01:19:29.974842 containerd[1588]: time="2026-01-28T01:19:29.973941205Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 01:19:29.974842 containerd[1588]: time="2026-01-28T01:19:29.973959979Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:19:29.978332 containerd[1588]: time="2026-01-28T01:19:29.975612668Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-74rkn,Uid:31ca98c6-7a43-4faf-b3a4-959c7403a471,Namespace:kube-system,Attempt:0,} returns sandbox id \"2d4e97a8e91088325124a092aeb80171101645f0a2b348dbcf08be46e15fbbbe\"" Jan 28 01:19:29.978528 kubelet[2843]: E0128 01:19:29.977170 2843 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:19:29.980626 containerd[1588]: time="2026-01-28T01:19:29.976604988Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:19:29.984202 containerd[1588]: time="2026-01-28T01:19:29.983793969Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 28 01:19:30.129217 containerd[1588]: time="2026-01-28T01:19:30.128374051Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-dh96j,Uid:e781658d-a892-42dd-85d0-ba5cd2e8e187,Namespace:kube-system,Attempt:0,} returns sandbox id \"367181bb59a0522beeec0f09023354babacbf5e4a56d04f770bbf0a932b5b08b\"" Jan 28 01:19:30.131339 kubelet[2843]: E0128 01:19:30.130225 2843 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:19:32.068100 kubelet[2843]: E0128 01:19:32.067131 2843 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:19:32.486701 kubelet[2843]: E0128 01:19:32.484942 2843 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:19:34.346174 kubelet[2843]: E0128 01:19:34.337645 2843 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:19:35.722712 kubelet[2843]: E0128 01:19:35.722046 2843 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:19:36.701559 systemd-journald[1176]: Under memory pressure, flushing caches. Jan 28 01:19:36.694953 systemd-resolved[1477]: Under memory pressure, flushing caches. Jan 28 01:19:36.695057 systemd-resolved[1477]: Flushed all caches. Jan 28 01:19:42.501423 kubelet[2843]: E0128 01:19:42.501336 2843 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:19:49.010752 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount883115530.mount: Deactivated successfully. 
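Every dns.go:153 error in this journal is the same condition repeating: the host resolv.conf lists more nameservers than the kubelet will propagate to pods, so it truncates the list and logs the line it actually applied. A behavior-level sketch of that truncation, assuming the cap of 3 that kubelet inherits from the classic resolver limit (the function below is illustrative, not kubelet's code):

    MAX_NAMESERVERS = 3  # assumed kubelet cap, mirroring the classic resolver limit

    def applied_nameservers(resolv_conf: str) -> list[str]:
        """Mimic the truncation kubelet reports as "Nameserver limits exceeded"."""
        servers = []
        for line in resolv_conf.splitlines():
            fields = line.split()
            if len(fields) >= 2 and fields[0] == "nameserver":
                servers.append(fields[1])
        return servers[:MAX_NAMESERVERS]

    host_resolv_conf = "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 8.8.4.4\n"
    print(applied_nameservers(host_resolv_conf))
    # ['1.1.1.1', '1.0.0.1', '8.8.8.8']  (the "applied nameserver line" in the errors above)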
Jan 28 01:20:02.164473 containerd[1588]: time="2026-01-28T01:20:02.164368292Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:20:02.174935 containerd[1588]: time="2026-01-28T01:20:02.174789959Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jan 28 01:20:02.180517 containerd[1588]: time="2026-01-28T01:20:02.180411285Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:20:02.190985 containerd[1588]: time="2026-01-28T01:20:02.190885847Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 32.207013022s" Jan 28 01:20:02.190985 containerd[1588]: time="2026-01-28T01:20:02.190937955Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jan 28 01:20:02.202223 containerd[1588]: time="2026-01-28T01:20:02.201876828Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 28 01:20:02.204934 containerd[1588]: time="2026-01-28T01:20:02.203590727Z" level=info msg="CreateContainer within sandbox \"2d4e97a8e91088325124a092aeb80171101645f0a2b348dbcf08be46e15fbbbe\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 28 01:20:02.340836 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount573686111.mount: Deactivated successfully. 
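The "Pulled image" line above carries both the byte count and the wall time, which is enough to estimate effective pull throughput and accounts for the half-minute gap between the sandbox at 01:19:29 and the image event at 01:20:02:

    size_bytes = 166_719_855   # repo-digest size from the "Pulled image" line
    elapsed_s = 32.207013022   # "in 32.207013022s"
    print("%.2f MiB/s" % (size_bytes / elapsed_s / 1_048_576))  # ~4.94 MiB/s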
Jan 28 01:20:02.426097 containerd[1588]: time="2026-01-28T01:20:02.425807419Z" level=info msg="CreateContainer within sandbox \"2d4e97a8e91088325124a092aeb80171101645f0a2b348dbcf08be46e15fbbbe\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e301a7571f70330035699f21fa78fa7970ee1262383eac9e449a453023972a57\"" Jan 28 01:20:02.431332 containerd[1588]: time="2026-01-28T01:20:02.428621545Z" level=info msg="StartContainer for \"e301a7571f70330035699f21fa78fa7970ee1262383eac9e449a453023972a57\"" Jan 28 01:20:02.661194 containerd[1588]: time="2026-01-28T01:20:02.660942776Z" level=info msg="StartContainer for \"e301a7571f70330035699f21fa78fa7970ee1262383eac9e449a453023972a57\" returns successfully" Jan 28 01:20:03.101832 containerd[1588]: time="2026-01-28T01:20:03.101475010Z" level=info msg="shim disconnected" id=e301a7571f70330035699f21fa78fa7970ee1262383eac9e449a453023972a57 namespace=k8s.io Jan 28 01:20:03.101832 containerd[1588]: time="2026-01-28T01:20:03.101695813Z" level=warning msg="cleaning up after shim disconnected" id=e301a7571f70330035699f21fa78fa7970ee1262383eac9e449a453023972a57 namespace=k8s.io Jan 28 01:20:03.101832 containerd[1588]: time="2026-01-28T01:20:03.101710451Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 28 01:20:03.274772 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e301a7571f70330035699f21fa78fa7970ee1262383eac9e449a453023972a57-rootfs.mount: Deactivated successfully. Jan 28 01:20:03.345318 kubelet[2843]: E0128 01:20:03.344890 2843 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:20:03.348315 containerd[1588]: time="2026-01-28T01:20:03.347891866Z" level=info msg="CreateContainer within sandbox \"2d4e97a8e91088325124a092aeb80171101645f0a2b348dbcf08be46e15fbbbe\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 28 01:20:03.430573 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount766886622.mount: Deactivated successfully. Jan 28 01:20:03.462911 containerd[1588]: time="2026-01-28T01:20:03.462410638Z" level=info msg="CreateContainer within sandbox \"2d4e97a8e91088325124a092aeb80171101645f0a2b348dbcf08be46e15fbbbe\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"a6e16ce21099b6cae2b3146892808a5448ed92a7abc76dac5a82712ab75f6830\"" Jan 28 01:20:03.463012 containerd[1588]: time="2026-01-28T01:20:03.462983349Z" level=info msg="StartContainer for \"a6e16ce21099b6cae2b3146892808a5448ed92a7abc76dac5a82712ab75f6830\"" Jan 28 01:20:03.656143 containerd[1588]: time="2026-01-28T01:20:03.651131110Z" level=info msg="StartContainer for \"a6e16ce21099b6cae2b3146892808a5448ed92a7abc76dac5a82712ab75f6830\" returns successfully" Jan 28 01:20:03.701567 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 28 01:20:03.701981 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 28 01:20:03.702124 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 28 01:20:03.719683 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 28 01:20:03.789888 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jan 28 01:20:03.855309 containerd[1588]: time="2026-01-28T01:20:03.852906159Z" level=info msg="shim disconnected" id=a6e16ce21099b6cae2b3146892808a5448ed92a7abc76dac5a82712ab75f6830 namespace=k8s.io Jan 28 01:20:03.855309 containerd[1588]: time="2026-01-28T01:20:03.852993633Z" level=warning msg="cleaning up after shim disconnected" id=a6e16ce21099b6cae2b3146892808a5448ed92a7abc76dac5a82712ab75f6830 namespace=k8s.io Jan 28 01:20:03.855309 containerd[1588]: time="2026-01-28T01:20:03.853011185Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 28 01:20:03.901180 containerd[1588]: time="2026-01-28T01:20:03.900556899Z" level=warning msg="cleanup warnings time=\"2026-01-28T01:20:03Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 28 01:20:04.317210 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a6e16ce21099b6cae2b3146892808a5448ed92a7abc76dac5a82712ab75f6830-rootfs.mount: Deactivated successfully. Jan 28 01:20:04.361392 kubelet[2843]: E0128 01:20:04.360330 2843 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:20:04.365433 containerd[1588]: time="2026-01-28T01:20:04.365322958Z" level=info msg="CreateContainer within sandbox \"2d4e97a8e91088325124a092aeb80171101645f0a2b348dbcf08be46e15fbbbe\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 28 01:20:05.448862 containerd[1588]: time="2026-01-28T01:20:05.445673752Z" level=info msg="CreateContainer within sandbox \"2d4e97a8e91088325124a092aeb80171101645f0a2b348dbcf08be46e15fbbbe\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"e5a128636c526d1b13ff19aa63cbcdb14acfddc349da3975c4d68382640e8149\"" Jan 28 01:20:05.487712 containerd[1588]: time="2026-01-28T01:20:05.484686026Z" level=info msg="StartContainer for \"e5a128636c526d1b13ff19aa63cbcdb14acfddc349da3975c4d68382640e8149\"" Jan 28 01:20:05.674100 systemd[1]: run-containerd-runc-k8s.io-e5a128636c526d1b13ff19aa63cbcdb14acfddc349da3975c4d68382640e8149-runc.MCYc2H.mount: Deactivated successfully. Jan 28 01:20:05.850546 containerd[1588]: time="2026-01-28T01:20:05.837018370Z" level=info msg="StartContainer for \"e5a128636c526d1b13ff19aa63cbcdb14acfddc349da3975c4d68382640e8149\" returns successfully" Jan 28 01:20:06.122772 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e5a128636c526d1b13ff19aa63cbcdb14acfddc349da3975c4d68382640e8149-rootfs.mount: Deactivated successfully. 
Jan 28 01:20:06.167662 containerd[1588]: time="2026-01-28T01:20:06.167531505Z" level=info msg="shim disconnected" id=e5a128636c526d1b13ff19aa63cbcdb14acfddc349da3975c4d68382640e8149 namespace=k8s.io Jan 28 01:20:06.167662 containerd[1588]: time="2026-01-28T01:20:06.167598698Z" level=warning msg="cleaning up after shim disconnected" id=e5a128636c526d1b13ff19aa63cbcdb14acfddc349da3975c4d68382640e8149 namespace=k8s.io Jan 28 01:20:06.167662 containerd[1588]: time="2026-01-28T01:20:06.167612014Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 28 01:20:06.431425 kubelet[2843]: E0128 01:20:06.430984 2843 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:20:06.470201 containerd[1588]: time="2026-01-28T01:20:06.464730179Z" level=info msg="CreateContainer within sandbox \"2d4e97a8e91088325124a092aeb80171101645f0a2b348dbcf08be46e15fbbbe\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 28 01:20:06.559557 containerd[1588]: time="2026-01-28T01:20:06.559505843Z" level=info msg="CreateContainer within sandbox \"2d4e97a8e91088325124a092aeb80171101645f0a2b348dbcf08be46e15fbbbe\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"218fe2a45a1e30d524a0e06b1f1c4c355b5e05cce85a7d392c6908e12777d34a\"" Jan 28 01:20:06.562501 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2121666347.mount: Deactivated successfully. Jan 28 01:20:06.573286 containerd[1588]: time="2026-01-28T01:20:06.568225570Z" level=info msg="StartContainer for \"218fe2a45a1e30d524a0e06b1f1c4c355b5e05cce85a7d392c6908e12777d34a\"" Jan 28 01:20:06.849703 containerd[1588]: time="2026-01-28T01:20:06.832079158Z" level=info msg="StartContainer for \"218fe2a45a1e30d524a0e06b1f1c4c355b5e05cce85a7d392c6908e12777d34a\" returns successfully" Jan 28 01:20:06.931214 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-218fe2a45a1e30d524a0e06b1f1c4c355b5e05cce85a7d392c6908e12777d34a-rootfs.mount: Deactivated successfully. Jan 28 01:20:07.009985 containerd[1588]: time="2026-01-28T01:20:07.008755002Z" level=info msg="shim disconnected" id=218fe2a45a1e30d524a0e06b1f1c4c355b5e05cce85a7d392c6908e12777d34a namespace=k8s.io Jan 28 01:20:07.009985 containerd[1588]: time="2026-01-28T01:20:07.008822658Z" level=warning msg="cleaning up after shim disconnected" id=218fe2a45a1e30d524a0e06b1f1c4c355b5e05cce85a7d392c6908e12777d34a namespace=k8s.io Jan 28 01:20:07.009985 containerd[1588]: time="2026-01-28T01:20:07.008835532Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 28 01:20:07.482186 kubelet[2843]: E0128 01:20:07.480193 2843 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:20:07.493579 containerd[1588]: time="2026-01-28T01:20:07.491102662Z" level=info msg="CreateContainer within sandbox \"2d4e97a8e91088325124a092aeb80171101645f0a2b348dbcf08be46e15fbbbe\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 28 01:20:07.704630 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1892782892.mount: Deactivated successfully. 
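Each cilium init container (mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state) leaves the same trail in this journal: CreateContainer, "StartContainer ... returns successfully", "shim disconnected", then a rootfs unmount. A small sketch that pulls the 64-hex container IDs out of journal text formatted like this excerpt; the patterns match journald's rendering here, with containerd's inner quotes escaped as \", and are illustrative rather than a general parser:

    import re

    STARTED = re.compile(r'StartContainer for \\"([0-9a-f]{64})\\" returns successfully')
    EXITED = re.compile(r'msg="shim disconnected" id=([0-9a-f]{64})')

    sample = ('containerd[1588]: level=info msg="StartContainer for '
              '\\"e301a7571f70330035699f21fa78fa7970ee1262383eac9e449a453023972a57\\"'
              ' returns successfully"')

    for cid in STARTED.findall(sample):
        print("started", cid[:12])   # started e301a7571f70
    for cid in EXITED.findall(sample):
        print("exited", cid[:12])    # (no match in this sample)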
Jan 28 01:20:07.758794 containerd[1588]: time="2026-01-28T01:20:07.755980187Z" level=info msg="CreateContainer within sandbox \"2d4e97a8e91088325124a092aeb80171101645f0a2b348dbcf08be46e15fbbbe\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"9e73c7f42ea67b9d52d135f7f7fc6774bcbc48e5da365013487d60894f00c467\"" Jan 28 01:20:07.780776 containerd[1588]: time="2026-01-28T01:20:07.764503040Z" level=info msg="StartContainer for \"9e73c7f42ea67b9d52d135f7f7fc6774bcbc48e5da365013487d60894f00c467\"" Jan 28 01:20:08.072805 containerd[1588]: time="2026-01-28T01:20:08.070913781Z" level=info msg="StartContainer for \"9e73c7f42ea67b9d52d135f7f7fc6774bcbc48e5da365013487d60894f00c467\" returns successfully" Jan 28 01:20:08.497941 kubelet[2843]: E0128 01:20:08.497839 2843 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:20:08.525650 kubelet[2843]: I0128 01:20:08.525524 2843 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 28 01:20:08.632171 kubelet[2843]: I0128 01:20:08.624220 2843 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-74rkn" podStartSLOduration=7.408702717 podStartE2EDuration="39.624162274s" podCreationTimestamp="2026-01-28 01:19:29 +0000 UTC" firstStartedPulling="2026-01-28 01:19:29.981040612 +0000 UTC m=+10.703614689" lastFinishedPulling="2026-01-28 01:20:02.19650014 +0000 UTC m=+42.919074246" observedRunningTime="2026-01-28 01:20:08.616424763 +0000 UTC m=+49.338998850" watchObservedRunningTime="2026-01-28 01:20:08.624162274 +0000 UTC m=+49.346736360" Jan 28 01:20:08.684978 containerd[1588]: time="2026-01-28T01:20:08.682605317Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:20:08.688852 containerd[1588]: time="2026-01-28T01:20:08.688409823Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jan 28 01:20:08.704373 containerd[1588]: time="2026-01-28T01:20:08.703345592Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:20:08.708198 containerd[1588]: time="2026-01-28T01:20:08.706573632Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 6.504639898s" Jan 28 01:20:08.708198 containerd[1588]: time="2026-01-28T01:20:08.706625147Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jan 28 01:20:08.736332 kubelet[2843]: I0128 01:20:08.733362 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnzcp\" (UniqueName: 
\"kubernetes.io/projected/64dbfbfd-537c-4d33-b050-241e71737951-kube-api-access-jnzcp\") pod \"coredns-668d6bf9bc-q86d9\" (UID: \"64dbfbfd-537c-4d33-b050-241e71737951\") " pod="kube-system/coredns-668d6bf9bc-q86d9" Jan 28 01:20:08.737873 kubelet[2843]: I0128 01:20:08.737431 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0a20d75c-c2d1-44ab-9ab3-72c258e5ca84-config-volume\") pod \"coredns-668d6bf9bc-lz2bs\" (UID: \"0a20d75c-c2d1-44ab-9ab3-72c258e5ca84\") " pod="kube-system/coredns-668d6bf9bc-lz2bs" Jan 28 01:20:08.737873 kubelet[2843]: I0128 01:20:08.737494 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/64dbfbfd-537c-4d33-b050-241e71737951-config-volume\") pod \"coredns-668d6bf9bc-q86d9\" (UID: \"64dbfbfd-537c-4d33-b050-241e71737951\") " pod="kube-system/coredns-668d6bf9bc-q86d9" Jan 28 01:20:08.737873 kubelet[2843]: I0128 01:20:08.737523 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r28zk\" (UniqueName: \"kubernetes.io/projected/0a20d75c-c2d1-44ab-9ab3-72c258e5ca84-kube-api-access-r28zk\") pod \"coredns-668d6bf9bc-lz2bs\" (UID: \"0a20d75c-c2d1-44ab-9ab3-72c258e5ca84\") " pod="kube-system/coredns-668d6bf9bc-lz2bs" Jan 28 01:20:08.818615 containerd[1588]: time="2026-01-28T01:20:08.800819024Z" level=info msg="CreateContainer within sandbox \"367181bb59a0522beeec0f09023354babacbf5e4a56d04f770bbf0a932b5b08b\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 28 01:20:09.409551 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3093631914.mount: Deactivated successfully. 
Jan 28 01:20:09.524613 kubelet[2843]: E0128 01:20:09.521804 2843 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:20:09.528202 containerd[1588]: time="2026-01-28T01:20:09.524067674Z" level=info msg="CreateContainer within sandbox \"367181bb59a0522beeec0f09023354babacbf5e4a56d04f770bbf0a932b5b08b\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"1a30f68ac52751da8d98f2df02b25bd0cac232d8063b7951845b5334effa3247\"" Jan 28 01:20:09.528202 containerd[1588]: time="2026-01-28T01:20:09.525205947Z" level=info msg="StartContainer for \"1a30f68ac52751da8d98f2df02b25bd0cac232d8063b7951845b5334effa3247\"" Jan 28 01:20:09.601072 kubelet[2843]: E0128 01:20:09.595625 2843 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:20:09.676936 kubelet[2843]: E0128 01:20:09.595182 2843 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:20:10.131697 containerd[1588]: time="2026-01-28T01:20:10.083211289Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-q86d9,Uid:64dbfbfd-537c-4d33-b050-241e71737951,Namespace:kube-system,Attempt:0,}" Jan 28 01:20:13.787574 containerd[1588]: time="2026-01-28T01:20:13.783559216Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-lz2bs,Uid:0a20d75c-c2d1-44ab-9ab3-72c258e5ca84,Namespace:kube-system,Attempt:0,}" Jan 28 01:20:13.892377 kubelet[2843]: E0128 01:20:13.888770 2843 kubelet.go:2573] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.516s" Jan 28 01:20:13.983156 kubelet[2843]: E0128 01:20:13.971122 2843 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:20:14.479383 containerd[1588]: time="2026-01-28T01:20:14.477872050Z" level=info msg="StartContainer for \"1a30f68ac52751da8d98f2df02b25bd0cac232d8063b7951845b5334effa3247\" returns successfully" Jan 28 01:20:15.044884 kubelet[2843]: E0128 01:20:15.044374 2843 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:20:16.049286 kubelet[2843]: E0128 01:20:16.048809 2843 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:20:17.270107 kubelet[2843]: E0128 01:20:17.269403 2843 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:20:19.836997 systemd-networkd[1254]: cilium_host: Link UP Jan 28 01:20:19.843749 systemd-networkd[1254]: cilium_net: Link UP Jan 28 01:20:19.852794 systemd-networkd[1254]: cilium_net: Gained carrier Jan 28 01:20:19.853651 systemd-networkd[1254]: cilium_host: Gained carrier Jan 28 01:20:19.854028 systemd-networkd[1254]: cilium_net: Gained IPv6LL Jan 28 01:20:19.854840 systemd-networkd[1254]: cilium_host: Gained IPv6LL Jan 28 01:20:20.749435 systemd-networkd[1254]: cilium_vxlan: Link UP 
Jan 28 01:20:20.749477 systemd-networkd[1254]: cilium_vxlan: Gained carrier Jan 28 01:20:21.621635 kernel: NET: Registered PF_ALG protocol family Jan 28 01:20:21.813560 systemd-networkd[1254]: cilium_vxlan: Gained IPv6LL Jan 28 01:20:25.051653 systemd-networkd[1254]: lxc_health: Link UP Jan 28 01:20:25.114388 systemd-networkd[1254]: lxc_health: Gained carrier Jan 28 01:20:25.564162 kubelet[2843]: E0128 01:20:25.557599 2843 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:20:25.871360 kubelet[2843]: I0128 01:20:25.864038 2843 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-dh96j" podStartSLOduration=18.275994621 podStartE2EDuration="56.864012324s" podCreationTimestamp="2026-01-28 01:19:29 +0000 UTC" firstStartedPulling="2026-01-28 01:19:30.136582854 +0000 UTC m=+10.859156931" lastFinishedPulling="2026-01-28 01:20:08.724600538 +0000 UTC m=+49.447174634" observedRunningTime="2026-01-28 01:20:15.109791085 +0000 UTC m=+55.832365160" watchObservedRunningTime="2026-01-28 01:20:25.864012324 +0000 UTC m=+66.586586400" Jan 28 01:20:26.090584 systemd-networkd[1254]: lxce52e1d098e6a: Link UP Jan 28 01:20:26.111386 kernel: eth0: renamed from tmpf74e7 Jan 28 01:20:26.161476 systemd-networkd[1254]: lxce52e1d098e6a: Gained carrier Jan 28 01:20:26.212292 systemd-networkd[1254]: lxcf29be895570f: Link UP Jan 28 01:20:26.224447 kubelet[2843]: E0128 01:20:26.213824 2843 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:20:26.275136 kernel: eth0: renamed from tmp68ac6 Jan 28 01:20:26.352426 systemd-networkd[1254]: lxcf29be895570f: Gained carrier Jan 28 01:20:26.499395 systemd-networkd[1254]: lxc_health: Gained IPv6LL Jan 28 01:20:26.522064 kubelet[2843]: E0128 01:20:26.521904 2843 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:34850->127.0.0.1:45565: write tcp 127.0.0.1:34850->127.0.0.1:45565: write: connection reset by peer Jan 28 01:20:27.280075 kubelet[2843]: E0128 01:20:27.277756 2843 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:20:27.453693 systemd-networkd[1254]: lxce52e1d098e6a: Gained IPv6LL Jan 28 01:20:27.832621 systemd-networkd[1254]: lxcf29be895570f: Gained IPv6LL Jan 28 01:20:35.471136 sudo[1782]: pam_unix(sudo:session): session closed for user root Jan 28 01:20:35.482579 sshd[1775]: pam_unix(sshd:session): session closed for user core Jan 28 01:20:35.504843 systemd[1]: sshd@6-10.0.0.61:22-10.0.0.1:53052.service: Deactivated successfully. Jan 28 01:20:35.532207 systemd-logind[1562]: Session 7 logged out. Waiting for processes to exit. Jan 28 01:20:35.533664 systemd[1]: session-7.scope: Deactivated successfully. Jan 28 01:20:35.557427 systemd-logind[1562]: Removed session 7. 
Jan 28 01:20:36.292313 kubelet[2843]: E0128 01:20:36.286815 2843 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:20:36.292313 kubelet[2843]: E0128 01:20:36.287828 2843 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:20:39.615338 containerd[1588]: time="2026-01-28T01:20:39.614749752Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 28 01:20:39.615338 containerd[1588]: time="2026-01-28T01:20:39.614830132Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 28 01:20:39.615338 containerd[1588]: time="2026-01-28T01:20:39.614850049Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 28 01:20:39.629393 containerd[1588]: time="2026-01-28T01:20:39.629136737Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 28 01:20:39.681677 containerd[1588]: time="2026-01-28T01:20:39.681446497Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 28 01:20:39.686171 containerd[1588]: time="2026-01-28T01:20:39.681710348Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 28 01:20:39.686171 containerd[1588]: time="2026-01-28T01:20:39.681748869Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 28 01:20:39.690679 containerd[1588]: time="2026-01-28T01:20:39.689655109Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 28 01:20:39.761986 systemd-resolved[1477]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jan 28 01:20:39.848346 systemd-resolved[1477]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jan 28 01:20:39.962186 containerd[1588]: time="2026-01-28T01:20:39.928346764Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-lz2bs,Uid:0a20d75c-c2d1-44ab-9ab3-72c258e5ca84,Namespace:kube-system,Attempt:0,} returns sandbox id \"68ac6f6f336a9090bb4d3ad4e327fd483f080995bf31de29fbd242163b7f85f8\""
Jan 28 01:20:39.990087 kubelet[2843]: E0128 01:20:39.954444 2843 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:20:39.990887 containerd[1588]: time="2026-01-28T01:20:39.989604290Z" level=info msg="CreateContainer within sandbox \"68ac6f6f336a9090bb4d3ad4e327fd483f080995bf31de29fbd242163b7f85f8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 28 01:20:40.104354 containerd[1588]: time="2026-01-28T01:20:40.101556624Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-q86d9,Uid:64dbfbfd-537c-4d33-b050-241e71737951,Namespace:kube-system,Attempt:0,} returns sandbox id \"f74e75d59f38276a48d08f79fe17c1bf32c0fd4e07fa005f690272c3d035dbb2\""
Jan 28 01:20:40.107853 kubelet[2843]: E0128 01:20:40.106129 2843 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:20:40.112161 containerd[1588]: time="2026-01-28T01:20:40.111981122Z" level=info msg="CreateContainer within sandbox \"f74e75d59f38276a48d08f79fe17c1bf32c0fd4e07fa005f690272c3d035dbb2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 28 01:20:40.186076 containerd[1588]: time="2026-01-28T01:20:40.185009116Z" level=info msg="CreateContainer within sandbox \"68ac6f6f336a9090bb4d3ad4e327fd483f080995bf31de29fbd242163b7f85f8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"817634927acb6ab8007335ece9bc7d588b6c0254dfabe9b0a6f9dc8b80e7ad25\""
Jan 28 01:20:40.190827 containerd[1588]: time="2026-01-28T01:20:40.190794203Z" level=info msg="StartContainer for \"817634927acb6ab8007335ece9bc7d588b6c0254dfabe9b0a6f9dc8b80e7ad25\""
Jan 28 01:20:40.228543 containerd[1588]: time="2026-01-28T01:20:40.227731775Z" level=info msg="CreateContainer within sandbox \"f74e75d59f38276a48d08f79fe17c1bf32c0fd4e07fa005f690272c3d035dbb2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"04bd5daee78a952b7d8e52b670b65d36962b8d82220b9a77f7e7099099e19fe5\""
Jan 28 01:20:40.242455 containerd[1588]: time="2026-01-28T01:20:40.233132109Z" level=info msg="StartContainer for \"04bd5daee78a952b7d8e52b670b65d36962b8d82220b9a77f7e7099099e19fe5\""
Jan 28 01:20:40.706420 containerd[1588]: time="2026-01-28T01:20:40.705431244Z" level=info msg="StartContainer for \"04bd5daee78a952b7d8e52b670b65d36962b8d82220b9a77f7e7099099e19fe5\" returns successfully"
Jan 28 01:20:40.714531 containerd[1588]: time="2026-01-28T01:20:40.712867500Z" level=info msg="StartContainer for \"817634927acb6ab8007335ece9bc7d588b6c0254dfabe9b0a6f9dc8b80e7ad25\" returns successfully"
Jan 28 01:20:40.972837 kubelet[2843]: E0128 01:20:40.970964 2843 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:20:40.992385 kubelet[2843]: E0128 01:20:40.987852 2843 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:20:41.132616 kubelet[2843]: I0128 01:20:41.124888 2843 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-q86d9" podStartSLOduration=80.124865311 podStartE2EDuration="1m20.124865311s" podCreationTimestamp="2026-01-28 01:19:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 01:20:41.123887042 +0000 UTC m=+81.846461148" watchObservedRunningTime="2026-01-28 01:20:41.124865311 +0000 UTC m=+81.847439386"
Jan 28 01:20:41.238219 kubelet[2843]: I0128 01:20:41.232842 2843 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-lz2bs" podStartSLOduration=80.232819935 podStartE2EDuration="1m20.232819935s" podCreationTimestamp="2026-01-28 01:19:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 01:20:41.223559029 +0000 UTC m=+81.946133126" watchObservedRunningTime="2026-01-28 01:20:41.232819935 +0000 UTC m=+81.955394011"
Jan 28 01:20:42.048029 kubelet[2843]: E0128 01:20:42.041877 2843 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:20:42.049364 kubelet[2843]: E0128 01:20:42.049339 2843 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:20:43.049043 kubelet[2843]: E0128 01:20:43.046406 2843 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:20:43.071043 kubelet[2843]: E0128 01:20:43.066738 2843 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:20:55.581752 kubelet[2843]: E0128 01:20:55.579898 2843 kubelet.go:2573] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.516s"
Jan 28 01:21:01.209510 kubelet[2843]: E0128 01:21:01.207542 2843 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:21:29.281217 kubelet[2843]: E0128 01:21:29.274982 2843 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:21:38.279972 kubelet[2843]: E0128 01:21:38.279158 2843 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:21:38.298187 kubelet[2843]: E0128 01:21:38.297989 2843 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:21:44.282020 kubelet[2843]: E0128 01:21:44.281068 2843 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:21:51.423342 kubelet[2843]: E0128 01:21:51.414932 2843 kubelet.go:2573] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.047s"
Jan 28 01:21:56.923765 kubelet[2843]: E0128 01:21:56.918098 2843 kubelet.go:2573] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.596s"
Jan 28 01:21:56.932954 kubelet[2843]: E0128 01:21:56.932608 2843 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:21:56.935795 kubelet[2843]: E0128 01:21:56.935755 2843 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:22:11.277503 kubelet[2843]: E0128 01:22:11.273935 2843 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:22:18.286800 kubelet[2843]: E0128 01:22:18.286760 2843 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:22:43.293600 update_engine[1568]: I20260128 01:22:43.282213  1568 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Jan 28 01:22:43.293600 update_engine[1568]: I20260128 01:22:43.288589  1568 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Jan 28 01:22:43.306668 update_engine[1568]: I20260128 01:22:43.297335  1568 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Jan 28 01:22:43.306668 update_engine[1568]: I20260128 01:22:43.303205  1568 omaha_request_params.cc:62] Current group set to lts
Jan 28 01:22:43.321922 update_engine[1568]: I20260128 01:22:43.320503  1568 update_attempter.cc:499] Already updated boot flags. Skipping.
Jan 28 01:22:43.321922 update_engine[1568]: I20260128 01:22:43.320600  1568 update_attempter.cc:643] Scheduling an action processor start.
Jan 28 01:22:43.321922 update_engine[1568]: I20260128 01:22:43.320634  1568 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Jan 28 01:22:43.321922 update_engine[1568]: I20260128 01:22:43.320915  1568 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Jan 28 01:22:43.321922 update_engine[1568]: I20260128 01:22:43.321087  1568 omaha_request_action.cc:271] Posting an Omaha request to disabled
Jan 28 01:22:43.321922 update_engine[1568]: I20260128 01:22:43.321105  1568 omaha_request_action.cc:272] Request:
Jan 28 01:22:43.321922 update_engine[1568]:
Jan 28 01:22:43.321922 update_engine[1568]:
Jan 28 01:22:43.321922 update_engine[1568]:
Jan 28 01:22:43.321922 update_engine[1568]:
Jan 28 01:22:43.321922 update_engine[1568]:
Jan 28 01:22:43.321922 update_engine[1568]:
Jan 28 01:22:43.321922 update_engine[1568]:
Jan 28 01:22:43.321922 update_engine[1568]:
Jan 28 01:22:43.329350 update_engine[1568]: I20260128 01:22:43.326161  1568 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jan 28 01:22:43.355427 locksmithd[1624]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Jan 28 01:22:43.370711 update_engine[1568]: I20260128 01:22:43.370667  1568 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jan 28 01:22:43.375962 update_engine[1568]: I20260128 01:22:43.374567  1568 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jan 28 01:22:43.394197 update_engine[1568]: E20260128 01:22:43.393990  1568 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jan 28 01:22:43.394197 update_engine[1568]: I20260128 01:22:43.394170  1568 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Jan 28 01:22:51.277675 kubelet[2843]: E0128 01:22:51.274846 2843 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:22:53.274550 update_engine[1568]: I20260128 01:22:53.272716  1568 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jan 28 01:22:53.274550 update_engine[1568]: I20260128 01:22:53.273429  1568 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jan 28 01:22:53.274550 update_engine[1568]: I20260128 01:22:53.273849  1568 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jan 28 01:22:53.281186 kubelet[2843]: E0128 01:22:53.280352 2843 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:22:53.318621 update_engine[1568]: E20260128 01:22:53.317694  1568 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jan 28 01:22:53.318621 update_engine[1568]: I20260128 01:22:53.317857  1568 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Jan 28 01:22:56.317339 kubelet[2843]: E0128 01:22:56.311994 2843 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:23:03.284814 update_engine[1568]: I20260128 01:23:03.280560  1568 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jan 28 01:23:03.284814 update_engine[1568]: I20260128 01:23:03.284009  1568 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jan 28 01:23:03.284814 update_engine[1568]: I20260128 01:23:03.284438  1568 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jan 28 01:23:03.319485 update_engine[1568]: E20260128 01:23:03.318559  1568 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jan 28 01:23:03.319485 update_engine[1568]: I20260128 01:23:03.318871  1568 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Jan 28 01:23:05.276315 kubelet[2843]: E0128 01:23:05.275441 2843 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:23:08.291887 kubelet[2843]: E0128 01:23:08.278374 2843 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:23:13.286546 update_engine[1568]: I20260128 01:23:13.282857  1568 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jan 28 01:23:13.300590 update_engine[1568]: I20260128 01:23:13.283641  1568 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jan 28 01:23:13.300590 update_engine[1568]: I20260128 01:23:13.293893  1568 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jan 28 01:23:13.333397 update_engine[1568]: E20260128 01:23:13.325527  1568 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jan 28 01:23:13.333397 update_engine[1568]: I20260128 01:23:13.331625  1568 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Jan 28 01:23:13.333397 update_engine[1568]: I20260128 01:23:13.331764  1568 omaha_request_action.cc:617] Omaha request response:
Jan 28 01:23:13.333397 update_engine[1568]: E20260128 01:23:13.331900  1568 omaha_request_action.cc:636] Omaha request network transfer failed.
Jan 28 01:23:13.352105 update_engine[1568]: I20260128 01:23:13.352041  1568 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Jan 28 01:23:13.353805 update_engine[1568]: I20260128 01:23:13.353772  1568 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Jan 28 01:23:13.353912 update_engine[1568]: I20260128 01:23:13.353886  1568 update_attempter.cc:306] Processing Done.
Jan 28 01:23:13.354097 update_engine[1568]: E20260128 01:23:13.354072  1568 update_attempter.cc:619] Update failed.
Jan 28 01:23:13.354172 update_engine[1568]: I20260128 01:23:13.354151  1568 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Jan 28 01:23:13.354346 update_engine[1568]: I20260128 01:23:13.354220  1568 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Jan 28 01:23:13.354434 update_engine[1568]: I20260128 01:23:13.354411  1568 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Jan 28 01:23:13.354604 update_engine[1568]: I20260128 01:23:13.354581  1568 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Jan 28 01:23:13.356827 update_engine[1568]: I20260128 01:23:13.356792  1568 omaha_request_action.cc:271] Posting an Omaha request to disabled
Jan 28 01:23:13.357202 update_engine[1568]: I20260128 01:23:13.357169  1568 omaha_request_action.cc:272] Request:
Jan 28 01:23:13.357202 update_engine[1568]:
Jan 28 01:23:13.357202 update_engine[1568]:
Jan 28 01:23:13.357202 update_engine[1568]:
Jan 28 01:23:13.357202 update_engine[1568]:
Jan 28 01:23:13.357202 update_engine[1568]:
Jan 28 01:23:13.357202 update_engine[1568]:
Jan 28 01:23:13.357581 update_engine[1568]: I20260128 01:23:13.357551  1568 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jan 28 01:23:13.363762 locksmithd[1624]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Jan 28 01:23:13.368005 update_engine[1568]: I20260128 01:23:13.367472  1568 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jan 28 01:23:13.368005 update_engine[1568]: I20260128 01:23:13.367868  1568 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jan 28 01:23:13.385905 update_engine[1568]: E20260128 01:23:13.385777  1568 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jan 28 01:23:13.386136 update_engine[1568]: I20260128 01:23:13.385937  1568 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Jan 28 01:23:13.386136 update_engine[1568]: I20260128 01:23:13.385954  1568 omaha_request_action.cc:617] Omaha request response:
Jan 28 01:23:13.386136 update_engine[1568]: I20260128 01:23:13.385968  1568 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Jan 28 01:23:13.386136 update_engine[1568]: I20260128 01:23:13.385978  1568 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Jan 28 01:23:13.386136 update_engine[1568]: I20260128 01:23:13.385988  1568 update_attempter.cc:306] Processing Done.
Jan 28 01:23:13.388643 update_engine[1568]: I20260128 01:23:13.385999  1568 update_attempter.cc:310] Error event sent.
Jan 28 01:23:13.389820 update_engine[1568]: I20260128 01:23:13.388633  1568 update_check_scheduler.cc:74] Next update check in 46m49s
Jan 28 01:23:13.391820 locksmithd[1624]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Jan 28 01:23:26.279685 kubelet[2843]: E0128 01:23:26.278035 2843 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:23:28.284899 kubelet[2843]: E0128 01:23:28.281659 2843 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:23:31.280009 kubelet[2843]: E0128 01:23:31.274713 2843 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:23:33.911642 systemd[1]: Started sshd@7-10.0.0.61:22-10.0.0.1:49612.service - OpenSSH per-connection server daemon (10.0.0.1:49612).
Jan 28 01:23:34.039797 sshd[4450]: Accepted publickey for core from 10.0.0.1 port 49612 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc
Jan 28 01:23:34.043626 sshd[4450]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 01:23:34.069605 systemd-logind[1562]: New session 8 of user core.
Jan 28 01:23:34.082367 systemd[1]: Started session-8.scope - Session 8 of User core.
Jan 28 01:23:34.759101 sshd[4450]: pam_unix(sshd:session): session closed for user core
Jan 28 01:23:34.766196 systemd[1]: sshd@7-10.0.0.61:22-10.0.0.1:49612.service: Deactivated successfully.
Jan 28 01:23:34.780963 systemd-logind[1562]: Session 8 logged out. Waiting for processes to exit.
Jan 28 01:23:34.781672 systemd[1]: session-8.scope: Deactivated successfully.
Jan 28 01:23:34.798773 systemd-logind[1562]: Removed session 8.
Jan 28 01:23:39.805388 systemd[1]: Started sshd@8-10.0.0.61:22-10.0.0.1:49626.service - OpenSSH per-connection server daemon (10.0.0.1:49626).
Jan 28 01:23:40.000521 sshd[4468]: Accepted publickey for core from 10.0.0.1 port 49626 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc
Jan 28 01:23:40.014728 sshd[4468]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 01:23:40.050601 systemd-logind[1562]: New session 9 of user core.
Jan 28 01:23:40.061375 systemd[1]: Started session-9.scope - Session 9 of User core.
Jan 28 01:23:40.607609 sshd[4468]: pam_unix(sshd:session): session closed for user core
Jan 28 01:23:40.619782 systemd[1]: sshd@8-10.0.0.61:22-10.0.0.1:49626.service: Deactivated successfully.
Jan 28 01:23:40.639594 systemd[1]: session-9.scope: Deactivated successfully.
Jan 28 01:23:40.640173 systemd-logind[1562]: Session 9 logged out. Waiting for processes to exit.
Jan 28 01:23:40.654192 systemd-logind[1562]: Removed session 9.
Jan 28 01:23:45.651043 systemd[1]: Started sshd@9-10.0.0.61:22-10.0.0.1:60716.service - OpenSSH per-connection server daemon (10.0.0.1:60716).
Jan 28 01:23:45.805730 sshd[4485]: Accepted publickey for core from 10.0.0.1 port 60716 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc
Jan 28 01:23:45.823864 sshd[4485]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 01:23:45.876961 systemd-logind[1562]: New session 10 of user core.
Jan 28 01:23:45.944678 systemd[1]: Started session-10.scope - Session 10 of User core.
Jan 28 01:23:46.708628 sshd[4485]: pam_unix(sshd:session): session closed for user core
Jan 28 01:23:46.725158 systemd[1]: sshd@9-10.0.0.61:22-10.0.0.1:60716.service: Deactivated successfully.
Jan 28 01:23:46.740354 systemd[1]: session-10.scope: Deactivated successfully.
Jan 28 01:23:46.747148 systemd-logind[1562]: Session 10 logged out. Waiting for processes to exit.
Jan 28 01:23:46.749422 systemd-logind[1562]: Removed session 10.
Jan 28 01:23:51.747780 systemd[1]: Started sshd@10-10.0.0.61:22-10.0.0.1:60722.service - OpenSSH per-connection server daemon (10.0.0.1:60722).
Jan 28 01:23:51.876865 sshd[4502]: Accepted publickey for core from 10.0.0.1 port 60722 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc
Jan 28 01:23:51.883056 sshd[4502]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 01:23:51.934475 systemd-logind[1562]: New session 11 of user core.
Jan 28 01:23:51.950723 systemd[1]: Started session-11.scope - Session 11 of User core.
Jan 28 01:23:52.420730 sshd[4502]: pam_unix(sshd:session): session closed for user core
Jan 28 01:23:52.444579 systemd[1]: sshd@10-10.0.0.61:22-10.0.0.1:60722.service: Deactivated successfully.
Jan 28 01:23:52.457895 systemd[1]: session-11.scope: Deactivated successfully.
Jan 28 01:23:52.461332 systemd-logind[1562]: Session 11 logged out. Waiting for processes to exit.
Jan 28 01:23:52.463530 systemd-logind[1562]: Removed session 11.
Jan 28 01:23:57.455461 systemd[1]: Started sshd@11-10.0.0.61:22-10.0.0.1:42094.service - OpenSSH per-connection server daemon (10.0.0.1:42094).
Jan 28 01:23:57.584102 sshd[4521]: Accepted publickey for core from 10.0.0.1 port 42094 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc
Jan 28 01:23:57.599632 sshd[4521]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 01:23:57.623533 systemd-logind[1562]: New session 12 of user core.
Jan 28 01:23:57.636938 systemd[1]: Started session-12.scope - Session 12 of User core.
Jan 28 01:23:58.130808 sshd[4521]: pam_unix(sshd:session): session closed for user core
Jan 28 01:23:58.152188 systemd[1]: sshd@11-10.0.0.61:22-10.0.0.1:42094.service: Deactivated successfully.
Jan 28 01:23:58.162975 systemd[1]: session-12.scope: Deactivated successfully.
Jan 28 01:23:58.180474 systemd-logind[1562]: Session 12 logged out. Waiting for processes to exit.
Jan 28 01:23:58.188925 systemd-logind[1562]: Removed session 12.
Jan 28 01:24:03.152643 systemd[1]: Started sshd@12-10.0.0.61:22-10.0.0.1:33604.service - OpenSSH per-connection server daemon (10.0.0.1:33604).
Jan 28 01:24:03.330164 sshd[4537]: Accepted publickey for core from 10.0.0.1 port 33604 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc
Jan 28 01:24:03.339925 sshd[4537]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 01:24:03.388431 systemd-logind[1562]: New session 13 of user core.
Jan 28 01:24:03.418928 systemd[1]: Started session-13.scope - Session 13 of User core.
Jan 28 01:24:03.789503 sshd[4537]: pam_unix(sshd:session): session closed for user core
Jan 28 01:24:03.796472 systemd[1]: sshd@12-10.0.0.61:22-10.0.0.1:33604.service: Deactivated successfully.
Jan 28 01:24:03.810630 systemd-logind[1562]: Session 13 logged out. Waiting for processes to exit.
Jan 28 01:24:03.819092 systemd[1]: session-13.scope: Deactivated successfully.
Jan 28 01:24:03.821375 systemd-logind[1562]: Removed session 13.
Jan 28 01:24:08.825962 systemd[1]: Started sshd@13-10.0.0.61:22-10.0.0.1:33620.service - OpenSSH per-connection server daemon (10.0.0.1:33620).
Jan 28 01:24:08.915897 sshd[4553]: Accepted publickey for core from 10.0.0.1 port 33620 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc
Jan 28 01:24:08.920965 sshd[4553]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 01:24:08.952327 systemd-logind[1562]: New session 14 of user core.
Jan 28 01:24:08.971047 systemd[1]: Started session-14.scope - Session 14 of User core.
Jan 28 01:24:09.569816 sshd[4553]: pam_unix(sshd:session): session closed for user core
Jan 28 01:24:09.594665 systemd[1]: sshd@13-10.0.0.61:22-10.0.0.1:33620.service: Deactivated successfully.
Jan 28 01:24:09.604415 systemd-logind[1562]: Session 14 logged out. Waiting for processes to exit.
Jan 28 01:24:09.605548 systemd[1]: session-14.scope: Deactivated successfully.
Jan 28 01:24:09.614669 systemd-logind[1562]: Removed session 14.
Jan 28 01:24:14.693921 systemd[1]: Started sshd@14-10.0.0.61:22-10.0.0.1:59180.service - OpenSSH per-connection server daemon (10.0.0.1:59180).
Jan 28 01:24:15.753905 sshd[4569]: Accepted publickey for core from 10.0.0.1 port 59180 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc
Jan 28 01:24:15.793060 sshd[4569]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 01:24:15.829321 systemd-logind[1562]: New session 15 of user core.
Jan 28 01:24:15.871458 systemd[1]: Started session-15.scope - Session 15 of User core.
Jan 28 01:24:16.276686 kubelet[2843]: E0128 01:24:16.276637 2843 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:24:16.287526 kubelet[2843]: E0128 01:24:16.277822 2843 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:24:16.813913 sshd[4569]: pam_unix(sshd:session): session closed for user core
Jan 28 01:24:16.850702 systemd[1]: sshd@14-10.0.0.61:22-10.0.0.1:59180.service: Deactivated successfully.
Jan 28 01:24:16.887541 systemd[1]: session-15.scope: Deactivated successfully.
Jan 28 01:24:16.889530 systemd-logind[1562]: Session 15 logged out. Waiting for processes to exit.
Jan 28 01:24:16.904430 systemd-logind[1562]: Removed session 15.
Jan 28 01:24:19.277408 kubelet[2843]: E0128 01:24:19.275796 2843 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:24:21.835326 systemd[1]: Started sshd@15-10.0.0.61:22-10.0.0.1:59186.service - OpenSSH per-connection server daemon (10.0.0.1:59186).
Jan 28 01:24:22.100052 sshd[4586]: Accepted publickey for core from 10.0.0.1 port 59186 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc
Jan 28 01:24:22.110451 sshd[4586]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 01:24:22.142586 systemd-logind[1562]: New session 16 of user core.
Jan 28 01:24:22.159418 systemd[1]: Started session-16.scope - Session 16 of User core.
Jan 28 01:24:22.610066 sshd[4586]: pam_unix(sshd:session): session closed for user core
Jan 28 01:24:22.627741 systemd[1]: sshd@15-10.0.0.61:22-10.0.0.1:59186.service: Deactivated successfully.
Jan 28 01:24:22.662969 systemd[1]: session-16.scope: Deactivated successfully.
Jan 28 01:24:22.666073 systemd-logind[1562]: Session 16 logged out. Waiting for processes to exit.
Jan 28 01:24:22.682453 systemd-logind[1562]: Removed session 16.
Jan 28 01:24:23.277469 kubelet[2843]: E0128 01:24:23.275701 2843 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:24:24.318206 kubelet[2843]: E0128 01:24:24.305119 2843 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:24:27.282686 kubelet[2843]: E0128 01:24:27.275603 2843 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:24:27.678800 systemd[1]: Started sshd@16-10.0.0.61:22-10.0.0.1:44530.service - OpenSSH per-connection server daemon (10.0.0.1:44530).
Jan 28 01:24:27.905850 sshd[4607]: Accepted publickey for core from 10.0.0.1 port 44530 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc
Jan 28 01:24:27.923391 sshd[4607]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 01:24:27.975185 systemd-logind[1562]: New session 17 of user core.
Jan 28 01:24:27.999973 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 28 01:24:28.612616 sshd[4607]: pam_unix(sshd:session): session closed for user core
Jan 28 01:24:28.624063 systemd[1]: sshd@16-10.0.0.61:22-10.0.0.1:44530.service: Deactivated successfully.
Jan 28 01:24:28.645972 systemd[1]: session-17.scope: Deactivated successfully.
Jan 28 01:24:28.662692 systemd-logind[1562]: Session 17 logged out. Waiting for processes to exit.
Jan 28 01:24:28.672854 systemd-logind[1562]: Removed session 17.
Jan 28 01:24:33.280722 kubelet[2843]: E0128 01:24:33.275126 2843 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:24:33.767572 systemd[1]: Started sshd@17-10.0.0.61:22-10.0.0.1:47276.service - OpenSSH per-connection server daemon (10.0.0.1:47276).
Jan 28 01:24:34.081616 sshd[4623]: Accepted publickey for core from 10.0.0.1 port 47276 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc
Jan 28 01:24:34.088745 sshd[4623]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 01:24:34.154886 systemd-logind[1562]: New session 18 of user core.
Jan 28 01:24:34.199542 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 28 01:24:34.868053 sshd[4623]: pam_unix(sshd:session): session closed for user core
Jan 28 01:24:34.892580 systemd[1]: sshd@17-10.0.0.61:22-10.0.0.1:47276.service: Deactivated successfully.
Jan 28 01:24:34.895978 systemd[1]: session-18.scope: Deactivated successfully.
Jan 28 01:24:34.902607 systemd-logind[1562]: Session 18 logged out. Waiting for processes to exit.
Jan 28 01:24:34.908105 systemd-logind[1562]: Removed session 18.
Jan 28 01:24:39.918762 systemd[1]: Started sshd@18-10.0.0.61:22-10.0.0.1:47288.service - OpenSSH per-connection server daemon (10.0.0.1:47288).
Jan 28 01:24:40.062964 sshd[4639]: Accepted publickey for core from 10.0.0.1 port 47288 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc
Jan 28 01:24:40.061225 sshd[4639]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 01:24:40.094739 systemd-logind[1562]: New session 19 of user core.
Jan 28 01:24:40.108050 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 28 01:24:40.711570 sshd[4639]: pam_unix(sshd:session): session closed for user core
Jan 28 01:24:40.756019 systemd[1]: sshd@18-10.0.0.61:22-10.0.0.1:47288.service: Deactivated successfully.
Jan 28 01:24:40.865028 systemd[1]: session-19.scope: Deactivated successfully.
Jan 28 01:24:40.866151 systemd-logind[1562]: Session 19 logged out. Waiting for processes to exit.
Jan 28 01:24:40.882042 systemd-logind[1562]: Removed session 19.
Jan 28 01:24:45.755935 systemd[1]: Started sshd@19-10.0.0.61:22-10.0.0.1:33224.service - OpenSSH per-connection server daemon (10.0.0.1:33224).
Jan 28 01:24:46.029718 sshd[4655]: Accepted publickey for core from 10.0.0.1 port 33224 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc
Jan 28 01:24:46.041192 sshd[4655]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 01:24:46.091340 systemd-logind[1562]: New session 20 of user core.
Jan 28 01:24:46.110135 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 28 01:24:46.805578 sshd[4655]: pam_unix(sshd:session): session closed for user core
Jan 28 01:24:46.831367 systemd[1]: sshd@19-10.0.0.61:22-10.0.0.1:33224.service: Deactivated successfully.
Jan 28 01:24:46.857115 systemd-logind[1562]: Session 20 logged out. Waiting for processes to exit.
Jan 28 01:24:46.857364 systemd[1]: session-20.scope: Deactivated successfully.
Jan 28 01:24:46.867827 systemd-logind[1562]: Removed session 20.
Jan 28 01:24:51.853769 systemd[1]: Started sshd@20-10.0.0.61:22-10.0.0.1:33240.service - OpenSSH per-connection server daemon (10.0.0.1:33240).
Jan 28 01:24:52.187161 sshd[4677]: Accepted publickey for core from 10.0.0.1 port 33240 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc
Jan 28 01:24:52.182168 sshd[4677]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 01:24:52.244442 systemd-logind[1562]: New session 21 of user core.
Jan 28 01:24:52.289987 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 28 01:24:53.128169 sshd[4677]: pam_unix(sshd:session): session closed for user core
Jan 28 01:24:53.173631 systemd[1]: sshd@20-10.0.0.61:22-10.0.0.1:33240.service: Deactivated successfully.
Jan 28 01:24:53.202362 systemd[1]: session-21.scope: Deactivated successfully.
Jan 28 01:24:53.206326 systemd-logind[1562]: Session 21 logged out. Waiting for processes to exit.
Jan 28 01:24:53.234907 systemd-logind[1562]: Removed session 21.
Jan 28 01:24:56.276781 kubelet[2843]: E0128 01:24:56.273980 2843 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:24:58.174185 systemd[1]: Started sshd@21-10.0.0.61:22-10.0.0.1:57606.service - OpenSSH per-connection server daemon (10.0.0.1:57606).
Jan 28 01:24:58.458430 sshd[4695]: Accepted publickey for core from 10.0.0.1 port 57606 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc
Jan 28 01:24:58.481873 sshd[4695]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 01:24:58.517855 systemd-logind[1562]: New session 22 of user core.
Jan 28 01:24:58.536835 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 28 01:24:59.074960 sshd[4695]: pam_unix(sshd:session): session closed for user core
Jan 28 01:24:59.086002 systemd[1]: sshd@21-10.0.0.61:22-10.0.0.1:57606.service: Deactivated successfully.
Jan 28 01:24:59.092021 systemd-logind[1562]: Session 22 logged out. Waiting for processes to exit.
Jan 28 01:24:59.093360 systemd[1]: session-22.scope: Deactivated successfully.
Jan 28 01:24:59.101082 systemd-logind[1562]: Removed session 22.
Jan 28 01:25:04.099785 systemd[1]: Started sshd@22-10.0.0.61:22-10.0.0.1:40012.service - OpenSSH per-connection server daemon (10.0.0.1:40012).
Jan 28 01:25:04.232625 sshd[4711]: Accepted publickey for core from 10.0.0.1 port 40012 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc
Jan 28 01:25:04.252128 sshd[4711]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 01:25:04.326225 systemd-logind[1562]: New session 23 of user core.
Jan 28 01:25:04.353433 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 28 01:25:04.983814 sshd[4711]: pam_unix(sshd:session): session closed for user core
Jan 28 01:25:05.044891 systemd[1]: sshd@22-10.0.0.61:22-10.0.0.1:40012.service: Deactivated successfully.
Jan 28 01:25:05.091689 systemd[1]: session-23.scope: Deactivated successfully.
Jan 28 01:25:05.092806 systemd-logind[1562]: Session 23 logged out. Waiting for processes to exit.
Jan 28 01:25:05.106675 systemd-logind[1562]: Removed session 23.
Jan 28 01:25:10.008026 systemd[1]: Started sshd@23-10.0.0.61:22-10.0.0.1:40028.service - OpenSSH per-connection server daemon (10.0.0.1:40028).
Jan 28 01:25:10.183851 sshd[4727]: Accepted publickey for core from 10.0.0.1 port 40028 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc
Jan 28 01:25:10.193092 sshd[4727]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 01:25:10.235888 systemd-logind[1562]: New session 24 of user core.
Jan 28 01:25:10.258655 systemd[1]: Started session-24.scope - Session 24 of User core.
Jan 28 01:25:10.897572 sshd[4727]: pam_unix(sshd:session): session closed for user core
Jan 28 01:25:10.933172 systemd[1]: sshd@23-10.0.0.61:22-10.0.0.1:40028.service: Deactivated successfully.
Jan 28 01:25:10.956471 systemd-logind[1562]: Session 24 logged out. Waiting for processes to exit.
Jan 28 01:25:10.969837 systemd[1]: session-24.scope: Deactivated successfully.
Jan 28 01:25:11.229340 systemd-logind[1562]: Removed session 24.
Jan 28 01:25:15.948528 systemd[1]: Started sshd@24-10.0.0.61:22-10.0.0.1:54046.service - OpenSSH per-connection server daemon (10.0.0.1:54046).
Jan 28 01:25:16.190548 sshd[4743]: Accepted publickey for core from 10.0.0.1 port 54046 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc
Jan 28 01:25:16.199838 sshd[4743]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 01:25:16.227104 systemd-logind[1562]: New session 25 of user core.
Jan 28 01:25:16.272630 systemd[1]: Started session-25.scope - Session 25 of User core.
Jan 28 01:25:17.183420 sshd[4743]: pam_unix(sshd:session): session closed for user core
Jan 28 01:25:17.207458 systemd[1]: sshd@24-10.0.0.61:22-10.0.0.1:54046.service: Deactivated successfully.
Jan 28 01:25:17.265553 systemd[1]: session-25.scope: Deactivated successfully.
Jan 28 01:25:17.279216 systemd-logind[1562]: Session 25 logged out. Waiting for processes to exit.
Jan 28 01:25:17.293086 systemd-logind[1562]: Removed session 25.
Jan 28 01:25:22.231438 systemd[1]: Started sshd@25-10.0.0.61:22-10.0.0.1:54062.service - OpenSSH per-connection server daemon (10.0.0.1:54062).
Jan 28 01:25:22.279198 kubelet[2843]: E0128 01:25:22.278494 2843 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:25:22.468607 sshd[4760]: Accepted publickey for core from 10.0.0.1 port 54062 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc
Jan 28 01:25:22.478492 sshd[4760]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 01:25:22.527825 systemd-logind[1562]: New session 26 of user core.
Jan 28 01:25:22.548178 systemd[1]: Started session-26.scope - Session 26 of User core.
Jan 28 01:25:23.360178 sshd[4760]: pam_unix(sshd:session): session closed for user core
Jan 28 01:25:23.378616 systemd[1]: sshd@25-10.0.0.61:22-10.0.0.1:54062.service: Deactivated successfully.
Jan 28 01:25:23.392206 systemd-logind[1562]: Session 26 logged out. Waiting for processes to exit.
Jan 28 01:25:23.398439 systemd[1]: session-26.scope: Deactivated successfully.
Jan 28 01:25:23.426784 systemd-logind[1562]: Removed session 26.
Jan 28 01:25:27.296386 kubelet[2843]: E0128 01:25:27.290060 2843 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:25:28.388026 systemd[1]: Started sshd@26-10.0.0.61:22-10.0.0.1:58858.service - OpenSSH per-connection server daemon (10.0.0.1:58858).
Jan 28 01:25:28.543502 sshd[4780]: Accepted publickey for core from 10.0.0.1 port 58858 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc
Jan 28 01:25:28.551813 sshd[4780]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 01:25:28.599198 systemd-logind[1562]: New session 27 of user core.
Jan 28 01:25:28.630848 systemd[1]: Started session-27.scope - Session 27 of User core.
Jan 28 01:25:29.265944 sshd[4780]: pam_unix(sshd:session): session closed for user core
Jan 28 01:25:29.280970 systemd[1]: sshd@26-10.0.0.61:22-10.0.0.1:58858.service: Deactivated successfully.
Jan 28 01:25:29.289969 systemd[1]: session-27.scope: Deactivated successfully.
Jan 28 01:25:29.299955 systemd-logind[1562]: Session 27 logged out. Waiting for processes to exit.
Jan 28 01:25:29.310880 systemd-logind[1562]: Removed session 27.
Jan 28 01:25:34.279017 kubelet[2843]: E0128 01:25:34.278147 2843 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:25:34.322460 systemd[1]: Started sshd@27-10.0.0.61:22-10.0.0.1:45292.service - OpenSSH per-connection server daemon (10.0.0.1:45292).
Jan 28 01:25:34.447850 sshd[4797]: Accepted publickey for core from 10.0.0.1 port 45292 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc
Jan 28 01:25:34.450948 sshd[4797]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 01:25:34.488103 systemd-logind[1562]: New session 28 of user core.
Jan 28 01:25:34.495202 systemd[1]: Started session-28.scope - Session 28 of User core.
Jan 28 01:25:35.188118 sshd[4797]: pam_unix(sshd:session): session closed for user core
Jan 28 01:25:35.207483 systemd[1]: sshd@27-10.0.0.61:22-10.0.0.1:45292.service: Deactivated successfully.
Jan 28 01:25:35.234786 systemd-logind[1562]: Session 28 logged out. Waiting for processes to exit.
Jan 28 01:25:35.260359 systemd[1]: session-28.scope: Deactivated successfully.
Jan 28 01:25:35.267434 systemd-logind[1562]: Removed session 28.
Jan 28 01:25:38.282923 kubelet[2843]: E0128 01:25:38.274703 2843 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:25:40.702919 systemd[1]: Started sshd@28-10.0.0.61:22-10.0.0.1:45298.service - OpenSSH per-connection server daemon (10.0.0.1:45298).
Jan 28 01:25:41.100200 sshd[4814]: Accepted publickey for core from 10.0.0.1 port 45298 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc
Jan 28 01:25:41.125543 sshd[4814]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 01:25:41.402376 systemd-logind[1562]: New session 29 of user core.
Jan 28 01:25:41.427699 systemd[1]: Started session-29.scope - Session 29 of User core.
Jan 28 01:25:43.884149 sshd[4814]: pam_unix(sshd:session): session closed for user core
Jan 28 01:25:44.350194 systemd[1]: sshd@28-10.0.0.61:22-10.0.0.1:45298.service: Deactivated successfully.
Jan 28 01:25:44.374123 systemd-logind[1562]: Session 29 logged out. Waiting for processes to exit.
Jan 28 01:25:44.387739 systemd[1]: session-29.scope: Deactivated successfully.
Jan 28 01:25:44.390723 systemd-logind[1562]: Removed session 29.
Jan 28 01:25:48.300354 kubelet[2843]: E0128 01:25:48.300060 2843 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:25:48.311114 kubelet[2843]: E0128 01:25:48.302648 2843 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:25:48.960623 systemd[1]: Started sshd@29-10.0.0.61:22-10.0.0.1:41278.service - OpenSSH per-connection server daemon (10.0.0.1:41278).
Jan 28 01:25:49.203364 sshd[4831]: Accepted publickey for core from 10.0.0.1 port 41278 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc
Jan 28 01:25:49.205861 sshd[4831]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 01:25:49.235200 systemd-logind[1562]: New session 30 of user core.
Jan 28 01:25:49.248751 systemd[1]: Started session-30.scope - Session 30 of User core.
Jan 28 01:25:49.835572 sshd[4831]: pam_unix(sshd:session): session closed for user core
Jan 28 01:25:49.866086 systemd[1]: sshd@29-10.0.0.61:22-10.0.0.1:41278.service: Deactivated successfully.
Jan 28 01:25:49.885377 systemd-logind[1562]: Session 30 logged out. Waiting for processes to exit.
Jan 28 01:25:49.886142 systemd[1]: session-30.scope: Deactivated successfully.
Jan 28 01:25:49.888087 systemd-logind[1562]: Removed session 30.
Jan 28 01:25:55.904889 systemd[1]: Started sshd@30-10.0.0.61:22-10.0.0.1:34660.service - OpenSSH per-connection server daemon (10.0.0.1:34660).
Jan 28 01:25:56.090463 sshd[4847]: Accepted publickey for core from 10.0.0.1 port 34660 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc
Jan 28 01:25:56.097585 sshd[4847]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 01:25:57.613529 systemd-logind[1562]: New session 31 of user core.
Jan 28 01:25:57.900674 systemd[1]: Started session-31.scope - Session 31 of User core.
Jan 28 01:25:58.015604 kubelet[2843]: E0128 01:25:58.015373 2843 kubelet.go:2573] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.449s"
Jan 28 01:26:01.567442 kubelet[2843]: E0128 01:26:01.567197 2843 kubelet.go:2573] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.973s"
Jan 28 01:26:01.992202 sshd[4847]: pam_unix(sshd:session): session closed for user core
Jan 28 01:26:02.005022 systemd-logind[1562]: Session 31 logged out. Waiting for processes to exit.
Jan 28 01:26:02.020086 systemd[1]: sshd@30-10.0.0.61:22-10.0.0.1:34660.service: Deactivated successfully.
Jan 28 01:26:02.047713 systemd[1]: session-31.scope: Deactivated successfully.
Jan 28 01:26:02.059346 systemd-logind[1562]: Removed session 31.
Jan 28 01:26:03.278978 kubelet[2843]: E0128 01:26:03.278176 2843 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:26:07.087782 systemd[1]: Started sshd@31-10.0.0.61:22-10.0.0.1:47728.service - OpenSSH per-connection server daemon (10.0.0.1:47728).
Jan 28 01:26:07.303647 sshd[4865]: Accepted publickey for core from 10.0.0.1 port 47728 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc
Jan 28 01:26:07.321831 sshd[4865]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 01:26:07.368542 systemd-logind[1562]: New session 32 of user core.
Jan 28 01:26:07.401940 systemd[1]: Started session-32.scope - Session 32 of User core.
Jan 28 01:26:08.264376 sshd[4865]: pam_unix(sshd:session): session closed for user core
Jan 28 01:26:08.294176 systemd[1]: sshd@31-10.0.0.61:22-10.0.0.1:47728.service: Deactivated successfully.
Jan 28 01:26:08.317708 systemd-logind[1562]: Session 32 logged out. Waiting for processes to exit.
Jan 28 01:26:08.319552 systemd[1]: session-32.scope: Deactivated successfully.
Jan 28 01:26:08.324496 systemd-logind[1562]: Removed session 32.
Jan 28 01:26:09.277041 kubelet[2843]: E0128 01:26:09.275598 2843 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:26:13.311725 systemd[1]: Started sshd@32-10.0.0.61:22-10.0.0.1:46772.service - OpenSSH per-connection server daemon (10.0.0.1:46772).
Jan 28 01:26:13.459488 sshd[4882]: Accepted publickey for core from 10.0.0.1 port 46772 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc
Jan 28 01:26:13.468805 sshd[4882]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 01:26:13.521910 systemd-logind[1562]: New session 33 of user core.
Jan 28 01:26:13.562203 systemd[1]: Started session-33.scope - Session 33 of User core.
Jan 28 01:26:14.234644 sshd[4882]: pam_unix(sshd:session): session closed for user core
Jan 28 01:26:14.266927 systemd[1]: sshd@32-10.0.0.61:22-10.0.0.1:46772.service: Deactivated successfully.
Jan 28 01:26:14.286897 systemd-logind[1562]: Session 33 logged out. Waiting for processes to exit.
Jan 28 01:26:14.289567 systemd[1]: session-33.scope: Deactivated successfully.
Jan 28 01:26:14.294535 systemd-logind[1562]: Removed session 33.
Jan 28 01:26:19.268749 systemd[1]: Started sshd@33-10.0.0.61:22-10.0.0.1:46784.service - OpenSSH per-connection server daemon (10.0.0.1:46784).
Jan 28 01:26:19.512449 sshd[4900]: Accepted publickey for core from 10.0.0.1 port 46784 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc
Jan 28 01:26:19.516111 sshd[4900]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 01:26:19.568053 systemd-logind[1562]: New session 34 of user core.
Jan 28 01:26:19.597625 systemd[1]: Started session-34.scope - Session 34 of User core.
Jan 28 01:26:20.400667 sshd[4900]: pam_unix(sshd:session): session closed for user core
Jan 28 01:26:20.416012 systemd[1]: sshd@33-10.0.0.61:22-10.0.0.1:46784.service: Deactivated successfully.
Jan 28 01:26:20.452599 systemd[1]: session-34.scope: Deactivated successfully.
Jan 28 01:26:20.458788 systemd-logind[1562]: Session 34 logged out. Waiting for processes to exit.
Jan 28 01:26:20.476839 systemd-logind[1562]: Removed session 34.
Jan 28 01:26:25.450041 systemd[1]: Started sshd@34-10.0.0.61:22-10.0.0.1:41130.service - OpenSSH per-connection server daemon (10.0.0.1:41130).
Jan 28 01:26:25.590995 sshd[4918]: Accepted publickey for core from 10.0.0.1 port 41130 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc
Jan 28 01:26:25.597047 sshd[4918]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 01:26:25.628856 systemd-logind[1562]: New session 35 of user core.
Jan 28 01:26:25.663153 systemd[1]: Started session-35.scope - Session 35 of User core.
Jan 28 01:26:26.390869 sshd[4918]: pam_unix(sshd:session): session closed for user core
Jan 28 01:26:26.426919 systemd[1]: sshd@34-10.0.0.61:22-10.0.0.1:41130.service: Deactivated successfully.
Jan 28 01:26:26.429990 systemd[1]: session-35.scope: Deactivated successfully.
Jan 28 01:26:26.460477 systemd-logind[1562]: Session 35 logged out. Waiting for processes to exit.
Jan 28 01:26:26.475081 systemd-logind[1562]: Removed session 35.
Jan 28 01:26:29.277766 kubelet[2843]: E0128 01:26:29.276908 2843 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:26:31.458829 systemd[1]: Started sshd@35-10.0.0.61:22-10.0.0.1:41132.service - OpenSSH per-connection server daemon (10.0.0.1:41132).
Jan 28 01:26:31.622905 sshd[4937]: Accepted publickey for core from 10.0.0.1 port 41132 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc
Jan 28 01:26:31.628704 sshd[4937]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 01:26:31.692844 systemd-logind[1562]: New session 36 of user core.
Jan 28 01:26:31.728596 systemd[1]: Started session-36.scope - Session 36 of User core.
Jan 28 01:26:32.284860 kubelet[2843]: E0128 01:26:32.280941 2843 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:26:32.828703 sshd[4937]: pam_unix(sshd:session): session closed for user core
Jan 28 01:26:32.873102 systemd[1]: sshd@35-10.0.0.61:22-10.0.0.1:41132.service: Deactivated successfully.
Jan 28 01:26:32.908852 systemd-logind[1562]: Session 36 logged out. Waiting for processes to exit.
Jan 28 01:26:32.920192 systemd[1]: session-36.scope: Deactivated successfully.
Jan 28 01:26:32.983184 systemd-logind[1562]: Removed session 36.
Jan 28 01:26:37.835913 systemd[1]: Started sshd@36-10.0.0.61:22-10.0.0.1:53342.service - OpenSSH per-connection server daemon (10.0.0.1:53342).
Jan 28 01:26:37.908696 sshd[4955]: Accepted publickey for core from 10.0.0.1 port 53342 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc
Jan 28 01:26:37.924901 sshd[4955]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 01:26:37.973638 systemd-logind[1562]: New session 37 of user core.
Jan 28 01:26:37.988816 systemd[1]: Started session-37.scope - Session 37 of User core.
Jan 28 01:26:38.798627 sshd[4955]: pam_unix(sshd:session): session closed for user core
Jan 28 01:26:38.813870 systemd[1]: sshd@36-10.0.0.61:22-10.0.0.1:53342.service: Deactivated successfully.
Jan 28 01:26:38.824033 systemd[1]: session-37.scope: Deactivated successfully.
Jan 28 01:26:38.833075 systemd-logind[1562]: Session 37 logged out. Waiting for processes to exit.
Jan 28 01:26:38.851898 systemd-logind[1562]: Removed session 37.
Jan 28 01:26:40.277209 kubelet[2843]: E0128 01:26:40.273831 2843 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:26:43.839213 systemd[1]: Started sshd@37-10.0.0.61:22-10.0.0.1:59452.service - OpenSSH per-connection server daemon (10.0.0.1:59452).
Jan 28 01:26:44.039681 sshd[4972]: Accepted publickey for core from 10.0.0.1 port 59452 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc
Jan 28 01:26:44.044055 sshd[4972]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 01:26:44.098727 systemd-logind[1562]: New session 38 of user core.
Jan 28 01:26:44.125915 systemd[1]: Started session-38.scope - Session 38 of User core.
Jan 28 01:26:44.600105 sshd[4972]: pam_unix(sshd:session): session closed for user core
Jan 28 01:26:44.609157 systemd[1]: sshd@37-10.0.0.61:22-10.0.0.1:59452.service: Deactivated successfully.
Jan 28 01:26:44.625854 systemd[1]: session-38.scope: Deactivated successfully.
Jan 28 01:26:44.634059 systemd-logind[1562]: Session 38 logged out. Waiting for processes to exit.
Jan 28 01:26:44.644614 systemd-logind[1562]: Removed session 38.
Jan 28 01:26:51.785001 systemd[1]: Started sshd@38-10.0.0.61:22-10.0.0.1:59462.service - OpenSSH per-connection server daemon (10.0.0.1:59462).
Jan 28 01:26:53.184955 kubelet[2843]: E0128 01:26:53.171351 2843 kubelet.go:2573] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.871s" Jan 28 01:26:53.468053 sshd[4988]: Accepted publickey for core from 10.0.0.1 port 59462 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:26:53.494128 sshd[4988]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:26:53.625646 systemd-logind[1562]: New session 39 of user core. Jan 28 01:26:53.679754 systemd[1]: Started session-39.scope - Session 39 of User core. Jan 28 01:26:54.812450 sshd[4988]: pam_unix(sshd:session): session closed for user core Jan 28 01:26:54.836197 systemd[1]: Started sshd@39-10.0.0.61:22-10.0.0.1:44780.service - OpenSSH per-connection server daemon (10.0.0.1:44780). Jan 28 01:26:54.844462 systemd[1]: sshd@38-10.0.0.61:22-10.0.0.1:59462.service: Deactivated successfully. Jan 28 01:26:54.874092 systemd-logind[1562]: Session 39 logged out. Waiting for processes to exit. Jan 28 01:26:54.919369 systemd[1]: session-39.scope: Deactivated successfully. Jan 28 01:26:54.924132 systemd-logind[1562]: Removed session 39. Jan 28 01:26:55.026386 sshd[5001]: Accepted publickey for core from 10.0.0.1 port 44780 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:26:55.030029 sshd[5001]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:26:55.069678 systemd-logind[1562]: New session 40 of user core. Jan 28 01:26:55.086175 systemd[1]: Started session-40.scope - Session 40 of User core. Jan 28 01:26:55.276351 kubelet[2843]: E0128 01:26:55.275171 2843 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:26:56.280610 sshd[5001]: pam_unix(sshd:session): session closed for user core Jan 28 01:26:56.404004 systemd[1]: Started sshd@40-10.0.0.61:22-10.0.0.1:44796.service - OpenSSH per-connection server daemon (10.0.0.1:44796). Jan 28 01:26:56.424007 systemd[1]: sshd@39-10.0.0.61:22-10.0.0.1:44780.service: Deactivated successfully. Jan 28 01:26:56.490026 systemd-logind[1562]: Session 40 logged out. Waiting for processes to exit. Jan 28 01:26:56.503035 systemd[1]: session-40.scope: Deactivated successfully. Jan 28 01:26:56.518442 systemd-logind[1562]: Removed session 40. Jan 28 01:26:56.725112 sshd[5015]: Accepted publickey for core from 10.0.0.1 port 44796 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:26:56.777071 sshd[5015]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:26:56.820968 systemd-logind[1562]: New session 41 of user core. Jan 28 01:26:56.877178 systemd[1]: Started session-41.scope - Session 41 of User core. Jan 28 01:26:57.276113 kubelet[2843]: E0128 01:26:57.275885 2843 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:26:57.991739 sshd[5015]: pam_unix(sshd:session): session closed for user core Jan 28 01:26:58.017995 systemd[1]: sshd@40-10.0.0.61:22-10.0.0.1:44796.service: Deactivated successfully. Jan 28 01:26:58.048924 systemd[1]: session-41.scope: Deactivated successfully. Jan 28 01:26:58.067046 systemd-logind[1562]: Session 41 logged out. Waiting for processes to exit. Jan 28 01:26:58.082880 systemd-logind[1562]: Removed session 41. 
Jan 28 01:27:03.015110 systemd[1]: Started sshd@41-10.0.0.61:22-10.0.0.1:57792.service - OpenSSH per-connection server daemon (10.0.0.1:57792). Jan 28 01:27:03.267457 sshd[5035]: Accepted publickey for core from 10.0.0.1 port 57792 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:27:03.282220 sshd[5035]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:27:03.323918 systemd-logind[1562]: New session 42 of user core. Jan 28 01:27:03.381862 systemd[1]: Started session-42.scope - Session 42 of User core. Jan 28 01:27:03.953671 sshd[5035]: pam_unix(sshd:session): session closed for user core Jan 28 01:27:03.966919 systemd[1]: sshd@41-10.0.0.61:22-10.0.0.1:57792.service: Deactivated successfully. Jan 28 01:27:03.983981 systemd[1]: session-42.scope: Deactivated successfully. Jan 28 01:27:03.997171 systemd-logind[1562]: Session 42 logged out. Waiting for processes to exit. Jan 28 01:27:04.019677 systemd-logind[1562]: Removed session 42. Jan 28 01:27:05.289817 kubelet[2843]: E0128 01:27:05.288796 2843 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:27:06.288976 kubelet[2843]: E0128 01:27:06.286998 2843 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:27:09.017915 systemd[1]: Started sshd@42-10.0.0.61:22-10.0.0.1:57808.service - OpenSSH per-connection server daemon (10.0.0.1:57808). Jan 28 01:27:09.372160 sshd[5050]: Accepted publickey for core from 10.0.0.1 port 57808 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:27:09.371531 sshd[5050]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:27:09.424394 systemd-logind[1562]: New session 43 of user core. Jan 28 01:27:09.450472 systemd[1]: Started session-43.scope - Session 43 of User core. Jan 28 01:27:10.015789 sshd[5050]: pam_unix(sshd:session): session closed for user core Jan 28 01:27:10.047186 systemd[1]: sshd@42-10.0.0.61:22-10.0.0.1:57808.service: Deactivated successfully. Jan 28 01:27:10.058178 systemd[1]: session-43.scope: Deactivated successfully. Jan 28 01:27:10.060091 systemd-logind[1562]: Session 43 logged out. Waiting for processes to exit. Jan 28 01:27:10.077227 systemd-logind[1562]: Removed session 43. Jan 28 01:27:15.067505 systemd[1]: Started sshd@43-10.0.0.61:22-10.0.0.1:55732.service - OpenSSH per-connection server daemon (10.0.0.1:55732). Jan 28 01:27:15.387183 sshd[5066]: Accepted publickey for core from 10.0.0.1 port 55732 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:27:15.402026 sshd[5066]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:27:15.479722 systemd-logind[1562]: New session 44 of user core. Jan 28 01:27:15.512584 systemd[1]: Started session-44.scope - Session 44 of User core. Jan 28 01:27:16.318907 sshd[5066]: pam_unix(sshd:session): session closed for user core Jan 28 01:27:16.341815 systemd[1]: sshd@43-10.0.0.61:22-10.0.0.1:55732.service: Deactivated successfully. Jan 28 01:27:16.375393 systemd-logind[1562]: Session 44 logged out. Waiting for processes to exit. Jan 28 01:27:16.376919 systemd[1]: session-44.scope: Deactivated successfully. Jan 28 01:27:16.389111 systemd-logind[1562]: Removed session 44. 
Jan 28 01:27:21.293669 kubelet[2843]: E0128 01:27:21.281435 2843 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:27:21.408538 systemd[1]: Started sshd@44-10.0.0.61:22-10.0.0.1:55738.service - OpenSSH per-connection server daemon (10.0.0.1:55738). Jan 28 01:27:21.585988 sshd[5081]: Accepted publickey for core from 10.0.0.1 port 55738 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:27:21.598963 sshd[5081]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:27:21.642163 systemd-logind[1562]: New session 45 of user core. Jan 28 01:27:21.662895 systemd[1]: Started session-45.scope - Session 45 of User core. Jan 28 01:27:22.236045 sshd[5081]: pam_unix(sshd:session): session closed for user core Jan 28 01:27:22.300945 systemd[1]: sshd@44-10.0.0.61:22-10.0.0.1:55738.service: Deactivated successfully. Jan 28 01:27:22.323581 systemd[1]: session-45.scope: Deactivated successfully. Jan 28 01:27:22.325642 systemd-logind[1562]: Session 45 logged out. Waiting for processes to exit. Jan 28 01:27:22.419100 systemd-logind[1562]: Removed session 45. Jan 28 01:27:27.303553 systemd[1]: Started sshd@45-10.0.0.61:22-10.0.0.1:44196.service - OpenSSH per-connection server daemon (10.0.0.1:44196). Jan 28 01:27:27.501821 sshd[5099]: Accepted publickey for core from 10.0.0.1 port 44196 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:27:27.506885 sshd[5099]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:27:27.534890 systemd-logind[1562]: New session 46 of user core. Jan 28 01:27:27.592922 systemd[1]: Started session-46.scope - Session 46 of User core. Jan 28 01:27:28.092175 sshd[5099]: pam_unix(sshd:session): session closed for user core Jan 28 01:27:28.113990 systemd[1]: sshd@45-10.0.0.61:22-10.0.0.1:44196.service: Deactivated successfully. Jan 28 01:27:28.152672 systemd[1]: session-46.scope: Deactivated successfully. Jan 28 01:27:28.165387 systemd-logind[1562]: Session 46 logged out. Waiting for processes to exit. Jan 28 01:27:28.172316 systemd-logind[1562]: Removed session 46. Jan 28 01:27:33.144686 systemd[1]: Started sshd@46-10.0.0.61:22-10.0.0.1:55696.service - OpenSSH per-connection server daemon (10.0.0.1:55696). Jan 28 01:27:33.322514 sshd[5116]: Accepted publickey for core from 10.0.0.1 port 55696 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:27:33.325962 sshd[5116]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:27:33.364969 systemd-logind[1562]: New session 47 of user core. Jan 28 01:27:33.384051 systemd[1]: Started session-47.scope - Session 47 of User core. Jan 28 01:27:33.971992 sshd[5116]: pam_unix(sshd:session): session closed for user core Jan 28 01:27:33.987460 systemd[1]: sshd@46-10.0.0.61:22-10.0.0.1:55696.service: Deactivated successfully. Jan 28 01:27:34.019504 systemd[1]: session-47.scope: Deactivated successfully. Jan 28 01:27:34.097744 systemd-logind[1562]: Session 47 logged out. Waiting for processes to exit. Jan 28 01:27:34.117118 systemd-logind[1562]: Removed session 47. Jan 28 01:27:39.001640 systemd[1]: Started sshd@47-10.0.0.61:22-10.0.0.1:55702.service - OpenSSH per-connection server daemon (10.0.0.1:55702). 
Jan 28 01:27:39.092938 sshd[5133]: Accepted publickey for core from 10.0.0.1 port 55702 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:27:39.097769 sshd[5133]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:27:39.128564 systemd-logind[1562]: New session 48 of user core. Jan 28 01:27:39.139697 systemd[1]: Started session-48.scope - Session 48 of User core. Jan 28 01:27:39.726300 sshd[5133]: pam_unix(sshd:session): session closed for user core Jan 28 01:27:39.768740 systemd[1]: sshd@47-10.0.0.61:22-10.0.0.1:55702.service: Deactivated successfully. Jan 28 01:27:39.807597 systemd[1]: session-48.scope: Deactivated successfully. Jan 28 01:27:39.810662 systemd-logind[1562]: Session 48 logged out. Waiting for processes to exit. Jan 28 01:27:39.822603 systemd-logind[1562]: Removed session 48. Jan 28 01:27:40.279941 kubelet[2843]: E0128 01:27:40.279196 2843 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:27:44.779513 systemd[1]: Started sshd@48-10.0.0.61:22-10.0.0.1:50704.service - OpenSSH per-connection server daemon (10.0.0.1:50704). Jan 28 01:27:44.921937 sshd[5149]: Accepted publickey for core from 10.0.0.1 port 50704 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:27:44.926377 sshd[5149]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:27:44.953122 systemd-logind[1562]: New session 49 of user core. Jan 28 01:27:44.962814 systemd[1]: Started session-49.scope - Session 49 of User core. Jan 28 01:27:45.603411 sshd[5149]: pam_unix(sshd:session): session closed for user core Jan 28 01:27:45.624055 systemd[1]: sshd@48-10.0.0.61:22-10.0.0.1:50704.service: Deactivated successfully. Jan 28 01:27:45.651511 systemd-logind[1562]: Session 49 logged out. Waiting for processes to exit. Jan 28 01:27:45.659161 systemd[1]: session-49.scope: Deactivated successfully. Jan 28 01:27:45.671116 systemd-logind[1562]: Removed session 49. Jan 28 01:27:46.284026 kubelet[2843]: E0128 01:27:46.279436 2843 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:27:50.627146 systemd[1]: Started sshd@49-10.0.0.61:22-10.0.0.1:50720.service - OpenSSH per-connection server daemon (10.0.0.1:50720). Jan 28 01:27:50.750316 sshd[5164]: Accepted publickey for core from 10.0.0.1 port 50720 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:27:50.752659 sshd[5164]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:27:50.788361 systemd-logind[1562]: New session 50 of user core. Jan 28 01:27:50.812446 systemd[1]: Started session-50.scope - Session 50 of User core. Jan 28 01:27:51.156625 sshd[5164]: pam_unix(sshd:session): session closed for user core Jan 28 01:27:51.191870 systemd[1]: sshd@49-10.0.0.61:22-10.0.0.1:50720.service: Deactivated successfully. Jan 28 01:27:51.200824 systemd-logind[1562]: Session 50 logged out. Waiting for processes to exit. Jan 28 01:27:51.213873 systemd[1]: session-50.scope: Deactivated successfully. Jan 28 01:27:51.224993 systemd-logind[1562]: Removed session 50. Jan 28 01:27:56.187907 systemd[1]: Started sshd@50-10.0.0.61:22-10.0.0.1:34052.service - OpenSSH per-connection server daemon (10.0.0.1:34052). 
Jan 28 01:27:56.290355 kubelet[2843]: E0128 01:27:56.285930 2843 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:27:56.398424 sshd[5179]: Accepted publickey for core from 10.0.0.1 port 34052 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:27:56.408753 sshd[5179]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:27:56.465809 systemd-logind[1562]: New session 51 of user core. Jan 28 01:27:56.485674 systemd[1]: Started session-51.scope - Session 51 of User core. Jan 28 01:27:57.032579 sshd[5179]: pam_unix(sshd:session): session closed for user core Jan 28 01:27:57.062967 systemd-logind[1562]: Session 51 logged out. Waiting for processes to exit. Jan 28 01:27:57.064457 systemd[1]: sshd@50-10.0.0.61:22-10.0.0.1:34052.service: Deactivated successfully. Jan 28 01:27:57.087929 systemd[1]: session-51.scope: Deactivated successfully. Jan 28 01:27:57.098151 systemd-logind[1562]: Removed session 51. Jan 28 01:28:02.102538 systemd[1]: Started sshd@51-10.0.0.61:22-10.0.0.1:34068.service - OpenSSH per-connection server daemon (10.0.0.1:34068). Jan 28 01:28:02.219074 sshd[5196]: Accepted publickey for core from 10.0.0.1 port 34068 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:28:02.225723 sshd[5196]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:28:02.269514 systemd-logind[1562]: New session 52 of user core. Jan 28 01:28:02.282999 kubelet[2843]: E0128 01:28:02.278122 2843 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:28:02.292990 systemd[1]: Started session-52.scope - Session 52 of User core. Jan 28 01:28:02.770484 sshd[5196]: pam_unix(sshd:session): session closed for user core Jan 28 01:28:02.793415 systemd[1]: sshd@51-10.0.0.61:22-10.0.0.1:34068.service: Deactivated successfully. Jan 28 01:28:02.799439 systemd[1]: session-52.scope: Deactivated successfully. Jan 28 01:28:02.819564 systemd-logind[1562]: Session 52 logged out. Waiting for processes to exit. Jan 28 01:28:02.830841 systemd-logind[1562]: Removed session 52. Jan 28 01:28:07.275121 kubelet[2843]: E0128 01:28:07.274955 2843 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:28:07.806813 systemd[1]: Started sshd@52-10.0.0.61:22-10.0.0.1:48172.service - OpenSSH per-connection server daemon (10.0.0.1:48172). Jan 28 01:28:07.949412 sshd[5211]: Accepted publickey for core from 10.0.0.1 port 48172 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:28:07.959561 sshd[5211]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:28:07.989625 systemd-logind[1562]: New session 53 of user core. Jan 28 01:28:08.005764 systemd[1]: Started session-53.scope - Session 53 of User core. Jan 28 01:28:08.531652 sshd[5211]: pam_unix(sshd:session): session closed for user core Jan 28 01:28:08.542014 systemd[1]: sshd@52-10.0.0.61:22-10.0.0.1:48172.service: Deactivated successfully. Jan 28 01:28:08.566144 systemd-logind[1562]: Session 53 logged out. Waiting for processes to exit. Jan 28 01:28:08.566698 systemd[1]: session-53.scope: Deactivated successfully. 
Jan 28 01:28:08.579694 systemd-logind[1562]: Removed session 53. Jan 28 01:28:11.286461 kubelet[2843]: E0128 01:28:11.285703 2843 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:28:13.550708 systemd[1]: Started sshd@53-10.0.0.61:22-10.0.0.1:53888.service - OpenSSH per-connection server daemon (10.0.0.1:53888). Jan 28 01:28:13.685613 sshd[5226]: Accepted publickey for core from 10.0.0.1 port 53888 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:28:13.704500 sshd[5226]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:28:13.734312 systemd-logind[1562]: New session 54 of user core. Jan 28 01:28:13.743020 systemd[1]: Started session-54.scope - Session 54 of User core. Jan 28 01:28:16.715011 kubelet[2843]: E0128 01:28:16.678918 2843 kubelet.go:2573] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.364s" Jan 28 01:28:21.101844 kubelet[2843]: E0128 01:28:21.091365 2843 kubelet.go:2573] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.412s" Jan 28 01:28:21.285038 sshd[5226]: pam_unix(sshd:session): session closed for user core Jan 28 01:28:21.458459 systemd[1]: sshd@53-10.0.0.61:22-10.0.0.1:53888.service: Deactivated successfully. Jan 28 01:28:21.631703 systemd-logind[1562]: Session 54 logged out. Waiting for processes to exit. Jan 28 01:28:21.639894 systemd[1]: session-54.scope: Deactivated successfully. Jan 28 01:28:21.796806 systemd-logind[1562]: Removed session 54. Jan 28 01:28:26.867666 systemd[1]: Started sshd@54-10.0.0.61:22-10.0.0.1:47932.service - OpenSSH per-connection server daemon (10.0.0.1:47932). Jan 28 01:28:27.700167 kubelet[2843]: E0128 01:28:27.699912 2843 kubelet.go:2573] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.429s" Jan 28 01:28:27.866500 sshd[5243]: Accepted publickey for core from 10.0.0.1 port 47932 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:28:27.881809 sshd[5243]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:28:30.669134 systemd-logind[1562]: New session 55 of user core. Jan 28 01:28:30.710174 systemd[1]: Started session-55.scope - Session 55 of User core. Jan 28 01:28:33.098036 kubelet[2843]: E0128 01:28:33.096149 2843 kubelet.go:2573] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="5.377s" Jan 28 01:28:33.160846 kubelet[2843]: E0128 01:28:33.155948 2843 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:28:33.172131 kubelet[2843]: E0128 01:28:33.171728 2843 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:28:33.668683 sshd[5243]: pam_unix(sshd:session): session closed for user core Jan 28 01:28:33.686941 systemd[1]: sshd@54-10.0.0.61:22-10.0.0.1:47932.service: Deactivated successfully. Jan 28 01:28:33.704870 systemd[1]: session-55.scope: Deactivated successfully. Jan 28 01:28:33.708962 systemd-logind[1562]: Session 55 logged out. Waiting for processes to exit. Jan 28 01:28:33.717751 systemd-logind[1562]: Removed session 55. 
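[Note on the kubelet.go:2573 "Housekeeping took longer than expected" errors above (actual 1.4s to 5.4s against an expected 1s): the periodic housekeeping pass is overrunning its budget, which lines up with the unusually long gaps in session 54/55 setup in this stretch and suggests the node was starved for CPU or I/O. The warning itself is just a duration check around a ticker-driven loop; a minimal illustrative sketch follows, with the 1s threshold taken from the log and doHousekeeping standing in for the real cleanup work.]

    package main

    import (
        "fmt"
        "time"
    )

    // expected is the per-iteration budget; the kubelet log above uses 1s.
    const expected = time.Second

    // doHousekeeping stands in for the periodic cleanup work
    // (pod and cgroup housekeeping in kubelet's case). The sleep
    // here deliberately simulates an overrun.
    func doHousekeeping() {
        time.Sleep(1500 * time.Millisecond)
    }

    func main() {
        ticker := time.NewTicker(expected)
        defer ticker.Stop()
        for range ticker.C {
            start := time.Now()
            doHousekeeping()
            if actual := time.Since(start); actual > expected {
                fmt.Printf("Housekeeping took longer than expected: expected=%v actual=%v\n",
                    expected, actual.Round(time.Millisecond))
            }
        }
    }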
Jan 28 01:28:38.792770 systemd[1]: Started sshd@55-10.0.0.61:22-10.0.0.1:59176.service - OpenSSH per-connection server daemon (10.0.0.1:59176). Jan 28 01:28:38.992111 sshd[5261]: Accepted publickey for core from 10.0.0.1 port 59176 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:28:38.993493 sshd[5261]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:28:39.056829 systemd-logind[1562]: New session 56 of user core. Jan 28 01:28:39.067880 systemd[1]: Started session-56.scope - Session 56 of User core. Jan 28 01:28:39.766870 sshd[5261]: pam_unix(sshd:session): session closed for user core Jan 28 01:28:39.795590 systemd[1]: sshd@55-10.0.0.61:22-10.0.0.1:59176.service: Deactivated successfully. Jan 28 01:28:39.835005 systemd[1]: session-56.scope: Deactivated successfully. Jan 28 01:28:39.848152 systemd-logind[1562]: Session 56 logged out. Waiting for processes to exit. Jan 28 01:28:39.891097 systemd-logind[1562]: Removed session 56. Jan 28 01:28:44.852596 systemd[1]: Started sshd@56-10.0.0.61:22-10.0.0.1:50122.service - OpenSSH per-connection server daemon (10.0.0.1:50122). Jan 28 01:28:45.092876 sshd[5277]: Accepted publickey for core from 10.0.0.1 port 50122 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:28:45.098791 sshd[5277]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:28:45.152794 systemd-logind[1562]: New session 57 of user core. Jan 28 01:28:45.175914 systemd[1]: Started session-57.scope - Session 57 of User core. Jan 28 01:28:45.738928 sshd[5277]: pam_unix(sshd:session): session closed for user core Jan 28 01:28:45.779090 systemd[1]: sshd@56-10.0.0.61:22-10.0.0.1:50122.service: Deactivated successfully. Jan 28 01:28:45.805112 systemd[1]: session-57.scope: Deactivated successfully. Jan 28 01:28:45.808518 systemd-logind[1562]: Session 57 logged out. Waiting for processes to exit. Jan 28 01:28:45.810580 systemd-logind[1562]: Removed session 57. Jan 28 01:28:46.279569 kubelet[2843]: E0128 01:28:46.276018 2843 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:28:49.371567 kubelet[2843]: E0128 01:28:49.362364 2843 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:28:50.807816 systemd[1]: Started sshd@57-10.0.0.61:22-10.0.0.1:50126.service - OpenSSH per-connection server daemon (10.0.0.1:50126). Jan 28 01:28:50.967015 sshd[5294]: Accepted publickey for core from 10.0.0.1 port 50126 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:28:50.971216 sshd[5294]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:28:51.007844 systemd-logind[1562]: New session 58 of user core. Jan 28 01:28:51.016887 systemd[1]: Started session-58.scope - Session 58 of User core. Jan 28 01:28:51.641957 sshd[5294]: pam_unix(sshd:session): session closed for user core Jan 28 01:28:51.663574 systemd[1]: sshd@57-10.0.0.61:22-10.0.0.1:50126.service: Deactivated successfully. Jan 28 01:28:51.677767 systemd-logind[1562]: Session 58 logged out. Waiting for processes to exit. Jan 28 01:28:51.680141 systemd[1]: session-58.scope: Deactivated successfully. Jan 28 01:28:51.694400 systemd-logind[1562]: Removed session 58. 
Jan 28 01:28:56.706091 systemd[1]: Started sshd@58-10.0.0.61:22-10.0.0.1:39480.service - OpenSSH per-connection server daemon (10.0.0.1:39480). Jan 28 01:28:56.912147 sshd[5309]: Accepted publickey for core from 10.0.0.1 port 39480 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:28:56.926945 sshd[5309]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:28:56.966858 systemd-logind[1562]: New session 59 of user core. Jan 28 01:28:56.998077 systemd[1]: Started session-59.scope - Session 59 of User core. Jan 28 01:28:57.631016 sshd[5309]: pam_unix(sshd:session): session closed for user core Jan 28 01:28:57.637378 systemd[1]: sshd@58-10.0.0.61:22-10.0.0.1:39480.service: Deactivated successfully. Jan 28 01:28:57.656985 systemd[1]: session-59.scope: Deactivated successfully. Jan 28 01:28:57.662210 systemd-logind[1562]: Session 59 logged out. Waiting for processes to exit. Jan 28 01:28:57.669717 systemd-logind[1562]: Removed session 59. Jan 28 01:28:59.279536 kubelet[2843]: E0128 01:28:59.278806 2843 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:29:02.701779 systemd[1]: Started sshd@59-10.0.0.61:22-10.0.0.1:47362.service - OpenSSH per-connection server daemon (10.0.0.1:47362). Jan 28 01:29:02.897715 sshd[5327]: Accepted publickey for core from 10.0.0.1 port 47362 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:29:02.909824 sshd[5327]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:29:02.956856 systemd-logind[1562]: New session 60 of user core. Jan 28 01:29:02.981644 systemd[1]: Started session-60.scope - Session 60 of User core. Jan 28 01:29:03.530368 sshd[5327]: pam_unix(sshd:session): session closed for user core Jan 28 01:29:03.560890 systemd[1]: sshd@59-10.0.0.61:22-10.0.0.1:47362.service: Deactivated successfully. Jan 28 01:29:03.576061 systemd[1]: session-60.scope: Deactivated successfully. Jan 28 01:29:03.576112 systemd-logind[1562]: Session 60 logged out. Waiting for processes to exit. Jan 28 01:29:03.593714 systemd-logind[1562]: Removed session 60. Jan 28 01:29:08.589883 systemd[1]: Started sshd@60-10.0.0.61:22-10.0.0.1:47380.service - OpenSSH per-connection server daemon (10.0.0.1:47380). Jan 28 01:29:08.787485 sshd[5342]: Accepted publickey for core from 10.0.0.1 port 47380 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:29:08.798951 sshd[5342]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:29:08.838704 systemd-logind[1562]: New session 61 of user core. Jan 28 01:29:08.868088 systemd[1]: Started session-61.scope - Session 61 of User core. Jan 28 01:29:09.799400 sshd[5342]: pam_unix(sshd:session): session closed for user core Jan 28 01:29:09.817076 systemd[1]: sshd@60-10.0.0.61:22-10.0.0.1:47380.service: Deactivated successfully. Jan 28 01:29:09.827507 systemd-logind[1562]: Session 61 logged out. Waiting for processes to exit. Jan 28 01:29:09.868978 systemd[1]: Started sshd@61-10.0.0.61:22-10.0.0.1:47388.service - OpenSSH per-connection server daemon (10.0.0.1:47388). Jan 28 01:29:09.869660 systemd[1]: session-61.scope: Deactivated successfully. Jan 28 01:29:09.882493 systemd-logind[1562]: Removed session 61. 
Jan 28 01:29:09.988339 sshd[5357]: Accepted publickey for core from 10.0.0.1 port 47388 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:29:09.997541 sshd[5357]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:29:10.044198 systemd-logind[1562]: New session 62 of user core. Jan 28 01:29:10.059194 systemd[1]: Started session-62.scope - Session 62 of User core. Jan 28 01:29:13.284053 kubelet[2843]: E0128 01:29:13.283660 2843 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:29:13.815060 sshd[5357]: pam_unix(sshd:session): session closed for user core Jan 28 01:29:13.832936 systemd[1]: sshd@61-10.0.0.61:22-10.0.0.1:47388.service: Deactivated successfully. Jan 28 01:29:13.877122 systemd-logind[1562]: Session 62 logged out. Waiting for processes to exit. Jan 28 01:29:13.883198 systemd[1]: session-62.scope: Deactivated successfully. Jan 28 01:29:13.909182 systemd[1]: Started sshd@62-10.0.0.61:22-10.0.0.1:52818.service - OpenSSH per-connection server daemon (10.0.0.1:52818). Jan 28 01:29:13.927959 systemd-logind[1562]: Removed session 62. Jan 28 01:29:14.217929 sshd[5371]: Accepted publickey for core from 10.0.0.1 port 52818 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:29:14.236446 sshd[5371]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:29:14.284903 systemd-logind[1562]: New session 63 of user core. Jan 28 01:29:14.338928 systemd[1]: Started session-63.scope - Session 63 of User core. Jan 28 01:29:18.670799 sshd[5371]: pam_unix(sshd:session): session closed for user core Jan 28 01:29:18.771404 systemd[1]: Started sshd@63-10.0.0.61:22-10.0.0.1:52840.service - OpenSSH per-connection server daemon (10.0.0.1:52840). Jan 28 01:29:18.779593 systemd[1]: sshd@62-10.0.0.61:22-10.0.0.1:52818.service: Deactivated successfully. Jan 28 01:29:18.874113 systemd[1]: session-63.scope: Deactivated successfully. Jan 28 01:29:18.878614 systemd-logind[1562]: Session 63 logged out. Waiting for processes to exit. Jan 28 01:29:18.889626 systemd-logind[1562]: Removed session 63. Jan 28 01:29:19.163977 sshd[5389]: Accepted publickey for core from 10.0.0.1 port 52840 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:29:19.174965 sshd[5389]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:29:19.216595 systemd-logind[1562]: New session 64 of user core. Jan 28 01:29:19.255086 systemd[1]: Started session-64.scope - Session 64 of User core. Jan 28 01:29:21.432022 sshd[5389]: pam_unix(sshd:session): session closed for user core Jan 28 01:29:21.485848 systemd[1]: Started sshd@64-10.0.0.61:22-10.0.0.1:52858.service - OpenSSH per-connection server daemon (10.0.0.1:52858). Jan 28 01:29:21.500913 systemd[1]: sshd@63-10.0.0.61:22-10.0.0.1:52840.service: Deactivated successfully. Jan 28 01:29:21.550537 systemd[1]: session-64.scope: Deactivated successfully. Jan 28 01:29:21.595629 systemd-logind[1562]: Session 64 logged out. Waiting for processes to exit. Jan 28 01:29:21.618433 systemd-logind[1562]: Removed session 64. 
Jan 28 01:29:21.878349 sshd[5405]: Accepted publickey for core from 10.0.0.1 port 52858 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:29:21.882212 sshd[5405]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:29:21.931166 systemd-logind[1562]: New session 65 of user core. Jan 28 01:29:21.946505 systemd[1]: Started session-65.scope - Session 65 of User core. Jan 28 01:29:23.678545 sshd[5405]: pam_unix(sshd:session): session closed for user core Jan 28 01:29:23.698107 systemd[1]: sshd@64-10.0.0.61:22-10.0.0.1:52858.service: Deactivated successfully. Jan 28 01:29:23.736046 systemd[1]: session-65.scope: Deactivated successfully. Jan 28 01:29:23.737625 systemd-logind[1562]: Session 65 logged out. Waiting for processes to exit. Jan 28 01:29:23.756216 systemd-logind[1562]: Removed session 65. Jan 28 01:29:24.305805 kubelet[2843]: E0128 01:29:24.305526 2843 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:29:27.280373 kubelet[2843]: E0128 01:29:27.277366 2843 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:29:28.718913 systemd[1]: Started sshd@65-10.0.0.61:22-10.0.0.1:44548.service - OpenSSH per-connection server daemon (10.0.0.1:44548). Jan 28 01:29:28.959411 sshd[5428]: Accepted publickey for core from 10.0.0.1 port 44548 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:29:28.967475 sshd[5428]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:29:29.000650 systemd-logind[1562]: New session 66 of user core. Jan 28 01:29:29.017802 systemd[1]: Started session-66.scope - Session 66 of User core. Jan 28 01:29:29.809702 sshd[5428]: pam_unix(sshd:session): session closed for user core Jan 28 01:29:29.822871 systemd[1]: sshd@65-10.0.0.61:22-10.0.0.1:44548.service: Deactivated successfully. Jan 28 01:29:29.863367 systemd[1]: session-66.scope: Deactivated successfully. Jan 28 01:29:29.873064 systemd-logind[1562]: Session 66 logged out. Waiting for processes to exit. Jan 28 01:29:29.875397 systemd-logind[1562]: Removed session 66. Jan 28 01:29:34.887721 systemd[1]: Started sshd@66-10.0.0.61:22-10.0.0.1:46732.service - OpenSSH per-connection server daemon (10.0.0.1:46732). Jan 28 01:29:35.013090 sshd[5445]: Accepted publickey for core from 10.0.0.1 port 46732 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:29:35.016697 sshd[5445]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:29:35.109662 systemd-logind[1562]: New session 67 of user core. Jan 28 01:29:35.172663 systemd[1]: Started session-67.scope - Session 67 of User core. Jan 28 01:29:36.309334 sshd[5445]: pam_unix(sshd:session): session closed for user core Jan 28 01:29:36.385776 systemd[1]: sshd@66-10.0.0.61:22-10.0.0.1:46732.service: Deactivated successfully. Jan 28 01:29:36.426356 systemd-logind[1562]: Session 67 logged out. Waiting for processes to exit. Jan 28 01:29:36.436733 systemd[1]: session-67.scope: Deactivated successfully. Jan 28 01:29:36.486965 systemd-logind[1562]: Removed session 67. Jan 28 01:29:41.366341 systemd[1]: Started sshd@67-10.0.0.61:22-10.0.0.1:46752.service - OpenSSH per-connection server daemon (10.0.0.1:46752). 
Jan 28 01:29:41.652710 sshd[5463]: Accepted publickey for core from 10.0.0.1 port 46752 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:29:41.657596 sshd[5463]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:29:41.706215 systemd-logind[1562]: New session 68 of user core. Jan 28 01:29:41.747783 systemd[1]: Started session-68.scope - Session 68 of User core. Jan 28 01:29:42.494703 sshd[5463]: pam_unix(sshd:session): session closed for user core Jan 28 01:29:42.518532 systemd[1]: sshd@67-10.0.0.61:22-10.0.0.1:46752.service: Deactivated successfully. Jan 28 01:29:42.544129 systemd[1]: session-68.scope: Deactivated successfully. Jan 28 01:29:42.577185 systemd-logind[1562]: Session 68 logged out. Waiting for processes to exit. Jan 28 01:29:42.593727 systemd-logind[1562]: Removed session 68. Jan 28 01:29:47.526641 systemd[1]: Started sshd@68-10.0.0.61:22-10.0.0.1:50178.service - OpenSSH per-connection server daemon (10.0.0.1:50178). Jan 28 01:29:47.790747 sshd[5478]: Accepted publickey for core from 10.0.0.1 port 50178 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:29:47.789581 sshd[5478]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:29:47.825198 systemd-logind[1562]: New session 69 of user core. Jan 28 01:29:47.839503 systemd[1]: Started session-69.scope - Session 69 of User core. Jan 28 01:29:48.528758 sshd[5478]: pam_unix(sshd:session): session closed for user core Jan 28 01:29:48.545036 systemd[1]: sshd@68-10.0.0.61:22-10.0.0.1:50178.service: Deactivated successfully. Jan 28 01:29:48.555563 systemd[1]: session-69.scope: Deactivated successfully. Jan 28 01:29:48.564419 systemd-logind[1562]: Session 69 logged out. Waiting for processes to exit. Jan 28 01:29:48.574477 systemd-logind[1562]: Removed session 69. Jan 28 01:29:53.595546 systemd[1]: Started sshd@69-10.0.0.61:22-10.0.0.1:41134.service - OpenSSH per-connection server daemon (10.0.0.1:41134). Jan 28 01:29:53.826638 sshd[5494]: Accepted publickey for core from 10.0.0.1 port 41134 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:29:53.833659 sshd[5494]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:29:53.907037 systemd-logind[1562]: New session 70 of user core. Jan 28 01:29:53.937618 systemd[1]: Started session-70.scope - Session 70 of User core. Jan 28 01:29:54.294829 kubelet[2843]: E0128 01:29:54.276874 2843 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:29:54.294829 kubelet[2843]: E0128 01:29:54.287101 2843 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:29:54.650920 sshd[5494]: pam_unix(sshd:session): session closed for user core Jan 28 01:29:54.673913 systemd[1]: sshd@69-10.0.0.61:22-10.0.0.1:41134.service: Deactivated successfully. Jan 28 01:29:54.709577 systemd[1]: session-70.scope: Deactivated successfully. Jan 28 01:29:54.717577 systemd-logind[1562]: Session 70 logged out. Waiting for processes to exit. Jan 28 01:29:54.729531 systemd-logind[1562]: Removed session 70. 
Jan 28 01:29:59.274880 kubelet[2843]: E0128 01:29:59.274531 2843 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:29:59.708398 systemd[1]: Started sshd@70-10.0.0.61:22-10.0.0.1:41148.service - OpenSSH per-connection server daemon (10.0.0.1:41148). Jan 28 01:29:59.935961 sshd[5512]: Accepted publickey for core from 10.0.0.1 port 41148 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:29:59.963874 sshd[5512]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:30:00.003319 systemd-logind[1562]: New session 71 of user core. Jan 28 01:30:00.017545 systemd[1]: Started session-71.scope - Session 71 of User core. Jan 28 01:30:00.281890 kubelet[2843]: E0128 01:30:00.280530 2843 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:30:00.535728 sshd[5512]: pam_unix(sshd:session): session closed for user core Jan 28 01:30:00.577875 systemd[1]: sshd@70-10.0.0.61:22-10.0.0.1:41148.service: Deactivated successfully. Jan 28 01:30:00.598382 systemd-logind[1562]: Session 71 logged out. Waiting for processes to exit. Jan 28 01:30:00.613151 systemd[1]: session-71.scope: Deactivated successfully. Jan 28 01:30:00.618112 systemd-logind[1562]: Removed session 71. Jan 28 01:30:05.568838 systemd[1]: Started sshd@71-10.0.0.61:22-10.0.0.1:37192.service - OpenSSH per-connection server daemon (10.0.0.1:37192). Jan 28 01:30:05.714797 sshd[5528]: Accepted publickey for core from 10.0.0.1 port 37192 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:30:05.719732 sshd[5528]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:30:05.769911 systemd-logind[1562]: New session 72 of user core. Jan 28 01:30:05.801407 systemd[1]: Started session-72.scope - Session 72 of User core. Jan 28 01:30:06.388914 sshd[5528]: pam_unix(sshd:session): session closed for user core Jan 28 01:30:06.461353 systemd[1]: sshd@71-10.0.0.61:22-10.0.0.1:37192.service: Deactivated successfully. Jan 28 01:30:06.483840 systemd[1]: session-72.scope: Deactivated successfully. Jan 28 01:30:06.486956 systemd-logind[1562]: Session 72 logged out. Waiting for processes to exit. Jan 28 01:30:06.493579 systemd-logind[1562]: Removed session 72. Jan 28 01:30:11.463130 systemd[1]: Started sshd@72-10.0.0.61:22-10.0.0.1:37204.service - OpenSSH per-connection server daemon (10.0.0.1:37204). Jan 28 01:30:11.630751 sshd[5544]: Accepted publickey for core from 10.0.0.1 port 37204 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:30:11.644008 sshd[5544]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:30:11.723469 systemd-logind[1562]: New session 73 of user core. Jan 28 01:30:11.758934 systemd[1]: Started session-73.scope - Session 73 of User core. Jan 28 01:30:12.651492 sshd[5544]: pam_unix(sshd:session): session closed for user core Jan 28 01:30:12.686718 systemd[1]: sshd@72-10.0.0.61:22-10.0.0.1:37204.service: Deactivated successfully. Jan 28 01:30:12.704054 systemd[1]: session-73.scope: Deactivated successfully. Jan 28 01:30:12.718551 systemd-logind[1562]: Session 73 logged out. Waiting for processes to exit. Jan 28 01:30:12.727167 systemd-logind[1562]: Removed session 73. 
Jan 28 01:30:15.274715 kubelet[2843]: E0128 01:30:15.274667 2843 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:30:17.699630 systemd[1]: Started sshd@73-10.0.0.61:22-10.0.0.1:60352.service - OpenSSH per-connection server daemon (10.0.0.1:60352). Jan 28 01:30:17.934840 sshd[5559]: Accepted publickey for core from 10.0.0.1 port 60352 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:30:17.944670 sshd[5559]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:30:17.986091 systemd-logind[1562]: New session 74 of user core. Jan 28 01:30:18.002514 systemd[1]: Started session-74.scope - Session 74 of User core. Jan 28 01:30:18.351704 kubelet[2843]: E0128 01:30:18.279007 2843 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:30:18.903324 sshd[5559]: pam_unix(sshd:session): session closed for user core Jan 28 01:30:18.946066 systemd[1]: sshd@73-10.0.0.61:22-10.0.0.1:60352.service: Deactivated successfully. Jan 28 01:30:18.976120 systemd[1]: session-74.scope: Deactivated successfully. Jan 28 01:30:18.981370 systemd-logind[1562]: Session 74 logged out. Waiting for processes to exit. Jan 28 01:30:18.990445 systemd-logind[1562]: Removed session 74. Jan 28 01:30:22.437795 systemd[1]: Starting systemd-tmpfiles-clean.service - Cleanup of Temporary Directories... Jan 28 01:30:22.660389 systemd-tmpfiles[5577]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 28 01:30:22.665212 systemd-tmpfiles[5577]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 28 01:30:22.673478 systemd-tmpfiles[5577]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 28 01:30:22.675454 systemd-tmpfiles[5577]: ACLs are not supported, ignoring. Jan 28 01:30:22.679217 systemd-tmpfiles[5577]: ACLs are not supported, ignoring. Jan 28 01:30:22.697851 systemd-tmpfiles[5577]: Detected autofs mount point /boot during canonicalization of boot. Jan 28 01:30:22.697867 systemd-tmpfiles[5577]: Skipping /boot Jan 28 01:30:22.801571 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully. Jan 28 01:30:22.802786 systemd[1]: Finished systemd-tmpfiles-clean.service - Cleanup of Temporary Directories. Jan 28 01:30:23.968888 systemd[1]: Started sshd@74-10.0.0.61:22-10.0.0.1:48686.service - OpenSSH per-connection server daemon (10.0.0.1:48686). Jan 28 01:30:24.227618 sshd[5581]: Accepted publickey for core from 10.0.0.1 port 48686 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:30:24.258517 sshd[5581]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:30:24.323494 systemd-logind[1562]: New session 75 of user core. Jan 28 01:30:24.359833 systemd[1]: Started session-75.scope - Session 75 of User core. Jan 28 01:30:25.536642 sshd[5581]: pam_unix(sshd:session): session closed for user core Jan 28 01:30:25.594910 systemd[1]: sshd@74-10.0.0.61:22-10.0.0.1:48686.service: Deactivated successfully. Jan 28 01:30:25.651702 systemd[1]: session-75.scope: Deactivated successfully. Jan 28 01:30:25.653375 systemd-logind[1562]: Session 75 logged out. Waiting for processes to exit. Jan 28 01:30:25.682060 systemd-logind[1562]: Removed session 75. 
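[Note on the systemd-tmpfiles run above: the "Duplicate line for path ... ignoring" warnings are benign. Several tmpfiles.d fragments declare lines for the same paths (/root, /var/log/journal, /var/lib/systemd); systemd-tmpfiles applies the entry it sees first in load order and warns about and ignores the later duplicates. The /boot autofs mount is likewise detected and skipped rather than descended into. A small illustrative Go sketch of that first-entry-wins de-duplication over tmpfiles.d-style lines follows; it is not systemd's parser, just the same policy.]

    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        // Fragments declaring overlapping paths, in load order; the
        // duplicate /root line mirrors provision.conf:20 in the log.
        lines := []string{
            "d /root 0700 root root -",
            "d /var/log/journal 2755 root systemd-journal - -",
            "d /root 0755 root root -", // later duplicate: ignored
        }
        seen := map[string]bool{}
        for i, line := range lines {
            fields := strings.Fields(line)
            if len(fields) < 2 {
                continue
            }
            path := fields[1]
            if seen[path] {
                fmt.Printf("line %d: Duplicate line for path %q, ignoring.\n", i+1, path)
                continue
            }
            seen[path] = true
            fmt.Println("applying:", line)
        }
    }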
Jan 28 01:30:30.602053 systemd[1]: Started sshd@75-10.0.0.61:22-10.0.0.1:48706.service - OpenSSH per-connection server daemon (10.0.0.1:48706). Jan 28 01:30:30.849041 sshd[5599]: Accepted publickey for core from 10.0.0.1 port 48706 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:30:30.872469 sshd[5599]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:30:30.897016 systemd-logind[1562]: New session 76 of user core. Jan 28 01:30:30.914007 systemd[1]: Started session-76.scope - Session 76 of User core. Jan 28 01:30:31.538450 sshd[5599]: pam_unix(sshd:session): session closed for user core Jan 28 01:30:31.595521 systemd[1]: sshd@75-10.0.0.61:22-10.0.0.1:48706.service: Deactivated successfully. Jan 28 01:30:31.620473 systemd-logind[1562]: Session 76 logged out. Waiting for processes to exit. Jan 28 01:30:31.622427 systemd[1]: session-76.scope: Deactivated successfully. Jan 28 01:30:31.640938 systemd-logind[1562]: Removed session 76. Jan 28 01:30:36.608855 systemd[1]: Started sshd@76-10.0.0.61:22-10.0.0.1:58048.service - OpenSSH per-connection server daemon (10.0.0.1:58048). Jan 28 01:30:36.803045 sshd[5614]: Accepted publickey for core from 10.0.0.1 port 58048 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:30:36.806642 sshd[5614]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:30:36.839482 systemd-logind[1562]: New session 77 of user core. Jan 28 01:30:36.853431 systemd[1]: Started session-77.scope - Session 77 of User core. Jan 28 01:30:37.347914 sshd[5614]: pam_unix(sshd:session): session closed for user core Jan 28 01:30:37.375672 systemd[1]: sshd@76-10.0.0.61:22-10.0.0.1:58048.service: Deactivated successfully. Jan 28 01:30:37.390582 systemd[1]: session-77.scope: Deactivated successfully. Jan 28 01:30:37.392586 systemd-logind[1562]: Session 77 logged out. Waiting for processes to exit. Jan 28 01:30:37.406573 systemd-logind[1562]: Removed session 77. Jan 28 01:30:42.421910 systemd[1]: Started sshd@77-10.0.0.61:22-10.0.0.1:47252.service - OpenSSH per-connection server daemon (10.0.0.1:47252). Jan 28 01:30:42.676111 sshd[5629]: Accepted publickey for core from 10.0.0.1 port 47252 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:30:42.685626 sshd[5629]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:30:42.727194 systemd-logind[1562]: New session 78 of user core. Jan 28 01:30:42.762227 systemd[1]: Started session-78.scope - Session 78 of User core. Jan 28 01:30:43.612837 sshd[5629]: pam_unix(sshd:session): session closed for user core Jan 28 01:30:43.628482 systemd[1]: sshd@77-10.0.0.61:22-10.0.0.1:47252.service: Deactivated successfully. Jan 28 01:30:43.675539 systemd-logind[1562]: Session 78 logged out. Waiting for processes to exit. Jan 28 01:30:43.677891 systemd[1]: session-78.scope: Deactivated successfully. Jan 28 01:30:43.710810 systemd-logind[1562]: Removed session 78. Jan 28 01:30:44.329467 kubelet[2843]: E0128 01:30:44.315921 2843 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:30:48.687137 systemd[1]: Started sshd@78-10.0.0.61:22-10.0.0.1:47272.service - OpenSSH per-connection server daemon (10.0.0.1:47272). 
Jan 28 01:30:48.870569 sshd[5645]: Accepted publickey for core from 10.0.0.1 port 47272 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:30:48.888198 sshd[5645]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:30:48.929543 systemd-logind[1562]: New session 79 of user core. Jan 28 01:30:48.954969 systemd[1]: Started session-79.scope - Session 79 of User core. Jan 28 01:30:50.201363 sshd[5645]: pam_unix(sshd:session): session closed for user core Jan 28 01:30:50.279780 systemd[1]: sshd@78-10.0.0.61:22-10.0.0.1:47272.service: Deactivated successfully. Jan 28 01:30:50.314602 systemd-logind[1562]: Session 79 logged out. Waiting for processes to exit. Jan 28 01:30:50.315874 systemd[1]: session-79.scope: Deactivated successfully. Jan 28 01:30:50.322140 systemd-logind[1562]: Removed session 79. Jan 28 01:30:51.283521 kubelet[2843]: E0128 01:30:51.281339 2843 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:30:55.276208 systemd[1]: Started sshd@79-10.0.0.61:22-10.0.0.1:60962.service - OpenSSH per-connection server daemon (10.0.0.1:60962). Jan 28 01:30:55.813397 sshd[5661]: Accepted publickey for core from 10.0.0.1 port 60962 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:30:55.914065 sshd[5661]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:30:56.023702 systemd-logind[1562]: New session 80 of user core. Jan 28 01:30:56.126073 systemd[1]: Started session-80.scope - Session 80 of User core. Jan 28 01:30:57.057941 sshd[5661]: pam_unix(sshd:session): session closed for user core Jan 28 01:30:57.080842 systemd[1]: sshd@79-10.0.0.61:22-10.0.0.1:60962.service: Deactivated successfully. Jan 28 01:30:57.091693 systemd[1]: session-80.scope: Deactivated successfully. Jan 28 01:30:57.101590 systemd-logind[1562]: Session 80 logged out. Waiting for processes to exit. Jan 28 01:30:57.119005 systemd-logind[1562]: Removed session 80. Jan 28 01:31:02.091978 systemd[1]: Started sshd@80-10.0.0.61:22-10.0.0.1:32772.service - OpenSSH per-connection server daemon (10.0.0.1:32772). Jan 28 01:31:02.392870 sshd[5684]: Accepted publickey for core from 10.0.0.1 port 32772 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:31:02.426941 sshd[5684]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:31:02.493369 systemd-logind[1562]: New session 81 of user core. Jan 28 01:31:02.569903 systemd[1]: Started session-81.scope - Session 81 of User core. Jan 28 01:31:03.291846 sshd[5684]: pam_unix(sshd:session): session closed for user core Jan 28 01:31:03.319840 systemd[1]: sshd@80-10.0.0.61:22-10.0.0.1:32772.service: Deactivated successfully. Jan 28 01:31:03.338335 systemd-logind[1562]: Session 81 logged out. Waiting for processes to exit. Jan 28 01:31:03.343854 systemd[1]: session-81.scope: Deactivated successfully. Jan 28 01:31:03.365115 systemd-logind[1562]: Removed session 81. Jan 28 01:31:08.359942 systemd[1]: Started sshd@81-10.0.0.61:22-10.0.0.1:46088.service - OpenSSH per-connection server daemon (10.0.0.1:46088). 
Jan 28 01:31:08.721517 sshd[5700]: Accepted publickey for core from 10.0.0.1 port 46088 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:31:08.758489 sshd[5700]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:31:08.809212 systemd-logind[1562]: New session 82 of user core. Jan 28 01:31:08.825764 systemd[1]: Started session-82.scope - Session 82 of User core. Jan 28 01:31:09.999227 sshd[5700]: pam_unix(sshd:session): session closed for user core Jan 28 01:31:10.051463 systemd-logind[1562]: Session 82 logged out. Waiting for processes to exit. Jan 28 01:31:10.051846 systemd[1]: sshd@81-10.0.0.61:22-10.0.0.1:46088.service: Deactivated successfully. Jan 28 01:31:10.097974 systemd[1]: session-82.scope: Deactivated successfully. Jan 28 01:31:10.124626 systemd-logind[1562]: Removed session 82. Jan 28 01:31:15.197227 systemd[1]: Started sshd@82-10.0.0.61:22-10.0.0.1:60496.service - OpenSSH per-connection server daemon (10.0.0.1:60496). Jan 28 01:31:15.275679 kubelet[2843]: E0128 01:31:15.275560 2843 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:31:15.467587 sshd[5715]: Accepted publickey for core from 10.0.0.1 port 60496 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:31:15.472750 sshd[5715]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:31:15.519004 systemd-logind[1562]: New session 83 of user core. Jan 28 01:31:15.593081 systemd[1]: Started session-83.scope - Session 83 of User core. Jan 28 01:31:16.712057 sshd[5715]: pam_unix(sshd:session): session closed for user core Jan 28 01:31:16.821765 systemd[1]: sshd@82-10.0.0.61:22-10.0.0.1:60496.service: Deactivated successfully. Jan 28 01:31:16.827225 systemd-logind[1562]: Session 83 logged out. Waiting for processes to exit. Jan 28 01:31:16.828441 systemd[1]: session-83.scope: Deactivated successfully. Jan 28 01:31:16.830715 systemd-logind[1562]: Removed session 83. Jan 28 01:31:19.279626 kubelet[2843]: E0128 01:31:19.275178 2843 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:31:21.865983 systemd[1]: Started sshd@83-10.0.0.61:22-10.0.0.1:60510.service - OpenSSH per-connection server daemon (10.0.0.1:60510). Jan 28 01:31:22.315651 sshd[5730]: Accepted publickey for core from 10.0.0.1 port 60510 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:31:22.336438 sshd[5730]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:31:22.388505 systemd-logind[1562]: New session 84 of user core. Jan 28 01:31:22.420803 systemd[1]: Started session-84.scope - Session 84 of User core. Jan 28 01:31:23.275892 kubelet[2843]: E0128 01:31:23.274132 2843 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:31:23.497107 sshd[5730]: pam_unix(sshd:session): session closed for user core Jan 28 01:31:23.547101 systemd[1]: sshd@83-10.0.0.61:22-10.0.0.1:60510.service: Deactivated successfully. Jan 28 01:31:23.612088 systemd[1]: session-84.scope: Deactivated successfully. Jan 28 01:31:23.624916 systemd-logind[1562]: Session 84 logged out. Waiting for processes to exit. 
Jan 28 01:31:23.630831 systemd-logind[1562]: Removed session 84. Jan 28 01:31:26.288954 kubelet[2843]: E0128 01:31:26.281976 2843 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:31:28.607191 systemd[1]: Started sshd@84-10.0.0.61:22-10.0.0.1:56944.service - OpenSSH per-connection server daemon (10.0.0.1:56944). Jan 28 01:31:28.895310 sshd[5749]: Accepted publickey for core from 10.0.0.1 port 56944 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:31:28.898385 sshd[5749]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:31:28.974392 systemd-logind[1562]: New session 85 of user core. Jan 28 01:31:28.989900 systemd[1]: Started session-85.scope - Session 85 of User core. Jan 28 01:31:29.274480 kubelet[2843]: E0128 01:31:29.273684 2843 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:31:29.772722 sshd[5749]: pam_unix(sshd:session): session closed for user core Jan 28 01:31:29.822631 systemd[1]: sshd@84-10.0.0.61:22-10.0.0.1:56944.service: Deactivated successfully. Jan 28 01:31:29.890162 systemd[1]: session-85.scope: Deactivated successfully. Jan 28 01:31:29.898545 systemd-logind[1562]: Session 85 logged out. Waiting for processes to exit. Jan 28 01:31:29.917373 systemd-logind[1562]: Removed session 85. Jan 28 01:31:34.830139 systemd[1]: Started sshd@85-10.0.0.61:22-10.0.0.1:39538.service - OpenSSH per-connection server daemon (10.0.0.1:39538). Jan 28 01:31:34.956699 sshd[5765]: Accepted publickey for core from 10.0.0.1 port 39538 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:31:34.959722 sshd[5765]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:31:34.987048 systemd-logind[1562]: New session 86 of user core. Jan 28 01:31:35.028152 systemd[1]: Started session-86.scope - Session 86 of User core. Jan 28 01:31:35.567077 sshd[5765]: pam_unix(sshd:session): session closed for user core Jan 28 01:31:35.619489 systemd[1]: Started sshd@86-10.0.0.61:22-10.0.0.1:39558.service - OpenSSH per-connection server daemon (10.0.0.1:39558). Jan 28 01:31:35.622763 systemd[1]: sshd@85-10.0.0.61:22-10.0.0.1:39538.service: Deactivated successfully. Jan 28 01:31:35.653658 systemd[1]: session-86.scope: Deactivated successfully. Jan 28 01:31:35.668032 systemd-logind[1562]: Session 86 logged out. Waiting for processes to exit. Jan 28 01:31:35.684049 systemd-logind[1562]: Removed session 86. Jan 28 01:31:35.836931 sshd[5778]: Accepted publickey for core from 10.0.0.1 port 39558 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:31:35.834004 sshd[5778]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:31:35.889669 systemd-logind[1562]: New session 87 of user core. Jan 28 01:31:35.903959 systemd[1]: Started session-87.scope - Session 87 of User core. 
Jan 28 01:31:37.280862 kubelet[2843]: E0128 01:31:37.280340 2843 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:31:39.884677 containerd[1588]: time="2026-01-28T01:31:39.883693179Z" level=info msg="StopContainer for \"1a30f68ac52751da8d98f2df02b25bd0cac232d8063b7951845b5334effa3247\" with timeout 30 (s)"
Jan 28 01:31:39.900750 containerd[1588]: time="2026-01-28T01:31:39.887681903Z" level=info msg="Stop container \"1a30f68ac52751da8d98f2df02b25bd0cac232d8063b7951845b5334effa3247\" with signal terminated"
Jan 28 01:31:40.216874 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1a30f68ac52751da8d98f2df02b25bd0cac232d8063b7951845b5334effa3247-rootfs.mount: Deactivated successfully.
Jan 28 01:31:40.222308 containerd[1588]: time="2026-01-28T01:31:40.222066709Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 28 01:31:40.257328 containerd[1588]: time="2026-01-28T01:31:40.257207483Z" level=info msg="StopContainer for \"9e73c7f42ea67b9d52d135f7f7fc6774bcbc48e5da365013487d60894f00c467\" with timeout 2 (s)"
Jan 28 01:31:40.258533 containerd[1588]: time="2026-01-28T01:31:40.258009264Z" level=info msg="Stop container \"9e73c7f42ea67b9d52d135f7f7fc6774bcbc48e5da365013487d60894f00c467\" with signal terminated"
Jan 28 01:31:40.294374 systemd-networkd[1254]: lxc_health: Link DOWN
Jan 28 01:31:40.296189 systemd-networkd[1254]: lxc_health: Lost carrier
Jan 28 01:31:40.374203 containerd[1588]: time="2026-01-28T01:31:40.368008794Z" level=info msg="shim disconnected" id=1a30f68ac52751da8d98f2df02b25bd0cac232d8063b7951845b5334effa3247 namespace=k8s.io
Jan 28 01:31:40.374203 containerd[1588]: time="2026-01-28T01:31:40.368140520Z" level=warning msg="cleaning up after shim disconnected" id=1a30f68ac52751da8d98f2df02b25bd0cac232d8063b7951845b5334effa3247 namespace=k8s.io
Jan 28 01:31:40.374203 containerd[1588]: time="2026-01-28T01:31:40.368163984Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 28 01:31:41.230049 containerd[1588]: time="2026-01-28T01:31:41.229517559Z" level=info msg="StopContainer for \"1a30f68ac52751da8d98f2df02b25bd0cac232d8063b7951845b5334effa3247\" returns successfully"
Jan 28 01:31:41.546050 containerd[1588]: time="2026-01-28T01:31:41.537156454Z" level=info msg="StopPodSandbox for \"367181bb59a0522beeec0f09023354babacbf5e4a56d04f770bbf0a932b5b08b\""
Jan 28 01:31:41.546050 containerd[1588]: time="2026-01-28T01:31:41.537305103Z" level=info msg="Container to stop \"1a30f68ac52751da8d98f2df02b25bd0cac232d8063b7951845b5334effa3247\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 28 01:31:41.543093 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-367181bb59a0522beeec0f09023354babacbf5e4a56d04f770bbf0a932b5b08b-shm.mount: Deactivated successfully.
Jan 28 01:31:41.619945 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9e73c7f42ea67b9d52d135f7f7fc6774bcbc48e5da365013487d60894f00c467-rootfs.mount: Deactivated successfully.
Jan 28 01:31:41.687131 containerd[1588]: time="2026-01-28T01:31:41.686713564Z" level=info msg="shim disconnected" id=9e73c7f42ea67b9d52d135f7f7fc6774bcbc48e5da365013487d60894f00c467 namespace=k8s.io
Jan 28 01:31:41.687131 containerd[1588]: time="2026-01-28T01:31:41.686786070Z" level=warning msg="cleaning up after shim disconnected" id=9e73c7f42ea67b9d52d135f7f7fc6774bcbc48e5da365013487d60894f00c467 namespace=k8s.io
Jan 28 01:31:41.687131 containerd[1588]: time="2026-01-28T01:31:41.686798143Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 28 01:31:41.891299 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-367181bb59a0522beeec0f09023354babacbf5e4a56d04f770bbf0a932b5b08b-rootfs.mount: Deactivated successfully.
Jan 28 01:31:41.914056 sshd[5778]: pam_unix(sshd:session): session closed for user core
Jan 28 01:31:42.151948 containerd[1588]: time="2026-01-28T01:31:42.139883757Z" level=info msg="shim disconnected" id=367181bb59a0522beeec0f09023354babacbf5e4a56d04f770bbf0a932b5b08b namespace=k8s.io
Jan 28 01:31:42.151948 containerd[1588]: time="2026-01-28T01:31:42.140045219Z" level=warning msg="cleaning up after shim disconnected" id=367181bb59a0522beeec0f09023354babacbf5e4a56d04f770bbf0a932b5b08b namespace=k8s.io
Jan 28 01:31:42.151948 containerd[1588]: time="2026-01-28T01:31:42.140063153Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 28 01:31:42.157166 containerd[1588]: time="2026-01-28T01:31:42.152878761Z" level=info msg="StopContainer for \"9e73c7f42ea67b9d52d135f7f7fc6774bcbc48e5da365013487d60894f00c467\" returns successfully"
Jan 28 01:31:42.157166 containerd[1588]: time="2026-01-28T01:31:42.154672110Z" level=info msg="StopPodSandbox for \"2d4e97a8e91088325124a092aeb80171101645f0a2b348dbcf08be46e15fbbbe\""
Jan 28 01:31:42.157166 containerd[1588]: time="2026-01-28T01:31:42.154718627Z" level=info msg="Container to stop \"e301a7571f70330035699f21fa78fa7970ee1262383eac9e449a453023972a57\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 28 01:31:42.157166 containerd[1588]: time="2026-01-28T01:31:42.154741510Z" level=info msg="Container to stop \"a6e16ce21099b6cae2b3146892808a5448ed92a7abc76dac5a82712ab75f6830\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 28 01:31:42.157166 containerd[1588]: time="2026-01-28T01:31:42.154755396Z" level=info msg="Container to stop \"9e73c7f42ea67b9d52d135f7f7fc6774bcbc48e5da365013487d60894f00c467\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 28 01:31:42.157166 containerd[1588]: time="2026-01-28T01:31:42.154769291Z" level=info msg="Container to stop \"e5a128636c526d1b13ff19aa63cbcdb14acfddc349da3975c4d68382640e8149\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 28 01:31:42.157166 containerd[1588]: time="2026-01-28T01:31:42.154781785Z" level=info msg="Container to stop \"218fe2a45a1e30d524a0e06b1f1c4c355b5e05cce85a7d392c6908e12777d34a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 28 01:31:42.158898 systemd[1]: Started sshd@87-10.0.0.61:22-10.0.0.1:39608.service - OpenSSH per-connection server daemon (10.0.0.1:39608).
Jan 28 01:31:42.165084 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2d4e97a8e91088325124a092aeb80171101645f0a2b348dbcf08be46e15fbbbe-shm.mount: Deactivated successfully.
Jan 28 01:31:42.167387 systemd[1]: sshd@86-10.0.0.61:22-10.0.0.1:39558.service: Deactivated successfully.
Jan 28 01:31:42.170884 systemd[1]: session-87.scope: Deactivated successfully.
Jan 28 01:31:42.176924 systemd-logind[1562]: Session 87 logged out. Waiting for processes to exit.
Jan 28 01:31:42.181800 systemd-logind[1562]: Removed session 87.
Jan 28 01:31:42.208090 containerd[1588]: time="2026-01-28T01:31:42.207744161Z" level=info msg="TearDown network for sandbox \"367181bb59a0522beeec0f09023354babacbf5e4a56d04f770bbf0a932b5b08b\" successfully"
Jan 28 01:31:42.208090 containerd[1588]: time="2026-01-28T01:31:42.207801478Z" level=info msg="StopPodSandbox for \"367181bb59a0522beeec0f09023354babacbf5e4a56d04f770bbf0a932b5b08b\" returns successfully"
Jan 28 01:31:42.255097 sshd[5895]: Accepted publickey for core from 10.0.0.1 port 39608 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc
Jan 28 01:31:42.256631 sshd[5895]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 01:31:42.290733 systemd-logind[1562]: New session 88 of user core.
Jan 28 01:31:42.305955 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2d4e97a8e91088325124a092aeb80171101645f0a2b348dbcf08be46e15fbbbe-rootfs.mount: Deactivated successfully.
Jan 28 01:31:42.329577 systemd[1]: Started session-88.scope - Session 88 of User core.
Jan 28 01:31:42.341390 kubelet[2843]: I0128 01:31:42.341354 2843 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e781658d-a892-42dd-85d0-ba5cd2e8e187-cilium-config-path\") pod \"e781658d-a892-42dd-85d0-ba5cd2e8e187\" (UID: \"e781658d-a892-42dd-85d0-ba5cd2e8e187\") "
Jan 28 01:31:42.342729 kubelet[2843]: I0128 01:31:42.342145 2843 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mxlmf\" (UniqueName: \"kubernetes.io/projected/e781658d-a892-42dd-85d0-ba5cd2e8e187-kube-api-access-mxlmf\") pod \"e781658d-a892-42dd-85d0-ba5cd2e8e187\" (UID: \"e781658d-a892-42dd-85d0-ba5cd2e8e187\") "
Jan 28 01:31:42.351692 kubelet[2843]: I0128 01:31:42.351441 2843 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e781658d-a892-42dd-85d0-ba5cd2e8e187-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e781658d-a892-42dd-85d0-ba5cd2e8e187" (UID: "e781658d-a892-42dd-85d0-ba5cd2e8e187"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 28 01:31:42.440619 kubelet[2843]: I0128 01:31:42.440508 2843 scope.go:117] "RemoveContainer" containerID="1a30f68ac52751da8d98f2df02b25bd0cac232d8063b7951845b5334effa3247"
Jan 28 01:31:42.443048 kubelet[2843]: I0128 01:31:42.441039 2843 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e781658d-a892-42dd-85d0-ba5cd2e8e187-kube-api-access-mxlmf" (OuterVolumeSpecName: "kube-api-access-mxlmf") pod "e781658d-a892-42dd-85d0-ba5cd2e8e187" (UID: "e781658d-a892-42dd-85d0-ba5cd2e8e187"). InnerVolumeSpecName "kube-api-access-mxlmf". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 28 01:31:42.443048 kubelet[2843]: I0128 01:31:42.443024 2843 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mxlmf\" (UniqueName: \"kubernetes.io/projected/e781658d-a892-42dd-85d0-ba5cd2e8e187-kube-api-access-mxlmf\") on node \"localhost\" DevicePath \"\""
Jan 28 01:31:42.443048 kubelet[2843]: I0128 01:31:42.443050 2843 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e781658d-a892-42dd-85d0-ba5cd2e8e187-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Jan 28 01:31:42.449943 containerd[1588]: time="2026-01-28T01:31:42.449708369Z" level=info msg="RemoveContainer for \"1a30f68ac52751da8d98f2df02b25bd0cac232d8063b7951845b5334effa3247\""
Jan 28 01:31:42.470025 containerd[1588]: time="2026-01-28T01:31:42.469444345Z" level=info msg="shim disconnected" id=2d4e97a8e91088325124a092aeb80171101645f0a2b348dbcf08be46e15fbbbe namespace=k8s.io
Jan 28 01:31:42.470025 containerd[1588]: time="2026-01-28T01:31:42.469612169Z" level=warning msg="cleaning up after shim disconnected" id=2d4e97a8e91088325124a092aeb80171101645f0a2b348dbcf08be46e15fbbbe namespace=k8s.io
Jan 28 01:31:42.470025 containerd[1588]: time="2026-01-28T01:31:42.469779963Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 28 01:31:42.502226 containerd[1588]: time="2026-01-28T01:31:42.501988362Z" level=info msg="TearDown network for sandbox \"2d4e97a8e91088325124a092aeb80171101645f0a2b348dbcf08be46e15fbbbe\" successfully"
Jan 28 01:31:42.502226 containerd[1588]: time="2026-01-28T01:31:42.502038527Z" level=info msg="StopPodSandbox for \"2d4e97a8e91088325124a092aeb80171101645f0a2b348dbcf08be46e15fbbbe\" returns successfully"
Jan 28 01:31:42.522795 containerd[1588]: time="2026-01-28T01:31:42.522657492Z" level=info msg="RemoveContainer for \"1a30f68ac52751da8d98f2df02b25bd0cac232d8063b7951845b5334effa3247\" returns successfully"
Jan 28 01:31:42.545129 systemd[1]: var-lib-kubelet-pods-e781658d\x2da892\x2d42dd\x2d85d0\x2dba5cd2e8e187-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmxlmf.mount: Deactivated successfully.
Jan 28 01:31:42.554943 kubelet[2843]: I0128 01:31:42.551645 2843 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/31ca98c6-7a43-4faf-b3a4-959c7403a471-hostproc\") pod \"31ca98c6-7a43-4faf-b3a4-959c7403a471\" (UID: \"31ca98c6-7a43-4faf-b3a4-959c7403a471\") "
Jan 28 01:31:42.554943 kubelet[2843]: I0128 01:31:42.551742 2843 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wsh7h\" (UniqueName: \"kubernetes.io/projected/31ca98c6-7a43-4faf-b3a4-959c7403a471-kube-api-access-wsh7h\") pod \"31ca98c6-7a43-4faf-b3a4-959c7403a471\" (UID: \"31ca98c6-7a43-4faf-b3a4-959c7403a471\") "
Jan 28 01:31:42.554943 kubelet[2843]: I0128 01:31:42.551777 2843 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/31ca98c6-7a43-4faf-b3a4-959c7403a471-host-proc-sys-kernel\") pod \"31ca98c6-7a43-4faf-b3a4-959c7403a471\" (UID: \"31ca98c6-7a43-4faf-b3a4-959c7403a471\") "
Jan 28 01:31:42.554943 kubelet[2843]: I0128 01:31:42.551801 2843 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/31ca98c6-7a43-4faf-b3a4-959c7403a471-clustermesh-secrets\") pod \"31ca98c6-7a43-4faf-b3a4-959c7403a471\" (UID: \"31ca98c6-7a43-4faf-b3a4-959c7403a471\") "
Jan 28 01:31:42.554943 kubelet[2843]: I0128 01:31:42.551861 2843 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/31ca98c6-7a43-4faf-b3a4-959c7403a471-host-proc-sys-net\") pod \"31ca98c6-7a43-4faf-b3a4-959c7403a471\" (UID: \"31ca98c6-7a43-4faf-b3a4-959c7403a471\") "
Jan 28 01:31:42.554943 kubelet[2843]: I0128 01:31:42.551883 2843 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/31ca98c6-7a43-4faf-b3a4-959c7403a471-etc-cni-netd\") pod \"31ca98c6-7a43-4faf-b3a4-959c7403a471\" (UID: \"31ca98c6-7a43-4faf-b3a4-959c7403a471\") "
Jan 28 01:31:42.555207 kubelet[2843]: I0128 01:31:42.551906 2843 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/31ca98c6-7a43-4faf-b3a4-959c7403a471-cilium-cgroup\") pod \"31ca98c6-7a43-4faf-b3a4-959c7403a471\" (UID: \"31ca98c6-7a43-4faf-b3a4-959c7403a471\") "
Jan 28 01:31:42.555207 kubelet[2843]: I0128 01:31:42.551929 2843 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/31ca98c6-7a43-4faf-b3a4-959c7403a471-cni-path\") pod \"31ca98c6-7a43-4faf-b3a4-959c7403a471\" (UID: \"31ca98c6-7a43-4faf-b3a4-959c7403a471\") "
Jan 28 01:31:42.555207 kubelet[2843]: I0128 01:31:42.551949 2843 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/31ca98c6-7a43-4faf-b3a4-959c7403a471-bpf-maps\") pod \"31ca98c6-7a43-4faf-b3a4-959c7403a471\" (UID: \"31ca98c6-7a43-4faf-b3a4-959c7403a471\") "
Jan 28 01:31:42.555207 kubelet[2843]: I0128 01:31:42.551974 2843 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/31ca98c6-7a43-4faf-b3a4-959c7403a471-cilium-config-path\") pod \"31ca98c6-7a43-4faf-b3a4-959c7403a471\" (UID: \"31ca98c6-7a43-4faf-b3a4-959c7403a471\") "
Jan 28 01:31:42.555207 kubelet[2843]: I0128 01:31:42.551997 2843 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/31ca98c6-7a43-4faf-b3a4-959c7403a471-hubble-tls\") pod \"31ca98c6-7a43-4faf-b3a4-959c7403a471\" (UID: \"31ca98c6-7a43-4faf-b3a4-959c7403a471\") "
Jan 28 01:31:42.555207 kubelet[2843]: I0128 01:31:42.552023 2843 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/31ca98c6-7a43-4faf-b3a4-959c7403a471-xtables-lock\") pod \"31ca98c6-7a43-4faf-b3a4-959c7403a471\" (UID: \"31ca98c6-7a43-4faf-b3a4-959c7403a471\") "
Jan 28 01:31:42.555515 kubelet[2843]: I0128 01:31:42.552041 2843 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/31ca98c6-7a43-4faf-b3a4-959c7403a471-lib-modules\") pod \"31ca98c6-7a43-4faf-b3a4-959c7403a471\" (UID: \"31ca98c6-7a43-4faf-b3a4-959c7403a471\") "
Jan 28 01:31:42.555515 kubelet[2843]: I0128 01:31:42.552059 2843 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/31ca98c6-7a43-4faf-b3a4-959c7403a471-cilium-run\") pod \"31ca98c6-7a43-4faf-b3a4-959c7403a471\" (UID: \"31ca98c6-7a43-4faf-b3a4-959c7403a471\") "
Jan 28 01:31:42.555515 kubelet[2843]: I0128 01:31:42.552146 2843 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31ca98c6-7a43-4faf-b3a4-959c7403a471-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "31ca98c6-7a43-4faf-b3a4-959c7403a471" (UID: "31ca98c6-7a43-4faf-b3a4-959c7403a471"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 28 01:31:42.555515 kubelet[2843]: I0128 01:31:42.552189 2843 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31ca98c6-7a43-4faf-b3a4-959c7403a471-hostproc" (OuterVolumeSpecName: "hostproc") pod "31ca98c6-7a43-4faf-b3a4-959c7403a471" (UID: "31ca98c6-7a43-4faf-b3a4-959c7403a471"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 28 01:31:42.555515 kubelet[2843]: I0128 01:31:42.554316 2843 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31ca98c6-7a43-4faf-b3a4-959c7403a471-cni-path" (OuterVolumeSpecName: "cni-path") pod "31ca98c6-7a43-4faf-b3a4-959c7403a471" (UID: "31ca98c6-7a43-4faf-b3a4-959c7403a471"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 28 01:31:42.555671 kubelet[2843]: I0128 01:31:42.554355 2843 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31ca98c6-7a43-4faf-b3a4-959c7403a471-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "31ca98c6-7a43-4faf-b3a4-959c7403a471" (UID: "31ca98c6-7a43-4faf-b3a4-959c7403a471"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 28 01:31:42.570294 kubelet[2843]: I0128 01:31:42.562112 2843 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31ca98c6-7a43-4faf-b3a4-959c7403a471-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "31ca98c6-7a43-4faf-b3a4-959c7403a471" (UID: "31ca98c6-7a43-4faf-b3a4-959c7403a471"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 28 01:31:42.571694 kubelet[2843]: I0128 01:31:42.571653 2843 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31ca98c6-7a43-4faf-b3a4-959c7403a471-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "31ca98c6-7a43-4faf-b3a4-959c7403a471" (UID: "31ca98c6-7a43-4faf-b3a4-959c7403a471"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 28 01:31:42.577701 kubelet[2843]: I0128 01:31:42.576190 2843 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31ca98c6-7a43-4faf-b3a4-959c7403a471-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "31ca98c6-7a43-4faf-b3a4-959c7403a471" (UID: "31ca98c6-7a43-4faf-b3a4-959c7403a471"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 28 01:31:42.578158 kubelet[2843]: I0128 01:31:42.576266 2843 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31ca98c6-7a43-4faf-b3a4-959c7403a471-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "31ca98c6-7a43-4faf-b3a4-959c7403a471" (UID: "31ca98c6-7a43-4faf-b3a4-959c7403a471"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 28 01:31:42.578158 kubelet[2843]: I0128 01:31:42.576283 2843 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31ca98c6-7a43-4faf-b3a4-959c7403a471-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "31ca98c6-7a43-4faf-b3a4-959c7403a471" (UID: "31ca98c6-7a43-4faf-b3a4-959c7403a471"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 28 01:31:42.578613 kubelet[2843]: I0128 01:31:42.577146 2843 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31ca98c6-7a43-4faf-b3a4-959c7403a471-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "31ca98c6-7a43-4faf-b3a4-959c7403a471" (UID: "31ca98c6-7a43-4faf-b3a4-959c7403a471"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 28 01:31:42.580520 kubelet[2843]: I0128 01:31:42.580465 2843 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31ca98c6-7a43-4faf-b3a4-959c7403a471-kube-api-access-wsh7h" (OuterVolumeSpecName: "kube-api-access-wsh7h") pod "31ca98c6-7a43-4faf-b3a4-959c7403a471" (UID: "31ca98c6-7a43-4faf-b3a4-959c7403a471"). InnerVolumeSpecName "kube-api-access-wsh7h". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 28 01:31:42.595094 kubelet[2843]: I0128 01:31:42.593115 2843 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31ca98c6-7a43-4faf-b3a4-959c7403a471-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "31ca98c6-7a43-4faf-b3a4-959c7403a471" (UID: "31ca98c6-7a43-4faf-b3a4-959c7403a471"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 28 01:31:42.609035 systemd[1]: var-lib-kubelet-pods-31ca98c6\x2d7a43\x2d4faf\x2db3a4\x2d959c7403a471-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwsh7h.mount: Deactivated successfully.
Jan 28 01:31:42.609544 systemd[1]: var-lib-kubelet-pods-31ca98c6\x2d7a43\x2d4faf\x2db3a4\x2d959c7403a471-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Jan 28 01:31:42.609995 systemd[1]: var-lib-kubelet-pods-31ca98c6\x2d7a43\x2d4faf\x2db3a4\x2d959c7403a471-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Jan 28 01:31:42.625517 kubelet[2843]: I0128 01:31:42.625463 2843 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31ca98c6-7a43-4faf-b3a4-959c7403a471-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "31ca98c6-7a43-4faf-b3a4-959c7403a471" (UID: "31ca98c6-7a43-4faf-b3a4-959c7403a471"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 28 01:31:42.626534 kubelet[2843]: I0128 01:31:42.626099 2843 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31ca98c6-7a43-4faf-b3a4-959c7403a471-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "31ca98c6-7a43-4faf-b3a4-959c7403a471" (UID: "31ca98c6-7a43-4faf-b3a4-959c7403a471"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 28 01:31:42.654989 kubelet[2843]: I0128 01:31:42.653536 2843 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/31ca98c6-7a43-4faf-b3a4-959c7403a471-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Jan 28 01:31:42.654989 kubelet[2843]: I0128 01:31:42.653573 2843 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/31ca98c6-7a43-4faf-b3a4-959c7403a471-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Jan 28 01:31:42.654989 kubelet[2843]: I0128 01:31:42.653591 2843 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/31ca98c6-7a43-4faf-b3a4-959c7403a471-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Jan 28 01:31:42.654989 kubelet[2843]: I0128 01:31:42.653602 2843 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/31ca98c6-7a43-4faf-b3a4-959c7403a471-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Jan 28 01:31:42.654989 kubelet[2843]: I0128 01:31:42.653614 2843 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/31ca98c6-7a43-4faf-b3a4-959c7403a471-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Jan 28 01:31:42.654989 kubelet[2843]: I0128 01:31:42.653626 2843 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/31ca98c6-7a43-4faf-b3a4-959c7403a471-cni-path\") on node \"localhost\" DevicePath \"\""
Jan 28 01:31:42.654989 kubelet[2843]: I0128 01:31:42.653638 2843 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/31ca98c6-7a43-4faf-b3a4-959c7403a471-bpf-maps\") on node \"localhost\" DevicePath \"\""
Jan 28 01:31:42.654989 kubelet[2843]: I0128 01:31:42.653649 2843 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/31ca98c6-7a43-4faf-b3a4-959c7403a471-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Jan 28 01:31:42.655580 kubelet[2843]: I0128 01:31:42.653700 2843 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/31ca98c6-7a43-4faf-b3a4-959c7403a471-xtables-lock\") on node \"localhost\" DevicePath \"\""
Jan 28 01:31:42.655580 kubelet[2843]: I0128 01:31:42.653713 2843 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/31ca98c6-7a43-4faf-b3a4-959c7403a471-lib-modules\") on node \"localhost\" DevicePath \"\""
Jan 28 01:31:42.655580 kubelet[2843]: I0128 01:31:42.653724 2843 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/31ca98c6-7a43-4faf-b3a4-959c7403a471-cilium-run\") on node \"localhost\" DevicePath \"\""
Jan 28 01:31:42.655580 kubelet[2843]: I0128 01:31:42.653735 2843 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/31ca98c6-7a43-4faf-b3a4-959c7403a471-hubble-tls\") on node \"localhost\" DevicePath \"\""
Jan 28 01:31:42.655580 kubelet[2843]: I0128 01:31:42.653748 2843 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/31ca98c6-7a43-4faf-b3a4-959c7403a471-hostproc\") on node \"localhost\" DevicePath \"\""
Jan 28 01:31:42.655580 kubelet[2843]: I0128 01:31:42.653759 2843 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wsh7h\" (UniqueName: \"kubernetes.io/projected/31ca98c6-7a43-4faf-b3a4-959c7403a471-kube-api-access-wsh7h\") on node \"localhost\" DevicePath \"\""
Jan 28 01:31:43.488924 kubelet[2843]: I0128 01:31:43.488812 2843 scope.go:117] "RemoveContainer" containerID="9e73c7f42ea67b9d52d135f7f7fc6774bcbc48e5da365013487d60894f00c467"
Jan 28 01:31:43.496884 containerd[1588]: time="2026-01-28T01:31:43.496171829Z" level=info msg="RemoveContainer for \"9e73c7f42ea67b9d52d135f7f7fc6774bcbc48e5da365013487d60894f00c467\""
Jan 28 01:31:43.513570 containerd[1588]: time="2026-01-28T01:31:43.512792750Z" level=info msg="RemoveContainer for \"9e73c7f42ea67b9d52d135f7f7fc6774bcbc48e5da365013487d60894f00c467\" returns successfully"
Jan 28 01:31:43.513691 kubelet[2843]: I0128 01:31:43.513441 2843 scope.go:117] "RemoveContainer" containerID="218fe2a45a1e30d524a0e06b1f1c4c355b5e05cce85a7d392c6908e12777d34a"
Jan 28 01:31:43.515730 containerd[1588]: time="2026-01-28T01:31:43.515645697Z" level=info msg="RemoveContainer for \"218fe2a45a1e30d524a0e06b1f1c4c355b5e05cce85a7d392c6908e12777d34a\""
Jan 28 01:31:43.521295 containerd[1588]: time="2026-01-28T01:31:43.521103379Z" level=info msg="RemoveContainer for \"218fe2a45a1e30d524a0e06b1f1c4c355b5e05cce85a7d392c6908e12777d34a\" returns successfully"
Jan 28 01:31:43.521487 kubelet[2843]: I0128 01:31:43.521442 2843 scope.go:117] "RemoveContainer" containerID="e5a128636c526d1b13ff19aa63cbcdb14acfddc349da3975c4d68382640e8149"
Jan 28 01:31:43.522917 containerd[1588]: time="2026-01-28T01:31:43.522818459Z" level=info msg="RemoveContainer for \"e5a128636c526d1b13ff19aa63cbcdb14acfddc349da3975c4d68382640e8149\""
Jan 28 01:31:43.530050 containerd[1588]: time="2026-01-28T01:31:43.529528640Z" level=info msg="RemoveContainer for \"e5a128636c526d1b13ff19aa63cbcdb14acfddc349da3975c4d68382640e8149\" returns successfully"
Jan 28 01:31:43.530160 kubelet[2843]: I0128 01:31:43.530083 2843 scope.go:117] "RemoveContainer" containerID="a6e16ce21099b6cae2b3146892808a5448ed92a7abc76dac5a82712ab75f6830"
Jan 28 01:31:43.535459 containerd[1588]: time="2026-01-28T01:31:43.535334081Z" level=info msg="RemoveContainer for \"a6e16ce21099b6cae2b3146892808a5448ed92a7abc76dac5a82712ab75f6830\""
Jan 28 01:31:43.574052 containerd[1588]: time="2026-01-28T01:31:43.574000088Z" level=info msg="RemoveContainer for \"a6e16ce21099b6cae2b3146892808a5448ed92a7abc76dac5a82712ab75f6830\" returns successfully"
Jan 28 01:31:43.580636 kubelet[2843]: I0128 01:31:43.574903 2843 scope.go:117] "RemoveContainer" containerID="e301a7571f70330035699f21fa78fa7970ee1262383eac9e449a453023972a57"
Jan 28 01:31:43.585594 containerd[1588]: time="2026-01-28T01:31:43.584213199Z" level=info msg="RemoveContainer for \"e301a7571f70330035699f21fa78fa7970ee1262383eac9e449a453023972a57\""
Jan 28 01:31:43.613127 containerd[1588]: time="2026-01-28T01:31:43.612512779Z" level=info msg="RemoveContainer for \"e301a7571f70330035699f21fa78fa7970ee1262383eac9e449a453023972a57\" returns successfully"
Jan 28 01:31:43.708192 kubelet[2843]: E0128 01:31:43.707960 2843 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 28 01:31:44.150567 sshd[5895]: pam_unix(sshd:session): session closed for user core
Jan 28 01:31:44.174721 systemd[1]: Started sshd@88-10.0.0.61:22-10.0.0.1:55832.service - OpenSSH per-connection server daemon (10.0.0.1:55832).
Jan 28 01:31:44.176503 systemd[1]: sshd@87-10.0.0.61:22-10.0.0.1:39608.service: Deactivated successfully.
Jan 28 01:31:44.184520 systemd[1]: session-88.scope: Deactivated successfully.
Jan 28 01:31:44.222591 systemd-logind[1562]: Session 88 logged out. Waiting for processes to exit.
Jan 28 01:31:44.226736 systemd-logind[1562]: Removed session 88.
Jan 28 01:31:44.273203 kubelet[2843]: I0128 01:31:44.273160 2843 memory_manager.go:355] "RemoveStaleState removing state" podUID="31ca98c6-7a43-4faf-b3a4-959c7403a471" containerName="cilium-agent"
Jan 28 01:31:44.277677 sshd[5959]: Accepted publickey for core from 10.0.0.1 port 55832 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc
Jan 28 01:31:44.305476 kubelet[2843]: I0128 01:31:44.277056 2843 memory_manager.go:355] "RemoveStaleState removing state" podUID="e781658d-a892-42dd-85d0-ba5cd2e8e187" containerName="cilium-operator"
Jan 28 01:31:44.291596 sshd[5959]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 01:31:44.416179 systemd-logind[1562]: New session 89 of user core.
Jan 28 01:31:44.435604 systemd[1]: Started session-89.scope - Session 89 of User core.
Jan 28 01:31:44.461355 kubelet[2843]: I0128 01:31:44.461094 2843 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31ca98c6-7a43-4faf-b3a4-959c7403a471" path="/var/lib/kubelet/pods/31ca98c6-7a43-4faf-b3a4-959c7403a471/volumes"
Jan 28 01:31:44.468801 kubelet[2843]: I0128 01:31:44.466963 2843 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e781658d-a892-42dd-85d0-ba5cd2e8e187" path="/var/lib/kubelet/pods/e781658d-a892-42dd-85d0-ba5cd2e8e187/volumes"
Jan 28 01:31:44.496322 kubelet[2843]: I0128 01:31:44.493312 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/71ee0c4a-c230-4d55-b376-50d419d89909-bpf-maps\") pod \"cilium-zcxmh\" (UID: \"71ee0c4a-c230-4d55-b376-50d419d89909\") " pod="kube-system/cilium-zcxmh"
Jan 28 01:31:44.496322 kubelet[2843]: I0128 01:31:44.493383 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/71ee0c4a-c230-4d55-b376-50d419d89909-hostproc\") pod \"cilium-zcxmh\" (UID: \"71ee0c4a-c230-4d55-b376-50d419d89909\") " pod="kube-system/cilium-zcxmh"
Jan 28 01:31:44.496322 kubelet[2843]: I0128 01:31:44.493409 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/71ee0c4a-c230-4d55-b376-50d419d89909-host-proc-sys-net\") pod \"cilium-zcxmh\" (UID: \"71ee0c4a-c230-4d55-b376-50d419d89909\") " pod="kube-system/cilium-zcxmh"
Jan 28 01:31:44.496322 kubelet[2843]: I0128 01:31:44.493437 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/71ee0c4a-c230-4d55-b376-50d419d89909-cni-path\") pod \"cilium-zcxmh\" (UID: \"71ee0c4a-c230-4d55-b376-50d419d89909\") " pod="kube-system/cilium-zcxmh"
Jan 28 01:31:44.496322 kubelet[2843]: I0128 01:31:44.493460 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/71ee0c4a-c230-4d55-b376-50d419d89909-lib-modules\") pod \"cilium-zcxmh\" (UID: \"71ee0c4a-c230-4d55-b376-50d419d89909\") " pod="kube-system/cilium-zcxmh"
Jan 28 01:31:44.496322 kubelet[2843]: I0128 01:31:44.493481 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/71ee0c4a-c230-4d55-b376-50d419d89909-hubble-tls\") pod \"cilium-zcxmh\" (UID: \"71ee0c4a-c230-4d55-b376-50d419d89909\") " pod="kube-system/cilium-zcxmh"
Jan 28 01:31:44.500440 kubelet[2843]: I0128 01:31:44.493503 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/71ee0c4a-c230-4d55-b376-50d419d89909-cilium-run\") pod \"cilium-zcxmh\" (UID: \"71ee0c4a-c230-4d55-b376-50d419d89909\") " pod="kube-system/cilium-zcxmh"
Jan 28 01:31:44.500440 kubelet[2843]: I0128 01:31:44.493528 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/71ee0c4a-c230-4d55-b376-50d419d89909-cilium-config-path\") pod \"cilium-zcxmh\" (UID: \"71ee0c4a-c230-4d55-b376-50d419d89909\") " pod="kube-system/cilium-zcxmh"
Jan 28 01:31:44.500440 kubelet[2843]: I0128 01:31:44.493554 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/71ee0c4a-c230-4d55-b376-50d419d89909-host-proc-sys-kernel\") pod \"cilium-zcxmh\" (UID: \"71ee0c4a-c230-4d55-b376-50d419d89909\") " pod="kube-system/cilium-zcxmh"
Jan 28 01:31:44.500440 kubelet[2843]: I0128 01:31:44.493575 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/71ee0c4a-c230-4d55-b376-50d419d89909-cilium-ipsec-secrets\") pod \"cilium-zcxmh\" (UID: \"71ee0c4a-c230-4d55-b376-50d419d89909\") " pod="kube-system/cilium-zcxmh"
Jan 28 01:31:44.500440 kubelet[2843]: I0128 01:31:44.493688 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/71ee0c4a-c230-4d55-b376-50d419d89909-cilium-cgroup\") pod \"cilium-zcxmh\" (UID: \"71ee0c4a-c230-4d55-b376-50d419d89909\") " pod="kube-system/cilium-zcxmh"
Jan 28 01:31:44.500624 kubelet[2843]: I0128 01:31:44.493732 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/71ee0c4a-c230-4d55-b376-50d419d89909-clustermesh-secrets\") pod \"cilium-zcxmh\" (UID: \"71ee0c4a-c230-4d55-b376-50d419d89909\") " pod="kube-system/cilium-zcxmh"
Jan 28 01:31:44.500624 kubelet[2843]: I0128 01:31:44.493757 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fg9l7\" (UniqueName: \"kubernetes.io/projected/71ee0c4a-c230-4d55-b376-50d419d89909-kube-api-access-fg9l7\") pod \"cilium-zcxmh\" (UID: \"71ee0c4a-c230-4d55-b376-50d419d89909\") " pod="kube-system/cilium-zcxmh"
Jan 28 01:31:44.500624 kubelet[2843]: I0128 01:31:44.493778 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/71ee0c4a-c230-4d55-b376-50d419d89909-etc-cni-netd\") pod \"cilium-zcxmh\" (UID: \"71ee0c4a-c230-4d55-b376-50d419d89909\") " pod="kube-system/cilium-zcxmh"
Jan 28 01:31:44.500624 kubelet[2843]: I0128 01:31:44.493797 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/71ee0c4a-c230-4d55-b376-50d419d89909-xtables-lock\") pod \"cilium-zcxmh\" (UID: \"71ee0c4a-c230-4d55-b376-50d419d89909\") " pod="kube-system/cilium-zcxmh"
Jan 28 01:31:44.580804 sshd[5959]: pam_unix(sshd:session): session closed for user core
Jan 28 01:31:44.640676 systemd[1]: Started sshd@89-10.0.0.61:22-10.0.0.1:55842.service - OpenSSH per-connection server daemon (10.0.0.1:55842).
Jan 28 01:31:44.678422 systemd[1]: sshd@88-10.0.0.61:22-10.0.0.1:55832.service: Deactivated successfully.
Jan 28 01:31:44.732456 systemd[1]: session-89.scope: Deactivated successfully.
Jan 28 01:31:44.739565 systemd-logind[1562]: Session 89 logged out. Waiting for processes to exit.
Jan 28 01:31:44.763044 kubelet[2843]: E0128 01:31:44.749222 2843 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:31:44.765310 containerd[1588]: time="2026-01-28T01:31:44.764802774Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zcxmh,Uid:71ee0c4a-c230-4d55-b376-50d419d89909,Namespace:kube-system,Attempt:0,}"
Jan 28 01:31:44.772637 sshd[5968]: Accepted publickey for core from 10.0.0.1 port 55842 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc
Jan 28 01:31:44.779740 sshd[5968]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 01:31:44.783219 systemd-logind[1562]: Removed session 89.
Jan 28 01:31:45.324827 systemd-logind[1562]: New session 90 of user core.
Jan 28 01:31:45.398708 systemd[1]: Started session-90.scope - Session 90 of User core.
Jan 28 01:31:46.667164 containerd[1588]: time="2026-01-28T01:31:46.640418002Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 28 01:31:46.667164 containerd[1588]: time="2026-01-28T01:31:46.640503462Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 28 01:31:46.667164 containerd[1588]: time="2026-01-28T01:31:46.640521135Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 28 01:31:46.667164 containerd[1588]: time="2026-01-28T01:31:46.640670463Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 28 01:31:46.822008 systemd[1]: run-containerd-runc-k8s.io-20f396f4e2550d97de9872c6bac53a33a59a4624118adb0a7c1172e4e92b6437-runc.M5DU2h.mount: Deactivated successfully.
Jan 28 01:31:47.202015 containerd[1588]: time="2026-01-28T01:31:47.200465739Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zcxmh,Uid:71ee0c4a-c230-4d55-b376-50d419d89909,Namespace:kube-system,Attempt:0,} returns sandbox id \"20f396f4e2550d97de9872c6bac53a33a59a4624118adb0a7c1172e4e92b6437\""
Jan 28 01:31:47.211709 kubelet[2843]: E0128 01:31:47.211455 2843 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:31:47.259101 containerd[1588]: time="2026-01-28T01:31:47.237341513Z" level=info msg="CreateContainer within sandbox \"20f396f4e2550d97de9872c6bac53a33a59a4624118adb0a7c1172e4e92b6437\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 28 01:31:47.273979 kubelet[2843]: E0128 01:31:47.273841 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-lz2bs" podUID="0a20d75c-c2d1-44ab-9ab3-72c258e5ca84"
Jan 28 01:31:47.514002 containerd[1588]: time="2026-01-28T01:31:47.513466401Z" level=info msg="CreateContainer within sandbox \"20f396f4e2550d97de9872c6bac53a33a59a4624118adb0a7c1172e4e92b6437\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c2d16db835a4fef8a0bddd721a11dd05b51353acc4d0ef9f2f413d535fdd4610\""
Jan 28 01:31:47.517493 containerd[1588]: time="2026-01-28T01:31:47.514427891Z" level=info msg="StartContainer for \"c2d16db835a4fef8a0bddd721a11dd05b51353acc4d0ef9f2f413d535fdd4610\""
Jan 28 01:31:48.020785 containerd[1588]: time="2026-01-28T01:31:48.020500828Z" level=info msg="StartContainer for \"c2d16db835a4fef8a0bddd721a11dd05b51353acc4d0ef9f2f413d535fdd4610\" returns successfully"
Jan 28 01:31:48.087016 kubelet[2843]: E0128 01:31:48.085706 2843 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:31:48.456164 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c2d16db835a4fef8a0bddd721a11dd05b51353acc4d0ef9f2f413d535fdd4610-rootfs.mount: Deactivated successfully.
Jan 28 01:31:48.564451 containerd[1588]: time="2026-01-28T01:31:48.561388053Z" level=info msg="shim disconnected" id=c2d16db835a4fef8a0bddd721a11dd05b51353acc4d0ef9f2f413d535fdd4610 namespace=k8s.io
Jan 28 01:31:48.564451 containerd[1588]: time="2026-01-28T01:31:48.561452774Z" level=warning msg="cleaning up after shim disconnected" id=c2d16db835a4fef8a0bddd721a11dd05b51353acc4d0ef9f2f413d535fdd4610 namespace=k8s.io
Jan 28 01:31:48.564451 containerd[1588]: time="2026-01-28T01:31:48.561464496Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 28 01:31:48.714841 kubelet[2843]: E0128 01:31:48.714598 2843 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 28 01:31:48.756203 containerd[1588]: time="2026-01-28T01:31:48.755194732Z" level=warning msg="cleanup warnings time=\"2026-01-28T01:31:48Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 28 01:31:49.127067 kubelet[2843]: E0128 01:31:49.126933 2843 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:31:49.174739 containerd[1588]: time="2026-01-28T01:31:49.174689529Z" level=info msg="CreateContainer within sandbox \"20f396f4e2550d97de9872c6bac53a33a59a4624118adb0a7c1172e4e92b6437\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 28 01:31:49.286720 kubelet[2843]: E0128 01:31:49.275170 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-lz2bs" podUID="0a20d75c-c2d1-44ab-9ab3-72c258e5ca84"
Jan 28 01:31:49.503721 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount141413367.mount: Deactivated successfully.
Jan 28 01:31:49.549802 containerd[1588]: time="2026-01-28T01:31:49.549432559Z" level=info msg="CreateContainer within sandbox \"20f396f4e2550d97de9872c6bac53a33a59a4624118adb0a7c1172e4e92b6437\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"9e70aab2e148e34a3cc62c9a20411da26f2fc59152d3903fa7701a1e9a234828\""
Jan 28 01:31:49.552179 containerd[1588]: time="2026-01-28T01:31:49.551748753Z" level=info msg="StartContainer for \"9e70aab2e148e34a3cc62c9a20411da26f2fc59152d3903fa7701a1e9a234828\""
Jan 28 01:31:50.232674 containerd[1588]: time="2026-01-28T01:31:50.232492782Z" level=info msg="StartContainer for \"9e70aab2e148e34a3cc62c9a20411da26f2fc59152d3903fa7701a1e9a234828\" returns successfully"
Jan 28 01:31:50.407946 systemd[1]: run-containerd-runc-k8s.io-9e70aab2e148e34a3cc62c9a20411da26f2fc59152d3903fa7701a1e9a234828-runc.aj4D8f.mount: Deactivated successfully.
Jan 28 01:31:50.720964 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9e70aab2e148e34a3cc62c9a20411da26f2fc59152d3903fa7701a1e9a234828-rootfs.mount: Deactivated successfully.
Jan 28 01:31:50.766048 containerd[1588]: time="2026-01-28T01:31:50.763660346Z" level=info msg="shim disconnected" id=9e70aab2e148e34a3cc62c9a20411da26f2fc59152d3903fa7701a1e9a234828 namespace=k8s.io
Jan 28 01:31:50.766048 containerd[1588]: time="2026-01-28T01:31:50.763736248Z" level=warning msg="cleaning up after shim disconnected" id=9e70aab2e148e34a3cc62c9a20411da26f2fc59152d3903fa7701a1e9a234828 namespace=k8s.io
Jan 28 01:31:50.766048 containerd[1588]: time="2026-01-28T01:31:50.763751737Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 28 01:31:50.810470 containerd[1588]: time="2026-01-28T01:31:50.809661926Z" level=warning msg="cleanup warnings time=\"2026-01-28T01:31:50Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 28 01:31:51.232096 kubelet[2843]: E0128 01:31:51.231552 2843 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:31:51.254983 containerd[1588]: time="2026-01-28T01:31:51.252969912Z" level=info msg="CreateContainer within sandbox \"20f396f4e2550d97de9872c6bac53a33a59a4624118adb0a7c1172e4e92b6437\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 28 01:31:51.285957 kubelet[2843]: E0128 01:31:51.283064 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-lz2bs" podUID="0a20d75c-c2d1-44ab-9ab3-72c258e5ca84"
Jan 28 01:31:51.442652 containerd[1588]: time="2026-01-28T01:31:51.442600157Z" level=info msg="CreateContainer within sandbox \"20f396f4e2550d97de9872c6bac53a33a59a4624118adb0a7c1172e4e92b6437\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"2f6431bd8de3eb0bf519fcf4f399b114adfd66b458bebe1b7ecdeaa99c17f871\""
Jan 28 01:31:51.453327 containerd[1588]: time="2026-01-28T01:31:51.453108596Z" level=info msg="StartContainer for \"2f6431bd8de3eb0bf519fcf4f399b114adfd66b458bebe1b7ecdeaa99c17f871\""
Jan 28 01:31:51.559391 kubelet[2843]: I0128 01:31:51.558802 2843 setters.go:602] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T01:31:51Z","lastTransitionTime":"2026-01-28T01:31:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jan 28 01:31:51.589646 systemd[1]: run-containerd-runc-k8s.io-2f6431bd8de3eb0bf519fcf4f399b114adfd66b458bebe1b7ecdeaa99c17f871-runc.h99Upk.mount: Deactivated successfully.
Jan 28 01:31:51.864811 containerd[1588]: time="2026-01-28T01:31:51.853151812Z" level=info msg="StartContainer for \"2f6431bd8de3eb0bf519fcf4f399b114adfd66b458bebe1b7ecdeaa99c17f871\" returns successfully"
Jan 28 01:31:51.976154 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2f6431bd8de3eb0bf519fcf4f399b114adfd66b458bebe1b7ecdeaa99c17f871-rootfs.mount: Deactivated successfully.
Jan 28 01:31:51.996144 containerd[1588]: time="2026-01-28T01:31:51.995207069Z" level=info msg="shim disconnected" id=2f6431bd8de3eb0bf519fcf4f399b114adfd66b458bebe1b7ecdeaa99c17f871 namespace=k8s.io
Jan 28 01:31:51.996144 containerd[1588]: time="2026-01-28T01:31:51.995348645Z" level=warning msg="cleaning up after shim disconnected" id=2f6431bd8de3eb0bf519fcf4f399b114adfd66b458bebe1b7ecdeaa99c17f871 namespace=k8s.io
Jan 28 01:31:51.996144 containerd[1588]: time="2026-01-28T01:31:51.995364324Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 28 01:31:52.255402 kubelet[2843]: E0128 01:31:52.251416 2843 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:31:52.277934 containerd[1588]: time="2026-01-28T01:31:52.276856195Z" level=info msg="CreateContainer within sandbox \"20f396f4e2550d97de9872c6bac53a33a59a4624118adb0a7c1172e4e92b6437\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 28 01:31:52.399609 containerd[1588]: time="2026-01-28T01:31:52.399532832Z" level=info msg="CreateContainer within sandbox \"20f396f4e2550d97de9872c6bac53a33a59a4624118adb0a7c1172e4e92b6437\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"62b0c01ff7c673b9366f521af583ec802b44efc885d8a5b657adcf879d54d673\""
Jan 28 01:31:52.411310 containerd[1588]: time="2026-01-28T01:31:52.408956575Z" level=info msg="StartContainer for \"62b0c01ff7c673b9366f521af583ec802b44efc885d8a5b657adcf879d54d673\""
Jan 28 01:31:52.894489 containerd[1588]: time="2026-01-28T01:31:52.892508625Z" level=info msg="StartContainer for \"62b0c01ff7c673b9366f521af583ec802b44efc885d8a5b657adcf879d54d673\" returns successfully"
Jan 28 01:31:53.097955 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-62b0c01ff7c673b9366f521af583ec802b44efc885d8a5b657adcf879d54d673-rootfs.mount: Deactivated successfully.
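The CreateContainer/StartContainer pairs from 01:31:47 onward walk Cilium's init-container chain inside the one sandbox: mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, then clean-cilium-state, with the cilium-agent container following below. Each init step runs, exits (the "shim disconnected" triples), and is cleaned up before the next starts. Judging by its name, the mount-bpf-fs step amounts to ensuring the BPF filesystem is mounted on the host; a rough Go sketch of that operation (assumed from the container's role, not taken from this log):

    package main

    import (
        "errors"
        "fmt"

        "golang.org/x/sys/unix"
    )

    func main() {
        // Mount bpffs at the conventional location, tolerating the case
        // where a previous agent already mounted it (EBUSY).
        err := unix.Mount("bpf", "/sys/fs/bpf", "bpf", 0, "")
        if err != nil && !errors.Is(err, unix.EBUSY) {
            panic(err)
        }
        fmt.Println("bpffs mounted (or already present) at /sys/fs/bpf")
    }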
Jan 28 01:31:53.165466 containerd[1588]: time="2026-01-28T01:31:53.164064862Z" level=info msg="shim disconnected" id=62b0c01ff7c673b9366f521af583ec802b44efc885d8a5b657adcf879d54d673 namespace=k8s.io
Jan 28 01:31:53.165466 containerd[1588]: time="2026-01-28T01:31:53.164150021Z" level=warning msg="cleaning up after shim disconnected" id=62b0c01ff7c673b9366f521af583ec802b44efc885d8a5b657adcf879d54d673 namespace=k8s.io
Jan 28 01:31:53.165466 containerd[1588]: time="2026-01-28T01:31:53.164173304Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 28 01:31:53.315333 kubelet[2843]: E0128 01:31:53.313078 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-lz2bs" podUID="0a20d75c-c2d1-44ab-9ab3-72c258e5ca84"
Jan 28 01:31:53.436574 kubelet[2843]: E0128 01:31:53.433858 2843 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:31:53.456695 containerd[1588]: time="2026-01-28T01:31:53.456642953Z" level=info msg="CreateContainer within sandbox \"20f396f4e2550d97de9872c6bac53a33a59a4624118adb0a7c1172e4e92b6437\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 28 01:31:53.672165 containerd[1588]: time="2026-01-28T01:31:53.670543746Z" level=info msg="CreateContainer within sandbox \"20f396f4e2550d97de9872c6bac53a33a59a4624118adb0a7c1172e4e92b6437\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"5836a1c82f80e8b5edb6105265a4ecf2c2d09dee0958f4168948a90ccf9fe71d\""
Jan 28 01:31:53.678777 containerd[1588]: time="2026-01-28T01:31:53.678349823Z" level=info msg="StartContainer for \"5836a1c82f80e8b5edb6105265a4ecf2c2d09dee0958f4168948a90ccf9fe71d\""
Jan 28 01:31:53.740647 kubelet[2843]: E0128 01:31:53.736405 2843 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 28 01:31:54.211188 containerd[1588]: time="2026-01-28T01:31:54.208763837Z" level=info msg="StartContainer for \"5836a1c82f80e8b5edb6105265a4ecf2c2d09dee0958f4168948a90ccf9fe71d\" returns successfully"
Jan 28 01:31:54.319403 kubelet[2843]: E0128 01:31:54.291558 2843 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:31:55.286684 kubelet[2843]: E0128 01:31:55.285646 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-lz2bs" podUID="0a20d75c-c2d1-44ab-9ab3-72c258e5ca84"
Jan 28 01:31:55.842189 kubelet[2843]: E0128 01:31:55.839572 2843 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:31:55.905622 kubelet[2843]: I0128 01:31:55.904607 2843 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-zcxmh" podStartSLOduration=11.904357738 podStartE2EDuration="11.904357738s" podCreationTimestamp="2026-01-28 01:31:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 01:31:55.893981962 +0000 UTC m=+756.616556048" watchObservedRunningTime="2026-01-28 01:31:55.904357738 +0000 UTC m=+756.626931824"
Jan 28 01:31:56.291057 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Jan 28 01:31:56.842092 kubelet[2843]: E0128 01:31:56.841881 2843 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:31:57.277084 kubelet[2843]: E0128 01:31:57.276995 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-lz2bs" podUID="0a20d75c-c2d1-44ab-9ab3-72c258e5ca84"
Jan 28 01:31:58.305148 kubelet[2843]: E0128 01:31:58.303772 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-lz2bs" podUID="0a20d75c-c2d1-44ab-9ab3-72c258e5ca84"
Jan 28 01:32:00.275420 kubelet[2843]: E0128 01:32:00.275374 2843 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:32:02.822596 systemd[1]: run-containerd-runc-k8s.io-5836a1c82f80e8b5edb6105265a4ecf2c2d09dee0958f4168948a90ccf9fe71d-runc.NsY8Dc.mount: Deactivated successfully.
Jan 28 01:32:13.315188 systemd-networkd[1254]: lxc_health: Link UP
Jan 28 01:32:13.420488 systemd-networkd[1254]: lxc_health: Gained carrier
Jan 28 01:32:14.796217 kubelet[2843]: E0128 01:32:14.771391 2843 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:32:15.393311 kubelet[2843]: E0128 01:32:15.391372 2843 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:32:15.496202 systemd-networkd[1254]: lxc_health: Gained IPv6LL
Jan 28 01:32:16.392959 kubelet[2843]: E0128 01:32:16.392923 2843 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:32:23.166701 containerd[1588]: time="2026-01-28T01:32:23.166402465Z" level=info msg="StopPodSandbox for \"367181bb59a0522beeec0f09023354babacbf5e4a56d04f770bbf0a932b5b08b\""
Jan 28 01:32:23.166701 containerd[1588]: time="2026-01-28T01:32:23.166569727Z" level=info msg="TearDown network for sandbox \"367181bb59a0522beeec0f09023354babacbf5e4a56d04f770bbf0a932b5b08b\" successfully"
Jan 28 01:32:23.166701 containerd[1588]: time="2026-01-28T01:32:23.166598472Z" level=info msg="StopPodSandbox for \"367181bb59a0522beeec0f09023354babacbf5e4a56d04f770bbf0a932b5b08b\" returns successfully"
Jan 28 01:32:23.182722 containerd[1588]: time="2026-01-28T01:32:23.175910246Z" level=info msg="RemovePodSandbox for \"367181bb59a0522beeec0f09023354babacbf5e4a56d04f770bbf0a932b5b08b\""
Jan 28 01:32:23.182722 containerd[1588]: time="2026-01-28T01:32:23.175966461Z" level=info msg="Forcibly stopping sandbox \"367181bb59a0522beeec0f09023354babacbf5e4a56d04f770bbf0a932b5b08b\""
Jan 28 01:32:23.182722 containerd[1588]: time="2026-01-28T01:32:23.176067399Z" level=info msg="TearDown network for sandbox \"367181bb59a0522beeec0f09023354babacbf5e4a56d04f770bbf0a932b5b08b\" successfully"
Jan 28 01:32:23.237931 containerd[1588]: time="2026-01-28T01:32:23.231394871Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"367181bb59a0522beeec0f09023354babacbf5e4a56d04f770bbf0a932b5b08b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 28 01:32:23.237931 containerd[1588]: time="2026-01-28T01:32:23.231489338Z" level=info msg="RemovePodSandbox \"367181bb59a0522beeec0f09023354babacbf5e4a56d04f770bbf0a932b5b08b\" returns successfully"
Jan 28 01:32:23.237931 containerd[1588]: time="2026-01-28T01:32:23.232092356Z" level=info msg="StopPodSandbox for \"2d4e97a8e91088325124a092aeb80171101645f0a2b348dbcf08be46e15fbbbe\""
Jan 28 01:32:23.237931 containerd[1588]: time="2026-01-28T01:32:23.232359917Z" level=info msg="TearDown network for sandbox \"2d4e97a8e91088325124a092aeb80171101645f0a2b348dbcf08be46e15fbbbe\" successfully"
Jan 28 01:32:23.237931 containerd[1588]: time="2026-01-28T01:32:23.232381908Z" level=info msg="StopPodSandbox for \"2d4e97a8e91088325124a092aeb80171101645f0a2b348dbcf08be46e15fbbbe\" returns successfully"
Jan 28 01:32:23.237931 containerd[1588]: time="2026-01-28T01:32:23.233506984Z" level=info msg="RemovePodSandbox for \"2d4e97a8e91088325124a092aeb80171101645f0a2b348dbcf08be46e15fbbbe\""
Jan 28 01:32:23.237931 containerd[1588]: time="2026-01-28T01:32:23.233536909Z" level=info msg="Forcibly stopping sandbox \"2d4e97a8e91088325124a092aeb80171101645f0a2b348dbcf08be46e15fbbbe\""
Jan 28 01:32:23.237931 containerd[1588]: time="2026-01-28T01:32:23.233618060Z" level=info msg="TearDown network for sandbox \"2d4e97a8e91088325124a092aeb80171101645f0a2b348dbcf08be46e15fbbbe\" successfully"
Jan 28 01:32:23.271397 containerd[1588]: time="2026-01-28T01:32:23.270948626Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2d4e97a8e91088325124a092aeb80171101645f0a2b348dbcf08be46e15fbbbe\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 28 01:32:23.271397 containerd[1588]: time="2026-01-28T01:32:23.271044285Z" level=info msg="RemovePodSandbox \"2d4e97a8e91088325124a092aeb80171101645f0a2b348dbcf08be46e15fbbbe\" returns successfully"
Jan 28 01:32:23.274712 kubelet[2843]: E0128 01:32:23.273909 2843 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:32:25.700885 sshd[5968]: pam_unix(sshd:session): session closed for user core
Jan 28 01:32:25.716694 systemd[1]: sshd@89-10.0.0.61:22-10.0.0.1:55842.service: Deactivated successfully.
Jan 28 01:32:25.721434 systemd-logind[1562]: Session 90 logged out. Waiting for processes to exit.
Jan 28 01:32:25.724505 systemd[1]: session-90.scope: Deactivated successfully.
Jan 28 01:32:25.741882 systemd-logind[1562]: Removed session 90.