Jan 28 01:51:48.692299 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Jan 27 23:02:38 -00 2026
Jan 28 01:51:48.692389 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f534874bafefe5138b6229cc8580e4eb92fdd31d412450780cdc90e6631acdd2
Jan 28 01:51:48.692410 kernel: BIOS-provided physical RAM map:
Jan 28 01:51:48.692421 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 28 01:51:48.692431 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Jan 28 01:51:48.692439 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Jan 28 01:51:48.692533 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Jan 28 01:51:48.692543 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Jan 28 01:51:48.692554 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable
Jan 28 01:51:48.692563 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Jan 28 01:51:48.692579 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable
Jan 28 01:51:48.692589 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved
Jan 28 01:51:48.692640 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20
Jan 28 01:51:48.692651 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved
Jan 28 01:51:48.692698 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Jan 28 01:51:48.692709 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Jan 28 01:51:48.692725 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Jan 28 01:51:48.692737 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Jan 28 01:51:48.692747 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Jan 28 01:51:48.692759 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 28 01:51:48.692768 kernel: NX (Execute Disable) protection: active
Jan 28 01:51:48.692780 kernel: APIC: Static calls initialized
Jan 28 01:51:48.692790 kernel: efi: EFI v2.7 by EDK II
Jan 28 01:51:48.692801 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b675198
Jan 28 01:51:48.692812 kernel: SMBIOS 2.8 present.
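The e820 map above is the firmware's view of physical memory. As an illustrative aside (Python, not part of the log), the regions marked "usable" can be totalled straight from those lines:

    import re

    E820_RE = re.compile(r"BIOS-e820: \[mem 0x([0-9a-f]+)-0x([0-9a-f]+)\] (.+)$")

    def usable_bytes(log_lines):
        # Sum the sizes of all e820 ranges typed "usable"; ranges are inclusive.
        total = 0
        for line in log_lines:
            m = E820_RE.search(line)
            if m and m.group(3).strip() == "usable":
                start, end = int(m.group(1), 16), int(m.group(2), 16)
                total += end - start + 1
        return total

    # Fed the map above, this returns roughly 2.45 GiB, consistent with the
    # "Memory: 2400616K/2567000K available" line printed later in the log.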
Jan 28 01:51:48.692822 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
Jan 28 01:51:48.692834 kernel: Hypervisor detected: KVM
Jan 28 01:51:48.692848 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 28 01:51:48.692860 kernel: kvm-clock: using sched offset of 48738092073 cycles
Jan 28 01:51:48.692870 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 28 01:51:48.692881 kernel: tsc: Detected 2445.426 MHz processor
Jan 28 01:51:48.692891 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 28 01:51:48.692905 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 28 01:51:48.692915 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000
Jan 28 01:51:48.692927 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jan 28 01:51:48.692938 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 28 01:51:48.692953 kernel: Using GB pages for direct mapping
Jan 28 01:51:48.692966 kernel: Secure boot disabled
Jan 28 01:51:48.692975 kernel: ACPI: Early table checksum verification disabled
Jan 28 01:51:48.692988 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Jan 28 01:51:48.693004 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Jan 28 01:51:48.693016 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 28 01:51:48.693027 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 28 01:51:48.693043 kernel: ACPI: FACS 0x000000009CBDD000 000040
Jan 28 01:51:48.693055 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 28 01:51:48.693104 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 28 01:51:48.693116 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 28 01:51:48.693129 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 28 01:51:48.693140 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Jan 28 01:51:48.693151 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Jan 28 01:51:48.693167 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Jan 28 01:51:48.693179 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Jan 28 01:51:48.693190 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Jan 28 01:51:48.693202 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Jan 28 01:51:48.693214 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Jan 28 01:51:48.693226 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Jan 28 01:51:48.693236 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Jan 28 01:51:48.693249 kernel: No NUMA configuration found
Jan 28 01:51:48.693297 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff]
Jan 28 01:51:48.693315 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff]
Jan 28 01:51:48.693373 kernel: Zone ranges:
Jan 28 01:51:48.693387 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 28 01:51:48.693398 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff]
Jan 28 01:51:48.693409 kernel: Normal empty
Jan 28 01:51:48.693421 kernel: Movable zone start for each node
Jan 28 01:51:48.693432 kernel: Early memory node ranges
Jan 28 01:51:48.693524 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jan 28 01:51:48.693537 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Jan 28 01:51:48.693555 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Jan 28 01:51:48.693567 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff]
Jan 28 01:51:48.693579 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff]
Jan 28 01:51:48.693591 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff]
Jan 28 01:51:48.693638 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff]
Jan 28 01:51:48.693650 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 28 01:51:48.693662 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jan 28 01:51:48.693671 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Jan 28 01:51:48.693683 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 28 01:51:48.693693 kernel: On node 0, zone DMA: 240 pages in unavailable ranges
Jan 28 01:51:48.693712 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Jan 28 01:51:48.693723 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges
Jan 28 01:51:48.693735 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 28 01:51:48.693745 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 28 01:51:48.693757 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 28 01:51:48.693768 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 28 01:51:48.693780 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 28 01:51:48.693792 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 28 01:51:48.693807 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 28 01:51:48.693820 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 28 01:51:48.693830 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 28 01:51:48.693843 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 28 01:51:48.693853 kernel: TSC deadline timer available
Jan 28 01:51:48.693865 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Jan 28 01:51:48.693875 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 28 01:51:48.693888 kernel: kvm-guest: KVM setup pv remote TLB flush
Jan 28 01:51:48.693899 kernel: kvm-guest: setup PV sched yield
Jan 28 01:51:48.693911 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Jan 28 01:51:48.693927 kernel: Booting paravirtualized kernel on KVM
Jan 28 01:51:48.693938 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 28 01:51:48.693951 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jan 28 01:51:48.693962 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288
Jan 28 01:51:48.693974 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152
Jan 28 01:51:48.693984 kernel: pcpu-alloc: [0] 0 1 2 3
Jan 28 01:51:48.693996 kernel: kvm-guest: PV spinlocks enabled
Jan 28 01:51:48.694006 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 28 01:51:48.694020 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f534874bafefe5138b6229cc8580e4eb92fdd31d412450780cdc90e6631acdd2
Jan 28 01:51:48.694080 kernel: random: crng init done
Jan 28 01:51:48.694094 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 28 01:51:48.694104 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 28 01:51:48.694115 kernel: Fallback order for Node 0: 0
Jan 28 01:51:48.694127 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759
Jan 28 01:51:48.694139 kernel: Policy zone: DMA32
Jan 28 01:51:48.694150 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 28 01:51:48.694161 kernel: Memory: 2400616K/2567000K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42884K init, 2312K bss, 166124K reserved, 0K cma-reserved)
Jan 28 01:51:48.694176 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 28 01:51:48.694187 kernel: ftrace: allocating 37989 entries in 149 pages
Jan 28 01:51:48.694199 kernel: ftrace: allocated 149 pages with 4 groups
Jan 28 01:51:48.694208 kernel: Dynamic Preempt: voluntary
Jan 28 01:51:48.694221 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 28 01:51:48.694248 kernel: rcu: RCU event tracing is enabled.
Jan 28 01:51:48.694265 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 28 01:51:48.694278 kernel: Trampoline variant of Tasks RCU enabled.
Jan 28 01:51:48.694289 kernel: Rude variant of Tasks RCU enabled.
Jan 28 01:51:48.694303 kernel: Tracing variant of Tasks RCU enabled.
Jan 28 01:51:48.694314 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 28 01:51:48.694378 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 28 01:51:48.694397 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jan 28 01:51:48.694408 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 28 01:51:48.694421 kernel: Console: colour dummy device 80x25
Jan 28 01:51:48.694432 kernel: printk: console [ttyS0] enabled
Jan 28 01:51:48.694561 kernel: ACPI: Core revision 20230628
Jan 28 01:51:48.694575 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 28 01:51:48.694589 kernel: APIC: Switch to symmetric I/O mode setup
Jan 28 01:51:48.694600 kernel: x2apic enabled
Jan 28 01:51:48.694611 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 28 01:51:48.694624 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jan 28 01:51:48.694635 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jan 28 01:51:48.694648 kernel: kvm-guest: setup PV IPIs
Jan 28 01:51:48.694659 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 28 01:51:48.694677 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 28 01:51:48.694690 kernel: Calibrating delay loop (skipped) preset value.. 4890.85 BogoMIPS (lpj=2445426)
Jan 28 01:51:48.694701 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 28 01:51:48.694713 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 28 01:51:48.694725 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 28 01:51:48.694738 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 28 01:51:48.694750 kernel: Spectre V2 : Mitigation: Retpolines
Jan 28 01:51:48.694761 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 28 01:51:48.694775 kernel: Speculative Store Bypass: Vulnerable
Jan 28 01:51:48.694791 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 28 01:51:48.694804 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 28 01:51:48.694817 kernel: active return thunk: srso_alias_return_thunk
Jan 28 01:51:48.694829 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 28 01:51:48.694841 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Jan 28 01:51:48.694894 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 28 01:51:48.694906 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 28 01:51:48.694921 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 28 01:51:48.694932 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 28 01:51:48.694949 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 28 01:51:48.694962 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 28 01:51:48.694973 kernel: Freeing SMP alternatives memory: 32K
Jan 28 01:51:48.694986 kernel: pid_max: default: 32768 minimum: 301
Jan 28 01:51:48.694998 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 28 01:51:48.695009 kernel: landlock: Up and running.
Jan 28 01:51:48.695022 kernel: SELinux: Initializing.
Jan 28 01:51:48.695034 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 28 01:51:48.695047 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 28 01:51:48.695063 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Jan 28 01:51:48.695076 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 28 01:51:48.695087 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 28 01:51:48.695100 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 28 01:51:48.695111 kernel: Performance Events: PMU not available due to virtualization, using software events only.
Jan 28 01:51:48.695124 kernel: signal: max sigframe size: 1776
Jan 28 01:51:48.695136 kernel: rcu: Hierarchical SRCU implementation.
Jan 28 01:51:48.695149 kernel: rcu: Max phase no-delay instances is 400.
Jan 28 01:51:48.695166 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 28 01:51:48.695178 kernel: smp: Bringing up secondary CPUs ...
Jan 28 01:51:48.695190 kernel: smpboot: x86: Booting SMP configuration:
Jan 28 01:51:48.695202 kernel: .... node #0, CPUs: #1 #2 #3
Jan 28 01:51:48.695213 kernel: smp: Brought up 1 node, 4 CPUs
Jan 28 01:51:48.695225 kernel: smpboot: Max logical packages: 1
Jan 28 01:51:48.695238 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS)
Jan 28 01:51:48.695248 kernel: devtmpfs: initialized
Jan 28 01:51:48.695261 kernel: x86/mm: Memory block size: 128MB
Jan 28 01:51:48.695273 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Jan 28 01:51:48.695291 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Jan 28 01:51:48.695302 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes)
Jan 28 01:51:48.695313 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Jan 28 01:51:48.695324 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Jan 28 01:51:48.695387 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 28 01:51:48.695398 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 28 01:51:48.695411 kernel: pinctrl core: initialized pinctrl subsystem
Jan 28 01:51:48.695421 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 28 01:51:48.695439 kernel: audit: initializing netlink subsys (disabled)
Jan 28 01:51:48.695532 kernel: audit: type=2000 audit(1769565088.367:1): state=initialized audit_enabled=0 res=1
Jan 28 01:51:48.695545 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 28 01:51:48.695556 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 28 01:51:48.695568 kernel: cpuidle: using governor menu
Jan 28 01:51:48.695580 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 28 01:51:48.695592 kernel: dca service started, version 1.12.1
Jan 28 01:51:48.695604 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Jan 28 01:51:48.695616 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jan 28 01:51:48.695634 kernel: PCI: Using configuration type 1 for base access
Jan 28 01:51:48.695647 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
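An aside on the figures above: with calibration skipped, lpj (loops per jiffy) is preset from the measured TSC rate, and the kernel formats BogoMIPS with integer arithmetic, which is why the 4-CPU total reads 19563.40 rather than 4 x 4890.85 = 19563.41. A Python sketch (HZ=1000 is an assumption these numbers imply):

    lpj = 2445426        # preset from the 2445.426 MHz TSC, with HZ=1000
    bogosum = 4 * lpj    # four CPUs were brought up

    # Mimic the kernel's integer formatting: value/(500000/HZ), then
    # (value/(5000/HZ)) % 100 for the two decimal places.
    print(f"{lpj // 500}.{(lpj // 5) % 100:02d}")         # 4890.85 per CPU
    print(f"{bogosum // 500}.{(bogosum // 5) % 100:02d}")  # 19563.40 total, as logged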
Jan 28 01:51:48.695659 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 28 01:51:48.695672 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 28 01:51:48.695683 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 28 01:51:48.695695 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 28 01:51:48.695707 kernel: ACPI: Added _OSI(Module Device)
Jan 28 01:51:48.695719 kernel: ACPI: Added _OSI(Processor Device)
Jan 28 01:51:48.695732 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 28 01:51:48.695749 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 28 01:51:48.695760 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 28 01:51:48.695774 kernel: ACPI: Interpreter enabled
Jan 28 01:51:48.695785 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 28 01:51:48.695798 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 28 01:51:48.695810 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 28 01:51:48.695822 kernel: PCI: Using E820 reservations for host bridge windows
Jan 28 01:51:48.695835 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 28 01:51:48.695846 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 28 01:51:48.696883 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 28 01:51:48.697107 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 28 01:51:48.697319 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 28 01:51:48.697390 kernel: PCI host bridge to bus 0000:00
Jan 28 01:51:48.697765 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 28 01:51:48.697933 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 28 01:51:48.698102 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 28 01:51:48.698278 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Jan 28 01:51:48.698620 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 28 01:51:48.698822 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window]
Jan 28 01:51:48.699070 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 28 01:51:48.700259 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jan 28 01:51:48.700691 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Jan 28 01:51:48.700918 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Jan 28 01:51:48.701125 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Jan 28 01:51:48.701379 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Jan 28 01:51:48.701697 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Jan 28 01:51:48.701923 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 28 01:51:48.702168 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Jan 28 01:51:48.702529 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Jan 28 01:51:48.702767 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Jan 28 01:51:48.702993 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref]
Jan 28 01:51:48.705177 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Jan 28 01:51:48.705557 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Jan 28 01:51:48.705984 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
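Each function above is identified by its [vendor:device] pair: 0x1af4 is the virtio vendor ID (so 1af4:1001 is the virtio block device and, below, 1af4:1000 the virtio NIC), while the 8086:xxxx entries are the emulated Q35/ICH9 chipset. An illustrative Python sketch that pulls those pairs out of lines like the ones above:

    import re

    PCI_RE = re.compile(
        r"pci (\d{4}:[0-9a-f]{2}:[0-9a-f]{2}\.\d): \[([0-9a-f]{4}):([0-9a-f]{4})\]")

    line = "pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000"
    bdf, vendor, device = PCI_RE.search(line).groups()
    print(bdf, vendor, device)   # 0000:00:03.0 1af4 1001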
Jan 28 01:51:48.706226 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref]
Jan 28 01:51:48.706846 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jan 28 01:51:48.707081 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Jan 28 01:51:48.707307 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Jan 28 01:51:48.707817 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref]
Jan 28 01:51:48.708042 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Jan 28 01:51:48.708285 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jan 28 01:51:48.709823 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 28 01:51:48.714312 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jan 28 01:51:48.714987 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Jan 28 01:51:48.715168 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Jan 28 01:51:48.715878 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jan 28 01:51:48.716064 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Jan 28 01:51:48.716079 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 28 01:51:48.716090 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 28 01:51:48.716108 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 28 01:51:48.716120 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 28 01:51:48.716129 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 28 01:51:48.716139 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 28 01:51:48.716148 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 28 01:51:48.716158 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 28 01:51:48.716168 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 28 01:51:48.716177 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 28 01:51:48.716190 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 28 01:51:48.716206 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 28 01:51:48.716216 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 28 01:51:48.716225 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 28 01:51:48.716235 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 28 01:51:48.716244 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 28 01:51:48.716253 kernel: iommu: Default domain type: Translated
Jan 28 01:51:48.716263 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 28 01:51:48.716273 kernel: efivars: Registered efivars operations
Jan 28 01:51:48.716283 kernel: PCI: Using ACPI for IRQ routing
Jan 28 01:51:48.716298 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 28 01:51:48.716308 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Jan 28 01:51:48.716320 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff]
Jan 28 01:51:48.716880 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff]
Jan 28 01:51:48.716892 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff]
Jan 28 01:51:48.717087 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 28 01:51:48.717272 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 28 01:51:48.718232 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 28 01:51:48.718249 kernel: vgaarb: loaded
Jan 28 01:51:48.718267 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 28 01:51:48.718279 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 28 01:51:48.718289 kernel: clocksource: Switched to clocksource kvm-clock
Jan 28 01:51:48.718299 kernel: VFS: Disk quotas dquot_6.6.0
Jan 28 01:51:48.718311 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 28 01:51:48.718324 kernel: pnp: PnP ACPI init
Jan 28 01:51:48.719060 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 28 01:51:48.719081 kernel: pnp: PnP ACPI: found 6 devices
Jan 28 01:51:48.719100 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 28 01:51:48.719112 kernel: NET: Registered PF_INET protocol family
Jan 28 01:51:48.719122 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 28 01:51:48.719133 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 28 01:51:48.719146 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 28 01:51:48.719160 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 28 01:51:48.719170 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 28 01:51:48.719180 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 28 01:51:48.719190 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 28 01:51:48.719206 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 28 01:51:48.719218 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 28 01:51:48.719228 kernel: NET: Registered PF_XDP protocol family
Jan 28 01:51:48.720839 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Jan 28 01:51:48.721110 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Jan 28 01:51:48.721323 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 28 01:51:48.722053 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 28 01:51:48.722230 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 28 01:51:48.722541 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Jan 28 01:51:48.722709 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 28 01:51:48.722871 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window]
Jan 28 01:51:48.722885 kernel: PCI: CLS 0 bytes, default 64
Jan 28 01:51:48.722896 kernel: Initialise system trusted keyrings
Jan 28 01:51:48.722907 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 28 01:51:48.722918 kernel: Key type asymmetric registered
Jan 28 01:51:48.722928 kernel: Asymmetric key parser 'x509' registered
Jan 28 01:51:48.722938 kernel: hrtimer: interrupt took 8912138 ns
Jan 28 01:51:48.722955 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 28 01:51:48.722966 kernel: io scheduler mq-deadline registered
Jan 28 01:51:48.722977 kernel: io scheduler kyber registered
Jan 28 01:51:48.722987 kernel: io scheduler bfq registered
Jan 28 01:51:48.722997 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 28 01:51:48.723008 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 28 01:51:48.723018 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 28 01:51:48.723029 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jan 28 01:51:48.723039 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 28 01:51:48.723054 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 28 01:51:48.723065 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 28 01:51:48.723076 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 28 01:51:48.723086 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 28 01:51:48.723619 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 28 01:51:48.723642 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 28 01:51:48.723847 kernel: rtc_cmos 00:04: registered as rtc0
Jan 28 01:51:48.724680 kernel: rtc_cmos 00:04: setting system clock to 2026-01-28T01:51:43 UTC (1769565103)
Jan 28 01:51:48.724957 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jan 28 01:51:48.724979 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 28 01:51:48.724993 kernel: efifb: probing for efifb
Jan 28 01:51:48.725006 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k
Jan 28 01:51:48.725019 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1
Jan 28 01:51:48.725031 kernel: efifb: scrolling: redraw
Jan 28 01:51:48.725044 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0
Jan 28 01:51:48.725056 kernel: Console: switching to colour frame buffer device 100x37
Jan 28 01:51:48.725069 kernel: fb0: EFI VGA frame buffer device
Jan 28 01:51:48.725089 kernel: pstore: Using crash dump compression: deflate
Jan 28 01:51:48.725102 kernel: pstore: Registered efi_pstore as persistent store backend
Jan 28 01:51:48.725114 kernel: NET: Registered PF_INET6 protocol family
Jan 28 01:51:48.725127 kernel: Segment Routing with IPv6
Jan 28 01:51:48.725139 kernel: In-situ OAM (IOAM) with IPv6
Jan 28 01:51:48.725151 kernel: NET: Registered PF_PACKET protocol family
Jan 28 01:51:48.725164 kernel: Key type dns_resolver registered
Jan 28 01:51:48.725208 kernel: IPI shorthand broadcast: enabled
Jan 28 01:51:48.725225 kernel: sched_clock: Marking stable (12752039073, 2049478553)->(18144547653, -3343030027)
Jan 28 01:51:48.725243 kernel: registered taskstats version 1
Jan 28 01:51:48.725256 kernel: Loading compiled-in X.509 certificates
Jan 28 01:51:48.725268 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: 828aa81885d7116cb1bcfd05d35b5b0a881d685d'
Jan 28 01:51:48.725282 kernel: Key type .fscrypt registered
Jan 28 01:51:48.725293 kernel: Key type fscrypt-provisioning registered
Jan 28 01:51:48.725308 kernel: ima: No TPM chip found, activating TPM-bypass!
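rtc_cmos above prints the same instant twice, as a calendar date and as a Unix epoch; the two forms are easy to cross-check (illustrative Python, not part of the log):

    from datetime import datetime, timezone

    print(datetime.fromtimestamp(1769565103, tz=timezone.utc).isoformat())
    # -> 2026-01-28T01:51:43+00:00, matching "setting system clock to
    #    2026-01-28T01:51:43 UTC (1769565103)" above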
Jan 28 01:51:48.725320 kernel: ima: Allocated hash algorithm: sha1
Jan 28 01:51:48.725390 kernel: ima: No architecture policies found
Jan 28 01:51:48.725410 kernel: clk: Disabling unused clocks
Jan 28 01:51:48.725422 kernel: Freeing unused kernel image (initmem) memory: 42884K
Jan 28 01:51:48.725436 kernel: Write protecting the kernel read-only data: 36864k
Jan 28 01:51:48.725533 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Jan 28 01:51:48.725546 kernel: Run /init as init process
Jan 28 01:51:48.725559 kernel: with arguments:
Jan 28 01:51:48.725572 kernel: /init
Jan 28 01:51:48.725584 kernel: with environment:
Jan 28 01:51:48.725598 kernel: HOME=/
Jan 28 01:51:48.725610 kernel: TERM=linux
Jan 28 01:51:48.725772 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 28 01:51:48.725792 systemd[1]: Detected virtualization kvm.
Jan 28 01:51:48.725806 systemd[1]: Detected architecture x86-64.
Jan 28 01:51:48.725818 systemd[1]: Running in initrd.
Jan 28 01:51:48.725831 systemd[1]: No hostname configured, using default hostname.
Jan 28 01:51:48.725843 systemd[1]: Hostname set to .
Jan 28 01:51:48.725862 systemd[1]: Initializing machine ID from VM UUID.
Jan 28 01:51:48.725874 systemd[1]: Queued start job for default target initrd.target.
Jan 28 01:51:48.725887 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 28 01:51:48.725900 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 28 01:51:48.725913 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 28 01:51:48.725926 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 28 01:51:48.725939 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 28 01:51:48.725957 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 28 01:51:48.725972 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 28 01:51:48.725985 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 28 01:51:48.725999 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 28 01:51:48.726010 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 28 01:51:48.726030 systemd[1]: Reached target paths.target - Path Units.
Jan 28 01:51:48.726042 systemd[1]: Reached target slices.target - Slice Units.
Jan 28 01:51:48.726056 systemd[1]: Reached target swap.target - Swaps.
Jan 28 01:51:48.726069 systemd[1]: Reached target timers.target - Timer Units.
Jan 28 01:51:48.726084 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 28 01:51:48.726096 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 28 01:51:48.726111 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 28 01:51:48.726124 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
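The device unit names above use systemd's path escaping: the leading '/' is dropped, further '/' become '-', and other special bytes become \xNN, which is why /dev/disk/by-label/EFI-SYSTEM appears as dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. A simplified re-implementation in Python (the real tool is systemd-escape --path; corner cases such as a leading '.' are ignored here):

    def escape_path(path: str) -> str:
        out = []
        for ch in path.strip("/"):
            if ch == "/":
                out.append("-")                    # path separators become dashes
            elif ch.isalnum() or ch in ":_.":
                out.append(ch)                     # characters valid in unit names
            else:
                out.append("\\x%02x" % ord(ch))    # everything else, incl. '-'
        return "".join(out)

    print(escape_path("/dev/disk/by-label/EFI-SYSTEM") + ".device")
    # -> dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device, as in the log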
Jan 28 01:51:48.726138 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 28 01:51:48.726158 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 28 01:51:48.726171 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 28 01:51:48.726184 systemd[1]: Reached target sockets.target - Socket Units.
Jan 28 01:51:48.726198 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 28 01:51:48.726211 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 28 01:51:48.726225 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 28 01:51:48.726238 systemd[1]: Starting systemd-fsck-usr.service...
Jan 28 01:51:48.726252 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 28 01:51:48.726271 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 28 01:51:48.726284 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 28 01:51:48.726299 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 28 01:51:48.726401 systemd-journald[194]: Collecting audit messages is disabled.
Jan 28 01:51:48.726525 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 28 01:51:48.726542 systemd-journald[194]: Journal started
Jan 28 01:51:48.726570 systemd-journald[194]: Runtime Journal (/run/log/journal/c21178a75afd47fb9afbd38fa7d48843) is 6.0M, max 48.3M, 42.2M free.
Jan 28 01:51:48.768532 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 28 01:51:48.785150 systemd[1]: Finished systemd-fsck-usr.service.
Jan 28 01:51:48.825722 systemd-modules-load[195]: Inserted module 'overlay'
Jan 28 01:51:48.880549 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 28 01:51:48.992251 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 28 01:51:49.052297 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 28 01:51:49.161671 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 28 01:51:49.256318 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 28 01:51:49.270235 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 28 01:51:49.538065 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 28 01:51:49.659663 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 28 01:51:49.796599 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 28 01:51:49.887739 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 28 01:51:50.169006 dracut-cmdline[225]: dracut-dracut-053
Jan 28 01:51:50.195928 dracut-cmdline[225]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f534874bafefe5138b6229cc8580e4eb92fdd31d412450780cdc90e6631acdd2
Jan 28 01:51:50.508224 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 28 01:51:50.579116 kernel: Bridge firewalling registered
Jan 28 01:51:50.592794 systemd-modules-load[195]: Inserted module 'br_netfilter'
Jan 28 01:51:50.594642 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 28 01:51:50.740833 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 28 01:51:50.856812 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 28 01:51:50.981016 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 28 01:51:51.266528 kernel: SCSI subsystem initialized
Jan 28 01:51:51.334323 kernel: Loading iSCSI transport class v2.0-870.
Jan 28 01:51:51.474903 systemd-resolved[306]: Positive Trust Anchors:
Jan 28 01:51:51.475202 systemd-resolved[306]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 28 01:51:51.475254 systemd-resolved[306]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 28 01:51:51.708867 kernel: iscsi: registered transport (tcp)
Jan 28 01:51:51.710226 systemd-resolved[306]: Defaulting to hostname 'linux'.
Jan 28 01:51:51.733730 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 28 01:51:51.776858 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 28 01:51:51.931023 kernel: iscsi: registered transport (qla4xxx)
Jan 28 01:51:51.931302 kernel: QLogic iSCSI HBA Driver
Jan 28 01:51:52.394057 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 28 01:51:52.460840 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 28 01:51:52.705096 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 28 01:51:52.705307 kernel: device-mapper: uevent: version 1.0.3
Jan 28 01:51:52.733862 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 28 01:51:52.993577 kernel: raid6: avx2x4 gen() 19464 MB/s
Jan 28 01:51:53.045934 kernel: raid6: avx2x2 gen() 15329 MB/s
Jan 28 01:51:53.075682 kernel: raid6: avx2x1 gen() 8162 MB/s
Jan 28 01:51:53.075811 kernel: raid6: using algorithm avx2x4 gen() 19464 MB/s
Jan 28 01:51:53.135250 kernel: raid6: .... xor() 1729 MB/s, rmw enabled
Jan 28 01:51:53.141177 kernel: raid6: using avx2x2 recovery algorithm
Jan 28 01:51:53.233805 kernel: xor: automatically using best checksumming function avx
Jan 28 01:51:54.478094 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 28 01:51:54.560718 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 28 01:51:54.629158 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 28 01:51:54.697889 systemd-udevd[416]: Using default interface naming scheme 'v255'.
Jan 28 01:51:54.746662 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 28 01:51:54.846754 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
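The positive trust anchor systemd-resolved lists above is the DNSSEC DS record for the root zone; its fields are owner, class, type, key tag, algorithm, digest type, and digest. An illustrative Python parse of that exact line:

    ds = (". IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d084"
          "58e880409bbc683457104237c7f8ec8d")
    _name, _cls, _type, key_tag, alg, digest_type, digest = ds.split()
    print(key_tag, alg, digest_type)   # 20326 8 2 -> root KSK, RSA/SHA-256, SHA-256
    print(len(digest))                 # 64 hex chars = a 32-byte SHA-256 digest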
Jan 28 01:51:55.040309 dracut-pre-trigger[419]: rd.md=0: removing MD RAID activation
Jan 28 01:51:55.412018 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 28 01:51:55.526739 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 28 01:51:56.284064 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 28 01:51:56.388316 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 28 01:51:56.793636 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 28 01:51:56.997759 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 28 01:51:57.118815 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 28 01:51:57.145776 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 28 01:51:57.152995 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 28 01:51:57.153356 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 28 01:51:57.161723 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 28 01:51:57.278997 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 28 01:51:57.408751 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 28 01:51:57.409108 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 28 01:51:57.471107 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 28 01:51:57.504344 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 28 01:51:57.538948 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 28 01:51:57.596090 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 28 01:51:57.599073 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 28 01:51:57.710031 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 28 01:51:57.835134 kernel: cryptd: max_cpu_qlen set to 1000
Jan 28 01:51:57.835229 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Jan 28 01:51:57.948154 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jan 28 01:51:57.961720 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 28 01:51:58.058202 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 28 01:51:58.058238 kernel: GPT:9289727 != 19775487
Jan 28 01:51:58.058264 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 28 01:51:58.058279 kernel: GPT:9289727 != 19775487
Jan 28 01:51:58.058292 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 28 01:51:58.058306 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 28 01:51:58.067809 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 28 01:51:58.207659 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 28 01:51:58.509252 kernel: libata version 3.00 loaded.
Jan 28 01:51:58.782541 kernel: BTRFS: device fsid 2a6822f0-63ba-4278-91a8-3fe9ed12ab22 devid 1 transid 35 /dev/vda3 scanned by (udev-worker) (467)
Jan 28 01:51:58.858740 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
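Two details above are worth unpacking: the virtio disk advertises 19775488 512-byte sectors, and the GPT code complains because the backup header sits at LBA 9289727 instead of the last LBA, the usual sign of an image grown after it was created (the disk-uuid messages further down show the headers being rewritten). The arithmetic, as an illustrative Python aside:

    sectors = 19775488
    size = sectors * 512
    print(size / 10**9)   # 10.125... -> the "10.1 GB" in the log
    print(size / 2**30)   # 9.4296... -> the "9.43 GiB" in the log

    # GPT expects its backup header at the last LBA; here it is still
    # where the old end of the image was:
    print(9289727, "!=", sectors - 1)   # 9289727 != 19775487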
Jan 28 01:51:58.974973 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (472)
Jan 28 01:51:58.991226 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 28 01:51:59.066596 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 28 01:51:59.102747 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 28 01:51:59.192353 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 28 01:51:59.297670 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 28 01:51:59.365809 kernel: AES CTR mode by8 optimization enabled
Jan 28 01:51:59.365880 kernel: ahci 0000:00:1f.2: version 3.0
Jan 28 01:51:59.366337 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jan 28 01:51:59.377605 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 28 01:51:59.544633 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Jan 28 01:51:59.550907 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jan 28 01:51:59.551192 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 28 01:51:59.551644 disk-uuid[521]: Primary Header is updated.
Jan 28 01:51:59.551644 disk-uuid[521]: Secondary Entries is updated.
Jan 28 01:51:59.551644 disk-uuid[521]: Secondary Header is updated.
Jan 28 01:51:59.679955 kernel: scsi host0: ahci
Jan 28 01:51:59.680537 kernel: scsi host1: ahci
Jan 28 01:51:59.680806 kernel: scsi host2: ahci
Jan 28 01:51:59.681050 kernel: scsi host3: ahci
Jan 28 01:51:59.705204 kernel: scsi host4: ahci
Jan 28 01:51:59.739671 kernel: scsi host5: ahci
Jan 28 01:51:59.776825 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34
Jan 28 01:51:59.776887 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34
Jan 28 01:51:59.776906 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34
Jan 28 01:51:59.789708 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34
Jan 28 01:51:59.800698 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34
Jan 28 01:51:59.822923 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34
Jan 28 01:52:00.180660 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jan 28 01:52:00.180729 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jan 28 01:52:00.186000 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jan 28 01:52:00.199611 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 28 01:52:00.209578 kernel: ata3.00: applying bridge limits
Jan 28 01:52:00.248081 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jan 28 01:52:00.248547 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jan 28 01:52:00.248567 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jan 28 01:52:00.262377 kernel: ata3.00: configured for UDMA/100
Jan 28 01:52:00.274768 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jan 28 01:52:00.640839 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 28 01:52:00.641336 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 28 01:52:00.664850 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Jan 28 01:52:00.768286 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 28 01:52:00.768353 disk-uuid[523]: The operation has completed successfully.
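In the AHCI line above, "0x3f impl" is the implemented-ports bitmask; its six set bits are why six SATA hosts (ata1..ata6) probe afterwards (illustrative Python):

    ports_impl = 0x3f
    print(bin(ports_impl))              # 0b111111
    print(bin(ports_impl).count("1"))   # 6 -> matches "32 slots 6 ports" and ata1..ata6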
Jan 28 01:52:01.451967 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 28 01:52:01.452234 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 28 01:52:01.513364 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 28 01:52:01.577995 sh[600]: Success
Jan 28 01:52:01.775309 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Jan 28 01:52:01.940797 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 28 01:52:01.979809 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 28 01:52:02.015123 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 28 01:52:02.096610 kernel: BTRFS info (device dm-0): first mount of filesystem 2a6822f0-63ba-4278-91a8-3fe9ed12ab22
Jan 28 01:52:02.096700 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 28 01:52:02.105578 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 28 01:52:02.113816 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 28 01:52:02.113882 kernel: BTRFS info (device dm-0): using free space tree
Jan 28 01:52:02.246361 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 28 01:52:02.296039 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 28 01:52:02.343038 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 28 01:52:02.431705 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 28 01:52:02.543637 kernel: BTRFS info (device vda6): first mount of filesystem 5195d4b2-9d51-430f-afba-abd4fdaa4f68
Jan 28 01:52:02.543729 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 28 01:52:02.562590 kernel: BTRFS info (device vda6): using free space tree
Jan 28 01:52:02.703917 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 28 01:52:02.848897 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 28 01:52:02.879688 kernel: BTRFS info (device vda6): last unmount of filesystem 5195d4b2-9d51-430f-afba-abd4fdaa4f68
Jan 28 01:52:02.940283 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 28 01:52:02.979747 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 28 01:52:03.702959 ignition[714]: Ignition 2.19.0
Jan 28 01:52:03.702974 ignition[714]: Stage: fetch-offline
Jan 28 01:52:03.703101 ignition[714]: no configs at "/usr/lib/ignition/base.d"
Jan 28 01:52:03.703119 ignition[714]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 28 01:52:03.756096 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 28 01:52:03.703333 ignition[714]: parsed url from cmdline: ""
Jan 28 01:52:03.888185 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 28 01:52:03.703339 ignition[714]: no config URL provided
Jan 28 01:52:03.703349 ignition[714]: reading system config file "/usr/lib/ignition/user.ign"
Jan 28 01:52:03.703365 ignition[714]: no config at "/usr/lib/ignition/user.ign"
Jan 28 01:52:03.703577 ignition[714]: op(1): [started] loading QEMU firmware config module
Jan 28 01:52:03.703592 ignition[714]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jan 28 01:52:03.865762 ignition[714]: op(1): [finished] loading QEMU firmware config module
Jan 28 01:52:04.190367 systemd-networkd[787]: lo: Link UP
Jan 28 01:52:04.194413 systemd-networkd[787]: lo: Gained carrier
Jan 28 01:52:04.199175 ignition[714]: parsing config with SHA512: bc27f5bc5d632bf1358ac1be467a44ed522bf03f02f400ed018fae3eb01cb0578ab153461f8e1c26d75dc341dad46eba08b600d80ed4252669d7d03cb38c00e2
Jan 28 01:52:04.214390 systemd-networkd[787]: Enumeration completed
Jan 28 01:52:04.232997 ignition[714]: fetch-offline: fetch-offline passed
Jan 28 01:52:04.217012 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 28 01:52:04.233161 ignition[714]: Ignition finished successfully
Jan 28 01:52:04.231015 systemd-networkd[787]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 28 01:52:04.231023 systemd-networkd[787]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 28 01:52:04.232025 unknown[714]: fetched base config from "system"
Jan 28 01:52:04.232037 unknown[714]: fetched user config from "qemu"
Jan 28 01:52:04.251816 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 28 01:52:04.252905 systemd-networkd[787]: eth0: Link UP
Jan 28 01:52:04.252912 systemd-networkd[787]: eth0: Gained carrier
Jan 28 01:52:04.252931 systemd-networkd[787]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 28 01:52:04.264890 systemd[1]: Reached target network.target - Network.
Jan 28 01:52:04.568066 ignition[791]: Ignition 2.19.0
Jan 28 01:52:04.264992 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jan 28 01:52:04.568142 ignition[791]: Stage: kargs
Jan 28 01:52:04.343767 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 28 01:52:04.568928 ignition[791]: no configs at "/usr/lib/ignition/base.d"
Jan 28 01:52:04.447378 systemd-networkd[787]: eth0: DHCPv4 address 10.0.0.134/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 28 01:52:04.568948 ignition[791]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 28 01:52:04.607906 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 28 01:52:04.577120 ignition[791]: kargs: kargs passed
Jan 28 01:52:04.577223 ignition[791]: Ignition finished successfully
Jan 28 01:52:04.865408 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 28 01:52:05.056738 ignition[800]: Ignition 2.19.0
Jan 28 01:52:05.056797 ignition[800]: Stage: disks
Jan 28 01:52:05.057343 ignition[800]: no configs at "/usr/lib/ignition/base.d"
Jan 28 01:52:05.057359 ignition[800]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 28 01:52:05.060051 ignition[800]: disks: disks passed
Jan 28 01:52:05.060129 ignition[800]: Ignition finished successfully
Jan 28 01:52:05.124923 systemd[1]: Finished ignition-disks.service - Ignition (disks).
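networkd's DHCPv4 lease above (10.0.0.134/16 with gateway 10.0.0.1) is internally consistent; as an illustrative aside, Python's ipaddress module shows the gateway is on-link for that prefix:

    import ipaddress

    iface = ipaddress.ip_interface("10.0.0.134/16")
    gw = ipaddress.ip_address("10.0.0.1")
    print(iface.network)         # 10.0.0.0/16
    print(gw in iface.network)   # True -> the gateway is directly reachable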
Jan 28 01:52:05.130676 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 28 01:52:05.130748 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 28 01:52:05.130808 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 28 01:52:05.130863 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 28 01:52:05.130907 systemd[1]: Reached target basic.target - Basic System.
Jan 28 01:52:05.185885 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 28 01:52:05.597629 systemd-fsck[811]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 28 01:52:05.634569 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 28 01:52:05.655206 systemd-networkd[787]: eth0: Gained IPv6LL
Jan 28 01:52:05.792033 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 28 01:52:06.948629 kernel: EXT4-fs (vda9): mounted filesystem 9c67117c-3c4f-4d47-a63c-8955eb7dbc8a r/w with ordered data mode. Quota mode: none.
Jan 28 01:52:06.956039 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 28 01:52:06.990120 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 28 01:52:07.116330 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 28 01:52:07.154060 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 28 01:52:07.235926 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 28 01:52:07.362159 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (819)
Jan 28 01:52:07.362202 kernel: BTRFS info (device vda6): first mount of filesystem 5195d4b2-9d51-430f-afba-abd4fdaa4f68
Jan 28 01:52:07.362224 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 28 01:52:07.362245 kernel: BTRFS info (device vda6): using free space tree
Jan 28 01:52:07.236079 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 28 01:52:07.237546 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 28 01:52:07.624312 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 28 01:52:07.679681 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 28 01:52:07.740265 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 28 01:52:07.849315 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 28 01:52:08.386312 initrd-setup-root[843]: cut: /sysroot/etc/passwd: No such file or directory
Jan 28 01:52:08.441222 initrd-setup-root[850]: cut: /sysroot/etc/group: No such file or directory
Jan 28 01:52:08.485248 initrd-setup-root[857]: cut: /sysroot/etc/shadow: No such file or directory
Jan 28 01:52:08.570940 initrd-setup-root[864]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 28 01:52:09.644750 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 28 01:52:09.676804 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 28 01:52:09.737229 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 28 01:52:09.837555 systemd[1]: sysroot-oem.mount: Deactivated successfully.
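The fsck summary above ("14/553520 files, 52654/553472 blocks") reads as used/total inodes and blocks on the freshly created root filesystem; as fractions (illustrative Python):

    inodes_used, inodes_total = 14, 553520
    blocks_used, blocks_total = 52654, 553472
    print(f"{100 * inodes_used / inodes_total:.3f}% of inodes in use")  # 0.003%
    print(f"{100 * blocks_used / blocks_total:.1f}% of blocks in use")  # 9.5%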
Jan 28 01:52:09.854430 kernel: BTRFS info (device vda6): last unmount of filesystem 5195d4b2-9d51-430f-afba-abd4fdaa4f68 Jan 28 01:52:09.986689 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 28 01:52:10.360159 ignition[932]: INFO : Ignition 2.19.0 Jan 28 01:52:10.360159 ignition[932]: INFO : Stage: mount Jan 28 01:52:10.392104 ignition[932]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 28 01:52:10.392104 ignition[932]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 28 01:52:10.392104 ignition[932]: INFO : mount: mount passed Jan 28 01:52:10.392104 ignition[932]: INFO : Ignition finished successfully Jan 28 01:52:10.431223 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 28 01:52:10.558437 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 28 01:52:10.794284 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 28 01:52:10.830677 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (946) Jan 28 01:52:10.853840 kernel: BTRFS info (device vda6): first mount of filesystem 5195d4b2-9d51-430f-afba-abd4fdaa4f68 Jan 28 01:52:10.854325 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 28 01:52:10.855808 kernel: BTRFS info (device vda6): using free space tree Jan 28 01:52:10.929884 kernel: BTRFS info (device vda6): auto enabling async discard Jan 28 01:52:10.948109 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 28 01:52:11.236052 ignition[964]: INFO : Ignition 2.19.0 Jan 28 01:52:11.253074 ignition[964]: INFO : Stage: files Jan 28 01:52:11.253074 ignition[964]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 28 01:52:11.253074 ignition[964]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 28 01:52:11.253074 ignition[964]: DEBUG : files: compiled without relabeling support, skipping Jan 28 01:52:11.343206 ignition[964]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 28 01:52:11.343206 ignition[964]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 28 01:52:11.343206 ignition[964]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 28 01:52:11.343206 ignition[964]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 28 01:52:11.343206 ignition[964]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 28 01:52:11.341014 unknown[964]: wrote ssh authorized keys file for user: core Jan 28 01:52:11.486885 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 28 01:52:11.486885 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Jan 28 01:52:11.903428 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 28 01:52:14.673899 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 28 01:52:14.722944 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 28 01:52:14.722944 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 28 01:52:14.722944 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: 
op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 28 01:52:14.722944 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 28 01:52:14.722944 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 28 01:52:14.722944 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 28 01:52:14.722944 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 28 01:52:14.722944 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 28 01:52:14.722944 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 28 01:52:14.722944 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 28 01:52:14.722944 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 28 01:52:14.722944 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 28 01:52:14.722944 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 28 01:52:14.722944 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1 Jan 28 01:52:15.340034 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 28 01:52:26.778968 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 28 01:52:26.778968 ignition[964]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 28 01:52:26.858162 ignition[964]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 28 01:52:26.930001 ignition[964]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 28 01:52:26.930001 ignition[964]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 28 01:52:26.930001 ignition[964]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jan 28 01:52:26.930001 ignition[964]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 28 01:52:26.930001 ignition[964]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 28 01:52:26.930001 ignition[964]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jan 28 01:52:26.930001 ignition[964]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Jan 28 01:52:28.381047 ignition[964]: INFO : files: op(f): 
op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 28 01:52:28.512840 ignition[964]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 28 01:52:28.512840 ignition[964]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Jan 28 01:52:28.512840 ignition[964]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jan 28 01:52:28.512840 ignition[964]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jan 28 01:52:28.609618 ignition[964]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 28 01:52:28.609618 ignition[964]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 28 01:52:28.609618 ignition[964]: INFO : files: files passed Jan 28 01:52:28.609618 ignition[964]: INFO : Ignition finished successfully Jan 28 01:52:28.533828 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 28 01:52:28.797113 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 28 01:52:28.872995 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 28 01:52:28.944888 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 28 01:52:28.945129 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 28 01:52:29.047393 initrd-setup-root-after-ignition[993]: grep: /sysroot/oem/oem-release: No such file or directory Jan 28 01:52:29.081115 initrd-setup-root-after-ignition[995]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 28 01:52:29.081115 initrd-setup-root-after-ignition[995]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 28 01:52:29.133248 initrd-setup-root-after-ignition[999]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 28 01:52:29.120039 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 28 01:52:29.173253 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 28 01:52:29.234220 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 28 01:52:29.521820 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 28 01:52:29.522149 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 28 01:52:29.579312 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 28 01:52:29.785947 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 28 01:52:29.874826 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 28 01:52:30.281185 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 28 01:52:30.703311 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 28 01:52:30.829011 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 28 01:52:30.897177 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 28 01:52:30.939926 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 28 01:52:31.111883 systemd[1]: Stopped target timers.target - Timer Units. 
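The files stage recorded above is what provisions this node: the helm tarball into /opt, workload manifests into /home/core, an update.conf, a sysext link /etc/extensions/kubernetes.raw pointing at the downloaded image, and unit presets (prepare-helm.service enabled, coreos-metadata.service disabled). A hedged reconstruction of the shape of the Ignition v3 user config behind those operations; only the paths and URLs that appear in the log are real, while the spec version and the omitted file contents are assumptions:

    # Sketch (not the machine's actual config) of an Ignition v3-style
    # document matching the logged files-stage operations.
    config_sketch = {
        "ignition": {"version": "3.4.0"},  # assumed spec version
        "storage": {
            "files": [
                {"path": "/opt/helm-v3.17.3-linux-amd64.tar.gz",
                 "contents": {"source": "https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz"}},
                {"path": "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw",
                 "contents": {"source": "https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw"}},
                {"path": "/etc/flatcar/update.conf"},  # inline contents elided
                {"path": "/home/core/install.sh"},
                {"path": "/home/core/nginx.yaml"},
                {"path": "/home/core/nfs-pod.yaml"},
                {"path": "/home/core/nfs-pvc.yaml"},
            ],
            "links": [
                {"path": "/etc/extensions/kubernetes.raw",
                 "target": "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"},
            ],
        },
        "systemd": {"units": [
            {"name": "prepare-helm.service", "enabled": True},
            {"name": "coreos-metadata.service", "enabled": False},
        ]},
    }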
Jan 28 01:52:31.129348 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 28 01:52:31.129843 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 28 01:52:31.186990 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 28 01:52:31.220387 systemd[1]: Stopped target basic.target - Basic System. Jan 28 01:52:31.232102 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 28 01:52:31.267986 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 28 01:52:31.297208 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 28 01:52:31.344690 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 28 01:52:31.460229 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 28 01:52:31.486744 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 28 01:52:31.487022 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 28 01:52:31.487161 systemd[1]: Stopped target swap.target - Swaps. Jan 28 01:52:31.487730 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 28 01:52:31.488382 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 28 01:52:31.544102 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 28 01:52:31.568040 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 28 01:52:31.580812 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 28 01:52:31.583843 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 28 01:52:31.588964 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 28 01:52:31.589168 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 28 01:52:31.625307 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 28 01:52:31.625907 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 28 01:52:31.665212 systemd[1]: Stopped target paths.target - Path Units. Jan 28 01:52:31.665332 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 28 01:52:31.674961 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 28 01:52:31.845913 systemd[1]: Stopped target slices.target - Slice Units. Jan 28 01:52:31.882978 systemd[1]: Stopped target sockets.target - Socket Units. Jan 28 01:52:31.906093 systemd[1]: iscsid.socket: Deactivated successfully. Jan 28 01:52:31.906241 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 28 01:52:31.919120 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 28 01:52:31.919269 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 28 01:52:32.077005 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 28 01:52:32.077215 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 28 01:52:32.198250 systemd[1]: ignition-files.service: Deactivated successfully. Jan 28 01:52:32.198431 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 28 01:52:32.309086 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 28 01:52:32.326382 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 28 01:52:32.326911 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. 
Jan 28 01:52:32.371971 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 28 01:52:32.687342 ignition[1019]: INFO : Ignition 2.19.0 Jan 28 01:52:32.687342 ignition[1019]: INFO : Stage: umount Jan 28 01:52:32.687342 ignition[1019]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 28 01:52:32.687342 ignition[1019]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 28 01:52:32.687342 ignition[1019]: INFO : umount: umount passed Jan 28 01:52:32.687342 ignition[1019]: INFO : Ignition finished successfully Jan 28 01:52:32.409431 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 28 01:52:32.410115 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 28 01:52:32.415128 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 28 01:52:32.415342 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 28 01:52:32.425899 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 28 01:52:32.426074 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 28 01:52:32.452776 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 28 01:52:32.494303 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 28 01:52:32.494987 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 28 01:52:32.524435 systemd[1]: Stopped target network.target - Network. Jan 28 01:52:32.532070 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 28 01:52:32.532208 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 28 01:52:32.532355 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 28 01:52:32.532439 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 28 01:52:32.532749 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 28 01:52:32.532831 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 28 01:52:32.532934 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 28 01:52:32.533008 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 28 01:52:32.533420 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 28 01:52:32.537103 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 28 01:52:32.687203 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 28 01:52:32.687407 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 28 01:52:32.705427 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 28 01:52:32.707960 systemd-networkd[787]: eth0: DHCPv6 lease lost Jan 28 01:52:32.731983 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 28 01:52:32.903430 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 28 01:52:32.903848 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 28 01:52:32.941703 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 28 01:52:32.941949 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 28 01:52:33.486700 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 28 01:52:33.486869 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 28 01:52:33.507273 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 28 01:52:33.507397 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. 
Jan 28 01:52:33.594811 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 28 01:52:33.671239 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 28 01:52:33.671633 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 28 01:52:33.671933 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 28 01:52:33.672018 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 28 01:52:33.768959 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 28 01:52:33.772246 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 28 01:52:33.831216 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 28 01:52:33.928114 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 28 01:52:33.931037 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 28 01:52:33.972648 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 28 01:52:33.972955 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 28 01:52:34.074689 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 28 01:52:34.074809 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 28 01:52:34.087116 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 28 01:52:34.087194 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 28 01:52:34.102658 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 28 01:52:34.102917 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 28 01:52:34.236951 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 28 01:52:34.237127 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 28 01:52:34.260929 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 28 01:52:34.261100 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 28 01:52:34.463832 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 28 01:52:34.481822 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 28 01:52:34.481939 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 28 01:52:34.554868 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 28 01:52:34.555070 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 28 01:52:34.593919 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 28 01:52:34.594118 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 28 01:52:34.617559 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 28 01:52:34.750995 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 28 01:52:34.824403 systemd[1]: Switching root. Jan 28 01:52:34.908316 systemd-journald[194]: Journal stopped Jan 28 01:52:46.772428 systemd-journald[194]: Received SIGTERM from PID 1 (systemd). 
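The "Switching root" / "Journal stopped" pair is the hand-off from the initramfs to the installed system: PID 1 tears down the initrd units, switches into the new root, and a fresh journald starts there, which is why the stream goes silent for about twelve seconds. The gap reads straight off the two timestamps:

    # Journal gap across switch-root, from the timestamps above.
    from datetime import datetime

    stopped = datetime.strptime("01:52:34.908316", "%H:%M:%S.%f")
    resumed = datetime.strptime("01:52:46.772428", "%H:%M:%S.%f")
    print((resumed - stopped).total_seconds())  # ~11.86 s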
Jan 28 01:52:46.779875 kernel: SELinux: policy capability network_peer_controls=1 Jan 28 01:52:46.779908 kernel: SELinux: policy capability open_perms=1 Jan 28 01:52:46.779927 kernel: SELinux: policy capability extended_socket_class=1 Jan 28 01:52:46.780136 kernel: SELinux: policy capability always_check_network=0 Jan 28 01:52:46.780173 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 28 01:52:46.780197 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 28 01:52:46.780216 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 28 01:52:46.780232 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 28 01:52:46.780250 kernel: audit: type=1403 audit(1769565155.563:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 28 01:52:46.780269 systemd[1]: Successfully loaded SELinux policy in 147.299ms. Jan 28 01:52:46.780297 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 27.596ms. Jan 28 01:52:46.780317 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 28 01:52:46.780334 systemd[1]: Detected virtualization kvm. Jan 28 01:52:46.785914 systemd[1]: Detected architecture x86-64. Jan 28 01:52:46.785947 systemd[1]: Detected first boot. Jan 28 01:52:46.785966 systemd[1]: Initializing machine ID from VM UUID. Jan 28 01:52:46.785996 zram_generator::config[1064]: No configuration found. Jan 28 01:52:46.786020 systemd[1]: Populated /etc with preset unit settings. Jan 28 01:52:46.786037 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 28 01:52:46.786055 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 28 01:52:46.786075 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 28 01:52:46.786100 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 28 01:52:46.786122 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 28 01:52:46.786141 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 28 01:52:46.786263 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 28 01:52:46.786282 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 28 01:52:46.786302 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 28 01:52:46.786322 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 28 01:52:46.786338 systemd[1]: Created slice user.slice - User and Session Slice. Jan 28 01:52:46.786438 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 28 01:52:46.791565 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 28 01:52:46.791591 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 28 01:52:46.791610 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 28 01:52:46.791628 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. 
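The +/- run in the systemd 255 banner is its compile-time feature list: '+' marks features built in (PAM, SELINUX, TPM2, ...), '-' marks features compiled out (APPARMOR, GNUTLS, ACL, ...). It splits mechanically:

    # Split the systemd feature banner (verbatim from the log above)
    # into built-in vs compiled-out feature sets.
    banner = ("+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT "
              "-GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN "
              "+IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT "
              "-QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK "
              "-XKBCOMMON +UTMP -SYSVINIT")
    built_in = sorted(f[1:] for f in banner.split() if f[0] == "+")
    left_out = sorted(f[1:] for f in banner.split() if f[0] == "-")
    print(len(built_in), "built in; compiled out:", ", ".join(left_out))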
Jan 28 01:52:46.791741 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 28 01:52:46.791764 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 28 01:52:46.791781 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 28 01:52:46.791798 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 28 01:52:46.791822 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 28 01:52:46.791839 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 28 01:52:46.791856 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 28 01:52:46.791873 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 28 01:52:46.791890 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 28 01:52:46.791907 systemd[1]: Reached target slices.target - Slice Units. Jan 28 01:52:46.791924 systemd[1]: Reached target swap.target - Swaps. Jan 28 01:52:46.792030 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 28 01:52:46.792053 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 28 01:52:46.792070 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 28 01:52:46.792087 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 28 01:52:46.792104 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 28 01:52:46.792121 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 28 01:52:46.792138 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 28 01:52:46.792158 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 28 01:52:46.792178 systemd[1]: Mounting media.mount - External Media Directory... Jan 28 01:52:46.792199 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 28 01:52:46.792224 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 28 01:52:46.792242 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 28 01:52:46.792258 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 28 01:52:46.792278 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 28 01:52:46.792293 systemd[1]: Reached target machines.target - Containers. Jan 28 01:52:46.792310 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 28 01:52:46.792331 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 28 01:52:46.792350 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 28 01:52:46.792366 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 28 01:52:46.792387 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 28 01:52:46.792405 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 28 01:52:46.799025 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 28 01:52:46.799062 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... 
Jan 28 01:52:46.799080 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 28 01:52:46.799097 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 28 01:52:46.799119 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 28 01:52:46.799138 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 28 01:52:46.799162 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 28 01:52:46.799181 systemd[1]: Stopped systemd-fsck-usr.service. Jan 28 01:52:46.799199 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 28 01:52:46.799216 kernel: loop: module loaded Jan 28 01:52:46.799234 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 28 01:52:46.799253 kernel: fuse: init (API version 7.39) Jan 28 01:52:46.799269 kernel: ACPI: bus type drm_connector registered Jan 28 01:52:46.799286 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 28 01:52:46.799350 systemd-journald[1148]: Collecting audit messages is disabled. Jan 28 01:52:46.799610 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 28 01:52:46.799636 systemd-journald[1148]: Journal started Jan 28 01:52:46.799769 systemd-journald[1148]: Runtime Journal (/run/log/journal/c21178a75afd47fb9afbd38fa7d48843) is 6.0M, max 48.3M, 42.2M free. Jan 28 01:52:41.678431 systemd[1]: Queued start job for default target multi-user.target. Jan 28 01:52:41.975333 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 28 01:52:41.978025 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 28 01:52:41.979787 systemd[1]: systemd-journald.service: Consumed 3.489s CPU time. Jan 28 01:52:46.898890 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 28 01:52:46.927053 systemd[1]: verity-setup.service: Deactivated successfully. Jan 28 01:52:46.978875 systemd[1]: Stopped verity-setup.service. Jan 28 01:52:47.030743 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 28 01:52:47.042948 systemd[1]: Started systemd-journald.service - Journal Service. Jan 28 01:52:47.057185 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 28 01:52:47.076622 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 28 01:52:47.099236 systemd[1]: Mounted media.mount - External Media Directory. Jan 28 01:52:47.122132 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 28 01:52:47.144314 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 28 01:52:47.161174 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 28 01:52:47.173165 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 28 01:52:47.189073 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 28 01:52:47.203083 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 28 01:52:47.203984 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 28 01:52:47.224228 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 28 01:52:47.225139 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Jan 28 01:52:47.239378 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 28 01:52:47.240092 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 28 01:52:47.253332 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 28 01:52:47.253913 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 28 01:52:47.283426 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 28 01:52:47.283977 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 28 01:52:47.356626 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 28 01:52:47.357108 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 28 01:52:47.407895 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 28 01:52:47.461244 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 28 01:52:47.504002 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 28 01:52:47.551263 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 28 01:52:47.851207 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 28 01:52:47.962934 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 28 01:52:48.065379 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 28 01:52:48.135264 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 28 01:52:48.138794 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 28 01:52:48.220115 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 28 01:52:48.340962 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 28 01:52:48.490113 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 28 01:52:48.545748 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 28 01:52:48.650622 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 28 01:52:48.739922 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 28 01:52:48.796182 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 28 01:52:48.812102 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 28 01:52:48.845291 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 28 01:52:48.877240 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 28 01:52:49.093583 systemd-journald[1148]: Time spent on flushing to /var/log/journal/c21178a75afd47fb9afbd38fa7d48843 is 231.868ms for 983 entries. Jan 28 01:52:49.093583 systemd-journald[1148]: System Journal (/var/log/journal/c21178a75afd47fb9afbd38fa7d48843) is 8.0M, max 195.6M, 187.6M free. Jan 28 01:52:49.745996 systemd-journald[1148]: Received client request to flush runtime journal. Jan 28 01:52:49.746127 kernel: loop0: detected capacity change from 0 to 219144 Jan 28 01:52:49.166644 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... 
Jan 28 01:52:49.209186 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 28 01:52:49.241931 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 28 01:52:49.279374 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 28 01:52:49.299314 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 28 01:52:49.344339 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 28 01:52:49.460657 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 28 01:52:49.673291 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 28 01:52:49.866297 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 28 01:52:49.933785 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 28 01:52:50.092261 udevadm[1182]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 28 01:52:50.222006 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 28 01:52:50.223858 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 28 01:52:50.249761 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 28 01:52:50.475123 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 28 01:52:50.519329 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 28 01:52:50.581059 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 28 01:52:50.760417 kernel: loop1: detected capacity change from 0 to 140768 Jan 28 01:52:51.391924 kernel: loop2: detected capacity change from 0 to 142488 Jan 28 01:52:52.187848 systemd-tmpfiles[1197]: ACLs are not supported, ignoring. Jan 28 01:52:52.189224 systemd-tmpfiles[1197]: ACLs are not supported, ignoring. Jan 28 01:52:52.382674 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 28 01:52:52.549352 kernel: loop3: detected capacity change from 0 to 219144 Jan 28 01:52:53.826043 kernel: loop4: detected capacity change from 0 to 140768 Jan 28 01:52:54.532310 kernel: loop5: detected capacity change from 0 to 142488 Jan 28 01:52:55.259077 (sd-merge)[1203]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jan 28 01:52:55.280924 (sd-merge)[1203]: Merged extensions into '/usr'. Jan 28 01:52:55.550112 systemd[1]: Reloading requested from client PID 1179 ('systemd-sysext') (unit systemd-sysext.service)... Jan 28 01:52:55.550137 systemd[1]: Reloading... Jan 28 01:52:57.883662 zram_generator::config[1236]: No configuration found. Jan 28 01:53:00.420300 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 28 01:53:00.867948 systemd[1]: Reloading finished in 5311 ms. Jan 28 01:53:01.217003 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 28 01:53:01.333367 systemd[1]: Starting ensure-sysext.service... Jan 28 01:53:01.426927 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
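The (sd-merge) lines are systemd-sysext merging extension images into /usr: each image (backed by the loop0..loop5 devices above) is attached and its /usr tree is overlaid onto the host's, which is how the kubernetes sysext linked into /etc/extensions during the files stage becomes live alongside the shipped containerd-flatcar and docker-flatcar extensions. A small sketch for enumerating what sysext would pick up, assuming the standard search directories:

    # List sysext images in the standard search paths; on this machine
    # /etc/extensions/kubernetes.raw is the symlink Ignition created earlier.
    from pathlib import Path

    for d in ("/etc/extensions", "/run/extensions", "/var/lib/extensions"):
        base = Path(d)
        if not base.is_dir():
            continue
        for img in sorted(base.iterdir()):
            target = img.resolve() if img.is_symlink() else "(regular file)"
            print(img, "->", target)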
Jan 28 01:53:01.746693 systemd[1]: Reloading requested from client PID 1266 ('systemctl') (unit ensure-sysext.service)... Jan 28 01:53:01.746719 systemd[1]: Reloading... Jan 28 01:53:01.765619 ldconfig[1174]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 28 01:53:02.755315 systemd-tmpfiles[1267]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 28 01:53:02.760242 systemd-tmpfiles[1267]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 28 01:53:02.763879 systemd-tmpfiles[1267]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 28 01:53:02.764438 systemd-tmpfiles[1267]: ACLs are not supported, ignoring. Jan 28 01:53:02.764956 systemd-tmpfiles[1267]: ACLs are not supported, ignoring. Jan 28 01:53:02.780087 systemd-tmpfiles[1267]: Detected autofs mount point /boot during canonicalization of boot. Jan 28 01:53:02.780362 systemd-tmpfiles[1267]: Skipping /boot Jan 28 01:53:02.903158 systemd-tmpfiles[1267]: Detected autofs mount point /boot during canonicalization of boot. Jan 28 01:53:02.944159 systemd-tmpfiles[1267]: Skipping /boot Jan 28 01:53:03.149686 zram_generator::config[1291]: No configuration found. Jan 28 01:53:05.452086 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 28 01:53:05.635304 systemd[1]: Reloading finished in 3883 ms. Jan 28 01:53:06.198072 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 28 01:53:06.303068 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 28 01:53:06.474851 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 28 01:53:06.842208 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 28 01:53:06.995280 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 28 01:53:07.072344 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 28 01:53:07.233633 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 28 01:53:07.420317 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 28 01:53:07.635305 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 28 01:53:07.841941 augenrules[1356]: No rules Jan 28 01:53:07.849316 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 28 01:53:07.889404 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 28 01:53:08.009045 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 28 01:53:08.009638 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 28 01:53:08.019224 systemd-udevd[1348]: Using default interface naming scheme 'v255'. Jan 28 01:53:08.101876 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 28 01:53:08.163196 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 28 01:53:08.247130 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Jan 28 01:53:08.304848 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 28 01:53:08.311179 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 28 01:53:08.388198 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 28 01:53:08.486230 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 28 01:53:08.552356 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 28 01:53:08.556358 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 28 01:53:08.599280 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 28 01:53:08.600318 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 28 01:53:08.636064 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 28 01:53:08.636733 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 28 01:53:08.687404 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 28 01:53:08.774726 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 28 01:53:09.101218 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 28 01:53:09.110749 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 28 01:53:09.176935 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 28 01:53:09.221934 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 28 01:53:09.333928 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 28 01:53:09.496648 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 28 01:53:09.535022 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 28 01:53:09.580023 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 28 01:53:09.625716 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 28 01:53:09.642364 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 28 01:53:09.647586 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 28 01:53:09.724327 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 28 01:53:09.732118 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 28 01:53:09.765261 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 28 01:53:09.766334 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 28 01:53:09.786388 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 28 01:53:09.788240 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 28 01:53:09.870001 systemd[1]: Finished ensure-sysext.service. Jan 28 01:53:09.887310 systemd-resolved[1345]: Positive Trust Anchors: Jan 28 01:53:09.888083 systemd-resolved[1345]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 28 01:53:09.888226 systemd-resolved[1345]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 28 01:53:09.900723 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1393) Jan 28 01:53:09.917313 systemd-resolved[1345]: Defaulting to hostname 'linux'. Jan 28 01:53:09.952385 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 28 01:53:10.064676 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 28 01:53:10.065321 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 28 01:53:10.146848 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 28 01:53:10.287864 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 28 01:53:10.324727 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 28 01:53:10.325171 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 28 01:53:10.387049 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 28 01:53:10.441605 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 28 01:53:10.443071 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 28 01:53:10.673955 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 28 01:53:10.723602 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jan 28 01:53:10.723971 kernel: ACPI: button: Power Button [PWRF] Jan 28 01:53:10.823152 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 28 01:53:10.855210 systemd-networkd[1402]: lo: Link UP Jan 28 01:53:10.855226 systemd-networkd[1402]: lo: Gained carrier Jan 28 01:53:10.865041 systemd-networkd[1402]: Enumeration completed Jan 28 01:53:10.879122 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 28 01:53:10.879300 systemd-networkd[1402]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 28 01:53:10.879309 systemd-networkd[1402]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 28 01:53:10.898429 systemd-networkd[1402]: eth0: Link UP Jan 28 01:53:10.898615 systemd-networkd[1402]: eth0: Gained carrier Jan 28 01:53:10.898640 systemd-networkd[1402]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 28 01:53:10.919990 systemd[1]: Reached target network.target - Network. 
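The positive trust anchor resolved loads is the DNS root zone's KSK-2017 delegation-signer record (key tag 20326, algorithm 8 = RSA/SHA-256, digest type 2 = SHA-256); the negative anchors are the usual RFC 6303-style private and special-use zones for which DNSSEC validation is not attempted. The record's fields split out as:

    # Field breakdown of the root trust anchor logged above.
    ds = (". IN DS 20326 8 2 "
          "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d")
    _owner, _cls, _rtype, key_tag, algorithm, digest_type, digest = ds.split()
    print(f"key tag {key_tag}, algorithm {algorithm} (RSA/SHA-256), "
          f"digest type {digest_type} (SHA-256), digest {digest[:12]}...")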
Jan 28 01:53:10.936742 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 28 01:53:11.054671 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 28 01:53:11.142065 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jan 28 01:53:11.156960 systemd-networkd[1402]: eth0: DHCPv4 address 10.0.0.134/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 28 01:53:11.323343 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 28 01:53:11.323910 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 28 01:53:11.397735 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 28 01:53:11.493270 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 28 01:53:11.670936 kernel: mousedev: PS/2 mouse device common for all mice Jan 28 01:53:11.764218 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 28 01:53:12.527324 systemd-resolved[1345]: Clock change detected. Flushing caches. Jan 28 01:53:12.527819 systemd-timesyncd[1417]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 28 01:53:12.527999 systemd-timesyncd[1417]: Initial clock synchronization to Wed 2026-01-28 01:53:12.527151 UTC. Jan 28 01:53:12.543791 systemd[1]: Reached target time-set.target - System Time Set. Jan 28 01:53:12.622004 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Jan 28 01:53:12.631726 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 28 01:53:12.666343 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jan 28 01:53:12.705312 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 28 01:53:13.123410 systemd-networkd[1402]: eth0: Gained IPv6LL Jan 28 01:53:13.164285 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 28 01:53:13.171204 systemd[1]: Reached target network-online.target - Network is Online. Jan 28 01:53:14.177348 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 28 01:53:16.601737 kernel: kvm_amd: TSC scaling supported Jan 28 01:53:16.602462 kernel: kvm_amd: Nested Virtualization enabled Jan 28 01:53:16.608146 kernel: kvm_amd: Nested Paging enabled Jan 28 01:53:16.608208 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jan 28 01:53:16.623314 kernel: kvm_amd: PMU virtualization is disabled Jan 28 01:53:18.806091 kernel: EDAC MC: Ver: 3.0.0 Jan 28 01:53:19.041448 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 28 01:53:19.223009 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 28 01:53:19.660270 lvm[1438]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 28 01:53:20.195409 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 28 01:53:20.285307 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 28 01:53:20.315848 systemd[1]: Reached target sysinit.target - System Initialization. Jan 28 01:53:20.341289 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 28 01:53:20.424389 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. 
Jan 28 01:53:20.469028 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 28 01:53:20.497086 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 28 01:53:20.522868 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 28 01:53:20.546790 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 28 01:53:20.547115 systemd[1]: Reached target paths.target - Path Units. Jan 28 01:53:20.570116 systemd[1]: Reached target timers.target - Timer Units. Jan 28 01:53:20.617757 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 28 01:53:20.671289 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 28 01:53:20.782217 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 28 01:53:20.812900 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 28 01:53:20.839158 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 28 01:53:20.855052 lvm[1442]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 28 01:53:20.853709 systemd[1]: Reached target sockets.target - Socket Units. Jan 28 01:53:20.875920 systemd[1]: Reached target basic.target - Basic System. Jan 28 01:53:20.921740 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 28 01:53:20.922047 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 28 01:53:20.945211 systemd[1]: Starting containerd.service - containerd container runtime... Jan 28 01:53:21.069047 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 28 01:53:21.175913 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 28 01:53:21.233706 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 28 01:53:21.275032 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 28 01:53:21.301165 jq[1446]: false Jan 28 01:53:21.311667 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 28 01:53:21.329742 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:53:21.368887 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 28 01:53:21.416698 dbus-daemon[1445]: [system] SELinux support is enabled Jan 28 01:53:21.420033 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 28 01:53:21.482771 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 28 01:53:21.487844 extend-filesystems[1447]: Found loop3 Jan 28 01:53:21.530858 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
Jan 28 01:53:21.585865 extend-filesystems[1447]: Found loop4 Jan 28 01:53:21.585865 extend-filesystems[1447]: Found loop5 Jan 28 01:53:21.585865 extend-filesystems[1447]: Found sr0 Jan 28 01:53:21.585865 extend-filesystems[1447]: Found vda Jan 28 01:53:21.585865 extend-filesystems[1447]: Found vda1 Jan 28 01:53:21.585865 extend-filesystems[1447]: Found vda2 Jan 28 01:53:21.585865 extend-filesystems[1447]: Found vda3 Jan 28 01:53:21.585865 extend-filesystems[1447]: Found usr Jan 28 01:53:21.585865 extend-filesystems[1447]: Found vda4 Jan 28 01:53:21.585865 extend-filesystems[1447]: Found vda6 Jan 28 01:53:21.585865 extend-filesystems[1447]: Found vda7 Jan 28 01:53:21.585865 extend-filesystems[1447]: Found vda9 Jan 28 01:53:21.585865 extend-filesystems[1447]: Checking size of /dev/vda9 Jan 28 01:53:22.520300 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 28 01:53:22.520432 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1393) Jan 28 01:53:21.765376 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 28 01:53:22.530792 extend-filesystems[1447]: Resized partition /dev/vda9 Jan 28 01:53:22.708465 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 28 01:53:21.960067 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 28 01:53:22.902378 extend-filesystems[1460]: resize2fs 1.47.1 (20-May-2024) Jan 28 01:53:22.014368 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 28 01:53:22.972368 extend-filesystems[1460]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 28 01:53:22.972368 extend-filesystems[1460]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 28 01:53:22.972368 extend-filesystems[1460]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jan 28 01:53:22.015849 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 28 01:53:23.244802 extend-filesystems[1447]: Resized filesystem in /dev/vda9 Jan 28 01:53:22.020916 systemd[1]: Starting update-engine.service - Update Engine... Jan 28 01:53:23.482868 update_engine[1473]: I20260128 01:53:22.673200 1473 main.cc:92] Flatcar Update Engine starting Jan 28 01:53:23.482868 update_engine[1473]: I20260128 01:53:22.685887 1473 update_check_scheduler.cc:74] Next update check in 4m58s Jan 28 01:53:22.328436 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 28 01:53:23.492707 jq[1476]: true Jan 28 01:53:22.539410 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 28 01:53:22.743826 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 28 01:53:22.898871 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 28 01:53:22.906220 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 28 01:53:22.943874 systemd-logind[1471]: Watching system buttons on /dev/input/event1 (Power Button) Jan 28 01:53:22.943915 systemd-logind[1471]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 28 01:53:22.946817 systemd-logind[1471]: New seat seat0. Jan 28 01:53:23.308928 systemd[1]: Started systemd-logind.service - User Login Management. Jan 28 01:53:23.503775 systemd[1]: extend-filesystems.service: Deactivated successfully. 
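extend-filesystems grew the root ext4 online from 553472 to 1864699 blocks at 4 KiB each, taking the filesystem the initrd fsck saw earlier ("52654/553472 blocks") from about 2.1 GiB to about 7.1 GiB of the disk:

    # Filesystem growth from the resize2fs figures in the log (4 KiB blocks).
    BLOCK = 4096
    for label, blocks in (("before", 553472), ("after", 1864699)):
        print(label, blocks, "blocks =", round(blocks * BLOCK / 2**30, 2), "GiB")
    # before 553472 blocks = 2.11 GiB / after 1864699 blocks = 7.11 GiB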
Jan 28 01:53:23.504208 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 28 01:53:23.681842 systemd[1]: motdgen.service: Deactivated successfully. Jan 28 01:53:23.682308 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 28 01:53:23.735283 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 28 01:53:23.826124 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 28 01:53:23.826454 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 28 01:53:23.997315 (ntainerd)[1483]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 28 01:53:24.021259 jq[1482]: true Jan 28 01:53:24.250035 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 28 01:53:24.265110 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 28 01:53:24.527930 sshd_keygen[1475]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 28 01:53:24.680185 dbus-daemon[1445]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 28 01:53:24.683486 tar[1481]: linux-amd64/LICENSE Jan 28 01:53:24.683486 tar[1481]: linux-amd64/helm Jan 28 01:53:24.697286 systemd[1]: Started update-engine.service - Update Engine. Jan 28 01:53:24.745918 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 28 01:53:24.761835 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 28 01:53:24.762242 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 28 01:53:24.762427 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 28 01:53:24.786521 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 28 01:53:24.786937 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 28 01:53:24.887256 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 28 01:53:25.174912 bash[1521]: Updated "/home/core/.ssh/authorized_keys" Jan 28 01:53:25.201196 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 28 01:53:25.334343 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 28 01:53:25.384794 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 28 01:53:25.591469 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 28 01:53:25.657366 systemd[1]: Started sshd@0-10.0.0.134:22-10.0.0.1:53148.service - OpenSSH per-connection server daemon (10.0.0.1:53148). Jan 28 01:53:26.091444 systemd[1]: issuegen.service: Deactivated successfully. Jan 28 01:53:26.097036 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 28 01:53:26.352907 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 28 01:53:26.495687 locksmithd[1522]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 28 01:53:26.514837 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 28 01:53:26.628126 systemd[1]: Started getty@tty1.service - Getty on tty1. 
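Update Engine is now running (with the next check scheduled above) and locksmithd starts with strategy="reboot", meaning the node is allowed to reboot itself to apply a downloaded update. On Flatcar this strategy is conventionally set through /etc/flatcar/update.conf; a hedged example, where the key is the documented one and the value simply mirrors the strategy logged above:

    # /etc/flatcar/update.conf (illustrative)
    REBOOT_STRATEGY=reboot    # other documented values: off, etcd-lock, best-effort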
Jan 28 01:53:26.727151 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 28 01:53:26.809425 systemd[1]: Reached target getty.target - Login Prompts. Jan 28 01:53:27.325824 sshd[1528]: Accepted publickey for core from 10.0.0.1 port 53148 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:53:27.435740 sshd[1528]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:53:27.772166 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 28 01:53:28.191820 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 28 01:53:28.236208 systemd-logind[1471]: New session 1 of user core. Jan 28 01:53:28.856641 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 28 01:53:28.933866 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 28 01:53:29.197243 (systemd)[1549]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 28 01:53:30.473867 containerd[1483]: time="2026-01-28T01:53:30.473766394Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 28 01:53:30.735861 systemd[1549]: Queued start job for default target default.target. Jan 28 01:53:30.774372 systemd[1549]: Created slice app.slice - User Application Slice. Jan 28 01:53:30.774418 systemd[1549]: Reached target paths.target - Paths. Jan 28 01:53:30.774701 systemd[1549]: Reached target timers.target - Timers. Jan 28 01:53:30.807186 containerd[1483]: time="2026-01-28T01:53:30.798146174Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 28 01:53:30.804050 systemd[1549]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 28 01:53:30.906748 containerd[1483]: time="2026-01-28T01:53:30.900378595Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.119-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 28 01:53:30.906748 containerd[1483]: time="2026-01-28T01:53:30.901687379Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 28 01:53:30.906748 containerd[1483]: time="2026-01-28T01:53:30.901794659Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 28 01:53:30.906748 containerd[1483]: time="2026-01-28T01:53:30.902511548Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 28 01:53:30.912677 containerd[1483]: time="2026-01-28T01:53:30.905730427Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 28 01:53:30.912677 containerd[1483]: time="2026-01-28T01:53:30.912363721Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 28 01:53:30.912677 containerd[1483]: time="2026-01-28T01:53:30.912391433Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 28 01:53:30.922646 containerd[1483]: time="2026-01-28T01:53:30.922433239Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 28 01:53:30.922646 containerd[1483]: time="2026-01-28T01:53:30.922479945Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 28 01:53:30.922646 containerd[1483]: time="2026-01-28T01:53:30.922696049Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 28 01:53:30.922913 containerd[1483]: time="2026-01-28T01:53:30.922773614Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 28 01:53:30.926734 containerd[1483]: time="2026-01-28T01:53:30.923163251Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 28 01:53:30.926734 containerd[1483]: time="2026-01-28T01:53:30.925027782Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 28 01:53:30.926734 containerd[1483]: time="2026-01-28T01:53:30.925383106Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 28 01:53:30.926734 containerd[1483]: time="2026-01-28T01:53:30.925410096Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 28 01:53:30.926734 containerd[1483]: time="2026-01-28T01:53:30.925795625Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 28 01:53:30.926734 containerd[1483]: time="2026-01-28T01:53:30.926101928Z" level=info msg="metadata content store policy set" policy=shared Jan 28 01:53:30.955386 containerd[1483]: time="2026-01-28T01:53:30.954809454Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 28 01:53:30.955386 containerd[1483]: time="2026-01-28T01:53:30.954904702Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 28 01:53:30.955386 containerd[1483]: time="2026-01-28T01:53:30.955053580Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 28 01:53:30.955386 containerd[1483]: time="2026-01-28T01:53:30.955084448Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 28 01:53:30.955386 containerd[1483]: time="2026-01-28T01:53:30.955176430Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 28 01:53:30.955755 containerd[1483]: time="2026-01-28T01:53:30.955496838Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 28 01:53:30.959494 containerd[1483]: time="2026-01-28T01:53:30.956379105Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 28 01:53:30.959494 containerd[1483]: time="2026-01-28T01:53:30.956850155Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." 
type=io.containerd.runtime.v2 Jan 28 01:53:30.959494 containerd[1483]: time="2026-01-28T01:53:30.956876444Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 28 01:53:30.959494 containerd[1483]: time="2026-01-28T01:53:30.956895349Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 28 01:53:30.959494 containerd[1483]: time="2026-01-28T01:53:30.956917189Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 28 01:53:30.959494 containerd[1483]: time="2026-01-28T01:53:30.956935684Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 28 01:53:30.959494 containerd[1483]: time="2026-01-28T01:53:30.956951794Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 28 01:53:30.959494 containerd[1483]: time="2026-01-28T01:53:30.957034288Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 28 01:53:30.959494 containerd[1483]: time="2026-01-28T01:53:30.957058143Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 28 01:53:30.959494 containerd[1483]: time="2026-01-28T01:53:30.957076918Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 28 01:53:30.959494 containerd[1483]: time="2026-01-28T01:53:30.957093549Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 28 01:53:30.959494 containerd[1483]: time="2026-01-28T01:53:30.957109348Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 28 01:53:30.959494 containerd[1483]: time="2026-01-28T01:53:30.957136930Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 28 01:53:30.959494 containerd[1483]: time="2026-01-28T01:53:30.957155895Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 28 01:53:30.961863 containerd[1483]: time="2026-01-28T01:53:30.957172517Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 28 01:53:30.961863 containerd[1483]: time="2026-01-28T01:53:30.957265490Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 28 01:53:30.961863 containerd[1483]: time="2026-01-28T01:53:30.957292410Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 28 01:53:30.961863 containerd[1483]: time="2026-01-28T01:53:30.957316405Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 28 01:53:30.961863 containerd[1483]: time="2026-01-28T01:53:30.957336212Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 28 01:53:30.961863 containerd[1483]: time="2026-01-28T01:53:30.957354797Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 28 01:53:30.961863 containerd[1483]: time="2026-01-28T01:53:30.957377048Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." 
type=io.containerd.grpc.v1 Jan 28 01:53:30.961863 containerd[1483]: time="2026-01-28T01:53:30.957473679Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 28 01:53:30.961863 containerd[1483]: time="2026-01-28T01:53:30.957498044Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 28 01:53:30.961863 containerd[1483]: time="2026-01-28T01:53:30.957519514Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 28 01:53:30.961863 containerd[1483]: time="2026-01-28T01:53:30.959247450Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 28 01:53:30.961863 containerd[1483]: time="2026-01-28T01:53:30.960637155Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 28 01:53:30.961863 containerd[1483]: time="2026-01-28T01:53:30.961181371Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 28 01:53:30.961863 containerd[1483]: time="2026-01-28T01:53:30.961351578Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 28 01:53:30.961863 containerd[1483]: time="2026-01-28T01:53:30.961494485Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 28 01:53:30.981254 containerd[1483]: time="2026-01-28T01:53:30.961875617Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 28 01:53:30.981254 containerd[1483]: time="2026-01-28T01:53:30.963337867Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 28 01:53:30.981254 containerd[1483]: time="2026-01-28T01:53:30.963701916Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 28 01:53:30.981254 containerd[1483]: time="2026-01-28T01:53:30.964081104Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 28 01:53:30.981254 containerd[1483]: time="2026-01-28T01:53:30.964222158Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 28 01:53:30.981254 containerd[1483]: time="2026-01-28T01:53:30.964244148Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 28 01:53:30.981254 containerd[1483]: time="2026-01-28T01:53:30.964798434Z" level=info msg="NRI interface is disabled by configuration." Jan 28 01:53:30.981254 containerd[1483]: time="2026-01-28T01:53:30.965144109Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 28 01:53:30.981682 containerd[1483]: time="2026-01-28T01:53:30.965927912Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 28 01:53:30.981682 containerd[1483]: time="2026-01-28T01:53:30.966077521Z" level=info msg="Connect containerd service" Jan 28 01:53:30.981682 containerd[1483]: time="2026-01-28T01:53:30.966131092Z" level=info msg="using legacy CRI server" Jan 28 01:53:30.981682 containerd[1483]: time="2026-01-28T01:53:30.966141531Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 28 01:53:30.981682 containerd[1483]: time="2026-01-28T01:53:30.967389471Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 28 01:53:30.981682 containerd[1483]: time="2026-01-28T01:53:30.973251615Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 28 01:53:30.981682 
containerd[1483]: time="2026-01-28T01:53:30.973503856Z" level=info msg="Start subscribing containerd event" Jan 28 01:53:30.981682 containerd[1483]: time="2026-01-28T01:53:30.973646252Z" level=info msg="Start recovering state" Jan 28 01:53:30.981682 containerd[1483]: time="2026-01-28T01:53:30.973750326Z" level=info msg="Start event monitor" Jan 28 01:53:30.981682 containerd[1483]: time="2026-01-28T01:53:30.973772999Z" level=info msg="Start snapshots syncer" Jan 28 01:53:30.981682 containerd[1483]: time="2026-01-28T01:53:30.973789980Z" level=info msg="Start cni network conf syncer for default" Jan 28 01:53:30.981682 containerd[1483]: time="2026-01-28T01:53:30.973800400Z" level=info msg="Start streaming server" Jan 28 01:53:30.981682 containerd[1483]: time="2026-01-28T01:53:30.976398300Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 28 01:53:30.981682 containerd[1483]: time="2026-01-28T01:53:30.976498197Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 28 01:53:31.014768 containerd[1483]: time="2026-01-28T01:53:30.985396308Z" level=info msg="containerd successfully booted in 0.556462s" Jan 28 01:53:30.981734 systemd[1549]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 28 01:53:30.986793 systemd[1]: Started containerd.service - containerd container runtime. Jan 28 01:53:30.987630 systemd[1549]: Reached target sockets.target - Sockets. Jan 28 01:53:30.987674 systemd[1549]: Reached target basic.target - Basic System. Jan 28 01:53:30.987764 systemd[1549]: Reached target default.target - Main User Target. Jan 28 01:53:30.987836 systemd[1549]: Startup finished in 1.490s. Jan 28 01:53:31.017341 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 28 01:53:31.060421 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 28 01:53:31.631922 systemd[1]: Started sshd@1-10.0.0.134:22-10.0.0.1:40992.service - OpenSSH per-connection server daemon (10.0.0.1:40992). Jan 28 01:53:32.785336 tar[1481]: linux-amd64/README.md Jan 28 01:53:32.814138 sshd[1564]: Accepted publickey for core from 10.0.0.1 port 40992 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:53:32.848902 sshd[1564]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:53:33.050460 systemd-logind[1471]: New session 2 of user core. Jan 28 01:53:33.084489 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 28 01:53:33.096325 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 28 01:53:33.533656 sshd[1564]: pam_unix(sshd:session): session closed for user core Jan 28 01:53:33.593351 systemd[1]: sshd@1-10.0.0.134:22-10.0.0.1:40992.service: Deactivated successfully. Jan 28 01:53:33.620917 systemd[1]: session-2.scope: Deactivated successfully. Jan 28 01:53:33.629887 systemd-logind[1471]: Session 2 logged out. Waiting for processes to exit. Jan 28 01:53:33.717136 systemd[1]: Started sshd@2-10.0.0.134:22-10.0.0.1:33098.service - OpenSSH per-connection server daemon (10.0.0.1:33098). Jan 28 01:53:33.738885 systemd-logind[1471]: Removed session 2. Jan 28 01:53:34.031892 sshd[1578]: Accepted publickey for core from 10.0.0.1 port 33098 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:53:34.037266 sshd[1578]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:53:34.064659 systemd-logind[1471]: New session 3 of user core. Jan 28 01:53:34.079294 systemd[1]: Started session-3.scope - Session 3 of User core. 
Jan 28 01:53:34.464042 sshd[1578]: pam_unix(sshd:session): session closed for user core Jan 28 01:53:34.615321 systemd[1]: sshd@2-10.0.0.134:22-10.0.0.1:33098.service: Deactivated successfully. Jan 28 01:53:34.622459 systemd[1]: session-3.scope: Deactivated successfully. Jan 28 01:53:34.628311 systemd-logind[1471]: Session 3 logged out. Waiting for processes to exit. Jan 28 01:53:34.639140 systemd-logind[1471]: Removed session 3. Jan 28 01:53:36.419420 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:53:36.425688 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 28 01:53:36.431053 systemd[1]: Startup finished in 14.593s (kernel) + 49.337s (initrd) + 1min 275ms (userspace) = 2min 4.206s. Jan 28 01:53:36.432984 (kubelet)[1589]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 01:53:44.232217 kubelet[1589]: E0128 01:53:44.227874 1589 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 01:53:44.260782 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 01:53:44.261185 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 01:53:44.275360 systemd[1]: kubelet.service: Consumed 6.640s CPU time. Jan 28 01:53:44.752433 systemd[1]: Started sshd@3-10.0.0.134:22-10.0.0.1:36248.service - OpenSSH per-connection server daemon (10.0.0.1:36248). Jan 28 01:53:44.972374 sshd[1598]: Accepted publickey for core from 10.0.0.1 port 36248 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:53:44.979331 sshd[1598]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:53:45.153453 systemd-logind[1471]: New session 4 of user core. Jan 28 01:53:45.193116 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 28 01:53:45.429020 sshd[1598]: pam_unix(sshd:session): session closed for user core Jan 28 01:53:45.557109 systemd[1]: sshd@3-10.0.0.134:22-10.0.0.1:36248.service: Deactivated successfully. Jan 28 01:53:45.614754 systemd[1]: session-4.scope: Deactivated successfully. Jan 28 01:53:45.624469 systemd-logind[1471]: Session 4 logged out. Waiting for processes to exit. Jan 28 01:53:45.646351 systemd[1]: Started sshd@4-10.0.0.134:22-10.0.0.1:36256.service - OpenSSH per-connection server daemon (10.0.0.1:36256). Jan 28 01:53:45.659816 systemd-logind[1471]: Removed session 4. Jan 28 01:53:45.802852 sshd[1605]: Accepted publickey for core from 10.0.0.1 port 36256 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:53:45.870661 sshd[1605]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:53:45.924085 systemd-logind[1471]: New session 5 of user core. Jan 28 01:53:45.971127 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 28 01:53:46.096018 sshd[1605]: pam_unix(sshd:session): session closed for user core Jan 28 01:53:46.144940 systemd[1]: sshd@4-10.0.0.134:22-10.0.0.1:36256.service: Deactivated successfully. Jan 28 01:53:46.187695 systemd[1]: session-5.scope: Deactivated successfully. Jan 28 01:53:46.221330 systemd-logind[1471]: Session 5 logged out. Waiting for processes to exit. 
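The kubelet failure above is the classic pre-bootstrap state: the process exits because /var/lib/kubelet/config.yaml does not exist yet. On a kubeadm-managed node that file is generated during cluster bootstrap, so the crash is expected until one of the following runs (sketch only, arguments elided):

    kubeadm init <args>       # first control-plane node; writes /var/lib/kubelet/config.yaml
    kubeadm join <args>       # worker or additional control-plane node; same effect
    systemctl status kubelet  # the unit settles once the config file exists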
Jan 28 01:53:46.302915 systemd[1]: Started sshd@5-10.0.0.134:22-10.0.0.1:36262.service - OpenSSH per-connection server daemon (10.0.0.1:36262). Jan 28 01:53:46.355076 systemd-logind[1471]: Removed session 5. Jan 28 01:53:46.529785 sshd[1612]: Accepted publickey for core from 10.0.0.1 port 36262 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:53:46.529027 sshd[1612]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:53:46.584944 systemd-logind[1471]: New session 6 of user core. Jan 28 01:53:46.588650 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 28 01:53:46.727204 sshd[1612]: pam_unix(sshd:session): session closed for user core Jan 28 01:53:46.786007 systemd[1]: sshd@5-10.0.0.134:22-10.0.0.1:36262.service: Deactivated successfully. Jan 28 01:53:46.804367 systemd[1]: session-6.scope: Deactivated successfully. Jan 28 01:53:46.820942 systemd-logind[1471]: Session 6 logged out. Waiting for processes to exit. Jan 28 01:53:46.843405 systemd[1]: Started sshd@6-10.0.0.134:22-10.0.0.1:36270.service - OpenSSH per-connection server daemon (10.0.0.1:36270). Jan 28 01:53:46.891497 systemd-logind[1471]: Removed session 6. Jan 28 01:53:46.992810 sshd[1619]: Accepted publickey for core from 10.0.0.1 port 36270 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 01:53:46.997110 sshd[1619]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:53:47.058657 systemd-logind[1471]: New session 7 of user core. Jan 28 01:53:47.080321 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 28 01:53:47.464219 sudo[1622]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 28 01:53:47.470429 sudo[1622]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 28 01:53:54.323445 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 28 01:53:54.370016 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:53:56.302750 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 28 01:53:56.398350 (dockerd)[1643]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 28 01:53:57.461258 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:53:57.534678 (kubelet)[1648]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 01:53:59.377695 kubelet[1648]: E0128 01:53:59.367501 1648 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 01:53:59.411718 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 01:53:59.412111 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 01:53:59.439362 systemd[1]: kubelet.service: Consumed 2.060s CPU time. Jan 28 01:54:01.847480 dockerd[1643]: time="2026-01-28T01:54:01.844036239Z" level=info msg="Starting up" Jan 28 01:54:03.728004 systemd[1]: var-lib-docker-metacopy\x2dcheck537356133-merged.mount: Deactivated successfully. 
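"Scheduled restart job, restart counter is at 1" explains why the same failure keeps reappearing below: the unit restarts itself automatically, here roughly every ten seconds (failure at 01:53:44, restart scheduled at 01:53:54), which matches the Restart=always / RestartSec=10 drop-in that kubeadm packaging typically ships. To see the settings actually in effect on this host:

    systemctl cat kubelet.service    # prints the unit plus active drop-ins, including Restart= lines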
Jan 28 01:54:04.197277 dockerd[1643]: time="2026-01-28T01:54:04.194971947Z" level=info msg="Loading containers: start." Jan 28 01:54:07.090637 kernel: Initializing XFRM netlink socket Jan 28 01:54:08.285792 update_engine[1473]: I20260128 01:54:08.269704 1473 update_attempter.cc:509] Updating boot flags... Jan 28 01:54:08.935860 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1758) Jan 28 01:54:09.337951 systemd-networkd[1402]: docker0: Link UP Jan 28 01:54:09.628080 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 28 01:54:09.721911 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:54:09.842062 dockerd[1643]: time="2026-01-28T01:54:09.835989072Z" level=info msg="Loading containers: done." Jan 28 01:54:10.846185 dockerd[1643]: time="2026-01-28T01:54:10.835039235Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 28 01:54:10.846185 dockerd[1643]: time="2026-01-28T01:54:10.847993059Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 28 01:54:10.882954 dockerd[1643]: time="2026-01-28T01:54:10.860206274Z" level=info msg="Daemon has completed initialization" Jan 28 01:54:11.688832 dockerd[1643]: time="2026-01-28T01:54:11.678745732Z" level=info msg="API listen on /run/docker.sock" Jan 28 01:54:11.686846 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 28 01:54:12.083231 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:54:12.128451 (kubelet)[1816]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 01:54:12.862765 kubelet[1816]: E0128 01:54:12.846960 1816 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 01:54:12.891860 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 01:54:12.892156 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 01:54:12.902881 systemd[1]: kubelet.service: Consumed 1.207s CPU time. Jan 28 01:54:21.761188 containerd[1483]: time="2026-01-28T01:54:21.759942010Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\"" Jan 28 01:54:23.103513 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 28 01:54:23.152504 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:54:24.859933 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:54:24.869175 (kubelet)[1840]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 01:54:26.769090 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount424594334.mount: Deactivated successfully. 
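dockerd completes initialization on overlay2 but warns that native overlayfs diffing is disabled because the kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled; per the warning text itself this only degrades image-build performance, not container runtime behavior. A quick check of the active storage driver:

    docker info --format '{{.Driver}}'    # expect: overlay2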
Jan 28 01:54:26.824289 kubelet[1840]: E0128 01:54:26.820931 1840 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 01:54:26.838187 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 01:54:26.838465 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 01:54:26.839218 systemd[1]: kubelet.service: Consumed 1.592s CPU time. Jan 28 01:54:37.109716 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 28 01:54:37.214319 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:54:40.879761 (kubelet)[1915]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 01:54:40.883134 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:54:41.734505 kubelet[1915]: E0128 01:54:41.732456 1915 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 01:54:41.765409 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 01:54:41.774982 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 01:54:41.781228 systemd[1]: kubelet.service: Consumed 1.889s CPU time. Jan 28 01:54:45.672154 containerd[1483]: time="2026-01-28T01:54:45.669198053Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:54:45.685427 containerd[1483]: time="2026-01-28T01:54:45.684809908Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.3: active requests=0, bytes read=27068073" Jan 28 01:54:45.697290 containerd[1483]: time="2026-01-28T01:54:45.695502010Z" level=info msg="ImageCreate event name:\"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:54:45.719810 containerd[1483]: time="2026-01-28T01:54:45.719746201Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:54:45.738102 containerd[1483]: time="2026-01-28T01:54:45.736515041Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.3\" with image id \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\", size \"27064672\" in 23.976156059s" Jan 28 01:54:45.738102 containerd[1483]: time="2026-01-28T01:54:45.736810932Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\" returns image reference \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\"" Jan 28 01:54:45.774737 containerd[1483]: time="2026-01-28T01:54:45.774461790Z" level=info msg="PullImage 
\"registry.k8s.io/kube-controller-manager:v1.34.3\"" Jan 28 01:54:51.785902 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jan 28 01:54:51.817184 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:54:52.689449 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:54:52.792447 (kubelet)[1935]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 01:54:53.368161 kubelet[1935]: E0128 01:54:53.367976 1935 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 01:54:53.395414 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 01:54:53.400878 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 01:54:55.384873 containerd[1483]: time="2026-01-28T01:54:55.383019690Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:54:55.399401 containerd[1483]: time="2026-01-28T01:54:55.399010844Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.3: active requests=0, bytes read=21162440" Jan 28 01:54:55.408191 containerd[1483]: time="2026-01-28T01:54:55.403463775Z" level=info msg="ImageCreate event name:\"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:54:55.436068 containerd[1483]: time="2026-01-28T01:54:55.434495754Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:54:55.444975 containerd[1483]: time="2026-01-28T01:54:55.442404821Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.3\" with image id \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\", size \"22819474\" in 9.667398787s" Jan 28 01:54:55.444975 containerd[1483]: time="2026-01-28T01:54:55.444066699Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\" returns image reference \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\"" Jan 28 01:54:55.445705 containerd[1483]: time="2026-01-28T01:54:55.445305596Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\"" Jan 28 01:55:14.089809 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Jan 28 01:55:17.018380 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 28 01:55:24.265060 containerd[1483]: time="2026-01-28T01:55:24.248466973Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:55:24.265060 containerd[1483]: time="2026-01-28T01:55:24.255956627Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.3: active requests=0, bytes read=15725927" Jan 28 01:55:24.554959 containerd[1483]: time="2026-01-28T01:55:24.547106281Z" level=info msg="ImageCreate event name:\"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:55:24.680793 containerd[1483]: time="2026-01-28T01:55:24.671883540Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:55:24.706696 containerd[1483]: time="2026-01-28T01:55:24.694942854Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.3\" with image id \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\", size \"17382979\" in 29.249515603s" Jan 28 01:55:24.706696 containerd[1483]: time="2026-01-28T01:55:24.701835514Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\" returns image reference \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\"" Jan 28 01:55:25.195308 containerd[1483]: time="2026-01-28T01:55:24.837427323Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\"" Jan 28 01:55:26.135794 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:55:26.363096 (kubelet)[1957]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 01:55:29.903048 kubelet[1957]: E0128 01:55:29.580183 1957 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 01:55:29.979250 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 01:55:29.979808 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 01:55:29.981401 systemd[1]: kubelet.service: Consumed 6.199s CPU time. Jan 28 01:55:40.064749 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Jan 28 01:55:40.124260 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:55:42.456403 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2524165618.mount: Deactivated successfully. Jan 28 01:55:43.558119 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 28 01:55:43.582869 (kubelet)[1982]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 01:55:43.783638 kubelet[1982]: E0128 01:55:43.780428 1982 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 01:55:43.796331 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 01:55:43.796858 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 01:55:43.797291 systemd[1]: kubelet.service: Consumed 1.441s CPU time. Jan 28 01:55:50.985977 containerd[1483]: time="2026-01-28T01:55:50.978360613Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:55:50.985977 containerd[1483]: time="2026-01-28T01:55:50.983068911Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.3: active requests=0, bytes read=25965293" Jan 28 01:55:51.002958 containerd[1483]: time="2026-01-28T01:55:51.000498050Z" level=info msg="ImageCreate event name:\"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:55:51.024784 containerd[1483]: time="2026-01-28T01:55:51.022084958Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:55:51.024784 containerd[1483]: time="2026-01-28T01:55:51.023635615Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.3\" with image id \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\", repo tag \"registry.k8s.io/kube-proxy:v1.34.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\", size \"25964312\" in 26.185921743s" Jan 28 01:55:51.024784 containerd[1483]: time="2026-01-28T01:55:51.023673938Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\" returns image reference \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\"" Jan 28 01:55:51.036947 containerd[1483]: time="2026-01-28T01:55:51.036283251Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Jan 28 01:55:54.053784 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2850542670.mount: Deactivated successfully. Jan 28 01:55:54.057334 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Jan 28 01:55:54.193203 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:55:58.465021 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 28 01:55:58.986161 (kubelet)[2010]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 01:56:01.328274 kubelet[2010]: E0128 01:56:01.314519 2010 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 01:56:01.356005 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 01:56:01.356307 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 01:56:01.373344 systemd[1]: kubelet.service: Consumed 3.350s CPU time. Jan 28 01:56:11.592036 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. Jan 28 01:56:12.400127 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:56:18.948452 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:56:19.002738 (kubelet)[2068]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 01:56:19.801805 containerd[1483]: time="2026-01-28T01:56:19.798283370Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:56:19.817194 containerd[1483]: time="2026-01-28T01:56:19.816808799Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388007" Jan 28 01:56:19.839163 containerd[1483]: time="2026-01-28T01:56:19.835431227Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:56:19.872787 containerd[1483]: time="2026-01-28T01:56:19.863778088Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:56:19.872787 containerd[1483]: time="2026-01-28T01:56:19.865789148Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 28.829442067s" Jan 28 01:56:19.872787 containerd[1483]: time="2026-01-28T01:56:19.865829793Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Jan 28 01:56:19.881170 containerd[1483]: time="2026-01-28T01:56:19.881118453Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Jan 28 01:56:20.266083 kubelet[2068]: E0128 01:56:20.262168 2068 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 01:56:20.273829 systemd[1]: kubelet.service: Main process exited, 
code=exited, status=1/FAILURE Jan 28 01:56:20.276939 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 01:56:20.279358 systemd[1]: kubelet.service: Consumed 3.177s CPU time. Jan 28 01:56:23.074513 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2183467214.mount: Deactivated successfully. Jan 28 01:56:23.153277 containerd[1483]: time="2026-01-28T01:56:23.140427166Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:56:23.168402 containerd[1483]: time="2026-01-28T01:56:23.163195143Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218" Jan 28 01:56:23.176801 containerd[1483]: time="2026-01-28T01:56:23.175215334Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:56:23.204442 containerd[1483]: time="2026-01-28T01:56:23.201421391Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:56:23.216842 containerd[1483]: time="2026-01-28T01:56:23.214028197Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 3.331147138s" Jan 28 01:56:23.216842 containerd[1483]: time="2026-01-28T01:56:23.214144474Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Jan 28 01:56:23.259999 containerd[1483]: time="2026-01-28T01:56:23.259462134Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Jan 28 01:56:24.988100 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3206825090.mount: Deactivated successfully. Jan 28 01:56:30.308199 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10. Jan 28 01:56:30.353150 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:56:31.227202 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:56:31.297859 (kubelet)[2142]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 01:56:32.086070 kubelet[2142]: E0128 01:56:32.082347 2142 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 01:56:32.109850 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 01:56:32.110479 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 01:56:32.120495 systemd[1]: kubelet.service: Consumed 1.326s CPU time. Jan 28 01:56:42.334219 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11. Jan 28 01:56:42.455498 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 28 01:56:46.050899 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:56:46.084298 (kubelet)[2159]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 01:56:47.610039 kubelet[2159]: E0128 01:56:47.609225 2159 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 01:56:47.623330 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 01:56:47.624011 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 01:56:47.630901 systemd[1]: kubelet.service: Consumed 1.968s CPU time. Jan 28 01:56:53.405748 containerd[1483]: time="2026-01-28T01:56:53.405050568Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:56:53.419909 containerd[1483]: time="2026-01-28T01:56:53.419812450Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=74166814" Jan 28 01:56:53.427281 containerd[1483]: time="2026-01-28T01:56:53.427194931Z" level=info msg="ImageCreate event name:\"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:56:53.460839 containerd[1483]: time="2026-01-28T01:56:53.459351462Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:56:53.485925 containerd[1483]: time="2026-01-28T01:56:53.481383013Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"74311308\" in 30.221861118s" Jan 28 01:56:53.486191 containerd[1483]: time="2026-01-28T01:56:53.486149684Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\"" Jan 28 01:56:57.654378 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12. Jan 28 01:56:57.991925 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:57:00.496923 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:57:00.507354 (kubelet)[2201]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 01:57:00.991262 kubelet[2201]: E0128 01:57:00.990757 2201 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 01:57:01.001896 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 01:57:01.002176 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
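That completes the control-plane image pulls: containerd logged each of kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, coredns, pause and etcd with its repo digest and wall-clock pull time (etcd, the largest at roughly 74 MB, took just over 30 s). The cached images can be listed through the CRI, assuming crictl is installed and pointed at containerd's socket:

    crictl images | grep registry.k8s.io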
Jan 28 01:57:01.004834 systemd[1]: kubelet.service: Consumed 1.101s CPU time. Jan 28 01:57:11.045146 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 13. Jan 28 01:57:11.112386 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:57:11.965239 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:57:12.092936 (kubelet)[2218]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 01:57:14.307176 kubelet[2218]: E0128 01:57:14.306168 2218 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 01:57:14.336189 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 01:57:14.336926 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 01:57:14.346355 systemd[1]: kubelet.service: Consumed 1.310s CPU time. Jan 28 01:57:17.716706 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:57:17.717008 systemd[1]: kubelet.service: Consumed 1.310s CPU time. Jan 28 01:57:17.744393 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:57:17.893292 systemd[1]: Reloading requested from client PID 2235 ('systemctl') (unit session-7.scope)... Jan 28 01:57:17.893910 systemd[1]: Reloading... Jan 28 01:57:19.877734 zram_generator::config[2270]: No configuration found. Jan 28 01:57:20.497361 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 28 01:57:21.381710 systemd[1]: Reloading finished in 3483 ms. Jan 28 01:57:22.825896 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 28 01:57:22.826249 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 28 01:57:22.836896 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:57:22.902822 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:57:24.021790 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:57:24.053348 (kubelet)[2321]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 28 01:57:24.583184 kubelet[2321]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 28 01:57:24.583184 kubelet[2321]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
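[Note] The reload above also surfaces a deprecation warning about docker.socket line 6 referencing /var/run, which systemd rewrites on the fly to /run. Since /var/run is a symlink to /run the behavior is unchanged; silencing the warning just means updating the unit. A sketch (run as root), using the unit path from the warning and overriding in /etc rather than editing the vendor file:

    # Copy the vendor unit into /etc and point ListenStream at /run directly
    cp /usr/lib/systemd/system/docker.socket /etc/systemd/system/docker.socket
    sed -i 's|ListenStream=/var/run/docker.sock|ListenStream=/run/docker.sock|' \
        /etc/systemd/system/docker.socket
    systemctl daemon-reload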
Jan 28 01:57:24.583184 kubelet[2321]: I0128 01:57:24.581953 2321 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 28 01:57:27.619327 kubelet[2321]: I0128 01:57:27.609266 2321 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Jan 28 01:57:27.619327 kubelet[2321]: I0128 01:57:27.609462 2321 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 28 01:57:27.619327 kubelet[2321]: I0128 01:57:27.609879 2321 watchdog_linux.go:95] "Systemd watchdog is not enabled" Jan 28 01:57:27.619327 kubelet[2321]: I0128 01:57:27.609904 2321 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 28 01:57:27.619327 kubelet[2321]: I0128 01:57:27.613471 2321 server.go:956] "Client rotation is on, will bootstrap in background" Jan 28 01:57:27.947857 kubelet[2321]: E0128 01:57:27.939084 2321 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.134:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.134:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 28 01:57:28.117298 kubelet[2321]: I0128 01:57:28.115832 2321 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 28 01:57:28.302798 kubelet[2321]: E0128 01:57:28.288716 2321 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 28 01:57:28.302798 kubelet[2321]: I0128 01:57:28.289016 2321 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Jan 28 01:57:28.386923 kubelet[2321]: I0128 01:57:28.385718 2321 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Jan 28 01:57:28.403754 kubelet[2321]: I0128 01:57:28.403033 2321 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 28 01:57:28.406699 kubelet[2321]: I0128 01:57:28.403091 2321 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 28 01:57:28.406699 kubelet[2321]: I0128 01:57:28.404885 2321 topology_manager.go:138] "Creating topology manager with none policy" Jan 28 01:57:28.406699 kubelet[2321]: I0128 01:57:28.404906 2321 container_manager_linux.go:306] "Creating device plugin manager" Jan 28 01:57:28.406699 kubelet[2321]: I0128 01:57:28.405493 2321 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Jan 28 01:57:28.485893 kubelet[2321]: I0128 01:57:28.483298 2321 state_mem.go:36] "Initialized new in-memory state store" Jan 28 01:57:28.494632 kubelet[2321]: I0128 01:57:28.493434 2321 kubelet.go:475] "Attempting to sync node with API server" Jan 28 01:57:28.494632 kubelet[2321]: I0128 01:57:28.493619 2321 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 28 01:57:28.494632 kubelet[2321]: I0128 01:57:28.493936 2321 kubelet.go:387] "Adding apiserver pod source" Jan 28 01:57:28.494632 kubelet[2321]: I0128 01:57:28.494137 2321 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 28 01:57:28.501067 kubelet[2321]: E0128 01:57:28.497302 2321 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.134:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.134:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 28 01:57:28.505700 kubelet[2321]: E0128 01:57:28.505626 2321 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.134:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 
10.0.0.134:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 28 01:57:28.521463 kubelet[2321]: I0128 01:57:28.521421 2321 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 28 01:57:28.528100 kubelet[2321]: I0128 01:57:28.528014 2321 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 28 01:57:28.528305 kubelet[2321]: I0128 01:57:28.528111 2321 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Jan 28 01:57:28.528662 kubelet[2321]: W0128 01:57:28.528497 2321 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 28 01:57:28.561786 kubelet[2321]: I0128 01:57:28.561652 2321 server.go:1262] "Started kubelet" Jan 28 01:57:28.562681 kubelet[2321]: I0128 01:57:28.562609 2321 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 28 01:57:28.566587 kubelet[2321]: I0128 01:57:28.563073 2321 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 28 01:57:28.566587 kubelet[2321]: I0128 01:57:28.563256 2321 server_v1.go:49] "podresources" method="list" useActivePods=true Jan 28 01:57:28.566587 kubelet[2321]: I0128 01:57:28.564477 2321 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 28 01:57:28.576520 kubelet[2321]: I0128 01:57:28.575728 2321 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 28 01:57:28.586024 kubelet[2321]: I0128 01:57:28.583218 2321 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 28 01:57:28.591840 kubelet[2321]: I0128 01:57:28.591513 2321 server.go:310] "Adding debug handlers to kubelet server" Jan 28 01:57:28.596768 kubelet[2321]: I0128 01:57:28.596336 2321 volume_manager.go:313] "Starting Kubelet Volume Manager" Jan 28 01:57:28.601285 kubelet[2321]: E0128 01:57:28.600211 2321 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:57:28.606272 kubelet[2321]: I0128 01:57:28.606245 2321 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 28 01:57:28.609126 kubelet[2321]: I0128 01:57:28.609098 2321 reconciler.go:29] "Reconciler: start to sync state" Jan 28 01:57:28.610841 kubelet[2321]: E0128 01:57:28.609232 2321 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.134:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.134:6443: connect: connection refused" interval="200ms" Jan 28 01:57:28.611253 kubelet[2321]: I0128 01:57:28.609868 2321 factory.go:223] Registration of the systemd container factory successfully Jan 28 01:57:28.611741 kubelet[2321]: E0128 01:57:28.611630 2321 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 28 01:57:28.611868 kubelet[2321]: I0128 01:57:28.611847 2321 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 28 01:57:28.613441 kubelet[2321]: E0128 01:57:28.612357 2321 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.134:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.134:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 28 01:57:28.613441 kubelet[2321]: E0128 01:57:28.598650 2321 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.134:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.134:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188ec266604f7057 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-28 01:57:28.561455191 +0000 UTC m=+4.474753332,LastTimestamp:2026-01-28 01:57:28.561455191 +0000 UTC m=+4.474753332,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 28 01:57:28.620486 kubelet[2321]: I0128 01:57:28.620455 2321 factory.go:223] Registration of the containerd container factory successfully Jan 28 01:57:28.702364 kubelet[2321]: E0128 01:57:28.701888 2321 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:57:28.785504 kubelet[2321]: I0128 01:57:28.782362 2321 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 28 01:57:28.785816 kubelet[2321]: I0128 01:57:28.785794 2321 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 28 01:57:28.785997 kubelet[2321]: I0128 01:57:28.785975 2321 state_mem.go:36] "Initialized new in-memory state store" Jan 28 01:57:28.802502 kubelet[2321]: I0128 01:57:28.802165 2321 policy_none.go:49] "None policy: Start" Jan 28 01:57:28.813493 kubelet[2321]: I0128 01:57:28.806780 2321 memory_manager.go:187] "Starting memorymanager" policy="None" Jan 28 01:57:28.813493 kubelet[2321]: I0128 01:57:28.806882 2321 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Jan 28 01:57:28.813493 kubelet[2321]: E0128 01:57:28.812156 2321 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.134:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.134:6443: connect: connection refused" interval="400ms" Jan 28 01:57:28.818901 kubelet[2321]: E0128 01:57:28.818641 2321 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:57:28.855980 kubelet[2321]: I0128 01:57:28.851487 2321 policy_none.go:47] "Start" Jan 28 01:57:28.873670 kubelet[2321]: I0128 01:57:28.872750 2321 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Jan 28 01:57:28.881613 kubelet[2321]: I0128 01:57:28.880899 2321 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Jan 28 01:57:28.885060 kubelet[2321]: I0128 01:57:28.881866 2321 status_manager.go:244] "Starting to sync pod status with apiserver" Jan 28 01:57:28.885060 kubelet[2321]: I0128 01:57:28.883025 2321 kubelet.go:2427] "Starting kubelet main sync loop" Jan 28 01:57:28.885060 kubelet[2321]: E0128 01:57:28.883100 2321 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 28 01:57:28.885060 kubelet[2321]: E0128 01:57:28.884220 2321 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.134:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.134:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 28 01:57:28.890139 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 28 01:57:28.926724 kubelet[2321]: E0128 01:57:28.922321 2321 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:57:28.985217 kubelet[2321]: E0128 01:57:28.983300 2321 kubelet.go:2451] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 28 01:57:28.985691 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 28 01:57:29.004172 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 28 01:57:29.041661 kubelet[2321]: E0128 01:57:29.039842 2321 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:57:29.063183 kubelet[2321]: E0128 01:57:29.054286 2321 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 28 01:57:29.063183 kubelet[2321]: I0128 01:57:29.055060 2321 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 28 01:57:29.063183 kubelet[2321]: I0128 01:57:29.055080 2321 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 28 01:57:29.063183 kubelet[2321]: I0128 01:57:29.056217 2321 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 28 01:57:29.066915 kubelet[2321]: E0128 01:57:29.066685 2321 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 28 01:57:29.067102 kubelet[2321]: E0128 01:57:29.066946 2321 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 28 01:57:29.211962 kubelet[2321]: I0128 01:57:29.191793 2321 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 28 01:57:29.211962 kubelet[2321]: E0128 01:57:29.192903 2321 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.134:6443/api/v1/nodes\": dial tcp 10.0.0.134:6443: connect: connection refused" node="localhost" Jan 28 01:57:29.223955 kubelet[2321]: E0128 01:57:29.215081 2321 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.134:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.134:6443: connect: connection refused" interval="800ms" Jan 28 01:57:29.236519 kubelet[2321]: I0128 01:57:29.235790 2321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/12913e82473ca46e3e35faf803792d11-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"12913e82473ca46e3e35faf803792d11\") " pod="kube-system/kube-apiserver-localhost" Jan 28 01:57:29.236519 kubelet[2321]: I0128 01:57:29.235851 2321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/12913e82473ca46e3e35faf803792d11-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"12913e82473ca46e3e35faf803792d11\") " pod="kube-system/kube-apiserver-localhost" Jan 28 01:57:29.236519 kubelet[2321]: I0128 01:57:29.235886 2321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/12913e82473ca46e3e35faf803792d11-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"12913e82473ca46e3e35faf803792d11\") " pod="kube-system/kube-apiserver-localhost" Jan 28 01:57:29.372332 kubelet[2321]: I0128 01:57:29.357776 2321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 28 01:57:29.372332 kubelet[2321]: I0128 01:57:29.357970 2321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 28 01:57:29.372332 kubelet[2321]: I0128 01:57:29.358167 2321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 28 01:57:29.372332 kubelet[2321]: I0128 01:57:29.358196 2321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" 
(UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 28 01:57:29.372332 kubelet[2321]: I0128 01:57:29.358240 2321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/07ca0cbf79ad6ba9473d8e9f7715e571-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"07ca0cbf79ad6ba9473d8e9f7715e571\") " pod="kube-system/kube-scheduler-localhost" Jan 28 01:57:29.416989 kubelet[2321]: I0128 01:57:29.358275 2321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 28 01:57:29.434460 kubelet[2321]: E0128 01:57:29.434134 2321 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.134:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.134:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 28 01:57:29.470155 kubelet[2321]: I0128 01:57:29.459919 2321 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 28 01:57:29.470155 kubelet[2321]: E0128 01:57:29.460480 2321 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.134:6443/api/v1/nodes\": dial tcp 10.0.0.134:6443: connect: connection refused" node="localhost" Jan 28 01:57:29.500288 systemd[1]: Created slice kubepods-burstable-pod12913e82473ca46e3e35faf803792d11.slice - libcontainer container kubepods-burstable-pod12913e82473ca46e3e35faf803792d11.slice. Jan 28 01:57:29.540621 kubelet[2321]: E0128 01:57:29.539888 2321 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.134:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.134:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 28 01:57:29.630306 kubelet[2321]: E0128 01:57:29.624806 2321 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 01:57:29.641946 systemd[1]: Created slice kubepods-burstable-pod5bbfee13ce9e07281eca876a0b8067f2.slice - libcontainer container kubepods-burstable-pod5bbfee13ce9e07281eca876a0b8067f2.slice. Jan 28 01:57:29.673349 kubelet[2321]: E0128 01:57:29.671273 2321 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:57:29.673349 kubelet[2321]: E0128 01:57:29.672240 2321 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 01:57:29.675020 systemd[1]: Created slice kubepods-burstable-pod07ca0cbf79ad6ba9473d8e9f7715e571.slice - libcontainer container kubepods-burstable-pod07ca0cbf79ad6ba9473d8e9f7715e571.slice. 
Jan 28 01:57:29.693024 kubelet[2321]: E0128 01:57:29.692239 2321 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 01:57:29.698496 kubelet[2321]: E0128 01:57:29.695212 2321 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:57:29.704461 containerd[1483]: time="2026-01-28T01:57:29.696878203Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:12913e82473ca46e3e35faf803792d11,Namespace:kube-system,Attempt:0,}" Jan 28 01:57:29.704461 containerd[1483]: time="2026-01-28T01:57:29.700256841Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5bbfee13ce9e07281eca876a0b8067f2,Namespace:kube-system,Attempt:0,}" Jan 28 01:57:29.711689 kubelet[2321]: E0128 01:57:29.709878 2321 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:57:29.712220 containerd[1483]: time="2026-01-28T01:57:29.711103725Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:07ca0cbf79ad6ba9473d8e9f7715e571,Namespace:kube-system,Attempt:0,}" Jan 28 01:57:29.838150 kubelet[2321]: E0128 01:57:29.837433 2321 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.134:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.134:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 28 01:57:29.891221 kubelet[2321]: E0128 01:57:29.889294 2321 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.134:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.134:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 28 01:57:29.909775 kubelet[2321]: I0128 01:57:29.907180 2321 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 28 01:57:29.925707 kubelet[2321]: E0128 01:57:29.925329 2321 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.134:6443/api/v1/nodes\": dial tcp 10.0.0.134:6443: connect: connection refused" node="localhost" Jan 28 01:57:30.009666 kubelet[2321]: E0128 01:57:30.002767 2321 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.134:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.134:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 28 01:57:30.021839 kubelet[2321]: E0128 01:57:30.021281 2321 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.134:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.134:6443: connect: connection refused" interval="1.6s" Jan 28 01:57:30.750172 kubelet[2321]: I0128 01:57:30.734144 2321 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 28 01:57:30.763462 kubelet[2321]: E0128 01:57:30.754097 2321 kubelet_node_status.go:107] 
"Unable to register node with API server" err="Post \"https://10.0.0.134:6443/api/v1/nodes\": dial tcp 10.0.0.134:6443: connect: connection refused" node="localhost" Jan 28 01:57:31.425831 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2135713690.mount: Deactivated successfully. Jan 28 01:57:31.477662 containerd[1483]: time="2026-01-28T01:57:31.476408618Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 28 01:57:31.507931 containerd[1483]: time="2026-01-28T01:57:31.507815918Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 28 01:57:31.516002 containerd[1483]: time="2026-01-28T01:57:31.515945806Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 28 01:57:31.527090 kubelet[2321]: E0128 01:57:31.526272 2321 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.134:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.134:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 28 01:57:31.535490 containerd[1483]: time="2026-01-28T01:57:31.534869179Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 28 01:57:31.550961 containerd[1483]: time="2026-01-28T01:57:31.550893473Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 28 01:57:31.564903 containerd[1483]: time="2026-01-28T01:57:31.564069418Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 28 01:57:31.571278 containerd[1483]: time="2026-01-28T01:57:31.571168809Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 28 01:57:31.594832 containerd[1483]: time="2026-01-28T01:57:31.594723996Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 28 01:57:31.611829 containerd[1483]: time="2026-01-28T01:57:31.611237954Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.910900223s" Jan 28 01:57:31.630749 kubelet[2321]: E0128 01:57:31.626843 2321 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.134:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.134:6443: connect: connection refused" interval="3.2s" Jan 28 01:57:31.633885 containerd[1483]: time="2026-01-28T01:57:31.633716718Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id 
\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.936441547s" Jan 28 01:57:31.635315 containerd[1483]: time="2026-01-28T01:57:31.634876198Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.923547404s" Jan 28 01:57:31.829229 kubelet[2321]: E0128 01:57:31.818347 2321 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.134:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.134:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 28 01:57:32.376937 kubelet[2321]: I0128 01:57:32.376888 2321 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 28 01:57:32.385477 kubelet[2321]: E0128 01:57:32.380490 2321 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.134:6443/api/v1/nodes\": dial tcp 10.0.0.134:6443: connect: connection refused" node="localhost" Jan 28 01:57:32.670752 kubelet[2321]: E0128 01:57:32.644203 2321 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.134:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.134:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 28 01:57:32.934498 kubelet[2321]: E0128 01:57:32.929480 2321 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.134:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.134:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 28 01:57:34.101060 containerd[1483]: time="2026-01-28T01:57:34.100465840Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 01:57:34.140950 containerd[1483]: time="2026-01-28T01:57:34.138875316Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 01:57:34.184327 kubelet[2321]: E0128 01:57:34.175845 2321 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.134:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.134:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 28 01:57:34.304521 containerd[1483]: time="2026-01-28T01:57:34.190785064Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 01:57:34.304521 containerd[1483]: time="2026-01-28T01:57:34.191311482Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 01:57:34.304521 containerd[1483]: time="2026-01-28T01:57:34.191332442Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:57:34.304521 containerd[1483]: time="2026-01-28T01:57:34.191903993Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:57:34.304521 containerd[1483]: time="2026-01-28T01:57:34.177867609Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:57:34.304521 containerd[1483]: time="2026-01-28T01:57:34.258852183Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:57:34.570809 containerd[1483]: time="2026-01-28T01:57:34.568931664Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 01:57:34.570809 containerd[1483]: time="2026-01-28T01:57:34.569017072Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 01:57:34.570809 containerd[1483]: time="2026-01-28T01:57:34.569107350Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:57:34.572431 containerd[1483]: time="2026-01-28T01:57:34.572308429Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:57:34.791229 systemd[1]: Started cri-containerd-caa1150c54c3498e989766d9aaca1f45c4c76c92c06b37a797329f9269d7b3b4.scope - libcontainer container caa1150c54c3498e989766d9aaca1f45c4c76c92c06b37a797329f9269d7b3b4. Jan 28 01:57:34.838635 systemd[1]: run-containerd-runc-k8s.io-caa1150c54c3498e989766d9aaca1f45c4c76c92c06b37a797329f9269d7b3b4-runc.rP4V6t.mount: Deactivated successfully. Jan 28 01:57:34.838829 systemd[1]: run-containerd-runc-k8s.io-e23f944c753a7d202ea68498b517dc32f046e6fe21024e7529366f6a94efc931-runc.mju8QZ.mount: Deactivated successfully. Jan 28 01:57:34.842933 systemd[1]: run-containerd-runc-k8s.io-01a18f178dc8a61237a6ae38629d4d6534ce213d31ac0caf588748b9621251eb-runc.eeyBi6.mount: Deactivated successfully. Jan 28 01:57:34.872976 kubelet[2321]: E0128 01:57:34.872832 2321 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.134:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.134:6443: connect: connection refused" interval="6.4s" Jan 28 01:57:34.895873 systemd[1]: Started cri-containerd-01a18f178dc8a61237a6ae38629d4d6534ce213d31ac0caf588748b9621251eb.scope - libcontainer container 01a18f178dc8a61237a6ae38629d4d6534ce213d31ac0caf588748b9621251eb. Jan 28 01:57:34.913099 systemd[1]: Started cri-containerd-e23f944c753a7d202ea68498b517dc32f046e6fe21024e7529366f6a94efc931.scope - libcontainer container e23f944c753a7d202ea68498b517dc32f046e6fe21024e7529366f6a94efc931. 
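[Note] Threaded through the records above is a doubling retry interval on the node-lease controller: 200ms, 400ms, 800ms, 1.6s, 3.2s, 6.4s. The kubelet is exponentially backing off because the API server it dials at 10.0.0.134:6443 is the very kube-apiserver static pod whose sandbox it is only now building, so every client call is refused until that container runs. A sketch of the two checks that bracket the chicken-and-egg, with the crictl endpoint as an assumption:

    # Refused until the kube-apiserver static pod is up
    curl -sk https://10.0.0.134:6443/healthz || echo 'apiserver not reachable yet'
    # Empty until the CreateContainer/StartContainer records below complete
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps --name kube-apiserver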
Jan 28 01:57:35.690772 kubelet[2321]: E0128 01:57:35.682766 2321 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.134:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.134:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188ec266604f7057 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-28 01:57:28.561455191 +0000 UTC m=+4.474753332,LastTimestamp:2026-01-28 01:57:28.561455191 +0000 UTC m=+4.474753332,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 28 01:57:35.725219 kubelet[2321]: E0128 01:57:35.725100 2321 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.134:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.134:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 28 01:57:35.729628 containerd[1483]: time="2026-01-28T01:57:35.729177738Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:12913e82473ca46e3e35faf803792d11,Namespace:kube-system,Attempt:0,} returns sandbox id \"caa1150c54c3498e989766d9aaca1f45c4c76c92c06b37a797329f9269d7b3b4\"" Jan 28 01:57:35.730916 kubelet[2321]: I0128 01:57:35.730883 2321 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 28 01:57:35.756678 kubelet[2321]: E0128 01:57:35.756042 2321 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.134:6443/api/v1/nodes\": dial tcp 10.0.0.134:6443: connect: connection refused" node="localhost" Jan 28 01:57:35.765074 containerd[1483]: time="2026-01-28T01:57:35.765026270Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5bbfee13ce9e07281eca876a0b8067f2,Namespace:kube-system,Attempt:0,} returns sandbox id \"e23f944c753a7d202ea68498b517dc32f046e6fe21024e7529366f6a94efc931\"" Jan 28 01:57:35.775032 kubelet[2321]: E0128 01:57:35.774999 2321 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:57:35.779820 kubelet[2321]: E0128 01:57:35.779780 2321 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:57:35.837629 containerd[1483]: time="2026-01-28T01:57:35.836512306Z" level=info msg="CreateContainer within sandbox \"e23f944c753a7d202ea68498b517dc32f046e6fe21024e7529366f6a94efc931\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 28 01:57:35.856803 containerd[1483]: time="2026-01-28T01:57:35.853652869Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:07ca0cbf79ad6ba9473d8e9f7715e571,Namespace:kube-system,Attempt:0,} returns sandbox id \"01a18f178dc8a61237a6ae38629d4d6534ce213d31ac0caf588748b9621251eb\"" Jan 28 01:57:35.856937 kubelet[2321]: E0128 01:57:35.854611 2321 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:57:35.862661 containerd[1483]: time="2026-01-28T01:57:35.861198654Z" level=info msg="CreateContainer within sandbox \"caa1150c54c3498e989766d9aaca1f45c4c76c92c06b37a797329f9269d7b3b4\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 28 01:57:35.907083 containerd[1483]: time="2026-01-28T01:57:35.907022180Z" level=info msg="CreateContainer within sandbox \"01a18f178dc8a61237a6ae38629d4d6534ce213d31ac0caf588748b9621251eb\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 28 01:57:36.049709 kubelet[2321]: E0128 01:57:36.049239 2321 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.134:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.134:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 28 01:57:36.405135 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1222477031.mount: Deactivated successfully. Jan 28 01:57:36.429948 containerd[1483]: time="2026-01-28T01:57:36.429389897Z" level=info msg="CreateContainer within sandbox \"e23f944c753a7d202ea68498b517dc32f046e6fe21024e7529366f6a94efc931\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"7e16951bf3295209b898ddbe83fe11fd70792ebad647ee2e534299c5ab1e1cb4\"" Jan 28 01:57:36.452589 containerd[1483]: time="2026-01-28T01:57:36.450802920Z" level=info msg="StartContainer for \"7e16951bf3295209b898ddbe83fe11fd70792ebad647ee2e534299c5ab1e1cb4\"" Jan 28 01:57:36.486192 containerd[1483]: time="2026-01-28T01:57:36.485993357Z" level=info msg="CreateContainer within sandbox \"caa1150c54c3498e989766d9aaca1f45c4c76c92c06b37a797329f9269d7b3b4\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"6dc96720282dd76f7c86a9c950d23e33c2c557c174f5635982042be1c2143146\"" Jan 28 01:57:36.512399 containerd[1483]: time="2026-01-28T01:57:36.512161451Z" level=info msg="StartContainer for \"6dc96720282dd76f7c86a9c950d23e33c2c557c174f5635982042be1c2143146\"" Jan 28 01:57:36.589088 containerd[1483]: time="2026-01-28T01:57:36.576116345Z" level=info msg="CreateContainer within sandbox \"01a18f178dc8a61237a6ae38629d4d6534ce213d31ac0caf588748b9621251eb\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"76e291c03a00f5012ee791daa13c87ab014df1496a4d266afa4644c1e224dbe8\"" Jan 28 01:57:36.682190 containerd[1483]: time="2026-01-28T01:57:36.672875701Z" level=info msg="StartContainer for \"76e291c03a00f5012ee791daa13c87ab014df1496a4d266afa4644c1e224dbe8\"" Jan 28 01:57:37.020702 kubelet[2321]: E0128 01:57:37.017470 2321 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.134:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.134:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 28 01:57:37.266699 systemd[1]: Started cri-containerd-6dc96720282dd76f7c86a9c950d23e33c2c557c174f5635982042be1c2143146.scope - libcontainer container 6dc96720282dd76f7c86a9c950d23e33c2c557c174f5635982042be1c2143146. Jan 28 01:57:37.318965 systemd[1]: Started cri-containerd-7e16951bf3295209b898ddbe83fe11fd70792ebad647ee2e534299c5ab1e1cb4.scope - libcontainer container 7e16951bf3295209b898ddbe83fe11fd70792ebad647ee2e534299c5ab1e1cb4. 
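[Note] At this point all three control-plane containers exist inside the sandboxes created at 01:57:35, and systemd is starting a cri-containerd-<id>.scope for each; the StartContainer records that follow report success. A sketch for cross-referencing the long hex IDs in these records, assuming crictl is configured for the containerd socket; ID prefixes are accepted much as they are for docker:

    crictl ps -a                 # lists 6dc96720..., 7e16951b..., 76e291c0... with their pods
    crictl logs 6dc96720282d     # prefix of the kube-apiserver container ID above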
Jan 28 01:57:37.343318 systemd[1]: Started cri-containerd-76e291c03a00f5012ee791daa13c87ab014df1496a4d266afa4644c1e224dbe8.scope - libcontainer container 76e291c03a00f5012ee791daa13c87ab014df1496a4d266afa4644c1e224dbe8. Jan 28 01:57:37.714311 containerd[1483]: time="2026-01-28T01:57:37.713492602Z" level=info msg="StartContainer for \"7e16951bf3295209b898ddbe83fe11fd70792ebad647ee2e534299c5ab1e1cb4\" returns successfully" Jan 28 01:57:37.838008 kubelet[2321]: E0128 01:57:37.808951 2321 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.134:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.134:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 28 01:57:37.886189 containerd[1483]: time="2026-01-28T01:57:37.885136446Z" level=info msg="StartContainer for \"6dc96720282dd76f7c86a9c950d23e33c2c557c174f5635982042be1c2143146\" returns successfully" Jan 28 01:57:38.077449 containerd[1483]: time="2026-01-28T01:57:38.077064794Z" level=info msg="StartContainer for \"76e291c03a00f5012ee791daa13c87ab014df1496a4d266afa4644c1e224dbe8\" returns successfully" Jan 28 01:57:38.815895 kubelet[2321]: E0128 01:57:38.813792 2321 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 01:57:38.815895 kubelet[2321]: E0128 01:57:38.814102 2321 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:57:38.815895 kubelet[2321]: E0128 01:57:38.814645 2321 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 01:57:38.815895 kubelet[2321]: E0128 01:57:38.814783 2321 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:57:38.827438 kubelet[2321]: E0128 01:57:38.826802 2321 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 01:57:38.827438 kubelet[2321]: E0128 01:57:38.827012 2321 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:57:39.120787 kubelet[2321]: E0128 01:57:39.102436 2321 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 28 01:57:39.954771 kubelet[2321]: E0128 01:57:39.953160 2321 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 01:57:39.954771 kubelet[2321]: E0128 01:57:39.953642 2321 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:57:39.965155 kubelet[2321]: E0128 01:57:39.958436 2321 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 01:57:39.965155 kubelet[2321]: E0128 01:57:39.958853 2321 dns.go:154] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:57:39.965155 kubelet[2321]: E0128 01:57:39.960244 2321 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 01:57:39.965155 kubelet[2321]: E0128 01:57:39.960489 2321 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:57:41.027762 kubelet[2321]: E0128 01:57:41.011671 2321 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 01:57:41.096370 kubelet[2321]: E0128 01:57:41.057069 2321 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 01:57:41.096370 kubelet[2321]: E0128 01:57:41.077905 2321 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:57:41.096370 kubelet[2321]: E0128 01:57:41.078244 2321 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:57:41.096370 kubelet[2321]: E0128 01:57:41.078725 2321 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 01:57:41.096370 kubelet[2321]: E0128 01:57:41.078989 2321 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:57:42.166912 kubelet[2321]: I0128 01:57:42.166822 2321 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 28 01:57:42.202139 kubelet[2321]: E0128 01:57:42.197176 2321 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 01:57:42.202139 kubelet[2321]: E0128 01:57:42.201811 2321 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:57:48.730138 kubelet[2321]: E0128 01:57:48.729123 2321 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 01:57:48.730138 kubelet[2321]: E0128 01:57:48.729673 2321 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:57:49.120102 kubelet[2321]: E0128 01:57:49.119225 2321 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 28 01:57:51.321869 kubelet[2321]: E0128 01:57:51.321411 2321 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.134:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s" Jan 28 01:57:51.435922 kubelet[2321]: E0128 01:57:51.424896 2321 kubelet.go:3215] 
"No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 01:57:51.435922 kubelet[2321]: E0128 01:57:51.425197 2321 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:57:52.180081 kubelet[2321]: E0128 01:57:52.180005 2321 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.134:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost" Jan 28 01:57:52.329769 kubelet[2321]: E0128 01:57:52.329219 2321 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.134:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 28 01:57:53.390101 kubelet[2321]: E0128 01:57:53.388167 2321 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.188ec266604f7057 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-28 01:57:28.561455191 +0000 UTC m=+4.474753332,LastTimestamp:2026-01-28 01:57:28.561455191 +0000 UTC m=+4.474753332,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 28 01:57:53.597171 kubelet[2321]: E0128 01:57:53.592912 2321 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.188ec2666341f23c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-28 01:57:28.610902588 +0000 UTC m=+4.524200749,LastTimestamp:2026-01-28 01:57:28.610902588 +0000 UTC m=+4.524200749,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 28 01:57:53.823652 kubelet[2321]: I0128 01:57:53.821971 2321 apiserver.go:52] "Watching apiserver" Jan 28 01:57:53.930688 kubelet[2321]: I0128 01:57:53.921006 2321 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 28 01:57:53.988801 kubelet[2321]: E0128 01:57:53.987896 2321 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.188ec2666d53d6bd default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node localhost status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-28 01:57:28.779847357 +0000 UTC m=+4.693145499,LastTimestamp:2026-01-28 01:57:28.779847357 +0000 UTC 
m=+4.693145499,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 28 01:57:54.412687 kubelet[2321]: E0128 01:57:54.412499 2321 csi_plugin.go:399] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Jan 28 01:57:55.134722 kubelet[2321]: E0128 01:57:55.133032 2321 csi_plugin.go:399] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Jan 28 01:57:55.992790 kubelet[2321]: E0128 01:57:55.992750 2321 csi_plugin.go:399] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Jan 28 01:57:57.173169 kubelet[2321]: E0128 01:57:57.168522 2321 csi_plugin.go:399] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Jan 28 01:57:57.327514 kubelet[2321]: E0128 01:57:57.318350 2321 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 01:57:57.335172 kubelet[2321]: E0128 01:57:57.334969 2321 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:57:58.402813 kubelet[2321]: E0128 01:57:58.400807 2321 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 28 01:57:59.122224 kubelet[2321]: E0128 01:57:59.121861 2321 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 28 01:57:59.184662 kubelet[2321]: I0128 01:57:59.182377 2321 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 28 01:57:59.223067 kubelet[2321]: I0128 01:57:59.223023 2321 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 28 01:57:59.224024 kubelet[2321]: E0128 01:57:59.223508 2321 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jan 28 01:57:59.323929 kubelet[2321]: I0128 01:57:59.313641 2321 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 28 01:57:59.461818 kubelet[2321]: I0128 01:57:59.460728 2321 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 28 01:57:59.472052 kubelet[2321]: E0128 01:57:59.470118 2321 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:57:59.564058 kubelet[2321]: E0128 01:57:59.563364 2321 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:57:59.586084 kubelet[2321]: I0128 01:57:59.583674 2321 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 28 01:57:59.658487 kubelet[2321]: E0128 01:57:59.632205 2321 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, 
the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:58:03.318218 kubelet[2321]: E0128 01:58:03.288505 2321 kubelet.go:2617] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.308s" Jan 28 01:58:09.578184 kubelet[2321]: E0128 01:58:09.577983 2321 kubelet.go:2617] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="6.289s" Jan 28 01:58:09.769301 kubelet[2321]: I0128 01:58:09.768297 2321 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=10.768135848 podStartE2EDuration="10.768135848s" podCreationTimestamp="2026-01-28 01:57:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 01:58:09.763693548 +0000 UTC m=+45.676991699" watchObservedRunningTime="2026-01-28 01:58:09.768135848 +0000 UTC m=+45.681433990" Jan 28 01:58:09.926145 kubelet[2321]: I0128 01:58:09.925394 2321 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=10.925371784 podStartE2EDuration="10.925371784s" podCreationTimestamp="2026-01-28 01:57:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 01:58:09.92037925 +0000 UTC m=+45.833677411" watchObservedRunningTime="2026-01-28 01:58:09.925371784 +0000 UTC m=+45.838669925" Jan 28 01:58:10.027412 kubelet[2321]: I0128 01:58:10.027152 2321 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=11.027127189 podStartE2EDuration="11.027127189s" podCreationTimestamp="2026-01-28 01:57:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 01:58:10.01480792 +0000 UTC m=+45.928106101" watchObservedRunningTime="2026-01-28 01:58:10.027127189 +0000 UTC m=+45.940425360" Jan 28 01:58:17.644327 systemd[1]: Reloading requested from client PID 2614 ('systemctl') (unit session-7.scope)... Jan 28 01:58:17.649390 systemd[1]: Reloading... Jan 28 01:58:18.252709 zram_generator::config[2656]: No configuration found. Jan 28 01:58:18.665161 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 28 01:58:19.057289 systemd[1]: Reloading finished in 1404 ms. Jan 28 01:58:19.433023 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:58:19.516911 systemd[1]: kubelet.service: Deactivated successfully. Jan 28 01:58:19.520176 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:58:19.521393 systemd[1]: kubelet.service: Consumed 12.360s CPU time, 134.0M memory peak, 0B memory swap peak. Jan 28 01:58:19.578678 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
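The docker.socket warning during the reload points at a one-line unit fix: ListenStream must reference /run/docker.sock instead of the legacy /var/run path. A minimal sketch of a drop-in that does this (the drop-in file name is hypothetical; the empty ListenStream= assignment is needed first because socket options are list-valued and must be cleared before being re-set):

    # /etc/systemd/system/docker.socket.d/10-runpath.conf
    [Socket]
    ListenStream=
    ListenStream=/run/docker.sock

After editing, a systemctl daemon-reload followed by restarting docker.socket picks up the new path and silences the warning.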
Jan 28 01:58:21.232174 update_engine[1473]: I20260128 01:58:21.228425 1473 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jan 28 01:58:21.232174 update_engine[1473]: I20260128 01:58:21.229050 1473 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jan 28 01:58:21.237727 update_engine[1473]: I20260128 01:58:21.237680 1473 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jan 28 01:58:21.258973 update_engine[1473]: I20260128 01:58:21.248036 1473 omaha_request_params.cc:62] Current group set to lts Jan 28 01:58:21.258973 update_engine[1473]: I20260128 01:58:21.249825 1473 update_attempter.cc:499] Already updated boot flags. Skipping. Jan 28 01:58:21.258973 update_engine[1473]: I20260128 01:58:21.249862 1473 update_attempter.cc:643] Scheduling an action processor start. Jan 28 01:58:21.258973 update_engine[1473]: I20260128 01:58:21.250008 1473 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 28 01:58:21.258973 update_engine[1473]: I20260128 01:58:21.250674 1473 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jan 28 01:58:21.258973 update_engine[1473]: I20260128 01:58:21.251034 1473 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 28 01:58:21.258973 update_engine[1473]: I20260128 01:58:21.251058 1473 omaha_request_action.cc:272] Request: Jan 28 01:58:21.258973 update_engine[1473]: [Omaha request XML not preserved in this capture] Jan 28 01:58:21.263498 update_engine[1473]: I20260128 01:58:21.263374 1473 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 28 01:58:21.290025 locksmithd[1522]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jan 28 01:58:21.318314 update_engine[1473]: I20260128 01:58:21.318249 1473 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 28 01:58:21.335835 update_engine[1473]: I20260128 01:58:21.335700 1473 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 28 01:58:21.365475 update_engine[1473]: E20260128 01:58:21.362969 1473 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 28 01:58:21.365475 update_engine[1473]: I20260128 01:58:21.363492 1473 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jan 28 01:58:21.871083 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:58:21.941104 (kubelet)[2699]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 28 01:58:22.309168 kubelet[2699]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 28 01:58:22.309168 kubelet[2699]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
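update_engine posting its Omaha request "to disabled", and libcurl then failing with "Could not resolve host: disabled", is the expected shape of a Flatcar host whose update server has been set to the literal string disabled rather than a URL; the earlier "Current group set to lts" comes from the same file. A sketch of the configuration this implies (values inferred from the log, not read from the host):

    # /etc/flatcar/update.conf
    GROUP=lts
    SERVER=disabled

So the recurring resolve failures that follow are self-inflicted and harmless, not a DNS outage.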
Jan 28 01:58:22.309168 kubelet[2699]: I0128 01:58:22.303347 2699 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 28 01:58:22.390818 kubelet[2699]: I0128 01:58:22.386967 2699 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Jan 28 01:58:22.390818 kubelet[2699]: I0128 01:58:22.387071 2699 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 28 01:58:22.390818 kubelet[2699]: I0128 01:58:22.387117 2699 watchdog_linux.go:95] "Systemd watchdog is not enabled" Jan 28 01:58:22.390818 kubelet[2699]: I0128 01:58:22.387129 2699 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 28 01:58:22.390818 kubelet[2699]: I0128 01:58:22.387712 2699 server.go:956] "Client rotation is on, will bootstrap in background" Jan 28 01:58:22.426434 kubelet[2699]: I0128 01:58:22.425910 2699 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 28 01:58:22.442500 kubelet[2699]: I0128 01:58:22.442162 2699 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 28 01:58:22.462735 kubelet[2699]: E0128 01:58:22.461686 2699 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 28 01:58:22.462735 kubelet[2699]: I0128 01:58:22.461745 2699 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Jan 28 01:58:22.488616 kubelet[2699]: I0128 01:58:22.488470 2699 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Jan 28 01:58:22.491773 kubelet[2699]: I0128 01:58:22.491402 2699 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 28 01:58:22.491901 kubelet[2699]: I0128 01:58:22.491499 2699 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 28 01:58:22.491901 kubelet[2699]: I0128 01:58:22.491837 2699 topology_manager.go:138] "Creating topology manager with none policy" Jan 28 01:58:22.491901 kubelet[2699]: I0128 01:58:22.491851 2699 container_manager_linux.go:306] "Creating device plugin manager" Jan 28 01:58:22.491901 kubelet[2699]: I0128 01:58:22.491884 2699 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Jan 28 01:58:22.499943 kubelet[2699]: I0128 01:58:22.499470 2699 state_mem.go:36] "Initialized new in-memory state store" Jan 28 01:58:22.502400 kubelet[2699]: I0128 01:58:22.501932 2699 kubelet.go:475] "Attempting to sync node with API server" Jan 28 01:58:22.502400 kubelet[2699]: I0128 01:58:22.501964 2699 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 28 01:58:22.503680 kubelet[2699]: I0128 01:58:22.502725 2699 kubelet.go:387] "Adding apiserver pod source" Jan 28 01:58:22.503680 kubelet[2699]: I0128 01:58:22.502760 2699 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 28 01:58:22.511850 kubelet[2699]: I0128 01:58:22.511796 2699 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 28 01:58:22.513948 kubelet[2699]: I0128 01:58:22.513702 2699 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 28 01:58:22.513948 kubelet[2699]: I0128 01:58:22.513791 2699 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Jan 28 01:58:22.574515 
kubelet[2699]: I0128 01:58:22.565873 2699 server.go:1262] "Started kubelet" Jan 28 01:58:22.574515 kubelet[2699]: I0128 01:58:22.570036 2699 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 28 01:58:22.587254 kubelet[2699]: I0128 01:58:22.583513 2699 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 28 01:58:22.591677 kubelet[2699]: I0128 01:58:22.587444 2699 server_v1.go:49] "podresources" method="list" useActivePods=true Jan 28 01:58:22.606646 kubelet[2699]: I0128 01:58:22.588020 2699 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 28 01:58:22.619018 kubelet[2699]: I0128 01:58:22.618388 2699 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 28 01:58:22.626825 kubelet[2699]: I0128 01:58:22.626425 2699 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 28 01:58:22.630452 kubelet[2699]: I0128 01:58:22.630116 2699 volume_manager.go:313] "Starting Kubelet Volume Manager" Jan 28 01:58:22.630844 kubelet[2699]: I0128 01:58:22.630744 2699 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 28 01:58:22.631038 kubelet[2699]: I0128 01:58:22.631014 2699 reconciler.go:29] "Reconciler: start to sync state" Jan 28 01:58:22.632799 kubelet[2699]: I0128 01:58:22.632506 2699 factory.go:223] Registration of the systemd container factory successfully Jan 28 01:58:22.641881 kubelet[2699]: E0128 01:58:22.635082 2699 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 28 01:58:22.642432 kubelet[2699]: I0128 01:58:22.640926 2699 server.go:310] "Adding debug handlers to kubelet server" Jan 28 01:58:22.649702 kubelet[2699]: I0128 01:58:22.645901 2699 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 28 01:58:22.655707 kubelet[2699]: I0128 01:58:22.655100 2699 factory.go:223] Registration of the containerd container factory successfully Jan 28 01:58:22.827369 kubelet[2699]: I0128 01:58:22.822856 2699 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4" Jan 28 01:58:22.868472 kubelet[2699]: I0128 01:58:22.865293 2699 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 28 01:58:22.868472 kubelet[2699]: I0128 01:58:22.865328 2699 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 28 01:58:22.868472 kubelet[2699]: I0128 01:58:22.865367 2699 state_mem.go:36] "Initialized new in-memory state store" Jan 28 01:58:22.868472 kubelet[2699]: I0128 01:58:22.865725 2699 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 28 01:58:22.868472 kubelet[2699]: I0128 01:58:22.865744 2699 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 28 01:58:22.868472 kubelet[2699]: I0128 01:58:22.865783 2699 policy_none.go:49] "None policy: Start" Jan 28 01:58:22.868472 kubelet[2699]: I0128 01:58:22.865799 2699 memory_manager.go:187] "Starting memorymanager" policy="None" Jan 28 01:58:22.868472 kubelet[2699]: I0128 01:58:22.865817 2699 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Jan 28 01:58:22.868472 kubelet[2699]: I0128 01:58:22.865985 2699 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Jan 28 01:58:22.868472 kubelet[2699]: I0128 01:58:22.865999 2699 policy_none.go:47] "Start" Jan 28 01:58:22.889387 kubelet[2699]: I0128 01:58:22.889355 2699 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Jan 28 01:58:22.889877 kubelet[2699]: I0128 01:58:22.889858 2699 status_manager.go:244] "Starting to sync pod status with apiserver" Jan 28 01:58:22.889976 kubelet[2699]: I0128 01:58:22.889965 2699 kubelet.go:2427] "Starting kubelet main sync loop" Jan 28 01:58:22.891255 kubelet[2699]: E0128 01:58:22.890291 2699 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 28 01:58:22.977979 kubelet[2699]: E0128 01:58:22.975795 2699 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 28 01:58:22.977979 kubelet[2699]: I0128 01:58:22.976361 2699 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 28 01:58:22.977979 kubelet[2699]: I0128 01:58:22.976384 2699 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 28 01:58:22.994655 kubelet[2699]: I0128 01:58:22.986946 2699 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 28 01:58:22.994655 kubelet[2699]: E0128 01:58:22.993790 2699 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 28 01:58:23.025368 kubelet[2699]: I0128 01:58:23.018885 2699 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 28 01:58:23.025368 kubelet[2699]: I0128 01:58:23.021103 2699 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 28 01:58:23.030870 kubelet[2699]: I0128 01:58:23.028127 2699 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 28 01:58:23.030870 kubelet[2699]: I0128 01:58:23.028749 2699 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 28 01:58:23.036863 containerd[1483]: time="2026-01-28T01:58:23.036362391Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 28 01:58:23.038663 kubelet[2699]: I0128 01:58:23.037834 2699 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 28 01:58:23.142483 kubelet[2699]: I0128 01:58:23.141014 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/12913e82473ca46e3e35faf803792d11-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"12913e82473ca46e3e35faf803792d11\") " pod="kube-system/kube-apiserver-localhost" Jan 28 01:58:23.142483 kubelet[2699]: I0128 01:58:23.141140 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 28 01:58:23.142483 kubelet[2699]: I0128 01:58:23.141253 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 28 01:58:23.142483 kubelet[2699]: I0128 01:58:23.141288 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 28 01:58:23.142483 kubelet[2699]: I0128 01:58:23.141314 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 28 01:58:23.142956 kubelet[2699]: I0128 01:58:23.141337 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 28 01:58:23.142956 kubelet[2699]: I0128 
01:58:23.141366 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/07ca0cbf79ad6ba9473d8e9f7715e571-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"07ca0cbf79ad6ba9473d8e9f7715e571\") " pod="kube-system/kube-scheduler-localhost" Jan 28 01:58:23.142956 kubelet[2699]: I0128 01:58:23.141386 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/12913e82473ca46e3e35faf803792d11-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"12913e82473ca46e3e35faf803792d11\") " pod="kube-system/kube-apiserver-localhost" Jan 28 01:58:23.153023 kubelet[2699]: I0128 01:58:23.152452 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/12913e82473ca46e3e35faf803792d11-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"12913e82473ca46e3e35faf803792d11\") " pod="kube-system/kube-apiserver-localhost" Jan 28 01:58:23.207999 kubelet[2699]: E0128 01:58:23.207802 2699 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jan 28 01:58:23.260066 kubelet[2699]: E0128 01:58:23.255432 2699 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 28 01:58:23.260066 kubelet[2699]: E0128 01:58:23.256904 2699 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:58:23.271996 kubelet[2699]: I0128 01:58:23.267990 2699 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 28 01:58:23.277111 kubelet[2699]: E0128 01:58:23.276893 2699 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jan 28 01:58:23.329148 kubelet[2699]: E0128 01:58:23.328901 2699 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:58:23.519457 kubelet[2699]: E0128 01:58:23.509419 2699 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:58:23.519457 kubelet[2699]: I0128 01:58:23.514254 2699 apiserver.go:52] "Watching apiserver" Jan 28 01:58:23.588851 kubelet[2699]: I0128 01:58:23.579872 2699 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jan 28 01:58:23.589284 kubelet[2699]: I0128 01:58:23.589241 2699 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 28 01:58:23.632097 kubelet[2699]: I0128 01:58:23.631956 2699 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 28 01:58:23.999003 kubelet[2699]: E0128 01:58:23.995409 2699 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:58:24.006510 kubelet[2699]: E0128 01:58:24.006328 2699 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:58:24.010884 kubelet[2699]: I0128 01:58:24.006859 2699 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 28 01:58:24.091481 kubelet[2699]: I0128 01:58:24.083825 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ad0687da-603f-4f71-a271-9d1a46d26daf-kube-proxy\") pod \"kube-proxy-t5xcc\" (UID: \"ad0687da-603f-4f71-a271-9d1a46d26daf\") " pod="kube-system/kube-proxy-t5xcc" Jan 28 01:58:24.091481 kubelet[2699]: I0128 01:58:24.084708 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ad0687da-603f-4f71-a271-9d1a46d26daf-xtables-lock\") pod \"kube-proxy-t5xcc\" (UID: \"ad0687da-603f-4f71-a271-9d1a46d26daf\") " pod="kube-system/kube-proxy-t5xcc" Jan 28 01:58:24.091481 kubelet[2699]: I0128 01:58:24.084762 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ad0687da-603f-4f71-a271-9d1a46d26daf-lib-modules\") pod \"kube-proxy-t5xcc\" (UID: \"ad0687da-603f-4f71-a271-9d1a46d26daf\") " pod="kube-system/kube-proxy-t5xcc" Jan 28 01:58:24.091481 kubelet[2699]: I0128 01:58:24.084790 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4276j\" (UniqueName: \"kubernetes.io/projected/ad0687da-603f-4f71-a271-9d1a46d26daf-kube-api-access-4276j\") pod \"kube-proxy-t5xcc\" (UID: \"ad0687da-603f-4f71-a271-9d1a46d26daf\") " pod="kube-system/kube-proxy-t5xcc" Jan 28 01:58:24.180853 kubelet[2699]: E0128 01:58:24.174119 2699 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 28 01:58:24.184887 systemd[1]: Created slice kubepods-besteffort-podad0687da_603f_4f71_a271_9d1a46d26daf.slice - libcontainer container kubepods-besteffort-podad0687da_603f_4f71_a271_9d1a46d26daf.slice. Jan 28 01:58:24.190877 kubelet[2699]: E0128 01:58:24.190845 2699 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:58:24.624692 kubelet[2699]: E0128 01:58:24.622338 2699 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:58:24.648642 containerd[1483]: time="2026-01-28T01:58:24.647483348Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-t5xcc,Uid:ad0687da-603f-4f71-a271-9d1a46d26daf,Namespace:kube-system,Attempt:0,}" Jan 28 01:58:25.027087 kubelet[2699]: E0128 01:58:25.021876 2699 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:58:25.027087 kubelet[2699]: E0128 01:58:25.025374 2699 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:58:25.267715 containerd[1483]: time="2026-01-28T01:58:25.253152654Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 01:58:25.267715 containerd[1483]: time="2026-01-28T01:58:25.256910583Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 01:58:25.267715 containerd[1483]: time="2026-01-28T01:58:25.256935730Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:58:25.267715 containerd[1483]: time="2026-01-28T01:58:25.259845172Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:58:25.577911 systemd[1]: run-containerd-runc-k8s.io-0d6ab4cee5f03d9eb6acdc7c8d4a12f990407b41c9317dacf92540262955f810-runc.bQmDL0.mount: Deactivated successfully. Jan 28 01:58:25.652125 systemd[1]: Started cri-containerd-0d6ab4cee5f03d9eb6acdc7c8d4a12f990407b41c9317dacf92540262955f810.scope - libcontainer container 0d6ab4cee5f03d9eb6acdc7c8d4a12f990407b41c9317dacf92540262955f810. Jan 28 01:58:25.844905 containerd[1483]: time="2026-01-28T01:58:25.844453946Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-t5xcc,Uid:ad0687da-603f-4f71-a271-9d1a46d26daf,Namespace:kube-system,Attempt:0,} returns sandbox id \"0d6ab4cee5f03d9eb6acdc7c8d4a12f990407b41c9317dacf92540262955f810\"" Jan 28 01:58:25.862783 kubelet[2699]: E0128 01:58:25.857922 2699 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:58:25.940791 containerd[1483]: time="2026-01-28T01:58:25.939301128Z" level=info msg="CreateContainer within sandbox \"0d6ab4cee5f03d9eb6acdc7c8d4a12f990407b41c9317dacf92540262955f810\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 28 01:58:26.092697 containerd[1483]: time="2026-01-28T01:58:26.092495258Z" level=info msg="CreateContainer within sandbox \"0d6ab4cee5f03d9eb6acdc7c8d4a12f990407b41c9317dacf92540262955f810\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"112979f49ecf492f9ad7d45aa86d99fd1c1fbb0189a51c73d5cb42eb16ec730d\"" Jan 28 01:58:26.100103 containerd[1483]: time="2026-01-28T01:58:26.099780275Z" level=info msg="StartContainer for \"112979f49ecf492f9ad7d45aa86d99fd1c1fbb0189a51c73d5cb42eb16ec730d\"" Jan 28 01:58:26.418327 systemd[1]: Started cri-containerd-112979f49ecf492f9ad7d45aa86d99fd1c1fbb0189a51c73d5cb42eb16ec730d.scope - libcontainer container 112979f49ecf492f9ad7d45aa86d99fd1c1fbb0189a51c73d5cb42eb16ec730d. 
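The kube-proxy flow above, RunPodSandbox returning a sandbox id, then CreateContainer and StartContainer inside it, is the plain CRI lifecycle, and it can be reproduced by hand against the same containerd socket with crictl. A sketch, with pod.json and container.json standing in for hypothetical CRI spec files:

    # default containerd CRI endpoint assumed
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock runp pod.json
    # prints a sandbox id, e.g. 0d6ab4cee5f0...
    crictl create <sandbox-id> container.json pod.json
    crictl start <container-id>
    crictl ps -a    # verify the container is Running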
Jan 28 01:58:26.618648 containerd[1483]: time="2026-01-28T01:58:26.618362465Z" level=info msg="StartContainer for \"112979f49ecf492f9ad7d45aa86d99fd1c1fbb0189a51c73d5cb42eb16ec730d\" returns successfully" Jan 28 01:58:27.111019 kubelet[2699]: E0128 01:58:27.103520 2699 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:58:27.267417 kubelet[2699]: I0128 01:58:27.265037 2699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-t5xcc" podStartSLOduration=4.265012692 podStartE2EDuration="4.265012692s" podCreationTimestamp="2026-01-28 01:58:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 01:58:27.256918867 +0000 UTC m=+5.272428607" watchObservedRunningTime="2026-01-28 01:58:27.265012692 +0000 UTC m=+5.280522451" Jan 28 01:58:28.156458 kubelet[2699]: E0128 01:58:28.154670 2699 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:58:30.301697 kubelet[2699]: E0128 01:58:30.301381 2699 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:58:30.411477 kubelet[2699]: E0128 01:58:30.399467 2699 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:58:31.233703 update_engine[1473]: I20260128 01:58:31.229694 1473 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 28 01:58:31.237769 update_engine[1473]: I20260128 01:58:31.236380 1473 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 28 01:58:31.247084 update_engine[1473]: I20260128 01:58:31.246994 1473 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
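The podStartSLOduration numbers logged by pod_startup_latency_tracker are plain wall-clock differences with image-pull time subtracted (zeroed here, since the images were already present, hence the 0001-01-01 pull timestamps). For kube-proxy-t5xcc, 01:58:27.265012692 minus the creation timestamp 01:58:23 is exactly the reported 4.265012692s. A tiny check in Go, timestamps copied from the entry above:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// timestamps taken from the pod_startup_latency_tracker log entry
    	created := time.Date(2026, time.January, 28, 1, 58, 23, 0, time.UTC)
    	running := time.Date(2026, time.January, 28, 1, 58, 27, 265012692, time.UTC)
    	fmt.Println(running.Sub(created)) // prints: 4.265012692s
    }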
Jan 28 01:58:31.255448 kubelet[2699]: I0128 01:58:31.255283 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/a9fd811d-863e-4b0f-91f0-6f0756ad40ca-run\") pod \"kube-flannel-ds-jxk6l\" (UID: \"a9fd811d-863e-4b0f-91f0-6f0756ad40ca\") " pod="kube-flannel/kube-flannel-ds-jxk6l" Jan 28 01:58:31.269341 kubelet[2699]: I0128 01:58:31.256032 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqz9v\" (UniqueName: \"kubernetes.io/projected/a9fd811d-863e-4b0f-91f0-6f0756ad40ca-kube-api-access-nqz9v\") pod \"kube-flannel-ds-jxk6l\" (UID: \"a9fd811d-863e-4b0f-91f0-6f0756ad40ca\") " pod="kube-flannel/kube-flannel-ds-jxk6l" Jan 28 01:58:31.269341 kubelet[2699]: I0128 01:58:31.256084 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/a9fd811d-863e-4b0f-91f0-6f0756ad40ca-cni-plugin\") pod \"kube-flannel-ds-jxk6l\" (UID: \"a9fd811d-863e-4b0f-91f0-6f0756ad40ca\") " pod="kube-flannel/kube-flannel-ds-jxk6l" Jan 28 01:58:31.269341 kubelet[2699]: I0128 01:58:31.256107 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/a9fd811d-863e-4b0f-91f0-6f0756ad40ca-cni\") pod \"kube-flannel-ds-jxk6l\" (UID: \"a9fd811d-863e-4b0f-91f0-6f0756ad40ca\") " pod="kube-flannel/kube-flannel-ds-jxk6l" Jan 28 01:58:31.269341 kubelet[2699]: I0128 01:58:31.256131 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/a9fd811d-863e-4b0f-91f0-6f0756ad40ca-flannel-cfg\") pod \"kube-flannel-ds-jxk6l\" (UID: \"a9fd811d-863e-4b0f-91f0-6f0756ad40ca\") " pod="kube-flannel/kube-flannel-ds-jxk6l" Jan 28 01:58:31.290341 kubelet[2699]: E0128 01:58:31.288904 2699 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:58:31.290519 update_engine[1473]: E20260128 01:58:31.288971 1473 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 28 01:58:31.290519 update_engine[1473]: I20260128 01:58:31.289072 1473 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jan 28 01:58:31.305435 kubelet[2699]: I0128 01:58:31.271100 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a9fd811d-863e-4b0f-91f0-6f0756ad40ca-xtables-lock\") pod \"kube-flannel-ds-jxk6l\" (UID: \"a9fd811d-863e-4b0f-91f0-6f0756ad40ca\") " pod="kube-flannel/kube-flannel-ds-jxk6l" Jan 28 01:58:31.332829 kubelet[2699]: E0128 01:58:31.320982 2699 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:58:31.438086 systemd[1]: Created slice kubepods-burstable-poda9fd811d_863e_4b0f_91f0_6f0756ad40ca.slice - libcontainer container kubepods-burstable-poda9fd811d_863e_4b0f_91f0_6f0756ad40ca.slice. 
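The volume set kubelet attaches for kube-flannel-ds-jxk6l (run, cni-plugin, cni, flannel-cfg, xtables-lock, plus a projected API token) matches the stock kube-flannel DaemonSet. Roughly what the corresponding manifest stanza looks like, with paths per the upstream manifest rather than read from this host:

    volumes:
    - name: run
      hostPath: {path: /run/flannel}
    - name: cni-plugin
      hostPath: {path: /opt/cni/bin}
    - name: cni
      hostPath: {path: /etc/cni/net.d}
    - name: flannel-cfg
      configMap: {name: kube-flannel-cfg}
    - name: xtables-lock
      hostPath: {path: /run/xtables.lock, type: FileOrCreate}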
Jan 28 01:58:31.880499 kubelet[2699]: E0128 01:58:31.878275 2699 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:58:32.180739 kubelet[2699]: E0128 01:58:32.177028 2699 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:58:32.197464 containerd[1483]: time="2026-01-28T01:58:32.189835267Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-jxk6l,Uid:a9fd811d-863e-4b0f-91f0-6f0756ad40ca,Namespace:kube-flannel,Attempt:0,}" Jan 28 01:58:32.360671 kubelet[2699]: E0128 01:58:32.354132 2699 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:58:32.482878 sudo[1622]: pam_unix(sudo:session): session closed for user root Jan 28 01:58:32.569997 sshd[1619]: pam_unix(sshd:session): session closed for user core Jan 28 01:58:32.688708 systemd[1]: sshd@6-10.0.0.134:22-10.0.0.1:36270.service: Deactivated successfully. Jan 28 01:58:32.716074 containerd[1483]: time="2026-01-28T01:58:32.711393330Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 01:58:32.716074 containerd[1483]: time="2026-01-28T01:58:32.711518172Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 01:58:32.716074 containerd[1483]: time="2026-01-28T01:58:32.711871359Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:58:32.717327 containerd[1483]: time="2026-01-28T01:58:32.716728033Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:58:32.719294 systemd[1]: session-7.scope: Deactivated successfully. Jan 28 01:58:32.738108 systemd[1]: session-7.scope: Consumed 24.688s CPU time, 166.3M memory peak, 0B memory swap peak. Jan 28 01:58:32.771047 systemd-logind[1471]: Session 7 logged out. Waiting for processes to exit. Jan 28 01:58:32.813459 systemd-logind[1471]: Removed session 7. Jan 28 01:58:32.895388 systemd[1]: run-containerd-runc-k8s.io-986e44f51f2b612b6599dfe43f3fd99de9257eda933b5255382f0292a299ff53-runc.NAc2W5.mount: Deactivated successfully. Jan 28 01:58:32.958804 systemd[1]: Started cri-containerd-986e44f51f2b612b6599dfe43f3fd99de9257eda933b5255382f0292a299ff53.scope - libcontainer container 986e44f51f2b612b6599dfe43f3fd99de9257eda933b5255382f0292a299ff53. 
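The recurring dns.go:154 errors are kubelet enforcing the resolver limit of three nameservers: the host resolv.conf lists more than three, so kubelet keeps the first three (1.1.1.1 1.0.0.1 8.8.8.8) when building pod DNS config and logs the rest as omitted. A hypothetical /etc/resolv.conf that would produce exactly this message (the fourth entry is invented; only the first three appear in the log):

    nameserver 1.1.1.1
    nameserver 1.0.0.1
    nameserver 8.8.8.8
    nameserver 8.8.4.4   # fourth and later entries are dropped, with a warning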
Jan 28 01:58:33.484492 containerd[1483]: time="2026-01-28T01:58:33.484440158Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-jxk6l,Uid:a9fd811d-863e-4b0f-91f0-6f0756ad40ca,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"986e44f51f2b612b6599dfe43f3fd99de9257eda933b5255382f0292a299ff53\"" Jan 28 01:58:33.498016 kubelet[2699]: E0128 01:58:33.489493 2699 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:58:33.509678 containerd[1483]: time="2026-01-28T01:58:33.496009677Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\"" Jan 28 01:58:36.848052 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2707319709.mount: Deactivated successfully. Jan 28 01:58:37.437723 containerd[1483]: time="2026-01-28T01:58:37.436710973Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:58:37.457050 containerd[1483]: time="2026-01-28T01:58:37.456908461Z" level=info msg="stop pulling image ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1: active requests=0, bytes read=4857008" Jan 28 01:58:37.467815 containerd[1483]: time="2026-01-28T01:58:37.464308827Z" level=info msg="ImageCreate event name:\"sha256:55ce2385d9d8c6f720091c177fbe885a21c9dc07c9e480bfb4d94b3001f58182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:58:37.488319 containerd[1483]: time="2026-01-28T01:58:37.488106874Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel-cni-plugin@sha256:f1812994f0edbcb5bb5ccb63be2147ba6ad10e1faaa7ca9fcdad4f441739d84f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:58:37.493752 containerd[1483]: time="2026-01-28T01:58:37.492053444Z" level=info msg="Pulled image \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" with image id \"sha256:55ce2385d9d8c6f720091c177fbe885a21c9dc07c9e480bfb4d94b3001f58182\", repo tag \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\", repo digest \"ghcr.io/flannel-io/flannel-cni-plugin@sha256:f1812994f0edbcb5bb5ccb63be2147ba6ad10e1faaa7ca9fcdad4f441739d84f\", size \"4856838\" in 3.995990148s" Jan 28 01:58:37.493752 containerd[1483]: time="2026-01-28T01:58:37.492107486Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" returns image reference \"sha256:55ce2385d9d8c6f720091c177fbe885a21c9dc07c9e480bfb4d94b3001f58182\"" Jan 28 01:58:37.529054 containerd[1483]: time="2026-01-28T01:58:37.528936131Z" level=info msg="CreateContainer within sandbox \"986e44f51f2b612b6599dfe43f3fd99de9257eda933b5255382f0292a299ff53\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Jan 28 01:58:37.632022 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4078327005.mount: Deactivated successfully. 
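The pull timing above is self-consistent: PullImage for flannel-cni-plugin was issued at 01:58:33.496009677 and the Pulled event lands at 01:58:37.492053444, so

    01:58:37.492053 - 01:58:33.496010 = 3.996043 s

which matches the logged "in 3.995990148s" to within a fraction of a millisecond.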
Jan 28 01:58:37.726729 containerd[1483]: time="2026-01-28T01:58:37.718258775Z" level=info msg="CreateContainer within sandbox \"986e44f51f2b612b6599dfe43f3fd99de9257eda933b5255382f0292a299ff53\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"75f3872341812e396a89429db4dae9c3f5c4ce8b1771e97a68f5f6893ff51a54\"" Jan 28 01:58:37.726729 containerd[1483]: time="2026-01-28T01:58:37.719735271Z" level=info msg="StartContainer for \"75f3872341812e396a89429db4dae9c3f5c4ce8b1771e97a68f5f6893ff51a54\"" Jan 28 01:58:38.002847 systemd[1]: Started cri-containerd-75f3872341812e396a89429db4dae9c3f5c4ce8b1771e97a68f5f6893ff51a54.scope - libcontainer container 75f3872341812e396a89429db4dae9c3f5c4ce8b1771e97a68f5f6893ff51a54. Jan 28 01:58:38.247125 systemd[1]: cri-containerd-75f3872341812e396a89429db4dae9c3f5c4ce8b1771e97a68f5f6893ff51a54.scope: Deactivated successfully. Jan 28 01:58:38.262759 containerd[1483]: time="2026-01-28T01:58:38.262086769Z" level=info msg="StartContainer for \"75f3872341812e396a89429db4dae9c3f5c4ce8b1771e97a68f5f6893ff51a54\" returns successfully" Jan 28 01:58:38.472735 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-75f3872341812e396a89429db4dae9c3f5c4ce8b1771e97a68f5f6893ff51a54-rootfs.mount: Deactivated successfully. Jan 28 01:58:38.489412 kubelet[2699]: E0128 01:58:38.488932 2699 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:58:38.687852 containerd[1483]: time="2026-01-28T01:58:38.687042311Z" level=info msg="shim disconnected" id=75f3872341812e396a89429db4dae9c3f5c4ce8b1771e97a68f5f6893ff51a54 namespace=k8s.io Jan 28 01:58:38.687852 containerd[1483]: time="2026-01-28T01:58:38.687346207Z" level=warning msg="cleaning up after shim disconnected" id=75f3872341812e396a89429db4dae9c3f5c4ce8b1771e97a68f5f6893ff51a54 namespace=k8s.io Jan 28 01:58:38.687852 containerd[1483]: time="2026-01-28T01:58:38.687421797Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 28 01:58:38.910852 containerd[1483]: time="2026-01-28T01:58:38.910411478Z" level=warning msg="cleanup warnings time=\"2026-01-28T01:58:38Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 28 01:58:39.539431 kubelet[2699]: E0128 01:58:39.527115 2699 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:58:39.546396 containerd[1483]: time="2026-01-28T01:58:39.534010289Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel:v0.26.7\"" Jan 28 01:58:41.229298 update_engine[1473]: I20260128 01:58:41.227681 1473 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 28 01:58:41.229298 update_engine[1473]: I20260128 01:58:41.228125 1473 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 28 01:58:41.229298 update_engine[1473]: I20260128 01:58:41.228725 1473 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
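install-cni-plugin is the first kube-flannel init container: it copies the flannel CNI binary into /opt/cni/bin and exits immediately, which is why its systemd scope is already deactivated by the time StartContainer returns, and why the "shim disconnected" and runc cleanup warnings that follow are fast-exit noise rather than a failure. Its spec, roughly as in the upstream kube-flannel manifest (assumed, not dumped from this cluster):

    initContainers:
    - name: install-cni-plugin
      image: ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1
      command: ["cp"]
      args: ["-f", "/flannel", "/opt/cni/bin/flannel"]
      volumeMounts:
      - name: cni-plugin
        mountPath: /opt/cni/bin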
Jan 28 01:58:44.240954 update_engine[1473]: E20260128 01:58:44.240819 1473 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 28 01:58:44.240954 update_engine[1473]: I20260128 01:58:44.240921 1473 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jan 28 01:58:49.687899 containerd[1483]: time="2026-01-28T01:58:49.687471356Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel:v0.26.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:58:49.691460 containerd[1483]: time="2026-01-28T01:58:49.691366272Z" level=info msg="stop pulling image ghcr.io/flannel-io/flannel:v0.26.7: active requests=0, bytes read=29354574" Jan 28 01:58:49.700312 containerd[1483]: time="2026-01-28T01:58:49.698078776Z" level=info msg="ImageCreate event name:\"sha256:965b9dd4aa4c1b6b68a4c54a166692b4645b6e6f8a5937d8dc17736cb63f515e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:58:49.713947 containerd[1483]: time="2026-01-28T01:58:49.711063214Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel@sha256:7f471907fa940f944867270de4ed78121b8b4c5d564e17f940dc787cb16dea82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:58:49.713947 containerd[1483]: time="2026-01-28T01:58:49.713766901Z" level=info msg="Pulled image \"ghcr.io/flannel-io/flannel:v0.26.7\" with image id \"sha256:965b9dd4aa4c1b6b68a4c54a166692b4645b6e6f8a5937d8dc17736cb63f515e\", repo tag \"ghcr.io/flannel-io/flannel:v0.26.7\", repo digest \"ghcr.io/flannel-io/flannel@sha256:7f471907fa940f944867270de4ed78121b8b4c5d564e17f940dc787cb16dea82\", size \"32996046\" in 10.179704895s" Jan 28 01:58:49.713947 containerd[1483]: time="2026-01-28T01:58:49.713809521Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel:v0.26.7\" returns image reference \"sha256:965b9dd4aa4c1b6b68a4c54a166692b4645b6e6f8a5937d8dc17736cb63f515e\"" Jan 28 01:58:49.733086 containerd[1483]: time="2026-01-28T01:58:49.732483398Z" level=info msg="CreateContainer within sandbox \"986e44f51f2b612b6599dfe43f3fd99de9257eda933b5255382f0292a299ff53\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 28 01:58:49.793507 containerd[1483]: time="2026-01-28T01:58:49.790069826Z" level=info msg="CreateContainer within sandbox \"986e44f51f2b612b6599dfe43f3fd99de9257eda933b5255382f0292a299ff53\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"676037d935f45a904b71d592d099a3d17477a1d1f9f7ac071bff83bae697ae96\"" Jan 28 01:58:49.793507 containerd[1483]: time="2026-01-28T01:58:49.792723964Z" level=info msg="StartContainer for \"676037d935f45a904b71d592d099a3d17477a1d1f9f7ac071bff83bae697ae96\"" Jan 28 01:58:49.969879 systemd[1]: Started cri-containerd-676037d935f45a904b71d592d099a3d17477a1d1f9f7ac071bff83bae697ae96.scope - libcontainer container 676037d935f45a904b71d592d099a3d17477a1d1f9f7ac071bff83bae697ae96. Jan 28 01:58:50.224338 systemd[1]: cri-containerd-676037d935f45a904b71d592d099a3d17477a1d1f9f7ac071bff83bae697ae96.scope: Deactivated successfully. 
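The second init container, install-cni (created and started just above), drops the CNI network config into /etc/cni/net.d, which is the config containerd said it was waiting for back at 01:58:23. What it installs, per the upstream flannel ConfigMap (contents assumed, not read from this node):

    # /etc/cni/net.d/10-flannel.conflist
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        { "type": "flannel",
          "delegate": { "hairpinMode": true, "isDefaultGateway": true } },
        { "type": "portmap",
          "capabilities": { "portMappings": true } }
      ]
    }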
Jan 28 01:58:50.264723 kubelet[2699]: I0128 01:58:50.258332 2699 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Jan 28 01:58:50.295724 containerd[1483]: time="2026-01-28T01:58:50.292005453Z" level=info msg="StartContainer for \"676037d935f45a904b71d592d099a3d17477a1d1f9f7ac071bff83bae697ae96\" returns successfully" Jan 28 01:58:50.620499 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-676037d935f45a904b71d592d099a3d17477a1d1f9f7ac071bff83bae697ae96-rootfs.mount: Deactivated successfully. Jan 28 01:58:50.685845 kubelet[2699]: E0128 01:58:50.676742 2699 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:58:50.685845 kubelet[2699]: I0128 01:58:50.683857 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dp8bg\" (UniqueName: \"kubernetes.io/projected/8dbcd2ea-3f81-4e8a-8b9b-8a0c92953a0e-kube-api-access-dp8bg\") pod \"coredns-66bc5c9577-8zsdk\" (UID: \"8dbcd2ea-3f81-4e8a-8b9b-8a0c92953a0e\") " pod="kube-system/coredns-66bc5c9577-8zsdk" Jan 28 01:58:50.685845 kubelet[2699]: I0128 01:58:50.683900 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8dbcd2ea-3f81-4e8a-8b9b-8a0c92953a0e-config-volume\") pod \"coredns-66bc5c9577-8zsdk\" (UID: \"8dbcd2ea-3f81-4e8a-8b9b-8a0c92953a0e\") " pod="kube-system/coredns-66bc5c9577-8zsdk" Jan 28 01:58:50.702820 systemd[1]: Created slice kubepods-burstable-pod8dbcd2ea_3f81_4e8a_8b9b_8a0c92953a0e.slice - libcontainer container kubepods-burstable-pod8dbcd2ea_3f81_4e8a_8b9b_8a0c92953a0e.slice. Jan 28 01:58:50.748358 containerd[1483]: time="2026-01-28T01:58:50.744866651Z" level=info msg="shim disconnected" id=676037d935f45a904b71d592d099a3d17477a1d1f9f7ac071bff83bae697ae96 namespace=k8s.io Jan 28 01:58:50.748358 containerd[1483]: time="2026-01-28T01:58:50.745069519Z" level=warning msg="cleaning up after shim disconnected" id=676037d935f45a904b71d592d099a3d17477a1d1f9f7ac071bff83bae697ae96 namespace=k8s.io Jan 28 01:58:50.748358 containerd[1483]: time="2026-01-28T01:58:50.745086852Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 28 01:58:50.767501 systemd[1]: Created slice kubepods-burstable-pod3f0368be_853b_4263_96f4_8abe3f3461cc.slice - libcontainer container kubepods-burstable-pod3f0368be_853b_4263_96f4_8abe3f3461cc.slice. 
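With the CNI config in place the node flips to ready ("Fast updating node status as it just became ready"), the scheduler places the two coredns replicas here, and kubelet starts wiring up their config and token volumes. Watching from outside at this moment would show them stuck in ContainerCreating until flannel finishes initializing (pod names from the log; output abridged and illustrative):

    kubectl -n kube-system get pods -w
    # coredns-66bc5c9577-8zsdk   0/1   ContainerCreating
    # coredns-66bc5c9577-sgzjn   0/1   ContainerCreating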
Jan 28 01:58:50.786985 kubelet[2699]: I0128 01:58:50.786710 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnx8x\" (UniqueName: \"kubernetes.io/projected/3f0368be-853b-4263-96f4-8abe3f3461cc-kube-api-access-pnx8x\") pod \"coredns-66bc5c9577-sgzjn\" (UID: \"3f0368be-853b-4263-96f4-8abe3f3461cc\") " pod="kube-system/coredns-66bc5c9577-sgzjn" Jan 28 01:58:50.786985 kubelet[2699]: I0128 01:58:50.786760 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3f0368be-853b-4263-96f4-8abe3f3461cc-config-volume\") pod \"coredns-66bc5c9577-sgzjn\" (UID: \"3f0368be-853b-4263-96f4-8abe3f3461cc\") " pod="kube-system/coredns-66bc5c9577-sgzjn" Jan 28 01:58:51.037036 kubelet[2699]: E0128 01:58:51.035830 2699 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:58:51.038967 containerd[1483]: time="2026-01-28T01:58:51.037996619Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-8zsdk,Uid:8dbcd2ea-3f81-4e8a-8b9b-8a0c92953a0e,Namespace:kube-system,Attempt:0,}" Jan 28 01:58:51.128685 kubelet[2699]: E0128 01:58:51.126319 2699 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:58:51.137903 containerd[1483]: time="2026-01-28T01:58:51.136904269Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-sgzjn,Uid:3f0368be-853b-4263-96f4-8abe3f3461cc,Namespace:kube-system,Attempt:0,}" Jan 28 01:58:51.333482 containerd[1483]: time="2026-01-28T01:58:51.327993896Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-8zsdk,Uid:8dbcd2ea-3f81-4e8a-8b9b-8a0c92953a0e,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"87b2b76399645b2e14f7e1129033087140c062427e8ad5de9f5db7f3a857ff49\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 28 01:58:51.343272 kubelet[2699]: E0128 01:58:51.337806 2699 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"87b2b76399645b2e14f7e1129033087140c062427e8ad5de9f5db7f3a857ff49\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 28 01:58:51.348244 kubelet[2699]: E0128 01:58:51.345777 2699 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"87b2b76399645b2e14f7e1129033087140c062427e8ad5de9f5db7f3a857ff49\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-66bc5c9577-8zsdk" Jan 28 01:58:51.348244 kubelet[2699]: E0128 01:58:51.347088 2699 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"87b2b76399645b2e14f7e1129033087140c062427e8ad5de9f5db7f3a857ff49\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-66bc5c9577-8zsdk" Jan 28 01:58:51.352766 
kubelet[2699]: E0128 01:58:51.349939 2699 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-8zsdk_kube-system(8dbcd2ea-3f81-4e8a-8b9b-8a0c92953a0e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-8zsdk_kube-system(8dbcd2ea-3f81-4e8a-8b9b-8a0c92953a0e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"87b2b76399645b2e14f7e1129033087140c062427e8ad5de9f5db7f3a857ff49\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-66bc5c9577-8zsdk" podUID="8dbcd2ea-3f81-4e8a-8b9b-8a0c92953a0e" Jan 28 01:58:51.426408 containerd[1483]: time="2026-01-28T01:58:51.423056976Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-sgzjn,Uid:3f0368be-853b-4263-96f4-8abe3f3461cc,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"87ff5b95d9b0afc2eb768dc4f68822f966658d76f6897fdde9a5442052d94709\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 28 01:58:51.426761 kubelet[2699]: E0128 01:58:51.425396 2699 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"87ff5b95d9b0afc2eb768dc4f68822f966658d76f6897fdde9a5442052d94709\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 28 01:58:51.426761 kubelet[2699]: E0128 01:58:51.425472 2699 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"87ff5b95d9b0afc2eb768dc4f68822f966658d76f6897fdde9a5442052d94709\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-66bc5c9577-sgzjn" Jan 28 01:58:51.426761 kubelet[2699]: E0128 01:58:51.425503 2699 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"87ff5b95d9b0afc2eb768dc4f68822f966658d76f6897fdde9a5442052d94709\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-66bc5c9577-sgzjn" Jan 28 01:58:51.426761 kubelet[2699]: E0128 01:58:51.425836 2699 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-sgzjn_kube-system(3f0368be-853b-4263-96f4-8abe3f3461cc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-sgzjn_kube-system(3f0368be-853b-4263-96f4-8abe3f3461cc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"87ff5b95d9b0afc2eb768dc4f68822f966658d76f6897fdde9a5442052d94709\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-66bc5c9577-sgzjn" podUID="3f0368be-853b-4263-96f4-8abe3f3461cc" Jan 28 01:58:51.715203 kubelet[2699]: E0128 01:58:51.704448 2699 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:58:51.772414 containerd[1483]: time="2026-01-28T01:58:51.767977543Z" level=info 
msg="CreateContainer within sandbox \"986e44f51f2b612b6599dfe43f3fd99de9257eda933b5255382f0292a299ff53\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Jan 28 01:58:51.871260 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-87ff5b95d9b0afc2eb768dc4f68822f966658d76f6897fdde9a5442052d94709-shm.mount: Deactivated successfully. Jan 28 01:58:51.871936 systemd[1]: run-netns-cni\x2de0a5dbe4\x2d3c1d\x2d8de9\x2d49f9\x2d82c421b5a820.mount: Deactivated successfully. Jan 28 01:58:51.872034 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-87b2b76399645b2e14f7e1129033087140c062427e8ad5de9f5db7f3a857ff49-shm.mount: Deactivated successfully. Jan 28 01:58:51.880776 containerd[1483]: time="2026-01-28T01:58:51.880290504Z" level=info msg="CreateContainer within sandbox \"986e44f51f2b612b6599dfe43f3fd99de9257eda933b5255382f0292a299ff53\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"f855999a09583ddb3eca7926f64acc43c7fd10c4ca032f3e278bbda90b46b25e\"" Jan 28 01:58:51.885853 containerd[1483]: time="2026-01-28T01:58:51.885400370Z" level=info msg="StartContainer for \"f855999a09583ddb3eca7926f64acc43c7fd10c4ca032f3e278bbda90b46b25e\"" Jan 28 01:58:52.063279 systemd[1]: Started cri-containerd-f855999a09583ddb3eca7926f64acc43c7fd10c4ca032f3e278bbda90b46b25e.scope - libcontainer container f855999a09583ddb3eca7926f64acc43c7fd10c4ca032f3e278bbda90b46b25e. Jan 28 01:58:52.327504 containerd[1483]: time="2026-01-28T01:58:52.327055491Z" level=info msg="StartContainer for \"f855999a09583ddb3eca7926f64acc43c7fd10c4ca032f3e278bbda90b46b25e\" returns successfully" Jan 28 01:58:52.745395 kubelet[2699]: E0128 01:58:52.743884 2699 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:58:53.784410 kubelet[2699]: E0128 01:58:53.782269 2699 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:58:53.972884 systemd-networkd[1402]: flannel.1: Link UP Jan 28 01:58:53.972904 systemd-networkd[1402]: flannel.1: Gained carrier Jan 28 01:58:54.229732 update_engine[1473]: I20260128 01:58:54.228415 1473 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 28 01:58:54.229732 update_engine[1473]: I20260128 01:58:54.228952 1473 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 28 01:58:54.229732 update_engine[1473]: I20260128 01:58:54.229341 1473 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 28 01:58:54.268717 update_engine[1473]: E20260128 01:58:54.268210 1473 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 28 01:58:54.268717 update_engine[1473]: I20260128 01:58:54.268306 1473 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 28 01:58:54.268717 update_engine[1473]: I20260128 01:58:54.268404 1473 omaha_request_action.cc:617] Omaha request response: Jan 28 01:58:54.268945 update_engine[1473]: E20260128 01:58:54.268775 1473 omaha_request_action.cc:636] Omaha request network transfer failed. Jan 28 01:58:54.268945 update_engine[1473]: I20260128 01:58:54.268886 1473 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. 
Jan 28 01:58:54.268945 update_engine[1473]: I20260128 01:58:54.268903 1473 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 28 01:58:54.268945 update_engine[1473]: I20260128 01:58:54.268913 1473 update_attempter.cc:306] Processing Done. Jan 28 01:58:54.270943 update_engine[1473]: E20260128 01:58:54.269197 1473 update_attempter.cc:619] Update failed. Jan 28 01:58:54.270943 update_engine[1473]: I20260128 01:58:54.269224 1473 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jan 28 01:58:54.270943 update_engine[1473]: I20260128 01:58:54.269238 1473 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jan 28 01:58:54.270943 update_engine[1473]: I20260128 01:58:54.269251 1473 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Jan 28 01:58:54.270943 update_engine[1473]: I20260128 01:58:54.269398 1473 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 28 01:58:54.270943 update_engine[1473]: I20260128 01:58:54.269433 1473 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 28 01:58:54.270943 update_engine[1473]: I20260128 01:58:54.269443 1473 omaha_request_action.cc:272] Request: Jan 28 01:58:54.270943 update_engine[1473]: Jan 28 01:58:54.270943 update_engine[1473]: Jan 28 01:58:54.270943 update_engine[1473]: Jan 28 01:58:54.270943 update_engine[1473]: Jan 28 01:58:54.270943 update_engine[1473]: Jan 28 01:58:54.270943 update_engine[1473]: Jan 28 01:58:54.270943 update_engine[1473]: I20260128 01:58:54.269454 1473 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 28 01:58:54.270943 update_engine[1473]: I20260128 01:58:54.269886 1473 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 28 01:58:54.270943 update_engine[1473]: I20260128 01:58:54.270356 1473 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 28 01:58:54.277293 locksmithd[1522]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jan 28 01:58:54.296094 update_engine[1473]: E20260128 01:58:54.295091 1473 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 28 01:58:54.296094 update_engine[1473]: I20260128 01:58:54.295334 1473 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 28 01:58:54.296094 update_engine[1473]: I20260128 01:58:54.295353 1473 omaha_request_action.cc:617] Omaha request response: Jan 28 01:58:54.296094 update_engine[1473]: I20260128 01:58:54.295369 1473 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 28 01:58:54.296094 update_engine[1473]: I20260128 01:58:54.295382 1473 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 28 01:58:54.296094 update_engine[1473]: I20260128 01:58:54.295392 1473 update_attempter.cc:306] Processing Done. Jan 28 01:58:54.296094 update_engine[1473]: I20260128 01:58:54.295403 1473 update_attempter.cc:310] Error event sent. 
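
The failed transfers above are expected on this host: update_engine is posting its Omaha request to the literal host name "disabled", so DNS resolution can never succeed. On Flatcar this is the usual sign that updates have been switched off, normally via /etc/flatcar/update.conf; the file itself is not captured in this log, so the sketch below is an assumption about its contents.

    # /etc/flatcar/update.conf -- assumed contents, not shown in this log
    GROUP=stable
    SERVER=disabled    # update_engine tries to resolve the host "disabled" and fails by design
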
Jan 28 01:58:54.296094 update_engine[1473]: I20260128 01:58:54.295418 1473 update_check_scheduler.cc:74] Next update check in 46m14s Jan 28 01:58:54.297052 locksmithd[1522]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jan 28 01:58:55.904444 systemd-networkd[1402]: flannel.1: Gained IPv6LL Jan 28 01:59:17.994982 systemd[1]: cri-containerd-7e16951bf3295209b898ddbe83fe11fd70792ebad647ee2e534299c5ab1e1cb4.scope: Deactivated successfully. Jan 28 01:59:18.105461 systemd[1]: cri-containerd-7e16951bf3295209b898ddbe83fe11fd70792ebad647ee2e534299c5ab1e1cb4.scope: Consumed 13.022s CPU time, 25.6M memory peak, 0B memory swap peak. Jan 28 01:59:34.198701 systemd[1]: cri-containerd-76e291c03a00f5012ee791daa13c87ab014df1496a4d266afa4644c1e224dbe8.scope: Deactivated successfully. Jan 28 01:59:34.435156 systemd[1]: cri-containerd-76e291c03a00f5012ee791daa13c87ab014df1496a4d266afa4644c1e224dbe8.scope: Consumed 11.667s CPU time, 18.6M memory peak, 0B memory swap peak. Jan 28 01:59:49.513430 kubelet[2699]: E0128 01:59:49.498836 2699 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.134:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" Jan 28 01:59:50.148637 kubelet[2699]: E0128 01:59:50.114102 2699 kubelet.go:2617] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="51.197s" Jan 28 01:59:50.148637 kubelet[2699]: E0128 01:59:50.138666 2699 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:59:50.148637 kubelet[2699]: E0128 01:59:50.139433 2699 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:59:50.148637 kubelet[2699]: E0128 01:59:50.139979 2699 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:59:50.148637 kubelet[2699]: E0128 01:59:50.141876 2699 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:59:50.236216 kubelet[2699]: E0128 01:59:50.236177 2699 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:59:50.244200 containerd[1483]: time="2026-01-28T01:59:50.243171254Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-8zsdk,Uid:8dbcd2ea-3f81-4e8a-8b9b-8a0c92953a0e,Namespace:kube-system,Attempt:0,}" Jan 28 01:59:50.294444 kubelet[2699]: E0128 01:59:50.276990 2699 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:59:50.319002 containerd[1483]: time="2026-01-28T01:59:50.304088875Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-sgzjn,Uid:3f0368be-853b-4263-96f4-8abe3f3461cc,Namespace:kube-system,Attempt:0,}" Jan 28 01:59:50.310268 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7e16951bf3295209b898ddbe83fe11fd70792ebad647ee2e534299c5ab1e1cb4-rootfs.mount: Deactivated successfully. 
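
The RunPodSandbox retries above are now in a position to succeed: the earlier loadFlannelSubnetEnv failures clear once the running kube-flannel container writes its lease file. A representative /run/flannel/subnet.env, with values inferred from the CNI netconf logged further below (192.168.0.0/17 cluster network, a 192.168.0.0/24 node subnet, MTU 1450 for the VXLAN overlay, masquerading off); the exact values are illustrative:

    # /run/flannel/subnet.env -- illustrative reconstruction
    FLANNEL_NETWORK=192.168.0.0/17
    FLANNEL_SUBNET=192.168.0.1/24
    FLANNEL_MTU=1450
    FLANNEL_IPMASQ=false
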
Jan 28 01:59:50.331678 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-76e291c03a00f5012ee791daa13c87ab014df1496a4d266afa4644c1e224dbe8-rootfs.mount: Deactivated successfully. Jan 28 01:59:50.353956 containerd[1483]: time="2026-01-28T01:59:50.351907772Z" level=info msg="shim disconnected" id=7e16951bf3295209b898ddbe83fe11fd70792ebad647ee2e534299c5ab1e1cb4 namespace=k8s.io Jan 28 01:59:50.353956 containerd[1483]: time="2026-01-28T01:59:50.353115409Z" level=warning msg="cleaning up after shim disconnected" id=7e16951bf3295209b898ddbe83fe11fd70792ebad647ee2e534299c5ab1e1cb4 namespace=k8s.io Jan 28 01:59:50.353956 containerd[1483]: time="2026-01-28T01:59:50.353135467Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 28 01:59:50.356820 containerd[1483]: time="2026-01-28T01:59:50.356762095Z" level=info msg="shim disconnected" id=76e291c03a00f5012ee791daa13c87ab014df1496a4d266afa4644c1e224dbe8 namespace=k8s.io Jan 28 01:59:50.357474 containerd[1483]: time="2026-01-28T01:59:50.357444766Z" level=warning msg="cleaning up after shim disconnected" id=76e291c03a00f5012ee791daa13c87ab014df1496a4d266afa4644c1e224dbe8 namespace=k8s.io Jan 28 01:59:50.357743 containerd[1483]: time="2026-01-28T01:59:50.357718324Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 28 01:59:50.416379 kubelet[2699]: I0128 01:59:50.415077 2699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-jxk6l" podStartSLOduration=64.190634142 podStartE2EDuration="1m20.414940401s" podCreationTimestamp="2026-01-28 01:58:30 +0000 UTC" firstStartedPulling="2026-01-28 01:58:33.494665667 +0000 UTC m=+11.510175407" lastFinishedPulling="2026-01-28 01:58:49.718971926 +0000 UTC m=+27.734481666" observedRunningTime="2026-01-28 01:58:52.822910287 +0000 UTC m=+30.838420037" watchObservedRunningTime="2026-01-28 01:59:50.414940401 +0000 UTC m=+88.430450142" Jan 28 01:59:50.699307 systemd-networkd[1402]: cni0: Link UP Jan 28 01:59:50.699321 systemd-networkd[1402]: cni0: Gained carrier Jan 28 01:59:50.738922 systemd-networkd[1402]: cni0: Lost carrier Jan 28 01:59:50.909940 systemd-networkd[1402]: veth6a04703a: Link UP Jan 28 01:59:50.928598 kernel: cni0: port 1(veth6a04703a) entered blocking state Jan 28 01:59:50.928908 kernel: cni0: port 1(veth6a04703a) entered disabled state Jan 28 01:59:50.932389 kernel: veth6a04703a: entered allmulticast mode Jan 28 01:59:50.949926 kernel: veth6a04703a: entered promiscuous mode Jan 28 01:59:50.987850 kernel: cni0: port 1(veth6a04703a) entered blocking state Jan 28 01:59:50.987960 kernel: cni0: port 1(veth6a04703a) entered forwarding state Jan 28 01:59:51.006377 kernel: cni0: port 1(veth6a04703a) entered disabled state Jan 28 01:59:51.014863 systemd-networkd[1402]: veth4c8c923c: Link UP Jan 28 01:59:51.033762 kernel: cni0: port 2(veth4c8c923c) entered blocking state Jan 28 01:59:51.033947 kernel: cni0: port 2(veth4c8c923c) entered disabled state Jan 28 01:59:51.034113 kubelet[2699]: I0128 01:59:51.027174 2699 scope.go:117] "RemoveContainer" containerID="7e16951bf3295209b898ddbe83fe11fd70792ebad647ee2e534299c5ab1e1cb4" Jan 28 01:59:51.034113 kubelet[2699]: E0128 01:59:51.027293 2699 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:59:51.065360 containerd[1483]: time="2026-01-28T01:59:51.065301503Z" level=info msg="CreateContainer within sandbox 
\"e23f944c753a7d202ea68498b517dc32f046e6fe21024e7529366f6a94efc931\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Jan 28 01:59:51.083356 kernel: veth4c8c923c: entered allmulticast mode Jan 28 01:59:51.083662 kernel: veth4c8c923c: entered promiscuous mode Jan 28 01:59:51.103960 kubelet[2699]: I0128 01:59:51.103916 2699 scope.go:117] "RemoveContainer" containerID="76e291c03a00f5012ee791daa13c87ab014df1496a4d266afa4644c1e224dbe8" Jan 28 01:59:51.104507 kubelet[2699]: E0128 01:59:51.104483 2699 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:59:51.138336 kernel: cni0: port 2(veth4c8c923c) entered blocking state Jan 28 01:59:51.138451 kernel: cni0: port 2(veth4c8c923c) entered forwarding state Jan 28 01:59:51.167760 kernel: cni0: port 2(veth4c8c923c) entered disabled state Jan 28 01:59:51.206187 containerd[1483]: time="2026-01-28T01:59:51.205472439Z" level=info msg="CreateContainer within sandbox \"01a18f178dc8a61237a6ae38629d4d6534ce213d31ac0caf588748b9621251eb\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Jan 28 01:59:51.226776 kernel: cni0: port 1(veth6a04703a) entered blocking state Jan 28 01:59:51.226884 kernel: cni0: port 1(veth6a04703a) entered forwarding state Jan 28 01:59:51.229456 systemd-networkd[1402]: veth6a04703a: Gained carrier Jan 28 01:59:51.243768 systemd-networkd[1402]: cni0: Gained carrier Jan 28 01:59:51.290780 kernel: cni0: port 2(veth4c8c923c) entered blocking state Jan 28 01:59:51.290920 kernel: cni0: port 2(veth4c8c923c) entered forwarding state Jan 28 01:59:51.292639 systemd-networkd[1402]: veth4c8c923c: Gained carrier Jan 28 01:59:51.368803 containerd[1483]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil), MTU:0, AdvMSS:0, Priority:0, Table:(*int)(nil), Scope:(*int)(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc00009a950), "name":"cbr0", "type":"bridge"} Jan 28 01:59:51.368803 containerd[1483]: delegateAdd: netconf sent to delegate plugin: Jan 28 01:59:51.387486 containerd[1483]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"} Jan 28 01:59:51.387486 containerd[1483]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil), MTU:0, AdvMSS:0, Priority:0, Table:(*int)(nil), Scope:(*int)(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc000192920), "name":"cbr0", "type":"bridge"} Jan 28 01:59:51.387486 containerd[1483]: delegateAdd: netconf sent to delegate plugin: Jan 28 01:59:51.444849 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1068867352.mount: Deactivated successfully. 
Jan 28 01:59:51.496725 containerd[1483]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2026-01-28T01:59:51.495332814Z" level=info msg="CreateContainer within sandbox \"e23f944c753a7d202ea68498b517dc32f046e6fe21024e7529366f6a94efc931\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"d76bd4c6c0cfb3a45b46735bcf114d4e09cec6c535dad361bf12f042fe0c9542\"" Jan 28 01:59:51.497710 containerd[1483]: time="2026-01-28T01:59:51.497676117Z" level=info msg="StartContainer for \"d76bd4c6c0cfb3a45b46735bcf114d4e09cec6c535dad361bf12f042fe0c9542\"" Jan 28 01:59:51.609370 containerd[1483]: time="2026-01-28T01:59:51.606698250Z" level=info msg="CreateContainer within sandbox \"01a18f178dc8a61237a6ae38629d4d6534ce213d31ac0caf588748b9621251eb\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"3b3eb16eb3d4fecbde5f7031ee4be591324b934fd8ba14e7dff44a0590bb2118\"" Jan 28 01:59:51.622349 containerd[1483]: time="2026-01-28T01:59:51.622299633Z" level=info msg="StartContainer for \"3b3eb16eb3d4fecbde5f7031ee4be591324b934fd8ba14e7dff44a0590bb2118\"" Jan 28 01:59:51.676776 containerd[1483]: time="2026-01-28T01:59:51.675600099Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 01:59:51.676776 containerd[1483]: time="2026-01-28T01:59:51.675943849Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 01:59:51.676776 containerd[1483]: time="2026-01-28T01:59:51.675966322Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:59:51.676776 containerd[1483]: time="2026-01-28T01:59:51.676234620Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:59:51.749404 containerd[1483]: time="2026-01-28T01:59:51.742224466Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 01:59:51.749404 containerd[1483]: time="2026-01-28T01:59:51.742338538Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 01:59:51.749404 containerd[1483]: time="2026-01-28T01:59:51.742358736Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:59:51.749404 containerd[1483]: time="2026-01-28T01:59:51.742694741Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:59:51.810909 systemd[1]: Started cri-containerd-d76bd4c6c0cfb3a45b46735bcf114d4e09cec6c535dad361bf12f042fe0c9542.scope - libcontainer container d76bd4c6c0cfb3a45b46735bcf114d4e09cec6c535dad361bf12f042fe0c9542. Jan 28 01:59:51.831397 systemd-networkd[1402]: cni0: Gained IPv6LL Jan 28 01:59:51.836520 systemd[1]: Started cri-containerd-bd47e1968aeeb4aa0d59fdd8bba3469422f59db9f392d16507500caf9e234762.scope - libcontainer container bd47e1968aeeb4aa0d59fdd8bba3469422f59db9f392d16507500caf9e234762. 
Jan 28 01:59:51.899472 systemd[1]: Started cri-containerd-3b3eb16eb3d4fecbde5f7031ee4be591324b934fd8ba14e7dff44a0590bb2118.scope - libcontainer container 3b3eb16eb3d4fecbde5f7031ee4be591324b934fd8ba14e7dff44a0590bb2118. Jan 28 01:59:52.032893 systemd[1]: Started cri-containerd-a294c76ca1f51512af71b337098ce8a28e7d669bd06a9bdf06e6feed2c204e5e.scope - libcontainer container a294c76ca1f51512af71b337098ce8a28e7d669bd06a9bdf06e6feed2c204e5e. Jan 28 01:59:52.046914 systemd-resolved[1345]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 28 01:59:52.192775 systemd-resolved[1345]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 28 01:59:52.357251 containerd[1483]: time="2026-01-28T01:59:52.357176947Z" level=info msg="StartContainer for \"d76bd4c6c0cfb3a45b46735bcf114d4e09cec6c535dad361bf12f042fe0c9542\" returns successfully" Jan 28 01:59:52.368844 containerd[1483]: time="2026-01-28T01:59:52.368465423Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-8zsdk,Uid:8dbcd2ea-3f81-4e8a-8b9b-8a0c92953a0e,Namespace:kube-system,Attempt:0,} returns sandbox id \"bd47e1968aeeb4aa0d59fdd8bba3469422f59db9f392d16507500caf9e234762\"" Jan 28 01:59:52.371831 kubelet[2699]: E0128 01:59:52.371801 2699 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:59:52.390359 containerd[1483]: time="2026-01-28T01:59:52.390312697Z" level=info msg="CreateContainer within sandbox \"bd47e1968aeeb4aa0d59fdd8bba3469422f59db9f392d16507500caf9e234762\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 28 01:59:52.428703 systemd-networkd[1402]: veth6a04703a: Gained IPv6LL Jan 28 01:59:52.484435 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1295412727.mount: Deactivated successfully. Jan 28 01:59:52.576494 containerd[1483]: time="2026-01-28T01:59:52.564508969Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-sgzjn,Uid:3f0368be-853b-4263-96f4-8abe3f3461cc,Namespace:kube-system,Attempt:0,} returns sandbox id \"a294c76ca1f51512af71b337098ce8a28e7d669bd06a9bdf06e6feed2c204e5e\"" Jan 28 01:59:52.633086 kubelet[2699]: E0128 01:59:52.631947 2699 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:59:52.685365 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2881241854.mount: Deactivated successfully. Jan 28 01:59:52.741496 containerd[1483]: time="2026-01-28T01:59:52.730099231Z" level=info msg="StartContainer for \"3b3eb16eb3d4fecbde5f7031ee4be591324b934fd8ba14e7dff44a0590bb2118\" returns successfully" Jan 28 01:59:52.742860 containerd[1483]: time="2026-01-28T01:59:52.742820442Z" level=info msg="CreateContainer within sandbox \"a294c76ca1f51512af71b337098ce8a28e7d669bd06a9bdf06e6feed2c204e5e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 28 01:59:52.766782 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3519959720.mount: Deactivated successfully. 
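
The recurring dns.go:154 "Nameserver limits exceeded" errors mean the node's resolv.conf lists more nameservers than the three that kubelet (matching the glibc resolver limit) will pass through to pods; kubelet keeps the first three, here 1.1.1.1, 1.0.0.1 and 8.8.8.8, and drops the rest. The host file itself is not captured in this log; an illustrative resolv.conf that would produce exactly this warning:

    # /etc/resolv.conf -- hypothetical example, not taken from this host
    nameserver 1.1.1.1
    nameserver 1.0.0.1
    nameserver 8.8.8.8
    nameserver 9.9.9.9    # fourth entry exceeds the three-nameserver limit and is omitted
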
Jan 28 01:59:52.800203 containerd[1483]: time="2026-01-28T01:59:52.797064212Z" level=info msg="CreateContainer within sandbox \"bd47e1968aeeb4aa0d59fdd8bba3469422f59db9f392d16507500caf9e234762\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"32970cae264d92658c90f8a78c70fa71da50e3e9c85a2191a840a83077c8c085\"" Jan 28 01:59:52.801614 containerd[1483]: time="2026-01-28T01:59:52.800820452Z" level=info msg="StartContainer for \"32970cae264d92658c90f8a78c70fa71da50e3e9c85a2191a840a83077c8c085\"" Jan 28 01:59:52.854437 systemd-networkd[1402]: veth4c8c923c: Gained IPv6LL Jan 28 01:59:52.881379 containerd[1483]: time="2026-01-28T01:59:52.881246619Z" level=info msg="CreateContainer within sandbox \"a294c76ca1f51512af71b337098ce8a28e7d669bd06a9bdf06e6feed2c204e5e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0bcd9ac3d9dcfbf76f86b2dfc2d64c15c23578d0e923fea0312137d3aa66911e\"" Jan 28 01:59:52.885846 containerd[1483]: time="2026-01-28T01:59:52.885809149Z" level=info msg="StartContainer for \"0bcd9ac3d9dcfbf76f86b2dfc2d64c15c23578d0e923fea0312137d3aa66911e\"" Jan 28 01:59:52.966867 systemd[1]: Started cri-containerd-32970cae264d92658c90f8a78c70fa71da50e3e9c85a2191a840a83077c8c085.scope - libcontainer container 32970cae264d92658c90f8a78c70fa71da50e3e9c85a2191a840a83077c8c085. Jan 28 01:59:53.094226 systemd[1]: Started cri-containerd-0bcd9ac3d9dcfbf76f86b2dfc2d64c15c23578d0e923fea0312137d3aa66911e.scope - libcontainer container 0bcd9ac3d9dcfbf76f86b2dfc2d64c15c23578d0e923fea0312137d3aa66911e. Jan 28 01:59:53.215905 containerd[1483]: time="2026-01-28T01:59:53.215449213Z" level=info msg="StartContainer for \"32970cae264d92658c90f8a78c70fa71da50e3e9c85a2191a840a83077c8c085\" returns successfully" Jan 28 01:59:53.237168 kubelet[2699]: E0128 01:59:53.236438 2699 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:59:53.251273 kubelet[2699]: E0128 01:59:53.251067 2699 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:59:53.285901 kubelet[2699]: E0128 01:59:53.285774 2699 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:59:53.302164 kubelet[2699]: I0128 01:59:53.301937 2699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-8zsdk" podStartSLOduration=90.301918111 podStartE2EDuration="1m30.301918111s" podCreationTimestamp="2026-01-28 01:58:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 01:59:53.30039151 +0000 UTC m=+91.315901250" watchObservedRunningTime="2026-01-28 01:59:53.301918111 +0000 UTC m=+91.317427851" Jan 28 01:59:53.388785 containerd[1483]: time="2026-01-28T01:59:53.387815505Z" level=info msg="StartContainer for \"0bcd9ac3d9dcfbf76f86b2dfc2d64c15c23578d0e923fea0312137d3aa66911e\" returns successfully" Jan 28 01:59:54.307522 kubelet[2699]: E0128 01:59:54.307479 2699 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:59:54.322728 kubelet[2699]: E0128 01:59:54.322359 2699 dns.go:154] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:59:54.325522 kubelet[2699]: E0128 01:59:54.322954 2699 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:59:54.325522 kubelet[2699]: E0128 01:59:54.323685 2699 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:59:54.392089 kubelet[2699]: I0128 01:59:54.391960 2699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-sgzjn" podStartSLOduration=91.391941039 podStartE2EDuration="1m31.391941039s" podCreationTimestamp="2026-01-28 01:58:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 01:59:54.39107646 +0000 UTC m=+92.406586210" watchObservedRunningTime="2026-01-28 01:59:54.391941039 +0000 UTC m=+92.407450779" Jan 28 01:59:55.329235 kubelet[2699]: E0128 01:59:55.327923 2699 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:59:55.330831 kubelet[2699]: E0128 01:59:55.330272 2699 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:59:55.337973 kubelet[2699]: E0128 01:59:55.337865 2699 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:59:56.343217 kubelet[2699]: E0128 01:59:56.341879 2699 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:59:56.363365 kubelet[2699]: E0128 01:59:56.355845 2699 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 02:00:00.282685 kubelet[2699]: E0128 02:00:00.279803 2699 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 02:00:00.367950 kubelet[2699]: E0128 02:00:00.367906 2699 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 02:00:10.305695 kubelet[2699]: E0128 02:00:10.305171 2699 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 02:00:10.391667 kubelet[2699]: E0128 02:00:10.390483 2699 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 02:00:10.552077 kubelet[2699]: E0128 02:00:10.551953 2699 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 
1.0.0.1 8.8.8.8" Jan 28 02:00:12.901115 kubelet[2699]: E0128 02:00:12.899397 2699 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 02:00:19.250742 systemd[1]: Started sshd@7-10.0.0.134:22-10.0.0.1:48720.service - OpenSSH per-connection server daemon (10.0.0.1:48720). Jan 28 02:00:19.634672 sshd[3871]: Accepted publickey for core from 10.0.0.1 port 48720 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 02:00:19.648086 sshd[3871]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 02:00:19.708349 systemd-logind[1471]: New session 8 of user core. Jan 28 02:00:19.735933 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 28 02:00:20.651720 sshd[3871]: pam_unix(sshd:session): session closed for user core Jan 28 02:00:20.679781 systemd[1]: sshd@7-10.0.0.134:22-10.0.0.1:48720.service: Deactivated successfully. Jan 28 02:00:20.697249 systemd[1]: session-8.scope: Deactivated successfully. Jan 28 02:00:20.702678 systemd-logind[1471]: Session 8 logged out. Waiting for processes to exit. Jan 28 02:00:20.713399 systemd-logind[1471]: Removed session 8. Jan 28 02:00:29.107213 systemd[1]: Started sshd@8-10.0.0.134:22-10.0.0.1:35286.service - OpenSSH per-connection server daemon (10.0.0.1:35286). Jan 28 02:00:36.300459 systemd[1]: cri-containerd-d76bd4c6c0cfb3a45b46735bcf114d4e09cec6c535dad361bf12f042fe0c9542.scope: Deactivated successfully. Jan 28 02:00:36.301357 systemd[1]: cri-containerd-d76bd4c6c0cfb3a45b46735bcf114d4e09cec6c535dad361bf12f042fe0c9542.scope: Consumed 8.208s CPU time. Jan 28 02:00:36.860513 sshd[3909]: Accepted publickey for core from 10.0.0.1 port 35286 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 02:00:36.859395 sshd[3909]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 02:00:36.915518 systemd-logind[1471]: New session 9 of user core. Jan 28 02:00:36.941824 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 28 02:00:37.129449 kubelet[2699]: E0128 02:00:37.129163 2699 kubelet.go:2617] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="7.14s" Jan 28 02:00:37.286820 systemd[1]: cri-containerd-3b3eb16eb3d4fecbde5f7031ee4be591324b934fd8ba14e7dff44a0590bb2118.scope: Deactivated successfully. Jan 28 02:00:37.294709 systemd[1]: cri-containerd-3b3eb16eb3d4fecbde5f7031ee4be591324b934fd8ba14e7dff44a0590bb2118.scope: Consumed 5.144s CPU time. Jan 28 02:00:37.395792 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d76bd4c6c0cfb3a45b46735bcf114d4e09cec6c535dad361bf12f042fe0c9542-rootfs.mount: Deactivated successfully. Jan 28 02:00:37.472489 containerd[1483]: time="2026-01-28T02:00:37.469270304Z" level=info msg="shim disconnected" id=d76bd4c6c0cfb3a45b46735bcf114d4e09cec6c535dad361bf12f042fe0c9542 namespace=k8s.io Jan 28 02:00:37.472489 containerd[1483]: time="2026-01-28T02:00:37.472328586Z" level=warning msg="cleaning up after shim disconnected" id=d76bd4c6c0cfb3a45b46735bcf114d4e09cec6c535dad361bf12f042fe0c9542 namespace=k8s.io Jan 28 02:00:37.472489 containerd[1483]: time="2026-01-28T02:00:37.472346888Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 28 02:00:37.562464 sshd[3909]: pam_unix(sshd:session): session closed for user core Jan 28 02:00:37.583006 systemd[1]: sshd@8-10.0.0.134:22-10.0.0.1:35286.service: Deactivated successfully. 
Jan 28 02:00:37.614325 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3b3eb16eb3d4fecbde5f7031ee4be591324b934fd8ba14e7dff44a0590bb2118-rootfs.mount: Deactivated successfully. Jan 28 02:00:37.625434 systemd[1]: session-9.scope: Deactivated successfully. Jan 28 02:00:37.632719 containerd[1483]: time="2026-01-28T02:00:37.631837219Z" level=info msg="shim disconnected" id=3b3eb16eb3d4fecbde5f7031ee4be591324b934fd8ba14e7dff44a0590bb2118 namespace=k8s.io Jan 28 02:00:37.632719 containerd[1483]: time="2026-01-28T02:00:37.631918170Z" level=warning msg="cleaning up after shim disconnected" id=3b3eb16eb3d4fecbde5f7031ee4be591324b934fd8ba14e7dff44a0590bb2118 namespace=k8s.io Jan 28 02:00:37.632719 containerd[1483]: time="2026-01-28T02:00:37.632034075Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 28 02:00:37.659837 systemd-logind[1471]: Session 9 logged out. Waiting for processes to exit. Jan 28 02:00:37.688088 systemd-logind[1471]: Removed session 9. Jan 28 02:00:38.221220 kubelet[2699]: I0128 02:00:38.219870 2699 scope.go:117] "RemoveContainer" containerID="76e291c03a00f5012ee791daa13c87ab014df1496a4d266afa4644c1e224dbe8" Jan 28 02:00:38.254811 kubelet[2699]: I0128 02:00:38.245245 2699 scope.go:117] "RemoveContainer" containerID="3b3eb16eb3d4fecbde5f7031ee4be591324b934fd8ba14e7dff44a0590bb2118" Jan 28 02:00:38.254811 kubelet[2699]: E0128 02:00:38.247837 2699 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 02:00:38.254811 kubelet[2699]: E0128 02:00:38.249759 2699 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-scheduler pod=kube-scheduler-localhost_kube-system(07ca0cbf79ad6ba9473d8e9f7715e571)\"" pod="kube-system/kube-scheduler-localhost" podUID="07ca0cbf79ad6ba9473d8e9f7715e571" Jan 28 02:00:38.280035 kubelet[2699]: I0128 02:00:38.279907 2699 scope.go:117] "RemoveContainer" containerID="d76bd4c6c0cfb3a45b46735bcf114d4e09cec6c535dad361bf12f042fe0c9542" Jan 28 02:00:38.289147 kubelet[2699]: E0128 02:00:38.280717 2699 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 02:00:38.291141 kubelet[2699]: E0128 02:00:38.291087 2699 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(5bbfee13ce9e07281eca876a0b8067f2)\"" pod="kube-system/kube-controller-manager-localhost" podUID="5bbfee13ce9e07281eca876a0b8067f2" Jan 28 02:00:38.314190 containerd[1483]: time="2026-01-28T02:00:38.308823797Z" level=info msg="RemoveContainer for \"76e291c03a00f5012ee791daa13c87ab014df1496a4d266afa4644c1e224dbe8\"" Jan 28 02:00:38.388885 containerd[1483]: time="2026-01-28T02:00:38.388807471Z" level=info msg="RemoveContainer for \"76e291c03a00f5012ee791daa13c87ab014df1496a4d266afa4644c1e224dbe8\" returns successfully" Jan 28 02:00:38.390153 kubelet[2699]: I0128 02:00:38.390072 2699 scope.go:117] "RemoveContainer" containerID="7e16951bf3295209b898ddbe83fe11fd70792ebad647ee2e534299c5ab1e1cb4" Jan 28 02:00:38.425509 containerd[1483]: time="2026-01-28T02:00:38.425435439Z" level=info msg="RemoveContainer for 
\"7e16951bf3295209b898ddbe83fe11fd70792ebad647ee2e534299c5ab1e1cb4\"" Jan 28 02:00:38.498511 containerd[1483]: time="2026-01-28T02:00:38.490916057Z" level=info msg="RemoveContainer for \"7e16951bf3295209b898ddbe83fe11fd70792ebad647ee2e534299c5ab1e1cb4\" returns successfully" Jan 28 02:00:39.331823 kubelet[2699]: I0128 02:00:39.330707 2699 scope.go:117] "RemoveContainer" containerID="3b3eb16eb3d4fecbde5f7031ee4be591324b934fd8ba14e7dff44a0590bb2118" Jan 28 02:00:39.331823 kubelet[2699]: E0128 02:00:39.330822 2699 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 02:00:39.334108 kubelet[2699]: E0128 02:00:39.332698 2699 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-scheduler pod=kube-scheduler-localhost_kube-system(07ca0cbf79ad6ba9473d8e9f7715e571)\"" pod="kube-system/kube-scheduler-localhost" podUID="07ca0cbf79ad6ba9473d8e9f7715e571" Jan 28 02:00:42.704083 systemd[1]: Started sshd@9-10.0.0.134:22-10.0.0.1:44982.service - OpenSSH per-connection server daemon (10.0.0.1:44982). Jan 28 02:00:42.844693 sshd[4019]: Accepted publickey for core from 10.0.0.1 port 44982 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 02:00:42.847194 sshd[4019]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 02:00:42.895076 systemd-logind[1471]: New session 10 of user core. Jan 28 02:00:42.939095 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 28 02:00:42.976291 kubelet[2699]: I0128 02:00:42.971867 2699 scope.go:117] "RemoveContainer" containerID="d76bd4c6c0cfb3a45b46735bcf114d4e09cec6c535dad361bf12f042fe0c9542" Jan 28 02:00:42.976291 kubelet[2699]: E0128 02:00:42.972150 2699 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 02:00:42.976291 kubelet[2699]: E0128 02:00:42.972298 2699 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(5bbfee13ce9e07281eca876a0b8067f2)\"" pod="kube-system/kube-controller-manager-localhost" podUID="5bbfee13ce9e07281eca876a0b8067f2" Jan 28 02:00:43.163390 kubelet[2699]: I0128 02:00:43.162437 2699 scope.go:117] "RemoveContainer" containerID="3b3eb16eb3d4fecbde5f7031ee4be591324b934fd8ba14e7dff44a0590bb2118" Jan 28 02:00:43.163390 kubelet[2699]: E0128 02:00:43.162801 2699 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 02:00:43.167190 kubelet[2699]: E0128 02:00:43.167111 2699 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-scheduler pod=kube-scheduler-localhost_kube-system(07ca0cbf79ad6ba9473d8e9f7715e571)\"" pod="kube-system/kube-scheduler-localhost" podUID="07ca0cbf79ad6ba9473d8e9f7715e571" Jan 28 02:00:43.548132 sshd[4019]: pam_unix(sshd:session): session closed for user core Jan 28 02:00:43.580345 systemd[1]: sshd@9-10.0.0.134:22-10.0.0.1:44982.service: Deactivated 
successfully. Jan 28 02:00:43.588481 systemd[1]: session-10.scope: Deactivated successfully. Jan 28 02:00:43.596323 systemd-logind[1471]: Session 10 logged out. Waiting for processes to exit. Jan 28 02:00:43.613513 systemd-logind[1471]: Removed session 10. Jan 28 02:00:48.623061 systemd[1]: Started sshd@10-10.0.0.134:22-10.0.0.1:44994.service - OpenSSH per-connection server daemon (10.0.0.1:44994). Jan 28 02:00:48.794419 sshd[4055]: Accepted publickey for core from 10.0.0.1 port 44994 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 02:00:48.811360 sshd[4055]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 02:00:48.873150 systemd-logind[1471]: New session 11 of user core. Jan 28 02:00:48.908238 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 28 02:00:49.577780 sshd[4055]: pam_unix(sshd:session): session closed for user core Jan 28 02:00:49.592804 systemd[1]: sshd@10-10.0.0.134:22-10.0.0.1:44994.service: Deactivated successfully. Jan 28 02:00:49.610819 systemd[1]: session-11.scope: Deactivated successfully. Jan 28 02:00:49.620000 systemd-logind[1471]: Session 11 logged out. Waiting for processes to exit. Jan 28 02:00:49.629026 systemd-logind[1471]: Removed session 11. Jan 28 02:00:54.638450 systemd[1]: Started sshd@11-10.0.0.134:22-10.0.0.1:56932.service - OpenSSH per-connection server daemon (10.0.0.1:56932). Jan 28 02:00:54.779167 sshd[4090]: Accepted publickey for core from 10.0.0.1 port 56932 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 02:00:54.792779 sshd[4090]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 02:00:54.868692 systemd-logind[1471]: New session 12 of user core. Jan 28 02:00:54.890726 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 28 02:00:54.905107 kubelet[2699]: I0128 02:00:54.900322 2699 scope.go:117] "RemoveContainer" containerID="3b3eb16eb3d4fecbde5f7031ee4be591324b934fd8ba14e7dff44a0590bb2118" Jan 28 02:00:54.905107 kubelet[2699]: E0128 02:00:54.900431 2699 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 02:00:54.971820 containerd[1483]: time="2026-01-28T02:00:54.971312568Z" level=info msg="CreateContainer within sandbox \"01a18f178dc8a61237a6ae38629d4d6534ce213d31ac0caf588748b9621251eb\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:2,}" Jan 28 02:00:55.195052 containerd[1483]: time="2026-01-28T02:00:55.191487140Z" level=info msg="CreateContainer within sandbox \"01a18f178dc8a61237a6ae38629d4d6534ce213d31ac0caf588748b9621251eb\" for &ContainerMetadata{Name:kube-scheduler,Attempt:2,} returns container id \"abcfab81fe7f58dde69c300dca9a46c14f510cdbb1857a3305fc0835cc1e256d\"" Jan 28 02:00:55.203001 containerd[1483]: time="2026-01-28T02:00:55.196405532Z" level=info msg="StartContainer for \"abcfab81fe7f58dde69c300dca9a46c14f510cdbb1857a3305fc0835cc1e256d\"" Jan 28 02:00:55.583103 systemd[1]: Started cri-containerd-abcfab81fe7f58dde69c300dca9a46c14f510cdbb1857a3305fc0835cc1e256d.scope - libcontainer container abcfab81fe7f58dde69c300dca9a46c14f510cdbb1857a3305fc0835cc1e256d. Jan 28 02:00:56.807324 sshd[4090]: pam_unix(sshd:session): session closed for user core Jan 28 02:00:56.837267 systemd[1]: sshd@11-10.0.0.134:22-10.0.0.1:56932.service: Deactivated successfully. Jan 28 02:00:56.845869 systemd-logind[1471]: Session 12 logged out. Waiting for processes to exit. 
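
The Attempt:2 container above is kubelet restarting the static kube-scheduler pod once its CrashLoopBackOff window expires; the back-off doubles after each failure (10s in the messages above, 20s later in this log, capped at five minutes). Restart counts and last-exit state are visible through the API; standard inspection commands, assuming kubectl access to this cluster:

    kubectl -n kube-system get pods kube-scheduler-localhost kube-controller-manager-localhost
    kubectl -n kube-system describe pod kube-scheduler-localhost    # Last State, Restart Count, Back-off events
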
Jan 28 02:00:56.882396 systemd[1]: session-12.scope: Deactivated successfully. Jan 28 02:00:56.889236 systemd[1]: session-12.scope: Consumed 1.018s CPU time. Jan 28 02:00:56.902093 kubelet[2699]: I0128 02:00:56.900822 2699 scope.go:117] "RemoveContainer" containerID="d76bd4c6c0cfb3a45b46735bcf114d4e09cec6c535dad361bf12f042fe0c9542" Jan 28 02:00:56.910421 systemd-logind[1471]: Removed session 12. Jan 28 02:00:56.923466 kubelet[2699]: E0128 02:00:56.912181 2699 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 02:00:57.014846 containerd[1483]: time="2026-01-28T02:00:57.014364969Z" level=info msg="CreateContainer within sandbox \"e23f944c753a7d202ea68498b517dc32f046e6fe21024e7529366f6a94efc931\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:2,}" Jan 28 02:00:57.059107 containerd[1483]: time="2026-01-28T02:00:57.055807364Z" level=info msg="StartContainer for \"abcfab81fe7f58dde69c300dca9a46c14f510cdbb1857a3305fc0835cc1e256d\" returns successfully" Jan 28 02:00:59.484338 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount185362858.mount: Deactivated successfully. Jan 28 02:01:00.842391 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3238604440.mount: Deactivated successfully. Jan 28 02:01:01.803132 containerd[1483]: time="2026-01-28T02:01:01.797837232Z" level=info msg="CreateContainer within sandbox \"e23f944c753a7d202ea68498b517dc32f046e6fe21024e7529366f6a94efc931\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:2,} returns container id \"e2be8a6f858d29186da7e7e5020531e0821495f8cfae2e5c33736ad1d4cd9f06\"" Jan 28 02:01:01.835324 containerd[1483]: time="2026-01-28T02:01:01.835042243Z" level=info msg="StartContainer for \"e2be8a6f858d29186da7e7e5020531e0821495f8cfae2e5c33736ad1d4cd9f06\"" Jan 28 02:01:01.845330 kubelet[2699]: E0128 02:01:01.842496 2699 kubelet.go:2617] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.412s" Jan 28 02:01:01.993427 systemd[1]: Started sshd@12-10.0.0.134:22-10.0.0.1:56936.service - OpenSSH per-connection server daemon (10.0.0.1:56936). Jan 28 02:01:02.104310 kubelet[2699]: E0128 02:01:02.088861 2699 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 02:01:02.456505 systemd[1]: run-containerd-runc-k8s.io-e2be8a6f858d29186da7e7e5020531e0821495f8cfae2e5c33736ad1d4cd9f06-runc.FD3QY1.mount: Deactivated successfully. Jan 28 02:01:02.465225 sshd[4158]: Accepted publickey for core from 10.0.0.1 port 56936 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 02:01:02.476121 sshd[4158]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 02:01:02.494130 systemd[1]: Started cri-containerd-e2be8a6f858d29186da7e7e5020531e0821495f8cfae2e5c33736ad1d4cd9f06.scope - libcontainer container e2be8a6f858d29186da7e7e5020531e0821495f8cfae2e5c33736ad1d4cd9f06. Jan 28 02:01:02.523165 systemd-logind[1471]: New session 13 of user core. Jan 28 02:01:02.534196 systemd[1]: Started session-13.scope - Session 13 of User core. 
Jan 28 02:01:02.894439 kubelet[2699]: E0128 02:01:02.891848 2699 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 02:01:02.945368 containerd[1483]: time="2026-01-28T02:01:02.935850069Z" level=info msg="StartContainer for \"e2be8a6f858d29186da7e7e5020531e0821495f8cfae2e5c33736ad1d4cd9f06\" returns successfully" Jan 28 02:01:03.141339 kubelet[2699]: E0128 02:01:03.141294 2699 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 02:01:03.151294 kubelet[2699]: E0128 02:01:03.144440 2699 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 02:01:03.255685 sshd[4158]: pam_unix(sshd:session): session closed for user core Jan 28 02:01:03.263700 systemd-logind[1471]: Session 13 logged out. Waiting for processes to exit. Jan 28 02:01:03.266251 systemd[1]: sshd@12-10.0.0.134:22-10.0.0.1:56936.service: Deactivated successfully. Jan 28 02:01:03.279731 systemd[1]: session-13.scope: Deactivated successfully. Jan 28 02:01:03.298402 systemd-logind[1471]: Removed session 13. Jan 28 02:01:09.195247 systemd[1]: Started sshd@13-10.0.0.134:22-10.0.0.1:45448.service - OpenSSH per-connection server daemon (10.0.0.1:45448). Jan 28 02:01:10.968810 kubelet[2699]: E0128 02:01:10.968762 2699 kubelet.go:2617] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.979s" Jan 28 02:01:10.979435 kubelet[2699]: E0128 02:01:10.979399 2699 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 02:01:10.993960 kubelet[2699]: E0128 02:01:10.993828 2699 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 02:01:10.998136 kubelet[2699]: E0128 02:01:10.995263 2699 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 02:01:11.067394 sshd[4229]: Accepted publickey for core from 10.0.0.1 port 45448 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 02:01:11.077270 sshd[4229]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 02:01:11.119089 systemd-logind[1471]: New session 14 of user core. Jan 28 02:01:11.133354 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 28 02:01:11.942020 kubelet[2699]: E0128 02:01:11.936335 2699 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 02:01:12.043448 sshd[4229]: pam_unix(sshd:session): session closed for user core Jan 28 02:01:12.063329 systemd[1]: sshd@13-10.0.0.134:22-10.0.0.1:45448.service: Deactivated successfully. Jan 28 02:01:12.082761 systemd[1]: session-14.scope: Deactivated successfully. Jan 28 02:01:12.090377 systemd-logind[1471]: Session 14 logged out. Waiting for processes to exit. Jan 28 02:01:12.102228 systemd-logind[1471]: Removed session 14. 
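
The "Housekeeping took longer than expected" errors (3.979s here, 2.412s just above, and as much as 51.197s earlier in this log, against a 1s expectation) point at node-wide stalls rather than a kubelet-internal fault, consistent with the lease timeouts and container exits seen elsewhere. All occurrences can be pulled from the journal in one pass, assuming kubelet logs under the syslog identifier shown in these entries:

    journalctl -t kubelet --no-pager | grep 'Housekeeping took longer than expected'
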
Jan 28 02:01:12.952478 kubelet[2699]: E0128 02:01:12.952391 2699 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 02:01:17.140163 systemd[1]: Started sshd@14-10.0.0.134:22-10.0.0.1:40572.service - OpenSSH per-connection server daemon (10.0.0.1:40572). Jan 28 02:01:17.415126 sshd[4279]: Accepted publickey for core from 10.0.0.1 port 40572 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 02:01:17.436002 sshd[4279]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 02:01:17.503849 systemd-logind[1471]: New session 15 of user core. Jan 28 02:01:17.509427 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 28 02:01:18.148436 sshd[4279]: pam_unix(sshd:session): session closed for user core Jan 28 02:01:18.173341 systemd[1]: sshd@14-10.0.0.134:22-10.0.0.1:40572.service: Deactivated successfully. Jan 28 02:01:18.185402 systemd[1]: session-15.scope: Deactivated successfully. Jan 28 02:01:18.189981 systemd-logind[1471]: Session 15 logged out. Waiting for processes to exit. Jan 28 02:01:18.198785 systemd-logind[1471]: Removed session 15. Jan 28 02:01:18.901288 kubelet[2699]: E0128 02:01:18.899467 2699 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 02:01:19.897244 kubelet[2699]: E0128 02:01:19.895514 2699 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 02:01:20.418941 kubelet[2699]: E0128 02:01:20.418374 2699 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 02:01:22.898328 kubelet[2699]: E0128 02:01:22.897389 2699 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 02:01:23.226266 systemd[1]: Started sshd@15-10.0.0.134:22-10.0.0.1:54200.service - OpenSSH per-connection server daemon (10.0.0.1:54200). Jan 28 02:01:23.538966 sshd[4316]: Accepted publickey for core from 10.0.0.1 port 54200 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 02:01:23.554055 sshd[4316]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 02:01:23.605056 systemd-logind[1471]: New session 16 of user core. Jan 28 02:01:23.631458 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 28 02:01:24.474478 sshd[4316]: pam_unix(sshd:session): session closed for user core Jan 28 02:01:24.501133 systemd-logind[1471]: Session 16 logged out. Waiting for processes to exit. Jan 28 02:01:24.506197 systemd[1]: sshd@15-10.0.0.134:22-10.0.0.1:54200.service: Deactivated successfully. Jan 28 02:01:24.510259 systemd[1]: session-16.scope: Deactivated successfully. Jan 28 02:01:24.517783 systemd-logind[1471]: Removed session 16. Jan 28 02:01:29.515687 systemd[1]: Started sshd@16-10.0.0.134:22-10.0.0.1:54206.service - OpenSSH per-connection server daemon (10.0.0.1:54206). 
Jan 28 02:01:29.583306 sshd[4354]: Accepted publickey for core from 10.0.0.1 port 54206 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 02:01:29.591781 sshd[4354]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 02:01:29.614750 systemd-logind[1471]: New session 17 of user core. Jan 28 02:01:29.620177 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 28 02:01:30.098742 sshd[4354]: pam_unix(sshd:session): session closed for user core Jan 28 02:01:30.111396 systemd[1]: sshd@16-10.0.0.134:22-10.0.0.1:54206.service: Deactivated successfully. Jan 28 02:01:30.121828 systemd[1]: session-17.scope: Deactivated successfully. Jan 28 02:01:30.131054 systemd-logind[1471]: Session 17 logged out. Waiting for processes to exit. Jan 28 02:01:30.133792 systemd-logind[1471]: Removed session 17. Jan 28 02:01:35.152496 systemd[1]: Started sshd@17-10.0.0.134:22-10.0.0.1:37490.service - OpenSSH per-connection server daemon (10.0.0.1:37490). Jan 28 02:01:35.274387 sshd[4389]: Accepted publickey for core from 10.0.0.1 port 37490 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 02:01:35.284937 sshd[4389]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 02:01:35.340265 systemd-logind[1471]: New session 18 of user core. Jan 28 02:01:35.353818 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 28 02:01:36.180745 sshd[4389]: pam_unix(sshd:session): session closed for user core Jan 28 02:01:36.221504 systemd[1]: sshd@17-10.0.0.134:22-10.0.0.1:37490.service: Deactivated successfully. Jan 28 02:01:36.234426 systemd[1]: session-18.scope: Deactivated successfully. Jan 28 02:01:36.236766 systemd-logind[1471]: Session 18 logged out. Waiting for processes to exit. Jan 28 02:01:36.250944 systemd-logind[1471]: Removed session 18. Jan 28 02:01:42.902125 systemd[1]: Started sshd@18-10.0.0.134:22-10.0.0.1:37492.service - OpenSSH per-connection server daemon (10.0.0.1:37492). Jan 28 02:01:52.849522 systemd[1]: cri-containerd-e2be8a6f858d29186da7e7e5020531e0821495f8cfae2e5c33736ad1d4cd9f06.scope: Deactivated successfully. Jan 28 02:01:52.863460 systemd[1]: cri-containerd-e2be8a6f858d29186da7e7e5020531e0821495f8cfae2e5c33736ad1d4cd9f06.scope: Consumed 8.538s CPU time. Jan 28 02:01:53.080400 sshd[4427]: Accepted publickey for core from 10.0.0.1 port 37492 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc Jan 28 02:01:53.084704 sshd[4427]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 02:01:53.157744 systemd-logind[1471]: New session 19 of user core. Jan 28 02:01:53.260245 systemd[1]: Started session-19.scope - Session 19 of User core. 
Jan 28 02:01:53.588366 kubelet[2699]: E0128 02:01:53.586692 2699 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.134:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded"
Jan 28 02:01:53.877742 kubelet[2699]: E0128 02:01:53.877185 2699 controller.go:195] "Failed to update lease" err="Operation cannot be fulfilled on leases.coordination.k8s.io \"localhost\": the object has been modified; please apply your changes to the latest version and try again"
Jan 28 02:01:54.117766 kubelet[2699]: E0128 02:01:54.112257 2699 kubelet.go:2617] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="13.212s"
Jan 28 02:01:54.146781 kubelet[2699]: E0128 02:01:54.131438 2699 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 02:01:54.741369 systemd[1]: cri-containerd-abcfab81fe7f58dde69c300dca9a46c14f510cdbb1857a3305fc0835cc1e256d.scope: Deactivated successfully.
Jan 28 02:01:54.743367 systemd[1]: cri-containerd-abcfab81fe7f58dde69c300dca9a46c14f510cdbb1857a3305fc0835cc1e256d.scope: Consumed 8.192s CPU time.
Jan 28 02:01:55.004308 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e2be8a6f858d29186da7e7e5020531e0821495f8cfae2e5c33736ad1d4cd9f06-rootfs.mount: Deactivated successfully.
Jan 28 02:01:55.102403 sshd[4427]: pam_unix(sshd:session): session closed for user core
Jan 28 02:01:55.132291 containerd[1483]: time="2026-01-28T02:01:55.130879730Z" level=info msg="shim disconnected" id=e2be8a6f858d29186da7e7e5020531e0821495f8cfae2e5c33736ad1d4cd9f06 namespace=k8s.io
Jan 28 02:01:55.132291 containerd[1483]: time="2026-01-28T02:01:55.131154652Z" level=warning msg="cleaning up after shim disconnected" id=e2be8a6f858d29186da7e7e5020531e0821495f8cfae2e5c33736ad1d4cd9f06 namespace=k8s.io
Jan 28 02:01:55.132291 containerd[1483]: time="2026-01-28T02:01:55.131167776Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 28 02:01:55.134510 systemd[1]: sshd@18-10.0.0.134:22-10.0.0.1:37492.service: Deactivated successfully.
Jan 28 02:01:55.139062 systemd[1]: sshd@18-10.0.0.134:22-10.0.0.1:37492.service: Consumed 1.872s CPU time.
Jan 28 02:01:55.149718 systemd[1]: session-19.scope: Deactivated successfully.
Jan 28 02:01:55.159140 systemd-logind[1471]: Session 19 logged out. Waiting for processes to exit.
Jan 28 02:01:55.190270 systemd[1]: Started sshd@19-10.0.0.134:22-10.0.0.1:42764.service - OpenSSH per-connection server daemon (10.0.0.1:42764).
Jan 28 02:01:55.199908 systemd-logind[1471]: Removed session 19.
Jan 28 02:01:55.236734 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-abcfab81fe7f58dde69c300dca9a46c14f510cdbb1857a3305fc0835cc1e256d-rootfs.mount: Deactivated successfully.
Jan 28 02:01:55.338693 containerd[1483]: time="2026-01-28T02:01:55.338313927Z" level=info msg="shim disconnected" id=abcfab81fe7f58dde69c300dca9a46c14f510cdbb1857a3305fc0835cc1e256d namespace=k8s.io
Jan 28 02:01:55.338693 containerd[1483]: time="2026-01-28T02:01:55.338394366Z" level=warning msg="cleaning up after shim disconnected" id=abcfab81fe7f58dde69c300dca9a46c14f510cdbb1857a3305fc0835cc1e256d namespace=k8s.io
Jan 28 02:01:55.338693 containerd[1483]: time="2026-01-28T02:01:55.338409363Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 28 02:01:55.441339 sshd[4488]: Accepted publickey for core from 10.0.0.1 port 42764 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc
Jan 28 02:01:55.461415 sshd[4488]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 02:01:55.532121 systemd-logind[1471]: New session 20 of user core.
Jan 28 02:01:55.593330 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 28 02:01:55.971384 kubelet[2699]: I0128 02:01:55.962988 2699 scope.go:117] "RemoveContainer" containerID="e2be8a6f858d29186da7e7e5020531e0821495f8cfae2e5c33736ad1d4cd9f06"
Jan 28 02:01:55.971384 kubelet[2699]: E0128 02:01:55.963098 2699 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 02:01:55.971384 kubelet[2699]: E0128 02:01:55.963215 2699 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(5bbfee13ce9e07281eca876a0b8067f2)\"" pod="kube-system/kube-controller-manager-localhost" podUID="5bbfee13ce9e07281eca876a0b8067f2"
Jan 28 02:01:55.971384 kubelet[2699]: I0128 02:01:55.964497 2699 scope.go:117] "RemoveContainer" containerID="d76bd4c6c0cfb3a45b46735bcf114d4e09cec6c535dad361bf12f042fe0c9542"
Jan 28 02:01:55.983502 containerd[1483]: time="2026-01-28T02:01:55.978737842Z" level=info msg="RemoveContainer for \"d76bd4c6c0cfb3a45b46735bcf114d4e09cec6c535dad361bf12f042fe0c9542\""
Jan 28 02:01:56.020381 kubelet[2699]: I0128 02:01:56.020340 2699 scope.go:117] "RemoveContainer" containerID="abcfab81fe7f58dde69c300dca9a46c14f510cdbb1857a3305fc0835cc1e256d"
Jan 28 02:01:56.026456 kubelet[2699]: E0128 02:01:56.026425 2699 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 02:01:56.028055 kubelet[2699]: E0128 02:01:56.028023 2699 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-scheduler pod=kube-scheduler-localhost_kube-system(07ca0cbf79ad6ba9473d8e9f7715e571)\"" pod="kube-system/kube-scheduler-localhost" podUID="07ca0cbf79ad6ba9473d8e9f7715e571"
Jan 28 02:01:56.050606 containerd[1483]: time="2026-01-28T02:01:56.045444585Z" level=info msg="RemoveContainer for \"d76bd4c6c0cfb3a45b46735bcf114d4e09cec6c535dad361bf12f042fe0c9542\" returns successfully"
Jan 28 02:01:56.052327 kubelet[2699]: I0128 02:01:56.051468 2699 scope.go:117] "RemoveContainer" containerID="3b3eb16eb3d4fecbde5f7031ee4be591324b934fd8ba14e7dff44a0590bb2118"
Jan 28 02:01:56.075924 containerd[1483]: time="2026-01-28T02:01:56.074970226Z" level=info msg="RemoveContainer for \"3b3eb16eb3d4fecbde5f7031ee4be591324b934fd8ba14e7dff44a0590bb2118\""
Jan 28 02:01:56.139739 containerd[1483]: time="2026-01-28T02:01:56.139360306Z" level=info msg="RemoveContainer for \"3b3eb16eb3d4fecbde5f7031ee4be591324b934fd8ba14e7dff44a0590bb2118\" returns successfully"
Jan 28 02:01:56.667024 sshd[4488]: pam_unix(sshd:session): session closed for user core
Jan 28 02:01:56.710120 systemd[1]: sshd@19-10.0.0.134:22-10.0.0.1:42764.service: Deactivated successfully.
Jan 28 02:01:56.721201 systemd[1]: session-20.scope: Deactivated successfully.
Jan 28 02:01:56.736687 systemd-logind[1471]: Session 20 logged out. Waiting for processes to exit.
Jan 28 02:01:56.784361 systemd[1]: Started sshd@20-10.0.0.134:22-10.0.0.1:42768.service - OpenSSH per-connection server daemon (10.0.0.1:42768).
Jan 28 02:01:56.789169 systemd-logind[1471]: Removed session 20.
Jan 28 02:01:56.962136 sshd[4526]: Accepted publickey for core from 10.0.0.1 port 42768 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc
Jan 28 02:01:56.968519 sshd[4526]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 02:01:57.002117 systemd-logind[1471]: New session 21 of user core.
Jan 28 02:01:57.036261 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 28 02:01:57.696423 sshd[4526]: pam_unix(sshd:session): session closed for user core
Jan 28 02:01:57.722097 systemd[1]: sshd@20-10.0.0.134:22-10.0.0.1:42768.service: Deactivated successfully.
Jan 28 02:01:57.729044 systemd[1]: session-21.scope: Deactivated successfully.
Jan 28 02:01:57.739280 systemd-logind[1471]: Session 21 logged out. Waiting for processes to exit.
Jan 28 02:01:57.752286 systemd-logind[1471]: Removed session 21.
Jan 28 02:02:02.799137 systemd[1]: Started sshd@21-10.0.0.134:22-10.0.0.1:46940.service - OpenSSH per-connection server daemon (10.0.0.1:46940).
Jan 28 02:02:02.965357 kubelet[2699]: I0128 02:02:02.962125 2699 scope.go:117] "RemoveContainer" containerID="e2be8a6f858d29186da7e7e5020531e0821495f8cfae2e5c33736ad1d4cd9f06"
Jan 28 02:02:02.965357 kubelet[2699]: E0128 02:02:02.962352 2699 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 02:02:02.965357 kubelet[2699]: E0128 02:02:02.962508 2699 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(5bbfee13ce9e07281eca876a0b8067f2)\"" pod="kube-system/kube-controller-manager-localhost" podUID="5bbfee13ce9e07281eca876a0b8067f2"
Jan 28 02:02:03.007880 sshd[4563]: Accepted publickey for core from 10.0.0.1 port 46940 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc
Jan 28 02:02:03.027727 sshd[4563]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 02:02:03.069038 systemd-logind[1471]: New session 22 of user core.
Jan 28 02:02:03.094281 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 28 02:02:03.170293 kubelet[2699]: I0128 02:02:03.168384 2699 scope.go:117] "RemoveContainer" containerID="abcfab81fe7f58dde69c300dca9a46c14f510cdbb1857a3305fc0835cc1e256d"
Jan 28 02:02:03.176048 kubelet[2699]: E0128 02:02:03.174319 2699 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 02:02:03.176048 kubelet[2699]: E0128 02:02:03.174479 2699 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-scheduler pod=kube-scheduler-localhost_kube-system(07ca0cbf79ad6ba9473d8e9f7715e571)\"" pod="kube-system/kube-scheduler-localhost" podUID="07ca0cbf79ad6ba9473d8e9f7715e571"
Jan 28 02:02:03.782340 sshd[4563]: pam_unix(sshd:session): session closed for user core
Jan 28 02:02:03.793271 systemd[1]: sshd@21-10.0.0.134:22-10.0.0.1:46940.service: Deactivated successfully.
Jan 28 02:02:03.810728 systemd[1]: session-22.scope: Deactivated successfully.
Jan 28 02:02:03.826208 systemd-logind[1471]: Session 22 logged out. Waiting for processes to exit.
Jan 28 02:02:03.833046 systemd-logind[1471]: Removed session 22.
Jan 28 02:02:08.839429 systemd[1]: Started sshd@22-10.0.0.134:22-10.0.0.1:46954.service - OpenSSH per-connection server daemon (10.0.0.1:46954).
Jan 28 02:02:09.098918 sshd[4598]: Accepted publickey for core from 10.0.0.1 port 46954 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc
Jan 28 02:02:09.117071 sshd[4598]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 02:02:09.169055 systemd-logind[1471]: New session 23 of user core.
Jan 28 02:02:09.181960 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 28 02:02:10.022728 sshd[4598]: pam_unix(sshd:session): session closed for user core
Jan 28 02:02:10.061111 systemd[1]: sshd@22-10.0.0.134:22-10.0.0.1:46954.service: Deactivated successfully.
Jan 28 02:02:10.079432 systemd[1]: session-23.scope: Deactivated successfully.
Jan 28 02:02:10.090956 systemd-logind[1471]: Session 23 logged out. Waiting for processes to exit.
Jan 28 02:02:10.095125 systemd-logind[1471]: Removed session 23.
Jan 28 02:02:15.154332 systemd[1]: Started sshd@23-10.0.0.134:22-10.0.0.1:41082.service - OpenSSH per-connection server daemon (10.0.0.1:41082).
Jan 28 02:02:15.312873 sshd[4655]: Accepted publickey for core from 10.0.0.1 port 41082 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc
Jan 28 02:02:15.317953 sshd[4655]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 02:02:15.354350 systemd-logind[1471]: New session 24 of user core.
Jan 28 02:02:15.368278 systemd[1]: Started session-24.scope - Session 24 of User core.
Jan 28 02:02:16.236395 sshd[4655]: pam_unix(sshd:session): session closed for user core
Jan 28 02:02:16.257335 systemd[1]: sshd@23-10.0.0.134:22-10.0.0.1:41082.service: Deactivated successfully.
Jan 28 02:02:16.266291 systemd[1]: session-24.scope: Deactivated successfully.
Jan 28 02:02:16.304313 systemd-logind[1471]: Session 24 logged out. Waiting for processes to exit.
Jan 28 02:02:16.321943 systemd-logind[1471]: Removed session 24.
Jan 28 02:02:16.899402 kubelet[2699]: I0128 02:02:16.896170 2699 scope.go:117] "RemoveContainer" containerID="e2be8a6f858d29186da7e7e5020531e0821495f8cfae2e5c33736ad1d4cd9f06"
Jan 28 02:02:16.899402 kubelet[2699]: E0128 02:02:16.896289 2699 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 02:02:16.933156 containerd[1483]: time="2026-01-28T02:02:16.932946562Z" level=info msg="CreateContainer within sandbox \"e23f944c753a7d202ea68498b517dc32f046e6fe21024e7529366f6a94efc931\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:3,}"
Jan 28 02:02:17.047838 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1153746844.mount: Deactivated successfully.
Jan 28 02:02:17.105482 containerd[1483]: time="2026-01-28T02:02:17.105416375Z" level=info msg="CreateContainer within sandbox \"e23f944c753a7d202ea68498b517dc32f046e6fe21024e7529366f6a94efc931\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:3,} returns container id \"8e241a848f9e2880683e4ccff1fe32d9c087f5bb28ea38536b14b41f7eb6ee3e\""
Jan 28 02:02:17.118360 containerd[1483]: time="2026-01-28T02:02:17.115332821Z" level=info msg="StartContainer for \"8e241a848f9e2880683e4ccff1fe32d9c087f5bb28ea38536b14b41f7eb6ee3e\""
Jan 28 02:02:17.419913 systemd[1]: Started cri-containerd-8e241a848f9e2880683e4ccff1fe32d9c087f5bb28ea38536b14b41f7eb6ee3e.scope - libcontainer container 8e241a848f9e2880683e4ccff1fe32d9c087f5bb28ea38536b14b41f7eb6ee3e.
Jan 28 02:02:17.897252 kubelet[2699]: I0128 02:02:17.892521 2699 scope.go:117] "RemoveContainer" containerID="abcfab81fe7f58dde69c300dca9a46c14f510cdbb1857a3305fc0835cc1e256d"
Jan 28 02:02:17.897252 kubelet[2699]: E0128 02:02:17.892974 2699 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 02:02:17.913926 containerd[1483]: time="2026-01-28T02:02:17.913461113Z" level=info msg="StartContainer for \"8e241a848f9e2880683e4ccff1fe32d9c087f5bb28ea38536b14b41f7eb6ee3e\" returns successfully"
Jan 28 02:02:17.969717 containerd[1483]: time="2026-01-28T02:02:17.963233136Z" level=info msg="CreateContainer within sandbox \"01a18f178dc8a61237a6ae38629d4d6534ce213d31ac0caf588748b9621251eb\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:3,}"
Jan 28 02:02:18.197014 containerd[1483]: time="2026-01-28T02:02:18.191423064Z" level=info msg="CreateContainer within sandbox \"01a18f178dc8a61237a6ae38629d4d6534ce213d31ac0caf588748b9621251eb\" for &ContainerMetadata{Name:kube-scheduler,Attempt:3,} returns container id \"7daca65d6728d53967541a4f4e1d9e6b2b61d4440ac8facc600cf837bbf0c831\""
Jan 28 02:02:18.206904 containerd[1483]: time="2026-01-28T02:02:18.201922771Z" level=info msg="StartContainer for \"7daca65d6728d53967541a4f4e1d9e6b2b61d4440ac8facc600cf837bbf0c831\""
Jan 28 02:02:18.467164 systemd[1]: run-containerd-runc-k8s.io-7daca65d6728d53967541a4f4e1d9e6b2b61d4440ac8facc600cf837bbf0c831-runc.b86fhe.mount: Deactivated successfully.
Jan 28 02:02:18.523367 systemd[1]: Started cri-containerd-7daca65d6728d53967541a4f4e1d9e6b2b61d4440ac8facc600cf837bbf0c831.scope - libcontainer container 7daca65d6728d53967541a4f4e1d9e6b2b61d4440ac8facc600cf837bbf0c831.
Jan 28 02:02:18.664927 kubelet[2699]: E0128 02:02:18.663178 2699 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 02:02:18.920446 containerd[1483]: time="2026-01-28T02:02:18.920301585Z" level=info msg="StartContainer for \"7daca65d6728d53967541a4f4e1d9e6b2b61d4440ac8facc600cf837bbf0c831\" returns successfully"
Jan 28 02:02:19.687693 kubelet[2699]: E0128 02:02:19.680329 2699 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 02:02:20.379695 kubelet[2699]: E0128 02:02:20.379237 2699 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 02:02:20.708345 kubelet[2699]: E0128 02:02:20.707976 2699 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 02:02:21.308516 systemd[1]: Started sshd@24-10.0.0.134:22-10.0.0.1:41090.service - OpenSSH per-connection server daemon (10.0.0.1:41090).
Jan 28 02:02:21.582718 sshd[4766]: Accepted publickey for core from 10.0.0.1 port 41090 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc
Jan 28 02:02:21.612342 sshd[4766]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 02:02:21.650301 systemd-logind[1471]: New session 25 of user core.
Jan 28 02:02:21.692709 systemd[1]: Started session-25.scope - Session 25 of User core.
Jan 28 02:02:21.773985 kubelet[2699]: E0128 02:02:21.773726 2699 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 02:02:22.197889 sshd[4766]: pam_unix(sshd:session): session closed for user core
Jan 28 02:02:22.203847 systemd-logind[1471]: Session 25 logged out. Waiting for processes to exit.
Jan 28 02:02:22.207103 systemd[1]: sshd@24-10.0.0.134:22-10.0.0.1:41090.service: Deactivated successfully.
Jan 28 02:02:22.212200 systemd[1]: session-25.scope: Deactivated successfully.
Jan 28 02:02:22.219468 systemd-logind[1471]: Removed session 25.
Jan 28 02:02:27.323307 systemd[1]: Started sshd@25-10.0.0.134:22-10.0.0.1:43652.service - OpenSSH per-connection server daemon (10.0.0.1:43652).
Jan 28 02:02:27.442018 sshd[4803]: Accepted publickey for core from 10.0.0.1 port 43652 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc
Jan 28 02:02:27.454179 sshd[4803]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 02:02:27.482062 systemd-logind[1471]: New session 26 of user core.
Jan 28 02:02:27.499981 systemd[1]: Started session-26.scope - Session 26 of User core.
Jan 28 02:02:28.039850 sshd[4803]: pam_unix(sshd:session): session closed for user core
Jan 28 02:02:28.063946 systemd[1]: sshd@25-10.0.0.134:22-10.0.0.1:43652.service: Deactivated successfully.
Jan 28 02:02:28.069972 systemd[1]: session-26.scope: Deactivated successfully.
Jan 28 02:02:28.072054 systemd-logind[1471]: Session 26 logged out. Waiting for processes to exit.
Jan 28 02:02:28.091914 systemd-logind[1471]: Removed session 26.
Jan 28 02:02:29.899995 kubelet[2699]: E0128 02:02:29.893043 2699 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 02:02:30.319277 kubelet[2699]: E0128 02:02:30.319128 2699 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 02:02:30.464072 kubelet[2699]: E0128 02:02:30.460194 2699 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 02:02:30.921697 kubelet[2699]: E0128 02:02:30.921300 2699 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 02:02:33.129325 systemd[1]: Started sshd@26-10.0.0.134:22-10.0.0.1:48220.service - OpenSSH per-connection server daemon (10.0.0.1:48220).
Jan 28 02:02:33.394099 sshd[4839]: Accepted publickey for core from 10.0.0.1 port 48220 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc
Jan 28 02:02:33.405512 sshd[4839]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 02:02:33.476130 systemd-logind[1471]: New session 27 of user core.
Jan 28 02:02:33.490336 systemd[1]: Started session-27.scope - Session 27 of User core.
Jan 28 02:02:34.653380 sshd[4839]: pam_unix(sshd:session): session closed for user core
Jan 28 02:02:34.677009 systemd[1]: sshd@26-10.0.0.134:22-10.0.0.1:48220.service: Deactivated successfully.
Jan 28 02:02:34.686375 systemd[1]: session-27.scope: Deactivated successfully.
Jan 28 02:02:34.719000 systemd-logind[1471]: Session 27 logged out. Waiting for processes to exit.
Jan 28 02:02:34.728429 systemd-logind[1471]: Removed session 27.
Jan 28 02:02:39.680136 systemd[1]: Started sshd@27-10.0.0.134:22-10.0.0.1:48234.service - OpenSSH per-connection server daemon (10.0.0.1:48234).
Jan 28 02:02:39.815689 sshd[4880]: Accepted publickey for core from 10.0.0.1 port 48234 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc
Jan 28 02:02:39.824133 sshd[4880]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 02:02:39.862290 systemd-logind[1471]: New session 28 of user core.
Jan 28 02:02:39.884088 systemd[1]: Started session-28.scope - Session 28 of User core.
Jan 28 02:02:40.419189 sshd[4880]: pam_unix(sshd:session): session closed for user core
Jan 28 02:02:40.449488 systemd-logind[1471]: Session 28 logged out. Waiting for processes to exit.
Jan 28 02:02:40.451215 systemd[1]: sshd@27-10.0.0.134:22-10.0.0.1:48234.service: Deactivated successfully.
Jan 28 02:02:40.460226 systemd[1]: session-28.scope: Deactivated successfully.
Jan 28 02:02:40.476857 systemd-logind[1471]: Removed session 28.
Jan 28 02:02:40.929442 kubelet[2699]: E0128 02:02:40.924492 2699 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 02:02:43.899406 kubelet[2699]: E0128 02:02:43.896033 2699 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 02:02:45.467351 systemd[1]: Started sshd@28-10.0.0.134:22-10.0.0.1:43668.service - OpenSSH per-connection server daemon (10.0.0.1:43668).
Jan 28 02:02:45.664178 sshd[4915]: Accepted publickey for core from 10.0.0.1 port 43668 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc
Jan 28 02:02:45.671172 sshd[4915]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 02:02:45.705156 systemd-logind[1471]: New session 29 of user core.
Jan 28 02:02:45.729184 systemd[1]: Started session-29.scope - Session 29 of User core.
Jan 28 02:02:45.894130 kubelet[2699]: E0128 02:02:45.893282 2699 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 02:02:45.896379 kubelet[2699]: E0128 02:02:45.894429 2699 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 02:02:46.226003 sshd[4915]: pam_unix(sshd:session): session closed for user core
Jan 28 02:02:46.241070 systemd[1]: sshd@28-10.0.0.134:22-10.0.0.1:43668.service: Deactivated successfully.
Jan 28 02:02:46.252189 systemd[1]: session-29.scope: Deactivated successfully.
Jan 28 02:02:46.258128 systemd-logind[1471]: Session 29 logged out. Waiting for processes to exit.
Jan 28 02:02:46.264134 systemd-logind[1471]: Removed session 29.
Jan 28 02:02:51.292381 systemd[1]: Started sshd@29-10.0.0.134:22-10.0.0.1:43678.service - OpenSSH per-connection server daemon (10.0.0.1:43678).
Jan 28 02:02:51.456327 sshd[4964]: Accepted publickey for core from 10.0.0.1 port 43678 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc
Jan 28 02:02:51.462400 sshd[4964]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 02:02:51.489072 systemd-logind[1471]: New session 30 of user core.
Jan 28 02:02:51.508051 systemd[1]: Started session-30.scope - Session 30 of User core.
Jan 28 02:02:52.064929 sshd[4964]: pam_unix(sshd:session): session closed for user core
Jan 28 02:02:52.084301 systemd[1]: sshd@29-10.0.0.134:22-10.0.0.1:43678.service: Deactivated successfully.
Jan 28 02:02:52.099033 systemd[1]: session-30.scope: Deactivated successfully.
Jan 28 02:02:52.104004 systemd-logind[1471]: Session 30 logged out. Waiting for processes to exit.
Jan 28 02:02:52.115190 systemd-logind[1471]: Removed session 30.
Jan 28 02:02:57.167302 systemd[1]: Started sshd@30-10.0.0.134:22-10.0.0.1:43644.service - OpenSSH per-connection server daemon (10.0.0.1:43644).
Jan 28 02:02:57.303291 sshd[5000]: Accepted publickey for core from 10.0.0.1 port 43644 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc
Jan 28 02:02:57.307017 sshd[5000]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 02:02:57.331351 systemd-logind[1471]: New session 31 of user core.
Jan 28 02:02:57.366463 systemd[1]: Started session-31.scope - Session 31 of User core.
Jan 28 02:02:58.053977 sshd[5000]: pam_unix(sshd:session): session closed for user core
Jan 28 02:02:58.084241 systemd[1]: sshd@30-10.0.0.134:22-10.0.0.1:43644.service: Deactivated successfully.
Jan 28 02:02:58.095982 systemd[1]: session-31.scope: Deactivated successfully.
Jan 28 02:02:58.106491 systemd-logind[1471]: Session 31 logged out. Waiting for processes to exit.
Jan 28 02:02:58.120892 systemd-logind[1471]: Removed session 31.
Jan 28 02:03:03.128156 systemd[1]: Started sshd@31-10.0.0.134:22-10.0.0.1:43912.service - OpenSSH per-connection server daemon (10.0.0.1:43912).
Jan 28 02:03:03.212099 sshd[5036]: Accepted publickey for core from 10.0.0.1 port 43912 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc
Jan 28 02:03:03.215867 sshd[5036]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 02:03:03.257197 systemd-logind[1471]: New session 32 of user core.
Jan 28 02:03:03.273919 systemd[1]: Started session-32.scope - Session 32 of User core.
Jan 28 02:03:03.749055 sshd[5036]: pam_unix(sshd:session): session closed for user core
Jan 28 02:03:03.780059 systemd[1]: sshd@31-10.0.0.134:22-10.0.0.1:43912.service: Deactivated successfully.
Jan 28 02:03:03.787278 systemd[1]: session-32.scope: Deactivated successfully.
Jan 28 02:03:03.802969 systemd-logind[1471]: Session 32 logged out. Waiting for processes to exit.
Jan 28 02:03:03.835427 systemd[1]: Started sshd@32-10.0.0.134:22-10.0.0.1:43928.service - OpenSSH per-connection server daemon (10.0.0.1:43928).
Jan 28 02:03:03.855263 systemd-logind[1471]: Removed session 32.
Jan 28 02:03:03.970938 sshd[5050]: Accepted publickey for core from 10.0.0.1 port 43928 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc
Jan 28 02:03:03.983377 sshd[5050]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 02:03:04.020059 systemd-logind[1471]: New session 33 of user core.
Jan 28 02:03:04.044422 systemd[1]: Started session-33.scope - Session 33 of User core.
Jan 28 02:03:06.060049 sshd[5050]: pam_unix(sshd:session): session closed for user core
Jan 28 02:03:06.082372 systemd[1]: sshd@32-10.0.0.134:22-10.0.0.1:43928.service: Deactivated successfully.
Jan 28 02:03:06.100503 systemd[1]: session-33.scope: Deactivated successfully.
Jan 28 02:03:06.101830 systemd[1]: session-33.scope: Consumed 1.279s CPU time.
Jan 28 02:03:06.107519 systemd-logind[1471]: Session 33 logged out. Waiting for processes to exit.
Jan 28 02:03:06.137955 systemd[1]: Started sshd@33-10.0.0.134:22-10.0.0.1:43936.service - OpenSSH per-connection server daemon (10.0.0.1:43936).
Jan 28 02:03:06.145459 systemd-logind[1471]: Removed session 33.
Jan 28 02:03:06.292356 sshd[5068]: Accepted publickey for core from 10.0.0.1 port 43936 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc
Jan 28 02:03:06.307420 sshd[5068]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 02:03:06.357026 systemd-logind[1471]: New session 34 of user core.
Jan 28 02:03:06.373053 systemd[1]: Started session-34.scope - Session 34 of User core.
Jan 28 02:03:09.689336 sshd[5068]: pam_unix(sshd:session): session closed for user core
Jan 28 02:03:09.722369 systemd[1]: sshd@33-10.0.0.134:22-10.0.0.1:43936.service: Deactivated successfully.
Jan 28 02:03:09.749216 systemd[1]: session-34.scope: Deactivated successfully.
Jan 28 02:03:09.754493 systemd[1]: session-34.scope: Consumed 2.338s CPU time.
Jan 28 02:03:09.763806 systemd-logind[1471]: Session 34 logged out. Waiting for processes to exit.
Jan 28 02:03:09.797833 systemd[1]: Started sshd@34-10.0.0.134:22-10.0.0.1:43946.service - OpenSSH per-connection server daemon (10.0.0.1:43946).
Jan 28 02:03:09.807446 systemd-logind[1471]: Removed session 34.
Jan 28 02:03:10.026242 sshd[5103]: Accepted publickey for core from 10.0.0.1 port 43946 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc
Jan 28 02:03:10.036499 sshd[5103]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 02:03:10.083110 systemd-logind[1471]: New session 35 of user core.
Jan 28 02:03:10.106238 systemd[1]: Started session-35.scope - Session 35 of User core.
Jan 28 02:03:11.109329 sshd[5103]: pam_unix(sshd:session): session closed for user core
Jan 28 02:03:11.146161 systemd[1]: sshd@34-10.0.0.134:22-10.0.0.1:43946.service: Deactivated successfully.
Jan 28 02:03:11.151336 systemd[1]: session-35.scope: Deactivated successfully.
Jan 28 02:03:11.166331 systemd-logind[1471]: Session 35 logged out. Waiting for processes to exit.
Jan 28 02:03:11.201947 systemd[1]: Started sshd@35-10.0.0.134:22-10.0.0.1:43954.service - OpenSSH per-connection server daemon (10.0.0.1:43954).
Jan 28 02:03:11.209144 systemd-logind[1471]: Removed session 35.
Jan 28 02:03:11.333486 sshd[5125]: Accepted publickey for core from 10.0.0.1 port 43954 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc
Jan 28 02:03:11.336278 sshd[5125]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 02:03:11.382492 systemd-logind[1471]: New session 36 of user core.
Jan 28 02:03:11.419193 systemd[1]: Started session-36.scope - Session 36 of User core.
Jan 28 02:03:12.075083 sshd[5125]: pam_unix(sshd:session): session closed for user core
Jan 28 02:03:12.093516 systemd[1]: sshd@35-10.0.0.134:22-10.0.0.1:43954.service: Deactivated successfully.
Jan 28 02:03:12.098511 systemd[1]: session-36.scope: Deactivated successfully.
Jan 28 02:03:12.104256 systemd-logind[1471]: Session 36 logged out. Waiting for processes to exit.
Jan 28 02:03:12.108966 systemd-logind[1471]: Removed session 36.
Jan 28 02:03:17.141425 systemd[1]: Started sshd@36-10.0.0.134:22-10.0.0.1:53748.service - OpenSSH per-connection server daemon (10.0.0.1:53748).
Jan 28 02:03:17.257367 sshd[5171]: Accepted publickey for core from 10.0.0.1 port 53748 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc
Jan 28 02:03:17.264309 sshd[5171]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 02:03:17.287051 systemd-logind[1471]: New session 37 of user core.
Jan 28 02:03:17.306154 systemd[1]: Started session-37.scope - Session 37 of User core.
Jan 28 02:03:17.929256 sshd[5171]: pam_unix(sshd:session): session closed for user core
Jan 28 02:03:17.944082 systemd[1]: sshd@36-10.0.0.134:22-10.0.0.1:53748.service: Deactivated successfully.
Jan 28 02:03:17.958146 systemd[1]: session-37.scope: Deactivated successfully.
Jan 28 02:03:17.966806 systemd-logind[1471]: Session 37 logged out. Waiting for processes to exit.
Jan 28 02:03:17.975238 systemd-logind[1471]: Removed session 37.
Jan 28 02:03:23.040264 systemd[1]: Started sshd@37-10.0.0.134:22-10.0.0.1:52236.service - OpenSSH per-connection server daemon (10.0.0.1:52236).
Jan 28 02:03:23.227449 sshd[5211]: Accepted publickey for core from 10.0.0.1 port 52236 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc
Jan 28 02:03:23.238027 sshd[5211]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 02:03:23.263944 systemd-logind[1471]: New session 38 of user core.
Jan 28 02:03:23.291201 systemd[1]: Started session-38.scope - Session 38 of User core.
Jan 28 02:03:23.998317 sshd[5211]: pam_unix(sshd:session): session closed for user core
Jan 28 02:03:24.019758 systemd[1]: sshd@37-10.0.0.134:22-10.0.0.1:52236.service: Deactivated successfully.
Jan 28 02:03:24.024239 systemd[1]: session-38.scope: Deactivated successfully.
Jan 28 02:03:24.031110 systemd-logind[1471]: Session 38 logged out. Waiting for processes to exit.
Jan 28 02:03:24.048274 systemd-logind[1471]: Removed session 38.
Jan 28 02:03:29.071230 systemd[1]: Started sshd@38-10.0.0.134:22-10.0.0.1:52240.service - OpenSSH per-connection server daemon (10.0.0.1:52240).
Jan 28 02:03:29.167009 sshd[5249]: Accepted publickey for core from 10.0.0.1 port 52240 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc
Jan 28 02:03:29.173767 sshd[5249]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 02:03:29.209938 systemd-logind[1471]: New session 39 of user core.
Jan 28 02:03:29.223463 systemd[1]: Started session-39.scope - Session 39 of User core.
Jan 28 02:03:29.626447 sshd[5249]: pam_unix(sshd:session): session closed for user core
Jan 28 02:03:29.641367 systemd[1]: sshd@38-10.0.0.134:22-10.0.0.1:52240.service: Deactivated successfully.
Jan 28 02:03:29.658804 systemd[1]: session-39.scope: Deactivated successfully.
Jan 28 02:03:29.664162 systemd-logind[1471]: Session 39 logged out. Waiting for processes to exit.
Jan 28 02:03:29.673517 systemd-logind[1471]: Removed session 39.
Jan 28 02:03:34.704385 systemd[1]: Started sshd@39-10.0.0.134:22-10.0.0.1:37976.service - OpenSSH per-connection server daemon (10.0.0.1:37976).
Jan 28 02:03:34.887454 sshd[5285]: Accepted publickey for core from 10.0.0.1 port 37976 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc
Jan 28 02:03:34.896349 sshd[5285]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 02:03:34.936883 systemd-logind[1471]: New session 40 of user core.
Jan 28 02:03:34.961751 systemd[1]: Started session-40.scope - Session 40 of User core.
Jan 28 02:03:35.450261 sshd[5285]: pam_unix(sshd:session): session closed for user core
Jan 28 02:03:35.476061 systemd[1]: sshd@39-10.0.0.134:22-10.0.0.1:37976.service: Deactivated successfully.
Jan 28 02:03:35.489923 systemd[1]: session-40.scope: Deactivated successfully.
Jan 28 02:03:35.501316 systemd-logind[1471]: Session 40 logged out. Waiting for processes to exit.
Jan 28 02:03:35.516264 systemd-logind[1471]: Removed session 40.
Jan 28 02:03:40.498495 systemd[1]: Started sshd@40-10.0.0.134:22-10.0.0.1:37990.service - OpenSSH per-connection server daemon (10.0.0.1:37990).
Jan 28 02:03:40.592266 sshd[5319]: Accepted publickey for core from 10.0.0.1 port 37990 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc
Jan 28 02:03:40.600483 sshd[5319]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 02:03:40.626092 systemd-logind[1471]: New session 41 of user core.
Jan 28 02:03:40.646400 systemd[1]: Started session-41.scope - Session 41 of User core.
Jan 28 02:03:41.131098 sshd[5319]: pam_unix(sshd:session): session closed for user core
Jan 28 02:03:41.142000 systemd[1]: sshd@40-10.0.0.134:22-10.0.0.1:37990.service: Deactivated successfully.
Jan 28 02:03:41.157827 systemd[1]: session-41.scope: Deactivated successfully.
Jan 28 02:03:41.161820 systemd-logind[1471]: Session 41 logged out. Waiting for processes to exit.
Jan 28 02:03:41.174130 systemd-logind[1471]: Removed session 41.
Jan 28 02:03:42.905000 kubelet[2699]: E0128 02:03:42.902013 2699 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 02:03:46.174169 systemd[1]: Started sshd@41-10.0.0.134:22-10.0.0.1:40238.service - OpenSSH per-connection server daemon (10.0.0.1:40238).
Jan 28 02:03:46.267293 sshd[5359]: Accepted publickey for core from 10.0.0.1 port 40238 ssh2: RSA SHA256:MPQM3+j9DObzqK8Xjg4BgCEyvM6GdE6724kK6kuULtc
Jan 28 02:03:46.271271 sshd[5359]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 02:03:46.295349 systemd-logind[1471]: New session 42 of user core.
Jan 28 02:03:46.307975 systemd[1]: Started session-42.scope - Session 42 of User core.
Jan 28 02:03:46.798222 sshd[5359]: pam_unix(sshd:session): session closed for user core
Jan 28 02:03:46.827489 systemd[1]: sshd@41-10.0.0.134:22-10.0.0.1:40238.service: Deactivated successfully.
Jan 28 02:03:46.843403 systemd[1]: session-42.scope: Deactivated successfully.
Jan 28 02:03:46.858480 systemd-logind[1471]: Session 42 logged out. Waiting for processes to exit.
Jan 28 02:03:46.867865 systemd-logind[1471]: Removed session 42.
Jan 28 02:03:47.894460 kubelet[2699]: E0128 02:03:47.892453 2699 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"