Jan 17 00:23:13.834755 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 16 22:25:55 -00 2026
Jan 17 00:23:13.834883 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=5950c0a3c50d11b7bc07a3e3bf06049ed0b5a605b5e0b52a981b78f1c63eeedd
Jan 17 00:23:13.834956 kernel: BIOS-provided physical RAM map:
Jan 17 00:23:13.834966 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 17 00:23:13.834974 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Jan 17 00:23:13.834982 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Jan 17 00:23:13.834992 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Jan 17 00:23:13.835001 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Jan 17 00:23:13.835009 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable
Jan 17 00:23:13.835017 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Jan 17 00:23:13.835030 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable
Jan 17 00:23:13.835039 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved
Jan 17 00:23:13.835075 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20
Jan 17 00:23:13.835085 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved
Jan 17 00:23:13.835122 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Jan 17 00:23:13.835132 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Jan 17 00:23:13.835145 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Jan 17 00:23:13.835154 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Jan 17 00:23:13.835163 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Jan 17 00:23:13.835172 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 17 00:23:13.835181 kernel: NX (Execute Disable) protection: active
Jan 17 00:23:13.835190 kernel: APIC: Static calls initialized
Jan 17 00:23:13.835199 kernel: efi: EFI v2.7 by EDK II
Jan 17 00:23:13.835208 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b675198
Jan 17 00:23:13.835217 kernel: SMBIOS 2.8 present.
Jan 17 00:23:13.835226 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
Jan 17 00:23:13.835235 kernel: Hypervisor detected: KVM
Jan 17 00:23:13.835248 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 17 00:23:13.835257 kernel: kvm-clock: using sched offset of 13785985891 cycles
Jan 17 00:23:13.835266 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 17 00:23:13.835276 kernel: tsc: Detected 2445.424 MHz processor
Jan 17 00:23:13.835285 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 17 00:23:13.835295 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 17 00:23:13.835304 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000
Jan 17 00:23:13.835314 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jan 17 00:23:13.835323 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 17 00:23:13.835336 kernel: Using GB pages for direct mapping
Jan 17 00:23:13.835345 kernel: Secure boot disabled
Jan 17 00:23:13.835355 kernel: ACPI: Early table checksum verification disabled
Jan 17 00:23:13.835411 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Jan 17 00:23:13.835428 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Jan 17 00:23:13.835438 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 00:23:13.835448 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 00:23:13.835462 kernel: ACPI: FACS 0x000000009CBDD000 000040
Jan 17 00:23:13.835472 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 00:23:13.835512 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 00:23:13.835522 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 00:23:13.835532 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 00:23:13.835542 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Jan 17 00:23:13.835552 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Jan 17 00:23:13.835565 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Jan 17 00:23:13.835576 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Jan 17 00:23:13.835586 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Jan 17 00:23:13.835596 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Jan 17 00:23:13.835605 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Jan 17 00:23:13.835615 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Jan 17 00:23:13.835625 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Jan 17 00:23:13.835635 kernel: No NUMA configuration found
Jan 17 00:23:13.835669 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff]
Jan 17 00:23:13.835683 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff]
Jan 17 00:23:13.835693 kernel: Zone ranges:
Jan 17 00:23:13.835703 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 17 00:23:13.835713 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff]
Jan 17 00:23:13.835723 kernel: Normal empty
Jan 17 00:23:13.835732 kernel: Movable zone start for each node
Jan 17 00:23:13.835742 kernel: Early memory node ranges
Jan 17 00:23:13.835751 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jan 17 00:23:13.835761 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Jan 17 00:23:13.835774 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Jan 17 00:23:13.835784 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff]
Jan 17 00:23:13.835794 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff]
Jan 17 00:23:13.835803 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff]
Jan 17 00:23:13.835837 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff]
Jan 17 00:23:13.835848 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 17 00:23:13.835858 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jan 17 00:23:13.835868 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Jan 17 00:23:13.835877 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 17 00:23:13.835887 kernel: On node 0, zone DMA: 240 pages in unavailable ranges
Jan 17 00:23:13.835981 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Jan 17 00:23:13.835991 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges
Jan 17 00:23:13.836001 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 17 00:23:13.836011 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 17 00:23:13.836021 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 17 00:23:13.836030 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 17 00:23:13.836040 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 17 00:23:13.836050 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 17 00:23:13.836059 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 17 00:23:13.836073 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 17 00:23:13.836083 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 17 00:23:13.836093 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 17 00:23:13.836102 kernel: TSC deadline timer available
Jan 17 00:23:13.836112 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Jan 17 00:23:13.836122 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 17 00:23:13.836132 kernel: kvm-guest: KVM setup pv remote TLB flush
Jan 17 00:23:13.836141 kernel: kvm-guest: setup PV sched yield
Jan 17 00:23:13.836151 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Jan 17 00:23:13.836164 kernel: Booting paravirtualized kernel on KVM
Jan 17 00:23:13.836175 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 17 00:23:13.836185 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jan 17 00:23:13.836194 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288
Jan 17 00:23:13.836204 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152
Jan 17 00:23:13.836214 kernel: pcpu-alloc: [0] 0 1 2 3
Jan 17 00:23:13.836223 kernel: kvm-guest: PV spinlocks enabled
Jan 17 00:23:13.836233 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 17 00:23:13.836244 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=5950c0a3c50d11b7bc07a3e3bf06049ed0b5a605b5e0b52a981b78f1c63eeedd
Jan 17 00:23:13.836284 kernel: random: crng init done
Jan 17 00:23:13.836295 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 17 00:23:13.836305 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 17 00:23:13.836315 kernel: Fallback order for Node 0: 0
Jan 17 00:23:13.836324 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759
Jan 17 00:23:13.836334 kernel: Policy zone: DMA32
Jan 17 00:23:13.836344 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 17 00:23:13.836354 kernel: Memory: 2400616K/2567000K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42884K init, 2312K bss, 166124K reserved, 0K cma-reserved)
Jan 17 00:23:13.836412 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 17 00:23:13.836423 kernel: ftrace: allocating 37989 entries in 149 pages
Jan 17 00:23:13.836432 kernel: ftrace: allocated 149 pages with 4 groups
Jan 17 00:23:13.836442 kernel: Dynamic Preempt: voluntary
Jan 17 00:23:13.836452 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 17 00:23:13.836476 kernel: rcu: RCU event tracing is enabled.
Jan 17 00:23:13.836490 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 17 00:23:13.836501 kernel: Trampoline variant of Tasks RCU enabled.
Jan 17 00:23:13.836511 kernel: Rude variant of Tasks RCU enabled.
Jan 17 00:23:13.836522 kernel: Tracing variant of Tasks RCU enabled.
Jan 17 00:23:13.836532 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 17 00:23:13.836542 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 17 00:23:13.836556 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jan 17 00:23:13.836566 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 17 00:23:13.836577 kernel: Console: colour dummy device 80x25
Jan 17 00:23:13.836587 kernel: printk: console [ttyS0] enabled
Jan 17 00:23:13.836632 kernel: ACPI: Core revision 20230628
Jan 17 00:23:13.836647 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 17 00:23:13.836658 kernel: APIC: Switch to symmetric I/O mode setup
Jan 17 00:23:13.836668 kernel: x2apic enabled
Jan 17 00:23:13.836678 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 17 00:23:13.836689 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jan 17 00:23:13.836699 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jan 17 00:23:13.836710 kernel: kvm-guest: setup PV IPIs
Jan 17 00:23:13.836720 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 17 00:23:13.836731 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 17 00:23:13.836744 kernel: Calibrating delay loop (skipped) preset value.. 4890.84 BogoMIPS (lpj=2445424)
Jan 17 00:23:13.836755 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 17 00:23:13.836765 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 17 00:23:13.836775 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 17 00:23:13.836786 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 17 00:23:13.836796 kernel: Spectre V2 : Mitigation: Retpolines
Jan 17 00:23:13.836806 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 17 00:23:13.836817 kernel: Speculative Store Bypass: Vulnerable
Jan 17 00:23:13.836827 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 17 00:23:13.836841 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 17 00:23:13.836852 kernel: active return thunk: srso_alias_return_thunk
Jan 17 00:23:13.836862 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 17 00:23:13.836872 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Jan 17 00:23:13.836957 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 17 00:23:13.836969 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 17 00:23:13.836980 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 17 00:23:13.836990 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 17 00:23:13.837005 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 17 00:23:13.837015 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 17 00:23:13.837026 kernel: Freeing SMP alternatives memory: 32K
Jan 17 00:23:13.837036 kernel: pid_max: default: 32768 minimum: 301
Jan 17 00:23:13.837046 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 17 00:23:13.837057 kernel: landlock: Up and running.
Jan 17 00:23:13.837067 kernel: SELinux: Initializing.
Jan 17 00:23:13.837077 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 17 00:23:13.837088 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 17 00:23:13.837102 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Jan 17 00:23:13.837112 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 17 00:23:13.837123 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 17 00:23:13.837133 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 17 00:23:13.837144 kernel: Performance Events: PMU not available due to virtualization, using software events only.
Jan 17 00:23:13.837154 kernel: signal: max sigframe size: 1776
Jan 17 00:23:13.837164 kernel: rcu: Hierarchical SRCU implementation.
Jan 17 00:23:13.837175 kernel: rcu: Max phase no-delay instances is 400.
Jan 17 00:23:13.837185 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 17 00:23:13.837199 kernel: smp: Bringing up secondary CPUs ...
Jan 17 00:23:13.837209 kernel: smpboot: x86: Booting SMP configuration:
Jan 17 00:23:13.837219 kernel: .... node #0, CPUs: #1 #2 #3
Jan 17 00:23:13.837229 kernel: smp: Brought up 1 node, 4 CPUs
Jan 17 00:23:13.837240 kernel: smpboot: Max logical packages: 1
Jan 17 00:23:13.837250 kernel: smpboot: Total of 4 processors activated (19563.39 BogoMIPS)
Jan 17 00:23:13.837260 kernel: devtmpfs: initialized
Jan 17 00:23:13.837271 kernel: x86/mm: Memory block size: 128MB
Jan 17 00:23:13.837281 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Jan 17 00:23:13.837295 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Jan 17 00:23:13.837305 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes)
Jan 17 00:23:13.837316 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Jan 17 00:23:13.837326 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Jan 17 00:23:13.837337 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 17 00:23:13.837347 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 17 00:23:13.837357 kernel: pinctrl core: initialized pinctrl subsystem
Jan 17 00:23:13.837411 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 17 00:23:13.837422 kernel: audit: initializing netlink subsys (disabled)
Jan 17 00:23:13.837437 kernel: audit: type=2000 audit(1768609386.834:1): state=initialized audit_enabled=0 res=1
Jan 17 00:23:13.837447 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 17 00:23:13.837457 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 17 00:23:13.837468 kernel: cpuidle: using governor menu
Jan 17 00:23:13.837478 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 17 00:23:13.837488 kernel: dca service started, version 1.12.1
Jan 17 00:23:13.837499 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Jan 17 00:23:13.837509 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jan 17 00:23:13.837523 kernel: PCI: Using configuration type 1 for base access
Jan 17 00:23:13.837533 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 17 00:23:13.837544 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 17 00:23:13.837554 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 17 00:23:13.837565 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 17 00:23:13.837576 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 17 00:23:13.837586 kernel: ACPI: Added _OSI(Module Device)
Jan 17 00:23:13.837596 kernel: ACPI: Added _OSI(Processor Device)
Jan 17 00:23:13.837606 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 17 00:23:13.837620 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 17 00:23:13.837631 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 17 00:23:13.837641 kernel: ACPI: Interpreter enabled
Jan 17 00:23:13.837651 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 17 00:23:13.837661 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 17 00:23:13.837672 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 17 00:23:13.837682 kernel: PCI: Using E820 reservations for host bridge windows
Jan 17 00:23:13.837692 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 17 00:23:13.837703 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 17 00:23:13.838331 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 17 00:23:13.838570 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 17 00:23:13.838731 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 17 00:23:13.838745 kernel: PCI host bridge to bus 0000:00
Jan 17 00:23:13.839300 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 17 00:23:13.839516 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 17 00:23:13.839707 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 17 00:23:13.839861 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Jan 17 00:23:13.840115 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 17 00:23:13.840685 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window]
Jan 17 00:23:13.840836 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 17 00:23:13.841290 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jan 17 00:23:13.842039 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Jan 17 00:23:13.842212 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Jan 17 00:23:13.842430 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Jan 17 00:23:13.842593 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Jan 17 00:23:13.842747 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Jan 17 00:23:13.842969 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 17 00:23:13.843214 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Jan 17 00:23:13.843703 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Jan 17 00:23:13.843882 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Jan 17 00:23:13.844138 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref]
Jan 17 00:23:13.844355 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Jan 17 00:23:13.844590 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Jan 17 00:23:13.844749 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Jan 17 00:23:13.844979 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref]
Jan 17 00:23:13.845307 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jan 17 00:23:13.845588 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Jan 17 00:23:13.845747 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Jan 17 00:23:13.845963 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref]
Jan 17 00:23:13.846131 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Jan 17 00:23:13.846467 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jan 17 00:23:13.846636 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 17 00:23:13.847723 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jan 17 00:23:13.848013 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Jan 17 00:23:13.848173 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Jan 17 00:23:13.848846 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jan 17 00:23:13.849097 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Jan 17 00:23:13.849114 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 17 00:23:13.849125 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 17 00:23:13.849135 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 17 00:23:13.849151 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 17 00:23:13.849162 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 17 00:23:13.849700 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 17 00:23:13.849713 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 17 00:23:13.849724 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 17 00:23:13.849734 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 17 00:23:13.849745 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 17 00:23:13.849755 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 17 00:23:13.849765 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 17 00:23:13.849805 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 17 00:23:13.849815 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 17 00:23:13.849826 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 17 00:23:13.849836 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 17 00:23:13.849876 kernel: iommu: Default domain type: Translated
Jan 17 00:23:13.849887 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 17 00:23:13.849960 kernel: efivars: Registered efivars operations
Jan 17 00:23:13.849971 kernel: PCI: Using ACPI for IRQ routing
Jan 17 00:23:13.849982 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 17 00:23:13.849996 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Jan 17 00:23:13.850007 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff]
Jan 17 00:23:13.850017 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff]
Jan 17 00:23:13.850028 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff]
Jan 17 00:23:13.850231 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 17 00:23:13.851109 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 17 00:23:13.851270 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 17 00:23:13.851283 kernel: vgaarb: loaded
Jan 17 00:23:13.851300 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 17 00:23:13.851311 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 17 00:23:13.851321 kernel: clocksource: Switched to clocksource kvm-clock
Jan 17 00:23:13.851332 kernel: VFS: Disk quotas dquot_6.6.0
Jan 17 00:23:13.851342 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 17 00:23:13.851353 kernel: pnp: PnP ACPI init
Jan 17 00:23:13.852216 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 17 00:23:13.852236 kernel: pnp: PnP ACPI: found 6 devices
Jan 17 00:23:13.852247 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 17 00:23:13.852291 kernel: NET: Registered PF_INET protocol family
Jan 17 00:23:13.852302 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 17 00:23:13.852312 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 17 00:23:13.852323 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 17 00:23:13.852333 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 17 00:23:13.852344 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 17 00:23:13.852354 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 17 00:23:13.852416 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 17 00:23:13.852432 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 17 00:23:13.852443 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 17 00:23:13.852482 kernel: NET: Registered PF_XDP protocol family
Jan 17 00:23:13.852833 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Jan 17 00:23:13.853131 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Jan 17 00:23:13.853431 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 17 00:23:13.853681 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 17 00:23:13.854140 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 17 00:23:13.854305 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Jan 17 00:23:13.854517 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 17 00:23:13.854665 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window]
Jan 17 00:23:13.854679 kernel: PCI: CLS 0 bytes, default 64
Jan 17 00:23:13.854690 kernel: Initialise system trusted keyrings
Jan 17 00:23:13.854700 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 17 00:23:13.854711 kernel: Key type asymmetric registered
Jan 17 00:23:13.854721 kernel: Asymmetric key parser 'x509' registered
Jan 17 00:23:13.854732 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 17 00:23:13.854747 kernel: io scheduler mq-deadline registered
Jan 17 00:23:13.854758 kernel: io scheduler kyber registered
Jan 17 00:23:13.854768 kernel: io scheduler bfq registered
Jan 17 00:23:13.854779 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 17 00:23:13.854790 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 17 00:23:13.854800 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 17 00:23:13.854811 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jan 17 00:23:13.854821 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 17 00:23:13.854831 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 17 00:23:13.854846 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 17 00:23:13.854856 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 17 00:23:13.854867 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 17 00:23:13.854877 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 17 00:23:13.855322 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 17 00:23:13.855534 kernel: rtc_cmos 00:04: registered as rtc0
Jan 17 00:23:13.855684 kernel: rtc_cmos 00:04: setting system clock to 2026-01-17T00:23:12 UTC (1768609392)
Jan 17 00:23:13.855832 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jan 17 00:23:13.855852 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 17 00:23:13.855862 kernel: efifb: probing for efifb
Jan 17 00:23:13.855873 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k
Jan 17 00:23:13.855883 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1
Jan 17 00:23:13.855894 kernel: efifb: scrolling: redraw
Jan 17 00:23:13.855978 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0
Jan 17 00:23:13.855988 kernel: Console: switching to colour frame buffer device 100x37
Jan 17 00:23:13.855999 kernel: fb0: EFI VGA frame buffer device
Jan 17 00:23:13.856009 kernel: pstore: Using crash dump compression: deflate
Jan 17 00:23:13.856024 kernel: pstore: Registered efi_pstore as persistent store backend
Jan 17 00:23:13.856034 kernel: NET: Registered PF_INET6 protocol family
Jan 17 00:23:13.856044 kernel: Segment Routing with IPv6
Jan 17 00:23:13.856055 kernel: In-situ OAM (IOAM) with IPv6
Jan 17 00:23:13.856065 kernel: NET: Registered PF_PACKET protocol family
Jan 17 00:23:13.856075 kernel: Key type dns_resolver registered
Jan 17 00:23:13.856086 kernel: IPI shorthand broadcast: enabled
Jan 17 00:23:13.856119 kernel: sched_clock: Marking stable (4972069642, 902276493)->(6801221214, -926875079)
Jan 17 00:23:13.856134 kernel: registered taskstats version 1
Jan 17 00:23:13.856148 kernel: Loading compiled-in X.509 certificates
Jan 17 00:23:13.856160 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: b6a847a3a522371f15b0d5425f12279a240740e4'
Jan 17 00:23:13.856171 kernel: Key type .fscrypt registered
Jan 17 00:23:13.856181 kernel: Key type fscrypt-provisioning registered
Jan 17 00:23:13.856192 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 17 00:23:13.856203 kernel: ima: Allocated hash algorithm: sha1
Jan 17 00:23:13.856213 kernel: ima: No architecture policies found
Jan 17 00:23:13.856224 kernel: clk: Disabling unused clocks
Jan 17 00:23:13.856235 kernel: Freeing unused kernel image (initmem) memory: 42884K
Jan 17 00:23:13.856249 kernel: Write protecting the kernel read-only data: 36864k
Jan 17 00:23:13.856260 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Jan 17 00:23:13.856271 kernel: Run /init as init process
Jan 17 00:23:13.856281 kernel: with arguments:
Jan 17 00:23:13.856292 kernel: /init
Jan 17 00:23:13.856303 kernel: with environment:
Jan 17 00:23:13.856313 kernel: HOME=/
Jan 17 00:23:13.856324 kernel: TERM=linux
Jan 17 00:23:13.856404 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 17 00:23:13.856426 systemd[1]: Detected virtualization kvm.
Jan 17 00:23:13.856438 systemd[1]: Detected architecture x86-64.
Jan 17 00:23:13.856449 systemd[1]: Running in initrd.
Jan 17 00:23:13.856460 systemd[1]: No hostname configured, using default hostname.
Jan 17 00:23:13.856471 systemd[1]: Hostname set to .
Jan 17 00:23:13.856482 systemd[1]: Initializing machine ID from VM UUID.
Jan 17 00:23:13.856497 systemd[1]: Queued start job for default target initrd.target.
Jan 17 00:23:13.856509 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 17 00:23:13.856521 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 17 00:23:13.856533 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 17 00:23:13.856544 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 17 00:23:13.856556 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 17 00:23:13.856571 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 17 00:23:13.856618 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 17 00:23:13.856630 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 17 00:23:13.856641 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 17 00:23:13.856653 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 17 00:23:13.856665 systemd[1]: Reached target paths.target - Path Units.
Jan 17 00:23:13.856680 systemd[1]: Reached target slices.target - Slice Units.
Jan 17 00:23:13.856692 systemd[1]: Reached target swap.target - Swaps.
Jan 17 00:23:13.856704 systemd[1]: Reached target timers.target - Timer Units.
Jan 17 00:23:13.856716 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 17 00:23:13.856727 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 17 00:23:13.856738 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 17 00:23:13.856750 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 17 00:23:13.856762 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 17 00:23:13.856773 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 17 00:23:13.856788 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 17 00:23:13.856800 systemd[1]: Reached target sockets.target - Socket Units.
Jan 17 00:23:13.856811 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 17 00:23:13.856823 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 17 00:23:13.856834 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 17 00:23:13.856846 systemd[1]: Starting systemd-fsck-usr.service...
Jan 17 00:23:13.856857 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 17 00:23:13.856868 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 17 00:23:13.856879 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 00:23:13.856975 systemd-journald[195]: Collecting audit messages is disabled.
Jan 17 00:23:13.857002 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 17 00:23:13.857014 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 17 00:23:13.857030 systemd[1]: Finished systemd-fsck-usr.service.
Jan 17 00:23:13.857042 systemd-journald[195]: Journal started
Jan 17 00:23:13.857065 systemd-journald[195]: Runtime Journal (/run/log/journal/ec0fb7a1f8f04d479a33317fe7dd39a1) is 6.0M, max 48.3M, 42.2M free.
Jan 17 00:23:13.856748 systemd-modules-load[196]: Inserted module 'overlay'
Jan 17 00:23:13.875972 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 17 00:23:13.878606 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 00:23:13.921492 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 00:23:13.943955 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 17 00:23:13.948137 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 17 00:23:13.961582 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 00:23:13.982153 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 17 00:23:13.993743 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 17 00:23:13.995102 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 17 00:23:14.016654 systemd-modules-load[196]: Inserted module 'br_netfilter'
Jan 17 00:23:14.020622 kernel: Bridge firewalling registered
Jan 17 00:23:14.020652 dracut-cmdline[221]: dracut-dracut-053
Jan 17 00:23:14.017993 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 17 00:23:14.046200 dracut-cmdline[221]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=5950c0a3c50d11b7bc07a3e3bf06049ed0b5a605b5e0b52a981b78f1c63eeedd
Jan 17 00:23:14.036285 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 17 00:23:14.078046 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 17 00:23:14.081116 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 17 00:23:14.124498 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 17 00:23:14.142070 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 17 00:23:14.159424 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 17 00:23:14.253655 systemd-resolved[275]: Positive Trust Anchors:
Jan 17 00:23:14.253823 systemd-resolved[275]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 17 00:23:14.253850 systemd-resolved[275]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 17 00:23:14.269836 systemd-resolved[275]: Defaulting to hostname 'linux'.
Jan 17 00:23:14.310536 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 17 00:23:14.333256 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 17 00:23:14.346847 kernel: SCSI subsystem initialized
Jan 17 00:23:14.358999 kernel: Loading iSCSI transport class v2.0-870.
Jan 17 00:23:14.385160 kernel: iscsi: registered transport (tcp)
Jan 17 00:23:14.426152 kernel: iscsi: registered transport (qla4xxx)
Jan 17 00:23:14.426231 kernel: QLogic iSCSI HBA Driver
Jan 17 00:23:14.576446 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 17 00:23:14.607181 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 17 00:23:14.704743 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 17 00:23:14.705701 kernel: device-mapper: uevent: version 1.0.3
Jan 17 00:23:14.705723 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 17 00:23:14.782072 kernel: raid6: avx2x4 gen() 22394 MB/s
Jan 17 00:23:14.801457 kernel: raid6: avx2x2 gen() 22123 MB/s
Jan 17 00:23:14.821765 kernel: raid6: avx2x1 gen() 14165 MB/s
Jan 17 00:23:14.821845 kernel: raid6: using algorithm avx2x4 gen() 22394 MB/s
Jan 17 00:23:14.842879 kernel: raid6: .... xor() 2526 MB/s, rmw enabled
Jan 17 00:23:14.843078 kernel: raid6: using avx2x2 recovery algorithm
Jan 17 00:23:14.880468 kernel: xor: automatically using best checksumming function avx
Jan 17 00:23:15.286073 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 17 00:23:15.306331 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 17 00:23:15.325250 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 17 00:23:15.352283 systemd-udevd[415]: Using default interface naming scheme 'v255'.
Jan 17 00:23:15.371527 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 17 00:23:15.395202 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 17 00:23:15.431284 dracut-pre-trigger[422]: rd.md=0: removing MD RAID activation
Jan 17 00:23:15.497483 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 17 00:23:15.538848 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 17 00:23:15.831475 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 17 00:23:15.884494 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 17 00:23:16.229673 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 17 00:23:16.258432 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 17 00:23:16.272549 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 17 00:23:16.284286 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 17 00:23:16.303041 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Jan 17 00:23:16.303607 kernel: cryptd: max_cpu_qlen set to 1000
Jan 17 00:23:16.308311 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 17 00:23:16.338887 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jan 17 00:23:16.346539 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 17 00:23:16.365616 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 17 00:23:16.383710 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 17 00:23:16.383743 kernel: GPT:9289727 != 19775487
Jan 17 00:23:16.383764 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 17 00:23:16.383782 kernel: GPT:9289727 != 19775487
Jan 17 00:23:16.383799 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 17 00:23:16.383827 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 17 00:23:16.366004 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 00:23:16.409257 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 00:23:16.481827 kernel: libata version 3.00 loaded.
Jan 17 00:23:16.426309 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 17 00:23:16.427029 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 00:23:16.454844 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 00:23:16.560551 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 00:23:16.598564 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 17 00:23:16.602301 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 00:23:16.674880 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 00:23:16.795795 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 00:23:16.887214 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 17 00:23:16.909223 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 00:23:16.998818 kernel: AES CTR mode by8 optimization enabled
Jan 17 00:23:17.007983 kernel: ahci 0000:00:1f.2: version 3.0
Jan 17 00:23:17.016157 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 17 00:23:17.073022 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jan 17 00:23:17.073068 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (476)
Jan 17 00:23:17.073098 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Jan 17 00:23:17.073418 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jan 17 00:23:17.073697 kernel: BTRFS: device fsid a67b5ac0-cdfd-426d-9386-e029282f433a devid 1 transid 33 /dev/vda3 scanned by (udev-worker) (468)
Jan 17 00:23:17.073715 kernel: scsi host0: ahci
Jan 17 00:23:17.053521 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 00:23:17.090564 kernel: scsi host1: ahci
Jan 17 00:23:17.090980 kernel: scsi host2: ahci
Jan 17 00:23:17.091239 kernel: scsi host3: ahci
Jan 17 00:23:17.076655 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 17 00:23:17.155766 kernel: scsi host4: ahci
Jan 17 00:23:17.171727 kernel: scsi host5: ahci
Jan 17 00:23:17.172106 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34
Jan 17 00:23:17.172126 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34
Jan 17 00:23:17.172141 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34
Jan 17 00:23:17.172155 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34
Jan 17 00:23:17.172169 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34
Jan 17 00:23:17.172183 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34
Jan 17 00:23:17.103361 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 17 00:23:17.198627 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 17 00:23:17.221582 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 17 00:23:17.274136 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 17 00:23:17.347497 disk-uuid[571]: Primary Header is updated.
Jan 17 00:23:17.347497 disk-uuid[571]: Secondary Entries is updated.
Jan 17 00:23:17.347497 disk-uuid[571]: Secondary Header is updated.
Jan 17 00:23:17.383346 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 17 00:23:17.408060 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 17 00:23:17.487672 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jan 17 00:23:17.494119 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jan 17 00:23:17.508690 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jan 17 00:23:17.508740 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jan 17 00:23:17.560020 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jan 17 00:23:17.560099 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jan 17 00:23:17.560124 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 17 00:23:17.560139 kernel: ata3.00: applying bridge limits
Jan 17 00:23:17.563983 kernel: ata3.00: configured for UDMA/100
Jan 17 00:23:17.585074 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jan 17 00:23:17.709726 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 17 00:23:17.710497 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 17 00:23:17.736997 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Jan 17 00:23:18.414349 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 17 00:23:18.421520 disk-uuid[572]: The operation has completed successfully.
Jan 17 00:23:18.542122 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 17 00:23:18.543676 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 17 00:23:18.596327 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 17 00:23:18.627873 sh[596]: Success
Jan 17 00:23:18.762344 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Jan 17 00:23:18.942313 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 17 00:23:18.989668 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 17 00:23:18.995179 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 17 00:23:19.101515 kernel: BTRFS info (device dm-0): first mount of filesystem a67b5ac0-cdfd-426d-9386-e029282f433a
Jan 17 00:23:19.101601 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 17 00:23:19.101619 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 17 00:23:19.112566 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 17 00:23:19.112670 kernel: BTRFS info (device dm-0): using free space tree
Jan 17 00:23:19.195639 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 17 00:23:19.219439 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 17 00:23:19.258368 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 17 00:23:19.281309 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 17 00:23:19.311007 kernel: BTRFS info (device vda6): first mount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0
Jan 17 00:23:19.311066 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 17 00:23:19.318277 kernel: BTRFS info (device vda6): using free space tree
Jan 17 00:23:19.352182 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 17 00:23:19.378428 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 17 00:23:19.401535 kernel: BTRFS info (device vda6): last unmount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0
Jan 17 00:23:19.440223 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 17 00:23:19.475025 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 17 00:23:19.726633 ignition[708]: Ignition 2.19.0
Jan 17 00:23:19.726743 ignition[708]: Stage: fetch-offline
Jan 17 00:23:19.734689 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 17 00:23:19.726809 ignition[708]: no configs at "/usr/lib/ignition/base.d"
Jan 17 00:23:19.726824 ignition[708]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 17 00:23:19.727146 ignition[708]: parsed url from cmdline: ""
Jan 17 00:23:19.727152 ignition[708]: no config URL provided
Jan 17 00:23:19.727160 ignition[708]: reading system config file "/usr/lib/ignition/user.ign"
Jan 17 00:23:19.727174 ignition[708]: no config at "/usr/lib/ignition/user.ign"
Jan 17 00:23:19.727283 ignition[708]: op(1): [started] loading QEMU firmware config module
Jan 17 00:23:19.727290 ignition[708]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jan 17 00:23:19.790565 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 17 00:23:19.807654 ignition[708]: op(1): [finished] loading QEMU firmware config module
Jan 17 00:23:19.850419 systemd-networkd[784]: lo: Link UP
Jan 17 00:23:19.850494 systemd-networkd[784]: lo: Gained carrier
Jan 17 00:23:19.857093 systemd-networkd[784]: Enumeration completed
Jan 17 00:23:19.861965 systemd-networkd[784]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 00:23:19.861971 systemd-networkd[784]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 17 00:23:19.864836 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 17 00:23:19.870080 systemd[1]: Reached target network.target - Network.
Jan 17 00:23:19.899691 systemd-networkd[784]: eth0: Link UP
Jan 17 00:23:19.899700 systemd-networkd[784]: eth0: Gained carrier
Jan 17 00:23:19.899718 systemd-networkd[784]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 00:23:19.978166 systemd-networkd[784]: eth0: DHCPv4 address 10.0.0.56/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 17 00:23:20.236228 ignition[708]: parsing config with SHA512: 499a26439fc2b8cb057b5638bb3e0515b29bafc41acc01049fc85dcc1708942a7ad221f7c23c528dd77ed6f51ac40cbdb767af2ed7500c5705b8f17a86003390
Jan 17 00:23:20.244061 unknown[708]: fetched base config from "system"
Jan 17 00:23:20.244308 unknown[708]: fetched user config from "qemu"
Jan 17 00:23:20.245031 ignition[708]: fetch-offline: fetch-offline passed
Jan 17 00:23:20.256416 systemd-resolved[275]: Detected conflict on linux IN A 10.0.0.56
Jan 17 00:23:20.245199 ignition[708]: Ignition finished successfully
Jan 17 00:23:20.256433 systemd-resolved[275]: Hostname conflict, changing published hostname from 'linux' to 'linux4'.
Jan 17 00:23:20.257293 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 17 00:23:20.264691 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jan 17 00:23:20.287254 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 17 00:23:20.366206 ignition[789]: Ignition 2.19.0
Jan 17 00:23:20.366260 ignition[789]: Stage: kargs
Jan 17 00:23:20.366553 ignition[789]: no configs at "/usr/lib/ignition/base.d"
Jan 17 00:23:20.366573 ignition[789]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 17 00:23:20.388483 ignition[789]: kargs: kargs passed
Jan 17 00:23:20.388584 ignition[789]: Ignition finished successfully
Jan 17 00:23:20.398635 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 17 00:23:20.417813 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 17 00:23:20.467725 ignition[797]: Ignition 2.19.0
Jan 17 00:23:20.467781 ignition[797]: Stage: disks
Jan 17 00:23:20.468178 ignition[797]: no configs at "/usr/lib/ignition/base.d"
Jan 17 00:23:20.468200 ignition[797]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 17 00:23:20.482729 ignition[797]: disks: disks passed
Jan 17 00:23:20.482818 ignition[797]: Ignition finished successfully
Jan 17 00:23:20.492740 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 17 00:23:20.505030 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 17 00:23:20.514824 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 17 00:23:20.530828 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 17 00:23:20.544120 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 17 00:23:20.551699 systemd[1]: Reached target basic.target - Basic System.
Jan 17 00:23:20.576741 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 17 00:23:20.654615 systemd-fsck[809]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 17 00:23:20.678196 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 17 00:23:20.716732 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 17 00:23:21.240674 kernel: EXT4-fs (vda9): mounted filesystem ab055cfb-d92d-4784-aa05-26ea844796bc r/w with ordered data mode. Quota mode: none.
Jan 17 00:23:21.246508 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 17 00:23:21.258253 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 17 00:23:21.292193 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 17 00:23:21.312695 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 17 00:23:21.341143 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (817)
Jan 17 00:23:21.330667 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 17 00:23:21.374345 kernel: BTRFS info (device vda6): first mount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0
Jan 17 00:23:21.374436 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 17 00:23:21.374470 kernel: BTRFS info (device vda6): using free space tree
Jan 17 00:23:21.330782 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 17 00:23:21.330814 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 17 00:23:21.355139 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 17 00:23:21.449443 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 17 00:23:21.476454 kernel: BTRFS info (device vda6): auto enabling async discard Jan 17 00:23:21.482631 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 17 00:23:21.635086 systemd-networkd[784]: eth0: Gained IPv6LL Jan 17 00:23:22.215483 initrd-setup-root[841]: cut: /sysroot/etc/passwd: No such file or directory Jan 17 00:23:22.262627 initrd-setup-root[848]: cut: /sysroot/etc/group: No such file or directory Jan 17 00:23:22.306558 initrd-setup-root[855]: cut: /sysroot/etc/shadow: No such file or directory Jan 17 00:23:22.348117 initrd-setup-root[862]: cut: /sysroot/etc/gshadow: No such file or directory Jan 17 00:23:22.905052 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 17 00:23:22.987761 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 17 00:23:23.044983 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 17 00:23:23.067990 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 17 00:23:23.108847 kernel: BTRFS info (device vda6): last unmount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0 Jan 17 00:23:23.351500 ignition[929]: INFO : Ignition 2.19.0 Jan 17 00:23:23.351500 ignition[929]: INFO : Stage: mount Jan 17 00:23:23.351500 ignition[929]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 00:23:23.351500 ignition[929]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 17 00:23:23.351500 ignition[929]: INFO : mount: mount passed Jan 17 00:23:23.351500 ignition[929]: INFO : Ignition finished successfully Jan 17 00:23:23.353867 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 17 00:23:23.362722 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 17 00:23:23.415479 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 17 00:23:23.445288 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 17 00:23:23.505292 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (944) Jan 17 00:23:23.522835 kernel: BTRFS info (device vda6): first mount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0 Jan 17 00:23:23.523784 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 17 00:23:23.550722 kernel: BTRFS info (device vda6): using free space tree Jan 17 00:23:23.594649 kernel: BTRFS info (device vda6): auto enabling async discard Jan 17 00:23:23.610377 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 17 00:23:24.190582 ignition[961]: INFO : Ignition 2.19.0 Jan 17 00:23:24.190582 ignition[961]: INFO : Stage: files Jan 17 00:23:24.199094 ignition[961]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 00:23:24.199094 ignition[961]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 17 00:23:24.199094 ignition[961]: DEBUG : files: compiled without relabeling support, skipping Jan 17 00:23:24.199094 ignition[961]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 17 00:23:24.199094 ignition[961]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 17 00:23:24.251796 ignition[961]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 17 00:23:24.257237 ignition[961]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 17 00:23:24.263290 unknown[961]: wrote ssh authorized keys file for user: core Jan 17 00:23:24.268886 ignition[961]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 17 00:23:24.268886 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 17 00:23:24.268886 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 17 00:23:24.268886 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 17 00:23:24.268886 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jan 17 00:23:24.553285 kernel: hrtimer: interrupt took 4034804 ns Jan 17 00:23:24.666135 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 17 00:23:25.261976 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 17 00:23:25.261976 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 17 00:23:25.261976 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jan 17 00:23:25.495816 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Jan 17 00:23:27.038944 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 17 00:23:27.047267 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh" Jan 17 00:23:27.047267 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh" Jan 17 00:23:27.047267 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 17 00:23:27.047267 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 17 00:23:27.047267 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 17 00:23:27.047267 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file 
"/sysroot/home/core/nfs-pod.yaml" Jan 17 00:23:27.047267 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 17 00:23:27.047267 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 17 00:23:27.047267 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 17 00:23:27.047267 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 17 00:23:27.047267 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 17 00:23:27.047267 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 17 00:23:27.047267 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 17 00:23:27.047267 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Jan 17 00:23:27.319670 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK Jan 17 00:23:32.042725 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 17 00:23:32.062706 ignition[961]: INFO : files: op(d): [started] processing unit "containerd.service" Jan 17 00:23:32.062706 ignition[961]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 17 00:23:32.062706 ignition[961]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 17 00:23:32.062706 ignition[961]: INFO : files: op(d): [finished] processing unit "containerd.service" Jan 17 00:23:32.062706 ignition[961]: INFO : files: op(f): [started] processing unit "prepare-helm.service" Jan 17 00:23:32.062706 ignition[961]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 17 00:23:32.062706 ignition[961]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 17 00:23:32.062706 ignition[961]: INFO : files: op(f): [finished] processing unit "prepare-helm.service" Jan 17 00:23:32.062706 ignition[961]: INFO : files: op(11): [started] processing unit "coreos-metadata.service" Jan 17 00:23:32.062706 ignition[961]: INFO : files: op(11): op(12): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 17 00:23:32.062706 ignition[961]: INFO : files: op(11): op(12): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 17 00:23:32.062706 ignition[961]: INFO : files: op(11): [finished] processing unit "coreos-metadata.service" Jan 17 00:23:32.062706 ignition[961]: INFO : files: 
op(13): [started] setting preset to disabled for "coreos-metadata.service" Jan 17 00:23:32.298235 ignition[961]: INFO : files: op(13): op(14): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 17 00:23:32.298235 ignition[961]: INFO : files: op(13): op(14): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 17 00:23:32.298235 ignition[961]: INFO : files: op(13): [finished] setting preset to disabled for "coreos-metadata.service" Jan 17 00:23:32.298235 ignition[961]: INFO : files: op(15): [started] setting preset to enabled for "prepare-helm.service" Jan 17 00:23:32.298235 ignition[961]: INFO : files: op(15): [finished] setting preset to enabled for "prepare-helm.service" Jan 17 00:23:32.298235 ignition[961]: INFO : files: createResultFile: createFiles: op(16): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 17 00:23:32.298235 ignition[961]: INFO : files: createResultFile: createFiles: op(16): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 17 00:23:32.298235 ignition[961]: INFO : files: files passed Jan 17 00:23:32.298235 ignition[961]: INFO : Ignition finished successfully Jan 17 00:23:32.701282 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 17 00:23:32.993323 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 17 00:23:33.041885 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 17 00:23:33.074600 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 17 00:23:33.074774 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 17 00:23:33.097701 initrd-setup-root-after-ignition[990]: grep: /sysroot/oem/oem-release: No such file or directory Jan 17 00:23:33.114240 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 00:23:33.114240 initrd-setup-root-after-ignition[992]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 17 00:23:33.203446 initrd-setup-root-after-ignition[995]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 00:23:33.202149 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 00:23:33.221151 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 17 00:23:33.263236 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 17 00:23:33.442105 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 17 00:23:33.442353 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 17 00:23:33.478119 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 17 00:23:33.500404 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 17 00:23:33.532826 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 17 00:23:33.604189 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 17 00:23:33.698778 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 00:23:33.757987 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 17 00:23:33.888711 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. 
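Among the artifacts the files stage wrote above are a systemd drop-in for containerd.service, preset symlinks, and the link /sysroot/etc/extensions/kubernetes.raw pointing at the downloaded sysext image. A small sketch of recreating that link layout (paths copied from the log; a temporary directory stands in for /sysroot so the sketch is safe to run anywhere):

import os
import tempfile

# Scratch directory standing in for /sysroot.
sysroot = tempfile.mkdtemp(prefix="sysroot-")
link = os.path.join(sysroot, "etc/extensions/kubernetes.raw")
target = "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"

os.makedirs(os.path.dirname(link), exist_ok=True)
os.symlink(target, link)  # dangling link is fine here; the image lives under /opt on the real system
print(os.readlink(link))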
Jan 17 00:23:33.889256 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 00:23:33.965593 systemd[1]: Stopped target timers.target - Timer Units. Jan 17 00:23:33.990485 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 17 00:23:33.991160 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 00:23:34.050572 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 17 00:23:34.055154 systemd[1]: Stopped target basic.target - Basic System. Jan 17 00:23:34.074872 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 17 00:23:34.088236 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 17 00:23:34.097001 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 17 00:23:34.142981 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 17 00:23:34.158115 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 17 00:23:34.175547 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 17 00:23:34.221192 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 17 00:23:34.245288 systemd[1]: Stopped target swap.target - Swaps. Jan 17 00:23:34.254023 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 17 00:23:34.254403 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 17 00:23:34.273063 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 17 00:23:34.285227 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 00:23:34.329269 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 17 00:23:34.343717 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 00:23:34.357354 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 17 00:23:34.357628 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 17 00:23:34.392070 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 17 00:23:34.392368 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 17 00:23:34.456019 systemd[1]: Stopped target paths.target - Path Units. Jan 17 00:23:34.460218 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 17 00:23:34.462353 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 00:23:34.470018 systemd[1]: Stopped target slices.target - Slice Units. Jan 17 00:23:34.519548 systemd[1]: Stopped target sockets.target - Socket Units. Jan 17 00:23:34.533261 systemd[1]: iscsid.socket: Deactivated successfully. Jan 17 00:23:34.533494 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 17 00:23:34.549147 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 17 00:23:34.549339 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 17 00:23:34.554159 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 17 00:23:34.554403 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 00:23:34.563814 systemd[1]: ignition-files.service: Deactivated successfully. Jan 17 00:23:34.568596 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 17 00:23:34.645225 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... 
Jan 17 00:23:34.647802 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 17 00:23:34.648202 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 00:23:34.744734 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 17 00:23:34.763228 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 17 00:23:34.763511 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 00:23:34.800694 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 17 00:23:34.801482 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 17 00:23:34.905892 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 17 00:23:34.906180 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 17 00:23:34.949785 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 17 00:23:34.994305 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 17 00:23:34.994590 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 17 00:23:35.044968 ignition[1016]: INFO : Ignition 2.19.0 Jan 17 00:23:35.044968 ignition[1016]: INFO : Stage: umount Jan 17 00:23:35.044968 ignition[1016]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 00:23:35.044968 ignition[1016]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 17 00:23:35.076559 ignition[1016]: INFO : umount: umount passed Jan 17 00:23:35.076559 ignition[1016]: INFO : Ignition finished successfully Jan 17 00:23:35.065841 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 17 00:23:35.066189 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 17 00:23:35.102335 systemd[1]: Stopped target network.target - Network. Jan 17 00:23:35.121124 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 17 00:23:35.121651 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 17 00:23:35.155115 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 17 00:23:35.155222 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 17 00:23:35.161806 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 17 00:23:35.165675 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 17 00:23:35.168277 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 17 00:23:35.168374 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 17 00:23:35.192547 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 17 00:23:35.192675 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 17 00:23:35.222650 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 17 00:23:35.288387 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 17 00:23:35.315457 systemd-networkd[784]: eth0: DHCPv6 lease lost Jan 17 00:23:35.341830 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 17 00:23:35.349046 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 17 00:23:35.401079 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 17 00:23:35.401568 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 17 00:23:35.497198 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 17 00:23:35.497305 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. 
Jan 17 00:23:35.547643 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 17 00:23:35.569002 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 17 00:23:35.569140 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 17 00:23:35.580258 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 17 00:23:35.580404 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 17 00:23:35.598736 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 17 00:23:35.599362 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 17 00:23:35.610140 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 17 00:23:35.610262 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 00:23:35.651544 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 00:23:35.722815 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 17 00:23:35.723390 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 00:23:35.756890 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 17 00:23:35.757124 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 17 00:23:35.822171 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 17 00:23:35.822402 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 17 00:23:35.874056 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 17 00:23:35.874162 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 00:23:35.896293 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 17 00:23:35.896407 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 17 00:23:35.961887 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 17 00:23:35.962224 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 17 00:23:35.986838 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 17 00:23:35.987198 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 00:23:36.054174 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 17 00:23:36.079755 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 17 00:23:36.080204 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 00:23:36.089029 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 00:23:36.089196 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:23:36.158116 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 17 00:23:36.158353 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 17 00:23:36.171240 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 17 00:23:36.203754 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 17 00:23:36.253280 systemd[1]: Switching root. Jan 17 00:23:36.321297 systemd-journald[195]: Journal stopped Jan 17 00:23:40.304209 systemd-journald[195]: Received SIGTERM from PID 1 (systemd). 
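Reading the timestamps alone, the stretch of initrd activity covered by this excerpt, from the ignition-setup unmount at 00:23:19.401535 to the journal stopping for switch-root at 00:23:36.321297, spans just under 17 seconds. A quick sketch of that arithmetic (journal timestamps carry no year, which does not affect the difference):

from datetime import datetime

FMT = "%b %d %H:%M:%S.%f"

def journal_delta(start: str, end: str) -> float:
    # Elapsed seconds between two journal-style timestamps from the log above.
    return (datetime.strptime(end, FMT) - datetime.strptime(start, FMT)).total_seconds()

print(journal_delta("Jan 17 00:23:19.401535", "Jan 17 00:23:36.321297"))  # ~16.92 s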
Jan 17 00:23:40.304323 kernel: SELinux: policy capability network_peer_controls=1 Jan 17 00:23:40.304345 kernel: SELinux: policy capability open_perms=1 Jan 17 00:23:40.304367 kernel: SELinux: policy capability extended_socket_class=1 Jan 17 00:23:40.304390 kernel: SELinux: policy capability always_check_network=0 Jan 17 00:23:40.304410 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 17 00:23:40.313845 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 17 00:23:40.313881 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 17 00:23:40.313948 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 17 00:23:40.313966 kernel: audit: type=1403 audit(1768609416.988:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 17 00:23:40.313993 systemd[1]: Successfully loaded SELinux policy in 126.207ms. Jan 17 00:23:40.314020 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 72.219ms. Jan 17 00:23:40.314043 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 17 00:23:40.314060 systemd[1]: Detected virtualization kvm. Jan 17 00:23:40.314075 systemd[1]: Detected architecture x86-64. Jan 17 00:23:40.314091 systemd[1]: Detected first boot. Jan 17 00:23:40.314111 systemd[1]: Initializing machine ID from VM UUID. Jan 17 00:23:40.314130 zram_generator::config[1077]: No configuration found. Jan 17 00:23:40.314147 systemd[1]: Populated /etc with preset unit settings. Jan 17 00:23:40.314164 systemd[1]: Queued start job for default target multi-user.target. Jan 17 00:23:40.314184 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 17 00:23:40.314202 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 17 00:23:40.314218 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 17 00:23:40.314234 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 17 00:23:40.314250 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 17 00:23:40.314266 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 17 00:23:40.314283 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 17 00:23:40.314310 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 17 00:23:40.314330 systemd[1]: Created slice user.slice - User and Session Slice. Jan 17 00:23:40.314357 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 00:23:40.314378 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 00:23:40.314399 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 17 00:23:40.314420 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 17 00:23:40.316526 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 17 00:23:40.316547 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 17 00:23:40.316563 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... 
Jan 17 00:23:40.316581 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 00:23:40.316600 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 17 00:23:40.316633 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 00:23:40.316654 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 17 00:23:40.316676 systemd[1]: Reached target slices.target - Slice Units. Jan 17 00:23:40.316694 systemd[1]: Reached target swap.target - Swaps. Jan 17 00:23:40.316710 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 17 00:23:40.316726 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 17 00:23:40.316742 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 17 00:23:40.316788 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 17 00:23:40.316811 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 17 00:23:40.316835 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 17 00:23:40.316851 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 00:23:40.316867 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 17 00:23:40.316883 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 17 00:23:40.316949 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 17 00:23:40.316967 systemd[1]: Mounting media.mount - External Media Directory... Jan 17 00:23:40.316983 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:23:40.316999 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 17 00:23:40.317021 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 17 00:23:40.317037 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 17 00:23:40.317053 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 17 00:23:40.317069 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 00:23:40.317088 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 17 00:23:40.317105 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 17 00:23:40.317123 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 00:23:40.317143 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 17 00:23:40.317161 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 00:23:40.317189 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 17 00:23:40.317208 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 00:23:40.317229 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 17 00:23:40.317251 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Jan 17 00:23:40.317272 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) 
Jan 17 00:23:40.317289 kernel: fuse: init (API version 7.39) Jan 17 00:23:40.317305 kernel: ACPI: bus type drm_connector registered Jan 17 00:23:40.317325 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 17 00:23:40.317342 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 17 00:23:40.317358 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 17 00:23:40.317374 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 17 00:23:40.317391 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 17 00:23:40.317407 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:23:40.317506 systemd-journald[1176]: Collecting audit messages is disabled. Jan 17 00:23:40.317541 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 17 00:23:40.317565 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 17 00:23:40.317616 systemd-journald[1176]: Journal started Jan 17 00:23:40.317649 systemd-journald[1176]: Runtime Journal (/run/log/journal/ec0fb7a1f8f04d479a33317fe7dd39a1) is 6.0M, max 48.3M, 42.2M free. Jan 17 00:23:40.323668 kernel: loop: module loaded Jan 17 00:23:40.364028 systemd[1]: Started systemd-journald.service - Journal Service. Jan 17 00:23:40.383403 systemd[1]: Mounted media.mount - External Media Directory. Jan 17 00:23:40.401090 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 17 00:23:40.409130 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 17 00:23:40.430221 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 17 00:23:40.437825 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 17 00:23:40.450810 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 00:23:40.465873 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 17 00:23:40.466305 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 17 00:23:40.479196 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 00:23:40.479619 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 00:23:40.491737 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 17 00:23:40.493240 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 17 00:23:40.544139 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 00:23:40.569887 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 00:23:40.676065 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 17 00:23:40.677782 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 17 00:23:40.730124 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 00:23:40.731145 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 00:23:40.743077 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 17 00:23:40.756029 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 17 00:23:40.775704 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. 
Jan 17 00:23:40.856386 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 17 00:23:40.886611 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 17 00:23:40.942108 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 17 00:23:40.959734 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 17 00:23:40.969149 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 17 00:23:40.991352 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 17 00:23:41.037646 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 00:23:41.050146 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 17 00:23:41.057477 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 00:23:41.062515 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 17 00:23:41.112347 systemd-journald[1176]: Time spent on flushing to /var/log/journal/ec0fb7a1f8f04d479a33317fe7dd39a1 is 78.484ms for 978 entries. Jan 17 00:23:41.112347 systemd-journald[1176]: System Journal (/var/log/journal/ec0fb7a1f8f04d479a33317fe7dd39a1) is 8.0M, max 195.6M, 187.6M free. Jan 17 00:23:41.290180 systemd-journald[1176]: Received client request to flush runtime journal. Jan 17 00:23:41.112582 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 17 00:23:41.174303 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 17 00:23:41.195370 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 00:23:41.212225 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 17 00:23:41.233062 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 17 00:23:41.244752 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 17 00:23:41.275258 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 17 00:23:41.295045 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 17 00:23:41.364603 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 17 00:23:41.389679 systemd-tmpfiles[1215]: ACLs are not supported, ignoring. Jan 17 00:23:41.390402 systemd-tmpfiles[1215]: ACLs are not supported, ignoring. Jan 17 00:23:41.415991 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 17 00:23:41.565801 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 17 00:23:41.586658 udevadm[1223]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 17 00:23:41.988800 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 17 00:23:42.030287 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 17 00:23:42.433281 systemd-tmpfiles[1238]: ACLs are not supported, ignoring. Jan 17 00:23:42.433327 systemd-tmpfiles[1238]: ACLs are not supported, ignoring. 
Jan 17 00:23:42.460722 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 00:23:43.962177 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 17 00:23:44.043159 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 00:23:44.319065 systemd-udevd[1244]: Using default interface naming scheme 'v255'. Jan 17 00:23:44.389498 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 00:23:44.622043 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 17 00:23:44.688662 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 17 00:23:44.748813 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Jan 17 00:23:45.704017 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 17 00:23:45.768252 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jan 17 00:23:45.776101 kernel: ACPI: button: Power Button [PWRF] Jan 17 00:23:45.815051 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1251) Jan 17 00:23:45.887256 systemd-networkd[1254]: lo: Link UP Jan 17 00:23:45.887275 systemd-networkd[1254]: lo: Gained carrier Jan 17 00:23:45.891054 systemd-networkd[1254]: Enumeration completed Jan 17 00:23:45.892503 systemd-networkd[1254]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 00:23:45.892663 systemd-networkd[1254]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 17 00:23:45.894121 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 17 00:23:45.895555 systemd-networkd[1254]: eth0: Link UP Jan 17 00:23:45.895565 systemd-networkd[1254]: eth0: Gained carrier Jan 17 00:23:45.895599 systemd-networkd[1254]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 00:23:45.946040 systemd-networkd[1254]: eth0: DHCPv4 address 10.0.0.56/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 17 00:23:45.960689 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 17 00:23:46.032357 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 17 00:23:46.062405 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Jan 17 00:23:46.063036 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 17 00:23:46.073074 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jan 17 00:23:46.073489 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 17 00:23:46.079208 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 00:23:46.109490 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 00:23:46.112747 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:23:46.145421 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jan 17 00:23:46.176879 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 00:23:46.219163 kernel: mousedev: PS/2 mouse device common for all mice Jan 17 00:23:47.074587 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
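The DHCPv4 lease logged above places eth0 at 10.0.0.56/16 with gateway 10.0.0.1. A one-off check with Python's ipaddress module (values copied from the log) confirms the gateway is on-link in the same /16:

import ipaddress

iface = ipaddress.ip_interface("10.0.0.56/16")
gateway = ipaddress.ip_address("10.0.0.1")

print(iface.network)                # 10.0.0.0/16
print(iface.network.num_addresses)  # 65536
print(gateway in iface.network)     # True, so no extra route is needed to reach it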
Jan 17 00:23:47.096575 systemd-networkd[1254]: eth0: Gained IPv6LL Jan 17 00:23:47.111580 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 17 00:23:47.377580 kernel: kvm_amd: TSC scaling supported Jan 17 00:23:47.377767 kernel: kvm_amd: Nested Virtualization enabled Jan 17 00:23:47.377813 kernel: kvm_amd: Nested Paging enabled Jan 17 00:23:47.377856 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jan 17 00:23:47.378391 kernel: kvm_amd: PMU virtualization is disabled Jan 17 00:23:47.600597 kernel: EDAC MC: Ver: 3.0.0 Jan 17 00:23:47.649388 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 17 00:23:47.686123 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 17 00:23:48.006699 lvm[1297]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 17 00:23:48.075420 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 17 00:23:48.084117 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 17 00:23:48.102202 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 17 00:23:48.129190 lvm[1300]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 17 00:23:48.192194 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 17 00:23:48.212716 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 17 00:23:48.238418 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 17 00:23:48.238539 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 17 00:23:48.252148 systemd[1]: Reached target machines.target - Containers. Jan 17 00:23:48.263818 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 17 00:23:48.281201 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 17 00:23:48.413095 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 17 00:23:48.442362 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 00:23:48.532648 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 17 00:23:48.578886 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 17 00:23:48.591788 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 17 00:23:48.604081 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 17 00:23:48.637998 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 17 00:23:48.668012 kernel: loop0: detected capacity change from 0 to 140768 Jan 17 00:23:48.696344 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 17 00:23:48.701021 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. 
Jan 17 00:23:48.760231 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 17 00:23:48.811427 kernel: loop1: detected capacity change from 0 to 224512 Jan 17 00:23:49.000627 kernel: loop2: detected capacity change from 0 to 142488 Jan 17 00:23:49.202002 kernel: loop3: detected capacity change from 0 to 140768 Jan 17 00:23:49.282751 kernel: loop4: detected capacity change from 0 to 224512 Jan 17 00:23:49.562127 kernel: loop5: detected capacity change from 0 to 142488 Jan 17 00:23:49.707646 (sd-merge)[1321]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jan 17 00:23:49.709288 (sd-merge)[1321]: Merged extensions into '/usr'. Jan 17 00:23:49.721084 systemd[1]: Reloading requested from client PID 1309 ('systemd-sysext') (unit systemd-sysext.service)... Jan 17 00:23:49.721593 systemd[1]: Reloading... Jan 17 00:23:49.865011 zram_generator::config[1346]: No configuration found. Jan 17 00:23:50.799106 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 00:23:50.862677 ldconfig[1305]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 17 00:23:50.939165 systemd[1]: Reloading finished in 1216 ms. Jan 17 00:23:50.973312 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 17 00:23:50.982789 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 17 00:23:51.181993 systemd[1]: Starting ensure-sysext.service... Jan 17 00:23:51.278546 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 17 00:23:51.461807 systemd[1]: Reloading requested from client PID 1392 ('systemctl') (unit ensure-sysext.service)... Jan 17 00:23:51.463494 systemd[1]: Reloading... Jan 17 00:23:51.572968 systemd-tmpfiles[1393]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 17 00:23:51.574225 systemd-tmpfiles[1393]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 17 00:23:51.754338 systemd-tmpfiles[1393]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 17 00:23:51.754861 systemd-tmpfiles[1393]: ACLs are not supported, ignoring. Jan 17 00:23:51.755074 systemd-tmpfiles[1393]: ACLs are not supported, ignoring. Jan 17 00:23:51.760957 systemd-tmpfiles[1393]: Detected autofs mount point /boot during canonicalization of boot. Jan 17 00:23:51.760979 systemd-tmpfiles[1393]: Skipping /boot Jan 17 00:23:51.811023 zram_generator::config[1418]: No configuration found. Jan 17 00:23:51.813160 systemd-tmpfiles[1393]: Detected autofs mount point /boot during canonicalization of boot. Jan 17 00:23:51.813311 systemd-tmpfiles[1393]: Skipping /boot Jan 17 00:23:52.308402 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 00:23:52.436423 systemd[1]: Reloading finished in 971 ms. Jan 17 00:23:52.617857 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 00:23:52.730794 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 17 00:23:52.896533 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... 
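The (sd-merge) lines above show systemd-sysext activating the containerd-flatcar, docker-flatcar and kubernetes extensions and merging them into /usr; the preceding loop0 through loop5 capacity changes are those images being attached. A rough sketch of auditing which extension images are visible, assuming the usual sysext search directories (only /etc/extensions is confirmed by this log; the other paths are background knowledge, not taken from it):

from pathlib import Path

SEARCH_DIRS = [Path("/etc/extensions"), Path("/run/extensions"), Path("/var/lib/extensions")]

def visible_extensions():
    # Collect extension image and directory names from the directories sysext consults.
    found = []
    for d in SEARCH_DIRS:
        if d.is_dir():
            found.extend(sorted(p.name for p in d.iterdir()))
    return found

print(visible_extensions())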
Jan 17 00:23:52.906817 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 17 00:23:52.922156 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 17 00:23:52.961139 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 17 00:23:52.973673 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:23:52.974553 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 00:23:52.977222 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 00:23:52.998338 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 00:23:53.009496 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 00:23:53.015423 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 00:23:53.018839 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:23:53.019847 augenrules[1493]: No rules Jan 17 00:23:53.022663 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 17 00:23:53.025893 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 17 00:23:53.056765 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 00:23:53.057183 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 00:23:53.062977 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 00:23:53.063310 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 00:23:53.071295 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 00:23:53.071710 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 00:23:53.091267 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 17 00:23:53.103123 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:23:53.103597 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 00:23:53.113595 systemd-resolved[1477]: Positive Trust Anchors: Jan 17 00:23:53.113645 systemd-resolved[1477]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 17 00:23:53.113689 systemd-resolved[1477]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 17 00:23:53.118384 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 00:23:53.122101 systemd-resolved[1477]: Defaulting to hostname 'linux'. Jan 17 00:23:53.148620 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
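The positive trust anchor systemd-resolved reports above is the well-known root-zone DNSSEC key (key tag 20326, algorithm 8, digest type 2). A small sketch that splits that DS record into its fields, expanding the numeric codes from their RFC meanings (the record text is copied verbatim from the log):

# DS record as logged by systemd-resolved above.
ds = ". IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d"

ALGORITHMS = {8: "RSA/SHA-256"}   # RFC 5702
DIGEST_TYPES = {2: "SHA-256"}     # RFC 4509

owner, _cls, _rtype, key_tag, alg, digest_type, digest = ds.split()
print(f"owner={owner!r} key_tag={key_tag} "
      f"algorithm={ALGORITHMS[int(alg)]} digest_type={DIGEST_TYPES[int(digest_type)]} "
      f"digest_bits={len(digest) * 4}")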
Jan 17 00:23:53.157254 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 00:23:53.176828 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 00:23:53.193580 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 00:23:53.198331 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 17 00:23:53.214166 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:23:53.217080 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 17 00:23:53.223655 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 17 00:23:53.241578 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 00:23:53.241846 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 00:23:53.248975 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 17 00:23:53.249336 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 17 00:23:53.256402 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 00:23:53.256773 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 00:23:53.270310 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 00:23:53.270976 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 00:23:53.277447 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 17 00:23:53.288716 systemd[1]: Finished ensure-sysext.service. Jan 17 00:23:53.305381 systemd[1]: Reached target network.target - Network. Jan 17 00:23:53.309583 systemd[1]: Reached target network-online.target - Network is Online. Jan 17 00:23:53.315563 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 17 00:23:53.328852 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 00:23:53.330619 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 00:23:53.348217 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 17 00:23:53.357995 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 17 00:23:53.466331 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 17 00:23:53.474035 systemd-timesyncd[1529]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 17 00:23:53.474106 systemd-timesyncd[1529]: Initial clock synchronization to Sat 2026-01-17 00:23:53.497249 UTC. Jan 17 00:23:53.476804 systemd[1]: Reached target sysinit.target - System Initialization. Jan 17 00:23:53.491444 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 17 00:23:53.512633 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 17 00:23:53.520334 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. 
Jan 17 00:23:53.530764 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 17 00:23:53.530818 systemd[1]: Reached target paths.target - Path Units. Jan 17 00:23:53.537020 systemd[1]: Reached target time-set.target - System Time Set. Jan 17 00:23:53.543593 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 17 00:23:53.551717 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 17 00:23:53.559514 systemd[1]: Reached target timers.target - Timer Units. Jan 17 00:23:53.570133 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 17 00:23:53.581856 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 17 00:23:53.594134 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 17 00:23:53.606746 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 17 00:23:53.615610 systemd[1]: Reached target sockets.target - Socket Units. Jan 17 00:23:53.620155 systemd[1]: Reached target basic.target - Basic System. Jan 17 00:23:53.627232 systemd[1]: System is tainted: cgroupsv1 Jan 17 00:23:53.627318 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 17 00:23:53.627361 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 17 00:23:53.631225 systemd[1]: Starting containerd.service - containerd container runtime... Jan 17 00:23:53.644289 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 17 00:23:53.658605 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 17 00:23:53.669964 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 17 00:23:53.680655 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 17 00:23:53.680963 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 17 00:23:53.691045 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:23:53.703163 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 17 00:23:53.713657 jq[1537]: false Jan 17 00:23:53.714343 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 17 00:23:53.723736 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 17 00:23:53.739310 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
Jan 17 00:23:53.758269 extend-filesystems[1539]: Found loop3 Jan 17 00:23:53.758269 extend-filesystems[1539]: Found loop4 Jan 17 00:23:53.758269 extend-filesystems[1539]: Found loop5 Jan 17 00:23:53.758269 extend-filesystems[1539]: Found sr0 Jan 17 00:23:53.758269 extend-filesystems[1539]: Found vda Jan 17 00:23:53.758269 extend-filesystems[1539]: Found vda1 Jan 17 00:23:53.758269 extend-filesystems[1539]: Found vda2 Jan 17 00:23:53.758269 extend-filesystems[1539]: Found vda3 Jan 17 00:23:53.758269 extend-filesystems[1539]: Found usr Jan 17 00:23:53.758269 extend-filesystems[1539]: Found vda4 Jan 17 00:23:53.758269 extend-filesystems[1539]: Found vda6 Jan 17 00:23:53.758269 extend-filesystems[1539]: Found vda7 Jan 17 00:23:53.758269 extend-filesystems[1539]: Found vda9 Jan 17 00:23:53.758269 extend-filesystems[1539]: Checking size of /dev/vda9 Jan 17 00:23:53.850850 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 17 00:23:53.772151 dbus-daemon[1535]: [system] SELinux support is enabled Jan 17 00:23:53.858975 extend-filesystems[1539]: Resized partition /dev/vda9 Jan 17 00:23:53.759176 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 17 00:23:53.879962 extend-filesystems[1567]: resize2fs 1.47.1 (20-May-2024) Jan 17 00:23:53.804317 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 17 00:23:53.819079 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 17 00:23:53.821653 systemd[1]: Starting update-engine.service - Update Engine... Jan 17 00:23:53.858718 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 17 00:23:53.893953 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 17 00:23:53.910954 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1576) Jan 17 00:23:53.911043 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 17 00:23:53.911073 jq[1574]: true Jan 17 00:23:53.927654 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 17 00:23:53.928657 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 17 00:23:53.952174 update_engine[1573]: I20260117 00:23:53.948766 1573 main.cc:92] Flatcar Update Engine starting Jan 17 00:23:53.941746 systemd[1]: motdgen.service: Deactivated successfully. Jan 17 00:23:53.943824 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 17 00:23:53.955207 extend-filesystems[1567]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 17 00:23:53.955207 extend-filesystems[1567]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 17 00:23:53.955207 extend-filesystems[1567]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jan 17 00:23:53.993416 extend-filesystems[1539]: Resized filesystem in /dev/vda9 Jan 17 00:23:54.005798 update_engine[1573]: I20260117 00:23:53.958498 1573 update_check_scheduler.cc:74] Next update check in 3m4s Jan 17 00:23:54.002602 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 17 00:23:54.010693 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 17 00:23:54.036420 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 17 00:23:54.069656 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
Jan 17 00:23:54.070378 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 17 00:23:54.087284 systemd-logind[1570]: Watching system buttons on /dev/input/event1 (Power Button) Jan 17 00:23:54.087317 systemd-logind[1570]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 17 00:23:54.089878 systemd-logind[1570]: New seat seat0. Jan 17 00:23:54.110961 systemd[1]: Started systemd-logind.service - User Login Management. Jan 17 00:23:54.158382 jq[1590]: true Jan 17 00:23:54.159748 (ntainerd)[1592]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 17 00:23:54.160813 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 17 00:23:54.161450 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 17 00:23:54.236627 tar[1589]: linux-amd64/LICENSE Jan 17 00:23:54.236627 tar[1589]: linux-amd64/helm Jan 17 00:23:54.246978 dbus-daemon[1535]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 17 00:23:54.249191 systemd[1]: Started update-engine.service - Update Engine. Jan 17 00:23:54.263618 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 17 00:23:54.264479 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 17 00:23:54.265783 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 17 00:23:54.271314 sshd_keygen[1571]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 17 00:23:54.280782 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 17 00:23:54.311205 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 17 00:23:54.339198 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 17 00:23:54.657614 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 17 00:23:54.694637 bash[1633]: Updated "/home/core/.ssh/authorized_keys" Jan 17 00:23:54.700009 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 17 00:23:54.714056 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 17 00:23:54.751515 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 17 00:23:54.766087 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 17 00:23:55.102181 systemd[1]: issuegen.service: Deactivated successfully. Jan 17 00:23:55.102890 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 17 00:23:55.122607 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 17 00:23:55.478360 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 17 00:23:55.495952 locksmithd[1632]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 17 00:23:55.506288 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 17 00:23:55.514237 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. 
Jan 17 00:23:55.519326 systemd[1]: Reached target getty.target - Login Prompts. Jan 17 00:23:56.753415 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 17 00:23:56.780456 systemd[1]: Started sshd@0-10.0.0.56:22-10.0.0.1:32884.service - OpenSSH per-connection server daemon (10.0.0.1:32884). Jan 17 00:23:57.216386 sshd[1662]: Accepted publickey for core from 10.0.0.1 port 32884 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:23:57.240813 containerd[1592]: time="2026-01-17T00:23:57.240260134Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 17 00:23:57.243593 sshd[1662]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:23:57.263448 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 17 00:23:57.276750 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 17 00:23:57.289234 systemd-logind[1570]: New session 1 of user core. Jan 17 00:23:57.305428 containerd[1592]: time="2026-01-17T00:23:57.305372471Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 17 00:23:57.315679 containerd[1592]: time="2026-01-17T00:23:57.315622352Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.119-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:23:57.315806 containerd[1592]: time="2026-01-17T00:23:57.315787108Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 17 00:23:57.316104 containerd[1592]: time="2026-01-17T00:23:57.316075845Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 17 00:23:57.388793 containerd[1592]: time="2026-01-17T00:23:57.388558603Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 17 00:23:57.390466 containerd[1592]: time="2026-01-17T00:23:57.390444725Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 17 00:23:57.390703 containerd[1592]: time="2026-01-17T00:23:57.390682254Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:23:57.390757 containerd[1592]: time="2026-01-17T00:23:57.390745328Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 17 00:23:57.391532 containerd[1592]: time="2026-01-17T00:23:57.391507540Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:23:57.391609 containerd[1592]: time="2026-01-17T00:23:57.391595580Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 17 00:23:57.391723 containerd[1592]: time="2026-01-17T00:23:57.391697372Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:23:57.391797 containerd[1592]: time="2026-01-17T00:23:57.391780738Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 17 00:23:57.392126 containerd[1592]: time="2026-01-17T00:23:57.392103891Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 17 00:23:57.392981 containerd[1592]: time="2026-01-17T00:23:57.392960050Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 17 00:23:57.393321 containerd[1592]: time="2026-01-17T00:23:57.393298850Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:23:57.393391 containerd[1592]: time="2026-01-17T00:23:57.393371153Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 17 00:23:57.393571 containerd[1592]: time="2026-01-17T00:23:57.393551979Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 17 00:23:57.393816 containerd[1592]: time="2026-01-17T00:23:57.393789517Z" level=info msg="metadata content store policy set" policy=shared Jan 17 00:23:57.412270 containerd[1592]: time="2026-01-17T00:23:57.412052775Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 17 00:23:57.413477 containerd[1592]: time="2026-01-17T00:23:57.412324470Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 17 00:23:57.413477 containerd[1592]: time="2026-01-17T00:23:57.413472336Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 17 00:23:57.413559 containerd[1592]: time="2026-01-17T00:23:57.413496099Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 17 00:23:57.413559 containerd[1592]: time="2026-01-17T00:23:57.413538258Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 17 00:23:57.413847 containerd[1592]: time="2026-01-17T00:23:57.413800293Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 17 00:23:57.418541 containerd[1592]: time="2026-01-17T00:23:57.418250341Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 17 00:23:57.418541 containerd[1592]: time="2026-01-17T00:23:57.418524583Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 17 00:23:57.418640 containerd[1592]: time="2026-01-17T00:23:57.418545617Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 17 00:23:57.418640 containerd[1592]: time="2026-01-17T00:23:57.418586092Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 17 00:23:57.418640 containerd[1592]: time="2026-01-17T00:23:57.418603736Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." 
type=io.containerd.service.v1 Jan 17 00:23:57.418640 containerd[1592]: time="2026-01-17T00:23:57.418619535Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 17 00:23:57.418729 containerd[1592]: time="2026-01-17T00:23:57.418655666Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 17 00:23:57.418729 containerd[1592]: time="2026-01-17T00:23:57.418674122Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 17 00:23:57.418729 containerd[1592]: time="2026-01-17T00:23:57.418723494Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 17 00:23:57.418802 containerd[1592]: time="2026-01-17T00:23:57.418741339Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 17 00:23:57.418802 containerd[1592]: time="2026-01-17T00:23:57.418776787Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 17 00:23:57.418802 containerd[1592]: time="2026-01-17T00:23:57.418791863Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 17 00:23:57.419649 containerd[1592]: time="2026-01-17T00:23:57.418870205Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 17 00:23:57.419649 containerd[1592]: time="2026-01-17T00:23:57.418893696Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 17 00:23:57.419649 containerd[1592]: time="2026-01-17T00:23:57.418962557Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 17 00:23:57.419649 containerd[1592]: time="2026-01-17T00:23:57.418978899Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 17 00:23:57.419649 containerd[1592]: time="2026-01-17T00:23:57.419174017Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 17 00:23:57.419649 containerd[1592]: time="2026-01-17T00:23:57.419192956Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 17 00:23:57.419649 containerd[1592]: time="2026-01-17T00:23:57.419208123Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 17 00:23:57.419649 containerd[1592]: time="2026-01-17T00:23:57.419226167Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 17 00:23:57.419649 containerd[1592]: time="2026-01-17T00:23:57.419241926Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 17 00:23:57.419649 containerd[1592]: time="2026-01-17T00:23:57.419261246Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 17 00:23:57.419649 containerd[1592]: time="2026-01-17T00:23:57.419363570Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 17 00:23:57.419649 containerd[1592]: time="2026-01-17T00:23:57.419399761Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." 
type=io.containerd.grpc.v1 Jan 17 00:23:57.419649 containerd[1592]: time="2026-01-17T00:23:57.419415640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 17 00:23:57.419649 containerd[1592]: time="2026-01-17T00:23:57.419436093Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 17 00:23:57.419649 containerd[1592]: time="2026-01-17T00:23:57.419516841Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 17 00:23:57.421999 containerd[1592]: time="2026-01-17T00:23:57.419533642Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 17 00:23:57.421999 containerd[1592]: time="2026-01-17T00:23:57.419546782Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 17 00:23:57.421999 containerd[1592]: time="2026-01-17T00:23:57.419640019Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 17 00:23:57.421999 containerd[1592]: time="2026-01-17T00:23:57.419662488Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 17 00:23:57.421999 containerd[1592]: time="2026-01-17T00:23:57.419676711Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 17 00:23:57.421999 containerd[1592]: time="2026-01-17T00:23:57.419713183Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 17 00:23:57.421999 containerd[1592]: time="2026-01-17T00:23:57.419727367Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 17 00:23:57.421999 containerd[1592]: time="2026-01-17T00:23:57.419776618Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 17 00:23:57.421999 containerd[1592]: time="2026-01-17T00:23:57.419826101Z" level=info msg="NRI interface is disabled by configuration." Jan 17 00:23:57.421999 containerd[1592]: time="2026-01-17T00:23:57.419840093Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 17 00:23:57.420663 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. 
Jan 17 00:23:57.425574 containerd[1592]: time="2026-01-17T00:23:57.423766845Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 17 00:23:57.425574 containerd[1592]: time="2026-01-17T00:23:57.424093888Z" level=info msg="Connect containerd service" Jan 17 00:23:57.425574 containerd[1592]: time="2026-01-17T00:23:57.424436350Z" level=info msg="using legacy CRI server" Jan 17 00:23:57.425574 containerd[1592]: time="2026-01-17T00:23:57.424548334Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 17 00:23:57.443189 containerd[1592]: time="2026-01-17T00:23:57.442053177Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 17 00:23:57.443628 containerd[1592]: time="2026-01-17T00:23:57.443568923Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 17 00:23:57.444331 containerd[1592]: 
time="2026-01-17T00:23:57.444091668Z" level=info msg="Start subscribing containerd event" Jan 17 00:23:57.444331 containerd[1592]: time="2026-01-17T00:23:57.444320973Z" level=info msg="Start recovering state" Jan 17 00:23:57.444679 containerd[1592]: time="2026-01-17T00:23:57.444502220Z" level=info msg="Start event monitor" Jan 17 00:23:57.444679 containerd[1592]: time="2026-01-17T00:23:57.444550839Z" level=info msg="Start snapshots syncer" Jan 17 00:23:57.444679 containerd[1592]: time="2026-01-17T00:23:57.444617744Z" level=info msg="Start cni network conf syncer for default" Jan 17 00:23:57.444679 containerd[1592]: time="2026-01-17T00:23:57.444661759Z" level=info msg="Start streaming server" Jan 17 00:23:57.446890 tar[1589]: linux-amd64/README.md Jan 17 00:23:57.454859 containerd[1592]: time="2026-01-17T00:23:57.454571461Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 17 00:23:57.455390 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 17 00:23:57.455791 containerd[1592]: time="2026-01-17T00:23:57.455645810Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 17 00:23:57.461764 containerd[1592]: time="2026-01-17T00:23:57.461654435Z" level=info msg="containerd successfully booted in 0.391159s" Jan 17 00:23:57.466560 systemd[1]: Started containerd.service - containerd container runtime. Jan 17 00:23:57.501151 (systemd)[1673]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 17 00:23:57.502683 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 17 00:23:57.912479 systemd[1673]: Queued start job for default target default.target. Jan 17 00:23:57.913175 systemd[1673]: Created slice app.slice - User Application Slice. Jan 17 00:23:57.913207 systemd[1673]: Reached target paths.target - Paths. Jan 17 00:23:57.913230 systemd[1673]: Reached target timers.target - Timers. Jan 17 00:23:57.923144 systemd[1673]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 17 00:23:57.963699 systemd[1673]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 17 00:23:57.963956 systemd[1673]: Reached target sockets.target - Sockets. Jan 17 00:23:57.963993 systemd[1673]: Reached target basic.target - Basic System. Jan 17 00:23:57.964076 systemd[1673]: Reached target default.target - Main User Target. Jan 17 00:23:57.964142 systemd[1673]: Startup finished in 239ms. Jan 17 00:23:57.966248 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 17 00:23:58.368601 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 17 00:23:58.468241 systemd[1]: Started sshd@1-10.0.0.56:22-10.0.0.1:32900.service - OpenSSH per-connection server daemon (10.0.0.1:32900). Jan 17 00:23:58.956226 sshd[1690]: Accepted publickey for core from 10.0.0.1 port 32900 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:23:58.968196 sshd[1690]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:23:59.113328 systemd-logind[1570]: New session 2 of user core. Jan 17 00:23:59.129658 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 17 00:23:59.317728 sshd[1690]: pam_unix(sshd:session): session closed for user core Jan 17 00:23:59.328334 systemd[1]: Started sshd@2-10.0.0.56:22-10.0.0.1:32908.service - OpenSSH per-connection server daemon (10.0.0.1:32908). Jan 17 00:23:59.346799 systemd[1]: sshd@1-10.0.0.56:22-10.0.0.1:32900.service: Deactivated successfully. 
Jan 17 00:23:59.356272 systemd[1]: session-2.scope: Deactivated successfully. Jan 17 00:23:59.356717 systemd-logind[1570]: Session 2 logged out. Waiting for processes to exit. Jan 17 00:23:59.360364 systemd-logind[1570]: Removed session 2. Jan 17 00:23:59.389774 sshd[1695]: Accepted publickey for core from 10.0.0.1 port 32908 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:23:59.394003 sshd[1695]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:23:59.404423 systemd-logind[1570]: New session 3 of user core. Jan 17 00:23:59.419330 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 17 00:23:59.521127 sshd[1695]: pam_unix(sshd:session): session closed for user core Jan 17 00:23:59.527429 systemd[1]: sshd@2-10.0.0.56:22-10.0.0.1:32908.service: Deactivated successfully. Jan 17 00:23:59.538020 systemd[1]: session-3.scope: Deactivated successfully. Jan 17 00:23:59.540800 systemd-logind[1570]: Session 3 logged out. Waiting for processes to exit. Jan 17 00:23:59.542653 systemd-logind[1570]: Removed session 3. Jan 17 00:24:01.369650 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:24:01.370396 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 17 00:24:01.372659 systemd[1]: Startup finished in 29.123s (kernel) + 24.503s (userspace) = 53.626s. Jan 17 00:24:01.381875 (kubelet)[1718]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:24:02.981820 kubelet[1718]: E0117 00:24:02.981357 1718 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:24:02.988546 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:24:02.989084 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 00:24:09.607535 systemd[1]: Started sshd@3-10.0.0.56:22-10.0.0.1:38488.service - OpenSSH per-connection server daemon (10.0.0.1:38488). Jan 17 00:24:09.895740 sshd[1727]: Accepted publickey for core from 10.0.0.1 port 38488 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:24:09.908485 sshd[1727]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:24:09.951058 systemd-logind[1570]: New session 4 of user core. Jan 17 00:24:09.970549 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 17 00:24:10.121809 sshd[1727]: pam_unix(sshd:session): session closed for user core Jan 17 00:24:10.159080 systemd[1]: Started sshd@4-10.0.0.56:22-10.0.0.1:38492.service - OpenSSH per-connection server daemon (10.0.0.1:38492). Jan 17 00:24:10.165514 systemd[1]: sshd@3-10.0.0.56:22-10.0.0.1:38488.service: Deactivated successfully. Jan 17 00:24:10.176302 systemd[1]: session-4.scope: Deactivated successfully. Jan 17 00:24:10.177584 systemd-logind[1570]: Session 4 logged out. Waiting for processes to exit. Jan 17 00:24:10.183159 systemd-logind[1570]: Removed session 4. 
Jan 17 00:24:10.272811 sshd[1732]: Accepted publickey for core from 10.0.0.1 port 38492 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:24:10.276670 sshd[1732]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:24:10.290283 systemd-logind[1570]: New session 5 of user core. Jan 17 00:24:10.301413 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 17 00:24:10.376855 sshd[1732]: pam_unix(sshd:session): session closed for user core Jan 17 00:24:10.401765 systemd[1]: Started sshd@5-10.0.0.56:22-10.0.0.1:38494.service - OpenSSH per-connection server daemon (10.0.0.1:38494). Jan 17 00:24:10.402764 systemd[1]: sshd@4-10.0.0.56:22-10.0.0.1:38492.service: Deactivated successfully. Jan 17 00:24:10.406269 systemd[1]: session-5.scope: Deactivated successfully. Jan 17 00:24:10.412086 systemd-logind[1570]: Session 5 logged out. Waiting for processes to exit. Jan 17 00:24:10.421320 systemd-logind[1570]: Removed session 5. Jan 17 00:24:10.490505 sshd[1740]: Accepted publickey for core from 10.0.0.1 port 38494 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:24:10.493462 sshd[1740]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:24:10.509472 systemd-logind[1570]: New session 6 of user core. Jan 17 00:24:10.525093 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 17 00:24:10.627229 sshd[1740]: pam_unix(sshd:session): session closed for user core Jan 17 00:24:10.652506 systemd[1]: Started sshd@6-10.0.0.56:22-10.0.0.1:38496.service - OpenSSH per-connection server daemon (10.0.0.1:38496). Jan 17 00:24:10.653423 systemd[1]: sshd@5-10.0.0.56:22-10.0.0.1:38494.service: Deactivated successfully. Jan 17 00:24:10.665358 systemd-logind[1570]: Session 6 logged out. Waiting for processes to exit. Jan 17 00:24:10.667511 systemd[1]: session-6.scope: Deactivated successfully. Jan 17 00:24:10.672110 systemd-logind[1570]: Removed session 6. Jan 17 00:24:10.712559 sshd[1748]: Accepted publickey for core from 10.0.0.1 port 38496 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:24:10.715273 sshd[1748]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:24:10.733443 systemd-logind[1570]: New session 7 of user core. Jan 17 00:24:10.739545 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 17 00:24:10.843263 sudo[1755]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 17 00:24:10.843777 sudo[1755]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 00:24:10.871632 sudo[1755]: pam_unix(sudo:session): session closed for user root Jan 17 00:24:10.880135 sshd[1748]: pam_unix(sshd:session): session closed for user core Jan 17 00:24:10.892404 systemd[1]: Started sshd@7-10.0.0.56:22-10.0.0.1:38504.service - OpenSSH per-connection server daemon (10.0.0.1:38504). Jan 17 00:24:10.893213 systemd[1]: sshd@6-10.0.0.56:22-10.0.0.1:38496.service: Deactivated successfully. Jan 17 00:24:10.904507 systemd[1]: session-7.scope: Deactivated successfully. Jan 17 00:24:10.907893 systemd-logind[1570]: Session 7 logged out. Waiting for processes to exit. Jan 17 00:24:10.913589 systemd-logind[1570]: Removed session 7. 
Jan 17 00:24:10.958613 sshd[1757]: Accepted publickey for core from 10.0.0.1 port 38504 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:24:10.961337 sshd[1757]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:24:10.975300 systemd-logind[1570]: New session 8 of user core. Jan 17 00:24:10.988861 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 17 00:24:11.062678 sudo[1765]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 17 00:24:11.065314 sudo[1765]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 00:24:11.090366 sudo[1765]: pam_unix(sudo:session): session closed for user root Jan 17 00:24:11.108746 sudo[1764]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 17 00:24:11.110858 sudo[1764]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 00:24:11.151344 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 17 00:24:11.161101 auditctl[1768]: No rules Jan 17 00:24:11.162030 systemd[1]: audit-rules.service: Deactivated successfully. Jan 17 00:24:11.162582 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 17 00:24:11.187765 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 17 00:24:11.272315 augenrules[1787]: No rules Jan 17 00:24:11.280564 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 17 00:24:11.283229 sudo[1764]: pam_unix(sudo:session): session closed for user root Jan 17 00:24:11.295434 sshd[1757]: pam_unix(sshd:session): session closed for user core Jan 17 00:24:11.308387 systemd[1]: Started sshd@8-10.0.0.56:22-10.0.0.1:38518.service - OpenSSH per-connection server daemon (10.0.0.1:38518). Jan 17 00:24:11.309227 systemd[1]: sshd@7-10.0.0.56:22-10.0.0.1:38504.service: Deactivated successfully. Jan 17 00:24:11.322750 systemd-logind[1570]: Session 8 logged out. Waiting for processes to exit. Jan 17 00:24:11.324758 systemd[1]: session-8.scope: Deactivated successfully. Jan 17 00:24:11.329560 systemd-logind[1570]: Removed session 8. Jan 17 00:24:11.380002 sshd[1793]: Accepted publickey for core from 10.0.0.1 port 38518 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:24:11.382242 sshd[1793]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:24:11.396528 systemd-logind[1570]: New session 9 of user core. Jan 17 00:24:11.402538 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 17 00:24:11.501229 sudo[1800]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 17 00:24:11.503889 sudo[1800]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 00:24:13.205862 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 17 00:24:13.222398 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:24:16.721089 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:24:16.727238 (kubelet)[1829]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:24:17.013327 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Jan 17 00:24:17.032808 (dockerd)[1838]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 17 00:24:17.095479 kubelet[1829]: E0117 00:24:17.095287 1829 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:24:17.104099 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:24:17.104593 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 00:24:20.935405 dockerd[1838]: time="2026-01-17T00:24:20.933818809Z" level=info msg="Starting up" Jan 17 00:24:22.429150 dockerd[1838]: time="2026-01-17T00:24:22.427058346Z" level=info msg="Loading containers: start." Jan 17 00:24:23.581422 kernel: Initializing XFRM netlink socket Jan 17 00:24:23.995892 systemd-networkd[1254]: docker0: Link UP Jan 17 00:24:24.042664 dockerd[1838]: time="2026-01-17T00:24:24.042521500Z" level=info msg="Loading containers: done." Jan 17 00:24:24.139988 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck509614908-merged.mount: Deactivated successfully. Jan 17 00:24:24.154027 dockerd[1838]: time="2026-01-17T00:24:24.153803769Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 17 00:24:24.154851 dockerd[1838]: time="2026-01-17T00:24:24.154208670Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 17 00:24:24.154851 dockerd[1838]: time="2026-01-17T00:24:24.154610916Z" level=info msg="Daemon has completed initialization" Jan 17 00:24:24.308238 dockerd[1838]: time="2026-01-17T00:24:24.301802910Z" level=info msg="API listen on /run/docker.sock" Jan 17 00:24:24.311483 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 17 00:24:27.214447 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 17 00:24:27.366802 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:24:28.524780 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:24:28.549295 (kubelet)[2000]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:24:29.179160 kubelet[2000]: E0117 00:24:29.178380 2000 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:24:29.188135 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:24:29.188432 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 00:24:29.220386 containerd[1592]: time="2026-01-17T00:24:29.219217422Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\"" Jan 17 00:24:31.522327 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3510574675.mount: Deactivated successfully. 
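The repeated kubelet failures recorded above all trace to the same missing file, /var/lib/kubelet/config.yaml, which is typically written during cluster bootstrap (for example by kubeadm); until it exists, the unit exits and systemd keeps scheduling restart jobs. A minimal sketch, assuming Python 3 and a captured journal in the format above (the file name boot.log is hypothetical), for pulling those restart counters out of such a capture:

import re

# systemd records every failed kubelet start in the journal as, e.g.:
#   "systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3."
# This extracts the counter values from a captured journal like the one above.
RESTART_RE = re.compile(
    r"kubelet\.service: Scheduled restart job, restart counter is at (\d+)\."
)

def kubelet_restart_attempts(journal_text: str) -> int:
    """Return the highest restart counter seen for kubelet.service."""
    counters = [int(m.group(1)) for m in RESTART_RE.finditer(journal_text)]
    return max(counters, default=0)

if __name__ == "__main__":
    # "boot.log" is a hypothetical file holding the captured journal text.
    with open("boot.log", encoding="utf-8") as fh:
        print("kubelet restart attempts:", kubelet_restart_attempts(fh.read()))

In this capture the counter climbs to 7 before kubelet finally starts with a usable configuration further down the log.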
Jan 17 00:24:38.803073 update_engine[1573]: I20260117 00:24:38.789875 1573 update_attempter.cc:509] Updating boot flags... Jan 17 00:24:39.194207 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 17 00:24:39.230746 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:24:39.324073 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2078) Jan 17 00:24:39.492667 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2076) Jan 17 00:24:40.881254 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:24:40.917234 (kubelet)[2097]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:24:41.493553 kubelet[2097]: E0117 00:24:41.493306 2097 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:24:41.502811 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:24:41.503241 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 00:24:42.577457 containerd[1592]: time="2026-01-17T00:24:42.576107110Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:24:42.582364 containerd[1592]: time="2026-01-17T00:24:42.580067349Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=29070647" Jan 17 00:24:42.583094 containerd[1592]: time="2026-01-17T00:24:42.582893290Z" level=info msg="ImageCreate event name:\"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:24:42.590511 containerd[1592]: time="2026-01-17T00:24:42.590083970Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:24:42.591681 containerd[1592]: time="2026-01-17T00:24:42.591606296Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"29067246\" in 13.372193945s" Jan 17 00:24:42.591864 containerd[1592]: time="2026-01-17T00:24:42.591811903Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\"" Jan 17 00:24:42.598878 containerd[1592]: time="2026-01-17T00:24:42.595746050Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\"" Jan 17 00:24:49.825998 containerd[1592]: time="2026-01-17T00:24:49.825508929Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:24:49.831724 containerd[1592]: time="2026-01-17T00:24:49.829477666Z" 
level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=24993354" Jan 17 00:24:49.833243 containerd[1592]: time="2026-01-17T00:24:49.833149317Z" level=info msg="ImageCreate event name:\"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:24:49.840725 containerd[1592]: time="2026-01-17T00:24:49.840626901Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:24:49.843033 containerd[1592]: time="2026-01-17T00:24:49.842960650Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"26650388\" in 7.24462557s" Jan 17 00:24:49.843115 containerd[1592]: time="2026-01-17T00:24:49.843087492Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns image reference \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\"" Jan 17 00:24:49.892228 containerd[1592]: time="2026-01-17T00:24:49.890215748Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\"" Jan 17 00:24:51.896324 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 17 00:24:52.016843 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:24:53.279245 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:24:53.343101 (kubelet)[2125]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:24:54.134835 kubelet[2125]: E0117 00:24:54.134299 2125 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:24:54.163718 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:24:54.165956 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 17 00:24:58.218605 containerd[1592]: time="2026-01-17T00:24:58.217811597Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:24:58.226494 containerd[1592]: time="2026-01-17T00:24:58.226276055Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=19405076" Jan 17 00:24:58.228688 containerd[1592]: time="2026-01-17T00:24:58.228520679Z" level=info msg="ImageCreate event name:\"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:24:58.244875 containerd[1592]: time="2026-01-17T00:24:58.240864684Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:24:58.244875 containerd[1592]: time="2026-01-17T00:24:58.243134382Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.11\" with image id \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"21062128\" in 8.350876754s" Jan 17 00:24:58.244875 containerd[1592]: time="2026-01-17T00:24:58.244208326Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image reference \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\"" Jan 17 00:24:58.257339 containerd[1592]: time="2026-01-17T00:24:58.256385992Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\"" Jan 17 00:25:01.909206 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3201594521.mount: Deactivated successfully. 
Jan 17 00:25:03.976073 containerd[1592]: time="2026-01-17T00:25:03.974371167Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:25:03.982311 containerd[1592]: time="2026-01-17T00:25:03.981209052Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=31161899" Jan 17 00:25:03.986592 containerd[1592]: time="2026-01-17T00:25:03.983725855Z" level=info msg="ImageCreate event name:\"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:25:03.993333 containerd[1592]: time="2026-01-17T00:25:03.993243552Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:25:03.994752 containerd[1592]: time="2026-01-17T00:25:03.994653618Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"31160918\" in 5.738200404s" Jan 17 00:25:03.994997 containerd[1592]: time="2026-01-17T00:25:03.994802462Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\"" Jan 17 00:25:04.003080 containerd[1592]: time="2026-01-17T00:25:04.002739963Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 17 00:25:04.204209 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jan 17 00:25:04.229272 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:25:04.586602 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:25:04.612732 (kubelet)[2156]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:25:04.761617 kubelet[2156]: E0117 00:25:04.761462 2156 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:25:04.771821 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:25:04.772268 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 00:25:04.930174 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2237365945.mount: Deactivated successfully. 
Jan 17 00:25:12.594027 containerd[1592]: time="2026-01-17T00:25:12.592987570Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:25:12.596451 containerd[1592]: time="2026-01-17T00:25:12.596344644Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Jan 17 00:25:12.599304 containerd[1592]: time="2026-01-17T00:25:12.599059529Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:25:12.613407 containerd[1592]: time="2026-01-17T00:25:12.611836830Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:25:12.614194 containerd[1592]: time="2026-01-17T00:25:12.613875872Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 8.611034802s" Jan 17 00:25:12.614194 containerd[1592]: time="2026-01-17T00:25:12.614158667Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jan 17 00:25:12.617796 containerd[1592]: time="2026-01-17T00:25:12.617685804Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 17 00:25:14.118443 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3625169980.mount: Deactivated successfully. 
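Each completed pull above reports both the image size in bytes and the elapsed wall-clock time, so effective throughput can be read straight off the log; the coredns pull, for instance, works out to roughly 18,562,039 bytes / 8.61 s, about 2.2 MB/s. A small sketch, assuming Python 3 and the same hypothetical boot.log capture, that extracts those figures from containerd's "Pulled image" messages:

import re

# containerd logs each completed pull as:
#   ... msg="Pulled image \"<ref>\" ... size \"<bytes>\" in <duration>s"
# Match the escaped quotes as they appear in the captured journal text.
PULL_RE = re.compile(r'Pulled image \\"([^\\"]+)\\".*?size \\"(\d+)\\" in ([\d.]+)s')

def pull_throughput(journal_text: str):
    """Yield (image_ref, bytes, seconds, MB_per_s) for each pull record."""
    for ref, size, secs in PULL_RE.findall(journal_text):
        size, secs = int(size), float(secs)
        yield ref, size, secs, size / secs / 1e6

if __name__ == "__main__":
    # "boot.log" is a hypothetical file holding the captured journal text.
    with open("boot.log", encoding="utf-8") as fh:
        for ref, size, secs, rate in pull_throughput(fh.read()):
            print(f"{ref}: {size} bytes in {secs:.1f}s (~{rate:.1f} MB/s)")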
Jan 17 00:25:14.148439 containerd[1592]: time="2026-01-17T00:25:14.148306868Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:25:14.152961 containerd[1592]: time="2026-01-17T00:25:14.151791386Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jan 17 00:25:14.156774 containerd[1592]: time="2026-01-17T00:25:14.156606996Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:25:14.161321 containerd[1592]: time="2026-01-17T00:25:14.161189005Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:25:14.162639 containerd[1592]: time="2026-01-17T00:25:14.162388483Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.544664634s" Jan 17 00:25:14.162639 containerd[1592]: time="2026-01-17T00:25:14.162510039Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 17 00:25:14.165447 containerd[1592]: time="2026-01-17T00:25:14.165338469Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 17 00:25:14.950763 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Jan 17 00:25:14.966271 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:25:15.112338 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount572445103.mount: Deactivated successfully. Jan 17 00:25:15.482086 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:25:15.490369 (kubelet)[2242]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:25:15.600116 kubelet[2242]: E0117 00:25:15.600059 2242 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:25:15.605523 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:25:15.605801 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 17 00:25:20.366068 containerd[1592]: time="2026-01-17T00:25:20.365703213Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:25:20.368867 containerd[1592]: time="2026-01-17T00:25:20.368778813Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682056" Jan 17 00:25:20.370864 containerd[1592]: time="2026-01-17T00:25:20.370788052Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:25:20.387140 containerd[1592]: time="2026-01-17T00:25:20.386995014Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:25:20.391822 containerd[1592]: time="2026-01-17T00:25:20.391691741Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 6.226294752s" Jan 17 00:25:20.391822 containerd[1592]: time="2026-01-17T00:25:20.391790427Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jan 17 00:25:25.716825 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Jan 17 00:25:25.738239 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:25:26.245070 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:25:26.254276 (kubelet)[2333]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:25:26.446598 kubelet[2333]: E0117 00:25:26.446415 2333 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:25:26.457118 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:25:26.457448 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 00:25:26.780157 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:25:26.805409 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:25:26.922023 systemd[1]: Reloading requested from client PID 2350 ('systemctl') (unit session-9.scope)... Jan 17 00:25:26.922062 systemd[1]: Reloading... Jan 17 00:25:27.188025 zram_generator::config[2395]: No configuration found. Jan 17 00:25:27.480342 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 00:25:27.605785 systemd[1]: Reloading finished in 682 ms. Jan 17 00:25:27.723809 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:25:27.733613 systemd[1]: kubelet.service: Deactivated successfully. 
Jan 17 00:25:27.734151 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:25:27.751464 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:25:28.253372 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:25:28.296494 (kubelet)[2453]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 00:25:32.159500 kubelet[2453]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 00:25:32.159500 kubelet[2453]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 17 00:25:32.159500 kubelet[2453]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 00:25:32.159500 kubelet[2453]: I0117 00:25:32.160255 2453 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 00:25:33.428573 kubelet[2453]: I0117 00:25:33.425081 2453 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 17 00:25:33.428573 kubelet[2453]: I0117 00:25:33.425185 2453 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 00:25:33.428573 kubelet[2453]: I0117 00:25:33.425709 2453 server.go:954] "Client rotation is on, will bootstrap in background" Jan 17 00:25:34.147533 kubelet[2453]: E0117 00:25:34.147026 2453 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.56:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.56:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:25:34.153272 kubelet[2453]: I0117 00:25:34.152365 2453 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 00:25:34.195761 kubelet[2453]: E0117 00:25:34.193040 2453 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 17 00:25:34.195761 kubelet[2453]: I0117 00:25:34.193074 2453 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 17 00:25:34.243994 kubelet[2453]: I0117 00:25:34.243108 2453 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 17 00:25:34.244384 kubelet[2453]: I0117 00:25:34.244301 2453 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 00:25:34.244805 kubelet[2453]: I0117 00:25:34.244517 2453 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Jan 17 00:25:34.245440 kubelet[2453]: I0117 00:25:34.245418 2453 topology_manager.go:138] "Creating topology manager with none policy" Jan 17 00:25:34.251543 kubelet[2453]: I0117 00:25:34.250686 2453 container_manager_linux.go:304] "Creating device plugin manager" Jan 17 00:25:34.251543 kubelet[2453]: I0117 00:25:34.251174 2453 state_mem.go:36] "Initialized new in-memory state store" Jan 17 00:25:34.292984 kubelet[2453]: I0117 00:25:34.292703 2453 kubelet.go:446] "Attempting to sync node with API server" Jan 17 00:25:34.292984 kubelet[2453]: I0117 00:25:34.292814 2453 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 17 00:25:34.292984 kubelet[2453]: I0117 00:25:34.292857 2453 kubelet.go:352] "Adding apiserver pod source" Jan 17 00:25:34.292984 kubelet[2453]: I0117 00:25:34.292874 2453 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 00:25:34.312904 kubelet[2453]: W0117 00:25:34.311755 2453 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.56:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.56:6443: connect: connection refused Jan 17 00:25:34.345838 kubelet[2453]: E0117 00:25:34.315080 2453 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.56:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.56:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:25:34.383496 kubelet[2453]: W0117 00:25:34.344622 2453 
reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.56:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.56:6443: connect: connection refused Jan 17 00:25:34.383496 kubelet[2453]: E0117 00:25:34.353164 2453 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.56:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.56:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:25:34.480708 kubelet[2453]: I0117 00:25:34.463667 2453 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 00:25:34.480708 kubelet[2453]: I0117 00:25:34.474868 2453 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 17 00:25:34.480708 kubelet[2453]: W0117 00:25:34.475117 2453 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 17 00:25:34.492997 kubelet[2453]: I0117 00:25:34.489365 2453 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 17 00:25:34.492997 kubelet[2453]: I0117 00:25:34.489453 2453 server.go:1287] "Started kubelet" Jan 17 00:25:34.493146 kubelet[2453]: I0117 00:25:34.493023 2453 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 00:25:34.496977 kubelet[2453]: I0117 00:25:34.494782 2453 server.go:479] "Adding debug handlers to kubelet server" Jan 17 00:25:34.496977 kubelet[2453]: I0117 00:25:34.496272 2453 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 17 00:25:34.496977 kubelet[2453]: I0117 00:25:34.496696 2453 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 00:25:34.500617 kubelet[2453]: I0117 00:25:34.500413 2453 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 00:25:34.504985 kubelet[2453]: I0117 00:25:34.502891 2453 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 17 00:25:34.514253 kubelet[2453]: I0117 00:25:34.514229 2453 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 17 00:25:34.517004 kubelet[2453]: E0117 00:25:34.516978 2453 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 17 00:25:34.517982 kubelet[2453]: I0117 00:25:34.517632 2453 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 17 00:25:34.518190 kubelet[2453]: I0117 00:25:34.518174 2453 reconciler.go:26] "Reconciler: start to sync state" Jan 17 00:25:34.523479 kubelet[2453]: E0117 00:25:34.523263 2453 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.56:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.56:6443: connect: connection refused" interval="200ms" Jan 17 00:25:34.524571 kubelet[2453]: W0117 00:25:34.524525 2453 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.56:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.56:6443: connect: connection refused Jan 17 00:25:34.526615 
kubelet[2453]: E0117 00:25:34.526478 2453 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.56:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.56:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:25:34.529150 kubelet[2453]: I0117 00:25:34.527801 2453 factory.go:221] Registration of the systemd container factory successfully Jan 17 00:25:34.529420 kubelet[2453]: I0117 00:25:34.529398 2453 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 00:25:34.530100 kubelet[2453]: E0117 00:25:34.528356 2453 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 17 00:25:34.537729 kubelet[2453]: I0117 00:25:34.537693 2453 factory.go:221] Registration of the containerd container factory successfully Jan 17 00:25:34.565359 kubelet[2453]: E0117 00:25:34.518213 2453 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.56:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.56:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188b5d00499c33be default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-17 00:25:34.489408446 +0000 UTC m=+6.181372074,LastTimestamp:2026-01-17 00:25:34.489408446 +0000 UTC m=+6.181372074,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 17 00:25:34.617891 kubelet[2453]: E0117 00:25:34.617682 2453 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 17 00:25:34.684050 kubelet[2453]: I0117 00:25:34.678604 2453 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 17 00:25:34.759417 kubelet[2453]: I0117 00:25:34.718158 2453 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 17 00:25:34.759417 kubelet[2453]: E0117 00:25:34.742079 2453 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 17 00:25:34.759417 kubelet[2453]: E0117 00:25:34.744772 2453 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.56:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.56:6443: connect: connection refused" interval="400ms" Jan 17 00:25:34.773015 kubelet[2453]: I0117 00:25:34.768020 2453 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 17 00:25:34.773015 kubelet[2453]: I0117 00:25:34.768209 2453 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
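From here on, every reflector, lease, and certificate-signing call fails with "dial tcp 10.0.0.56:6443: connect: connection refused": the kubelet is up before the kube-apiserver it is about to launch as a static pod, so nothing is listening on that port yet. A small probe against the endpoint taken from the log reproduces the condition; the helper name and defaults are illustrative.

    # Sketch: probe the API server endpoint the kubelet keeps dialing in the log above.
    # Host and port come from the error messages; the helper itself is illustrative.
    import socket

    API_HOST, API_PORT = "10.0.0.56", 6443

    def apiserver_reachable(host: str = API_HOST, port: int = API_PORT, timeout: float = 2.0) -> bool:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError as exc:  # e.g. ConnectionRefusedError while the static pod is not running yet
            print(f"dial tcp {host}:{port}: {exc}")
            return False

    print(apiserver_reachable())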
Jan 17 00:25:34.773015 kubelet[2453]: I0117 00:25:34.768244 2453 kubelet.go:2382] "Starting kubelet main sync loop" Jan 17 00:25:34.773015 kubelet[2453]: E0117 00:25:34.768521 2453 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 17 00:25:34.773015 kubelet[2453]: W0117 00:25:34.769762 2453 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.56:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.56:6443: connect: connection refused Jan 17 00:25:34.773015 kubelet[2453]: E0117 00:25:34.769800 2453 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.56:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.56:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:25:34.780752 kubelet[2453]: I0117 00:25:34.780060 2453 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 17 00:25:34.786274 kubelet[2453]: I0117 00:25:34.782750 2453 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 17 00:25:34.786274 kubelet[2453]: I0117 00:25:34.782971 2453 state_mem.go:36] "Initialized new in-memory state store" Jan 17 00:25:34.845905 kubelet[2453]: E0117 00:25:34.843194 2453 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 17 00:25:34.870386 kubelet[2453]: I0117 00:25:34.847543 2453 policy_none.go:49] "None policy: Start" Jan 17 00:25:34.870386 kubelet[2453]: I0117 00:25:34.857739 2453 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 17 00:25:34.870386 kubelet[2453]: I0117 00:25:34.858339 2453 state_mem.go:35] "Initializing new in-memory state store" Jan 17 00:25:34.903175 kubelet[2453]: E0117 00:25:34.873378 2453 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 17 00:25:34.945786 kubelet[2453]: E0117 00:25:34.945162 2453 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 17 00:25:34.968238 kubelet[2453]: I0117 00:25:34.967491 2453 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 17 00:25:34.982441 kubelet[2453]: I0117 00:25:34.969470 2453 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 17 00:25:34.982441 kubelet[2453]: I0117 00:25:34.969494 2453 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 17 00:25:34.982441 kubelet[2453]: I0117 00:25:34.979647 2453 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 00:25:35.062538 kubelet[2453]: E0117 00:25:35.053767 2453 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 17 00:25:35.062538 kubelet[2453]: E0117 00:25:35.054083 2453 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 17 00:25:35.099136 kubelet[2453]: I0117 00:25:35.098435 2453 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 17 00:25:35.102810 kubelet[2453]: E0117 00:25:35.100758 2453 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.56:6443/api/v1/nodes\": dial tcp 10.0.0.56:6443: connect: connection refused" node="localhost" Jan 17 00:25:35.129732 kubelet[2453]: I0117 00:25:35.128743 2453 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1cdd34d3db4f94e625e766f0973d3c65-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"1cdd34d3db4f94e625e766f0973d3c65\") " pod="kube-system/kube-apiserver-localhost" Jan 17 00:25:35.129732 kubelet[2453]: I0117 00:25:35.129043 2453 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 00:25:35.129732 kubelet[2453]: I0117 00:25:35.129074 2453 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 00:25:35.129732 kubelet[2453]: I0117 00:25:35.129095 2453 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 00:25:35.129732 kubelet[2453]: I0117 00:25:35.129117 2453 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0b8273f45c576ca70f8db6fe540c065c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0b8273f45c576ca70f8db6fe540c065c\") " pod="kube-system/kube-scheduler-localhost" Jan 17 00:25:35.177686 kubelet[2453]: I0117 00:25:35.129191 2453 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1cdd34d3db4f94e625e766f0973d3c65-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"1cdd34d3db4f94e625e766f0973d3c65\") " pod="kube-system/kube-apiserver-localhost" Jan 17 00:25:35.177686 kubelet[2453]: I0117 00:25:35.129216 2453 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1cdd34d3db4f94e625e766f0973d3c65-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"1cdd34d3db4f94e625e766f0973d3c65\") " pod="kube-system/kube-apiserver-localhost" Jan 17 00:25:35.177686 kubelet[2453]: I0117 00:25:35.129237 2453 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 00:25:35.177686 kubelet[2453]: I0117 00:25:35.129323 2453 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 00:25:35.178058 kubelet[2453]: E0117 00:25:35.177755 2453 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.56:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.56:6443: connect: connection refused" interval="800ms" Jan 17 00:25:35.214367 kubelet[2453]: E0117 00:25:35.213786 2453 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 17 00:25:35.220988 kubelet[2453]: E0117 00:25:35.217672 2453 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 17 00:25:35.357857 kubelet[2453]: W0117 00:25:35.323644 2453 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.56:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.56:6443: connect: connection refused Jan 17 00:25:35.357857 kubelet[2453]: E0117 00:25:35.324018 2453 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.56:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.56:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:25:35.358771 kubelet[2453]: I0117 00:25:35.356887 2453 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 17 00:25:35.358771 kubelet[2453]: W0117 00:25:35.358461 2453 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.56:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.56:6443: connect: connection refused Jan 17 00:25:35.358771 kubelet[2453]: E0117 00:25:35.358561 2453 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.56:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.56:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:25:35.359630 kubelet[2453]: E0117 00:25:35.359100 2453 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.56:6443/api/v1/nodes\": dial tcp 10.0.0.56:6443: connect: connection refused" node="localhost" Jan 17 00:25:35.380272 kubelet[2453]: E0117 00:25:35.378028 2453 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 17 00:25:35.380272 kubelet[2453]: E0117 00:25:35.379626 2453 dns.go:153] "Nameserver 
limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:25:35.385851 containerd[1592]: time="2026-01-17T00:25:35.385671276Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0b8273f45c576ca70f8db6fe540c065c,Namespace:kube-system,Attempt:0,}" Jan 17 00:25:35.524629 kubelet[2453]: E0117 00:25:35.523251 2453 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:25:35.524629 kubelet[2453]: E0117 00:25:35.524346 2453 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:25:35.529036 containerd[1592]: time="2026-01-17T00:25:35.528997802Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:73f4d0ebfe2f50199eb060021cc3bcbf,Namespace:kube-system,Attempt:0,}" Jan 17 00:25:35.534558 containerd[1592]: time="2026-01-17T00:25:35.529116734Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:1cdd34d3db4f94e625e766f0973d3c65,Namespace:kube-system,Attempt:0,}" Jan 17 00:25:35.839420 kubelet[2453]: I0117 00:25:35.830539 2453 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 17 00:25:35.876244 kubelet[2453]: E0117 00:25:35.867457 2453 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.56:6443/api/v1/nodes\": dial tcp 10.0.0.56:6443: connect: connection refused" node="localhost" Jan 17 00:25:35.985133 kubelet[2453]: E0117 00:25:35.984695 2453 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.56:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.56:6443: connect: connection refused" interval="1.6s" Jan 17 00:25:36.116529 kubelet[2453]: W0117 00:25:36.108556 2453 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.56:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.56:6443: connect: connection refused Jan 17 00:25:36.116529 kubelet[2453]: E0117 00:25:36.115479 2453 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.56:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.56:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:25:36.180216 kubelet[2453]: E0117 00:25:36.180060 2453 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.56:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.56:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:25:36.378523 kubelet[2453]: W0117 00:25:36.370606 2453 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.56:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.56:6443: connect: connection refused Jan 17 00:25:36.378523 kubelet[2453]: E0117 00:25:36.370663 2453 
reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.56:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.56:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:25:36.677222 kubelet[2453]: I0117 00:25:36.676851 2453 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 17 00:25:36.678905 kubelet[2453]: E0117 00:25:36.677610 2453 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.56:6443/api/v1/nodes\": dial tcp 10.0.0.56:6443: connect: connection refused" node="localhost" Jan 17 00:25:36.697444 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1991921389.mount: Deactivated successfully. Jan 17 00:25:36.724104 containerd[1592]: time="2026-01-17T00:25:36.723508534Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:25:36.748142 containerd[1592]: time="2026-01-17T00:25:36.747404624Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 17 00:25:36.775461 containerd[1592]: time="2026-01-17T00:25:36.774717542Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:25:36.785359 containerd[1592]: time="2026-01-17T00:25:36.781131196Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:25:36.785359 containerd[1592]: time="2026-01-17T00:25:36.786134721Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 00:25:36.791401 containerd[1592]: time="2026-01-17T00:25:36.789876516Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:25:36.792554 containerd[1592]: time="2026-01-17T00:25:36.792506666Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 00:25:36.851838 containerd[1592]: time="2026-01-17T00:25:36.851125760Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:25:36.863107 containerd[1592]: time="2026-01-17T00:25:36.856811730Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.467971223s" Jan 17 00:25:36.869764 containerd[1592]: time="2026-01-17T00:25:36.869481961Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.340065235s" Jan 17 00:25:36.874621 containerd[1592]: time="2026-01-17T00:25:36.873825337Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.343691994s" Jan 17 00:25:37.395890 kubelet[2453]: W0117 00:25:37.391982 2453 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.56:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.56:6443: connect: connection refused Jan 17 00:25:37.395890 kubelet[2453]: E0117 00:25:37.392428 2453 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.56:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.56:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:25:37.590064 kubelet[2453]: E0117 00:25:37.589474 2453 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.56:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.56:6443: connect: connection refused" interval="3.2s" Jan 17 00:25:37.627395 kubelet[2453]: W0117 00:25:37.627323 2453 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.56:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.56:6443: connect: connection refused Jan 17 00:25:37.627395 kubelet[2453]: E0117 00:25:37.627394 2453 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.56:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.56:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:25:37.942504 kubelet[2453]: W0117 00:25:37.941246 2453 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.56:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.56:6443: connect: connection refused Jan 17 00:25:37.976657 kubelet[2453]: E0117 00:25:37.943567 2453 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.56:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.56:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:25:38.110605 containerd[1592]: time="2026-01-17T00:25:38.103281403Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:25:38.110605 containerd[1592]: time="2026-01-17T00:25:38.110147450Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:25:38.110605 containerd[1592]: time="2026-01-17T00:25:38.110167918Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:25:38.111690 containerd[1592]: time="2026-01-17T00:25:38.110807604Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:25:38.124591 containerd[1592]: time="2026-01-17T00:25:38.123576582Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:25:38.124591 containerd[1592]: time="2026-01-17T00:25:38.123717072Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:25:38.124591 containerd[1592]: time="2026-01-17T00:25:38.123822115Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:25:38.124591 containerd[1592]: time="2026-01-17T00:25:38.124204950Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:25:38.130324 containerd[1592]: time="2026-01-17T00:25:38.129306210Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:25:38.130324 containerd[1592]: time="2026-01-17T00:25:38.129368191Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:25:38.130324 containerd[1592]: time="2026-01-17T00:25:38.129384639Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:25:38.130324 containerd[1592]: time="2026-01-17T00:25:38.129586302Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:25:38.317832 kubelet[2453]: I0117 00:25:38.317791 2453 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 17 00:25:38.319610 kubelet[2453]: E0117 00:25:38.319529 2453 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.56:6443/api/v1/nodes\": dial tcp 10.0.0.56:6443: connect: connection refused" node="localhost" Jan 17 00:25:38.417836 kubelet[2453]: W0117 00:25:38.417384 2453 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.56:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.56:6443: connect: connection refused Jan 17 00:25:38.417836 kubelet[2453]: E0117 00:25:38.417527 2453 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.56:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.56:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:25:38.697998 containerd[1592]: time="2026-01-17T00:25:38.693682828Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0b8273f45c576ca70f8db6fe540c065c,Namespace:kube-system,Attempt:0,} returns sandbox id \"c6119d6e8fc5356ad4398739dd77b05a4d6bb524a39eec5d0b4a21cb404a7d9b\"" Jan 17 00:25:38.698631 kubelet[2453]: E0117 00:25:38.696452 2453 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:25:38.703103 containerd[1592]: time="2026-01-17T00:25:38.701110910Z" level=info msg="CreateContainer within sandbox \"c6119d6e8fc5356ad4398739dd77b05a4d6bb524a39eec5d0b4a21cb404a7d9b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 17 00:25:38.731987 containerd[1592]: time="2026-01-17T00:25:38.731855750Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:1cdd34d3db4f94e625e766f0973d3c65,Namespace:kube-system,Attempt:0,} returns sandbox id \"d439e82fa42c620b016f66a1446e91185d01fcab39b369ef40c83425b38ddc31\"" Jan 17 00:25:38.740491 kubelet[2453]: E0117 00:25:38.734327 2453 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:25:38.743882 containerd[1592]: time="2026-01-17T00:25:38.743747930Z" level=info msg="CreateContainer within sandbox \"d439e82fa42c620b016f66a1446e91185d01fcab39b369ef40c83425b38ddc31\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 17 00:25:38.778845 containerd[1592]: time="2026-01-17T00:25:38.778696764Z" level=info msg="CreateContainer within sandbox \"c6119d6e8fc5356ad4398739dd77b05a4d6bb524a39eec5d0b4a21cb404a7d9b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a332b614afef2bb41e0b0c649b40046880f7f73eb99c7f21e35067fcc3744fec\"" Jan 17 00:25:38.781804 containerd[1592]: time="2026-01-17T00:25:38.781764996Z" level=info msg="StartContainer for \"a332b614afef2bb41e0b0c649b40046880f7f73eb99c7f21e35067fcc3744fec\"" Jan 17 00:25:38.802665 containerd[1592]: time="2026-01-17T00:25:38.802469029Z" level=info msg="CreateContainer within sandbox \"d439e82fa42c620b016f66a1446e91185d01fcab39b369ef40c83425b38ddc31\" for 
&ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"4f77e0abaa1626793de262f5f407f241f1240f8388b3e1d7cdbd44a3891bda61\"" Jan 17 00:25:38.803626 containerd[1592]: time="2026-01-17T00:25:38.803510161Z" level=info msg="StartContainer for \"4f77e0abaa1626793de262f5f407f241f1240f8388b3e1d7cdbd44a3891bda61\"" Jan 17 00:25:38.807064 containerd[1592]: time="2026-01-17T00:25:38.807028963Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:73f4d0ebfe2f50199eb060021cc3bcbf,Namespace:kube-system,Attempt:0,} returns sandbox id \"6b7c8c8e9688dcb44ff6c615c89e918a5f3f3a6334c41409f154908f45cc52bb\"" Jan 17 00:25:38.808300 kubelet[2453]: E0117 00:25:38.808234 2453 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:25:38.810588 containerd[1592]: time="2026-01-17T00:25:38.810473433Z" level=info msg="CreateContainer within sandbox \"6b7c8c8e9688dcb44ff6c615c89e918a5f3f3a6334c41409f154908f45cc52bb\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 17 00:25:38.852038 containerd[1592]: time="2026-01-17T00:25:38.850070701Z" level=info msg="CreateContainer within sandbox \"6b7c8c8e9688dcb44ff6c615c89e918a5f3f3a6334c41409f154908f45cc52bb\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"1f74a73d2e1f24fc6d5a045481982c1cc3028697c6c4fc9ba225814c9e700dd4\"" Jan 17 00:25:38.852038 containerd[1592]: time="2026-01-17T00:25:38.851123460Z" level=info msg="StartContainer for \"1f74a73d2e1f24fc6d5a045481982c1cc3028697c6c4fc9ba225814c9e700dd4\"" Jan 17 00:25:39.442990 systemd[1]: run-containerd-runc-k8s.io-1f74a73d2e1f24fc6d5a045481982c1cc3028697c6c4fc9ba225814c9e700dd4-runc.MJChFT.mount: Deactivated successfully. 
Jan 17 00:25:39.472734 containerd[1592]: time="2026-01-17T00:25:39.472676836Z" level=info msg="StartContainer for \"4f77e0abaa1626793de262f5f407f241f1240f8388b3e1d7cdbd44a3891bda61\" returns successfully" Jan 17 00:25:39.529874 containerd[1592]: time="2026-01-17T00:25:39.529757960Z" level=info msg="StartContainer for \"a332b614afef2bb41e0b0c649b40046880f7f73eb99c7f21e35067fcc3744fec\" returns successfully" Jan 17 00:25:39.924872 containerd[1592]: time="2026-01-17T00:25:39.924718562Z" level=info msg="StartContainer for \"1f74a73d2e1f24fc6d5a045481982c1cc3028697c6c4fc9ba225814c9e700dd4\" returns successfully" Jan 17 00:25:40.710680 kubelet[2453]: E0117 00:25:40.701878 2453 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 17 00:25:40.766775 kubelet[2453]: E0117 00:25:40.717701 2453 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:25:40.930375 kubelet[2453]: E0117 00:25:40.889878 2453 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 17 00:25:40.930375 kubelet[2453]: E0117 00:25:40.908087 2453 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:25:41.048624 kubelet[2453]: E0117 00:25:41.047022 2453 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 17 00:25:41.048624 kubelet[2453]: E0117 00:25:41.047329 2453 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:25:41.535719 kubelet[2453]: I0117 00:25:41.535221 2453 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 17 00:25:42.251882 kubelet[2453]: E0117 00:25:42.250310 2453 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 17 00:25:42.251882 kubelet[2453]: E0117 00:25:42.250651 2453 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:25:42.251882 kubelet[2453]: E0117 00:25:42.251409 2453 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 17 00:25:42.251882 kubelet[2453]: E0117 00:25:42.251516 2453 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:25:42.251882 kubelet[2453]: E0117 00:25:42.252026 2453 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 17 00:25:42.251882 kubelet[2453]: E0117 00:25:42.252145 2453 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:25:43.399589 kubelet[2453]: E0117 00:25:43.397186 2453 kubelet.go:3190] "No 
need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 17 00:25:43.399589 kubelet[2453]: E0117 00:25:43.397446 2453 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:25:43.399589 kubelet[2453]: E0117 00:25:43.397843 2453 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 17 00:25:43.399589 kubelet[2453]: E0117 00:25:43.398801 2453 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 17 00:25:43.399589 kubelet[2453]: E0117 00:25:43.399153 2453 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:25:43.404320 kubelet[2453]: E0117 00:25:43.398020 2453 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:25:45.063724 kubelet[2453]: E0117 00:25:45.062702 2453 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 17 00:25:45.220758 kubelet[2453]: E0117 00:25:45.217564 2453 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 17 00:25:45.324240 kubelet[2453]: I0117 00:25:45.316848 2453 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 17 00:25:45.324240 kubelet[2453]: I0117 00:25:45.318339 2453 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 17 00:25:45.364375 kubelet[2453]: E0117 00:25:45.363734 2453 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.188b5d00499c33be default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-17 00:25:34.489408446 +0000 UTC m=+6.181372074,LastTimestamp:2026-01-17 00:25:34.489408446 +0000 UTC m=+6.181372074,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 17 00:25:45.364375 kubelet[2453]: E0117 00:25:45.364207 2453 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jan 17 00:25:45.364375 kubelet[2453]: I0117 00:25:45.364228 2453 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 17 00:25:45.370414 kubelet[2453]: E0117 00:25:45.370255 2453 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jan 17 00:25:45.370414 kubelet[2453]: I0117 00:25:45.370283 2453 
kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 17 00:25:45.374178 kubelet[2453]: E0117 00:25:45.374096 2453 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jan 17 00:25:45.585287 kubelet[2453]: I0117 00:25:45.585143 2453 apiserver.go:52] "Watching apiserver" Jan 17 00:25:45.624352 kubelet[2453]: I0117 00:25:45.624190 2453 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 17 00:25:47.862116 kubelet[2453]: I0117 00:25:47.828798 2453 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 17 00:25:48.599878 kubelet[2453]: E0117 00:25:48.597431 2453 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:25:49.595946 kubelet[2453]: E0117 00:25:49.594878 2453 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:25:49.857071 systemd[1]: Reloading requested from client PID 2729 ('systemctl') (unit session-9.scope)... Jan 17 00:25:49.857136 systemd[1]: Reloading... Jan 17 00:25:50.291044 zram_generator::config[2774]: No configuration found. Jan 17 00:25:50.913532 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 00:25:51.269856 kubelet[2453]: I0117 00:25:51.269515 2453 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 17 00:25:51.301756 kubelet[2453]: E0117 00:25:51.295037 2453 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:25:51.346797 systemd[1]: Reloading finished in 1488 ms. Jan 17 00:25:51.404795 kubelet[2453]: I0117 00:25:51.396035 2453 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.396011803 podStartE2EDuration="3.396011803s" podCreationTimestamp="2026-01-17 00:25:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:25:51.354572133 +0000 UTC m=+23.046535762" watchObservedRunningTime="2026-01-17 00:25:51.396011803 +0000 UTC m=+23.087975431" Jan 17 00:25:51.408750 kubelet[2453]: I0117 00:25:51.408515 2453 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=0.40849012 podStartE2EDuration="408.49012ms" podCreationTimestamp="2026-01-17 00:25:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:25:51.405811626 +0000 UTC m=+23.097775264" watchObservedRunningTime="2026-01-17 00:25:51.40849012 +0000 UTC m=+23.100453758" Jan 17 00:25:51.477358 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:25:51.515440 systemd[1]: kubelet.service: Deactivated successfully. 
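The pod_startup_latency_tracker entries above derive podStartSLOduration from the gap between podCreationTimestamp and the observed running time; for kube-apiserver-localhost, created at 00:25:48 and observed running at 00:25:51.396011803, that gap is the reported 3.396011803s. The sketch below redoes the arithmetic with those two timestamps (truncated to microseconds).

    # Sketch: recompute podStartSLOduration for kube-apiserver-localhost from the log timestamps.
    from datetime import datetime, timezone

    created = datetime(2026, 1, 17, 0, 25, 48, tzinfo=timezone.utc)           # podCreationTimestamp
    running = datetime(2026, 1, 17, 0, 25, 51, 396012, tzinfo=timezone.utc)   # 00:25:51.396011803, to microseconds
    print((running - created).total_seconds())  # about 3.396012 s, matching podStartSLOduration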
Jan 17 00:25:51.517720 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:25:51.533634 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:25:52.491206 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:25:52.499445 (kubelet)[2823]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 00:25:52.692021 kubelet[2823]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 00:25:52.692021 kubelet[2823]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 17 00:25:52.692021 kubelet[2823]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 00:25:52.692727 kubelet[2823]: I0117 00:25:52.692212 2823 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 00:25:52.714385 kubelet[2823]: I0117 00:25:52.714062 2823 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 17 00:25:52.714385 kubelet[2823]: I0117 00:25:52.714091 2823 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 00:25:52.714385 kubelet[2823]: I0117 00:25:52.714347 2823 server.go:954] "Client rotation is on, will bootstrap in background" Jan 17 00:25:52.716727 kubelet[2823]: I0117 00:25:52.716450 2823 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 17 00:25:52.722219 kubelet[2823]: I0117 00:25:52.721786 2823 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 00:25:52.739331 kubelet[2823]: E0117 00:25:52.739217 2823 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 17 00:25:52.740660 kubelet[2823]: I0117 00:25:52.739645 2823 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 17 00:25:52.774785 kubelet[2823]: I0117 00:25:52.773359 2823 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 17 00:25:52.775980 kubelet[2823]: I0117 00:25:52.774833 2823 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 00:25:52.775980 kubelet[2823]: I0117 00:25:52.774982 2823 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Jan 17 00:25:52.775980 kubelet[2823]: I0117 00:25:52.775771 2823 topology_manager.go:138] "Creating topology manager with none policy" Jan 17 00:25:52.775980 kubelet[2823]: I0117 00:25:52.775786 2823 container_manager_linux.go:304] "Creating device plugin manager" Jan 17 00:25:52.776499 kubelet[2823]: I0117 00:25:52.775851 2823 state_mem.go:36] "Initialized new in-memory state store" Jan 17 00:25:52.776499 kubelet[2823]: I0117 00:25:52.776330 2823 kubelet.go:446] "Attempting to sync node with API server" Jan 17 00:25:52.776499 kubelet[2823]: I0117 00:25:52.776363 2823 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 17 00:25:52.776499 kubelet[2823]: I0117 00:25:52.776385 2823 kubelet.go:352] "Adding apiserver pod source" Jan 17 00:25:52.776499 kubelet[2823]: I0117 00:25:52.776400 2823 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 00:25:52.779964 kubelet[2823]: I0117 00:25:52.779827 2823 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 00:25:52.781862 kubelet[2823]: I0117 00:25:52.781799 2823 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 17 00:25:52.784671 kubelet[2823]: I0117 00:25:52.784434 2823 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 17 00:25:52.784671 kubelet[2823]: I0117 00:25:52.784502 2823 server.go:1287] "Started kubelet" Jan 17 00:25:52.791616 kubelet[2823]: I0117 00:25:52.790790 2823 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 00:25:52.799332 kubelet[2823]: E0117 00:25:52.799086 2823 kubelet.go:1555] "Image garbage 
collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 17 00:25:52.799332 kubelet[2823]: I0117 00:25:52.799316 2823 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 00:25:52.800793 kubelet[2823]: I0117 00:25:52.800114 2823 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 17 00:25:52.800793 kubelet[2823]: I0117 00:25:52.800291 2823 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 17 00:25:52.800793 kubelet[2823]: I0117 00:25:52.800585 2823 server.go:479] "Adding debug handlers to kubelet server" Jan 17 00:25:52.802029 kubelet[2823]: I0117 00:25:52.801689 2823 reconciler.go:26] "Reconciler: start to sync state" Jan 17 00:25:52.802249 kubelet[2823]: I0117 00:25:52.802075 2823 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 17 00:25:52.802402 kubelet[2823]: I0117 00:25:52.802365 2823 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 00:25:52.802616 kubelet[2823]: I0117 00:25:52.802554 2823 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 17 00:25:52.809440 kubelet[2823]: I0117 00:25:52.809345 2823 factory.go:221] Registration of the containerd container factory successfully Jan 17 00:25:52.809440 kubelet[2823]: I0117 00:25:52.809364 2823 factory.go:221] Registration of the systemd container factory successfully Jan 17 00:25:52.809601 kubelet[2823]: I0117 00:25:52.809443 2823 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 00:25:52.814803 sudo[2840]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 17 00:25:52.816423 sudo[2840]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 17 00:25:52.830736 kubelet[2823]: I0117 00:25:52.830671 2823 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 17 00:25:52.836955 kubelet[2823]: I0117 00:25:52.836769 2823 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 17 00:25:52.836955 kubelet[2823]: I0117 00:25:52.836831 2823 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 17 00:25:52.836955 kubelet[2823]: I0117 00:25:52.836855 2823 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
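The NodeConfig dump earlier in this log lists the kubelet's hard-eviction thresholds, all with a LessThan operator (nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%, imagefs.inodesFree < 5%, memory.available < 100Mi). As a reading aid, here is a minimal Python sketch of how thresholds of that shape can be evaluated against node stats; the stats values are invented for illustration and the logic only approximates, rather than reproduces, the kubelet's eviction manager.

    # Illustrative only: evaluate the LessThan hard-eviction thresholds from the
    # NodeConfig above against made-up node stats (available, capacity).
    THRESHOLDS = [
        ("nodefs.available",   {"percentage": 0.10}),
        ("nodefs.inodesFree",  {"percentage": 0.05}),
        ("imagefs.available",  {"percentage": 0.15}),
        ("imagefs.inodesFree", {"percentage": 0.05}),
        ("memory.available",   {"quantity": 100 * 1024 * 1024}),  # 100Mi
    ]

    # Hypothetical node stats, not taken from this host.
    NODE_STATS = {
        "nodefs.available":   (12 * 2**30, 40 * 2**30),
        "nodefs.inodesFree":  (1_200_000, 2_600_000),
        "imagefs.available":  (12 * 2**30, 40 * 2**30),
        "imagefs.inodesFree": (1_200_000, 2_600_000),
        "memory.available":   (64 * 2**20, 4 * 2**30),   # deliberately low
    }

    def threshold_met(available, capacity, value):
        """A LessThan threshold fires when availability drops below the limit."""
        if "percentage" in value:
            return available < value["percentage"] * capacity
        return available < value["quantity"]

    for signal, value in THRESHOLDS:
        available, capacity = NODE_STATS[signal]
        if threshold_met(available, capacity, value):
            print(f"eviction signal {signal} is under pressure")
    # With these invented numbers only memory.available fires.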
Jan 17 00:25:52.836955 kubelet[2823]: I0117 00:25:52.836864 2823 kubelet.go:2382] "Starting kubelet main sync loop" Jan 17 00:25:52.837411 kubelet[2823]: E0117 00:25:52.836974 2823 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 17 00:25:52.931607 kubelet[2823]: I0117 00:25:52.931561 2823 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 17 00:25:52.931607 kubelet[2823]: I0117 00:25:52.931581 2823 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 17 00:25:52.931607 kubelet[2823]: I0117 00:25:52.931603 2823 state_mem.go:36] "Initialized new in-memory state store" Jan 17 00:25:52.931843 kubelet[2823]: I0117 00:25:52.931804 2823 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 17 00:25:52.931843 kubelet[2823]: I0117 00:25:52.931817 2823 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 17 00:25:52.931843 kubelet[2823]: I0117 00:25:52.931837 2823 policy_none.go:49] "None policy: Start" Jan 17 00:25:52.932041 kubelet[2823]: I0117 00:25:52.931849 2823 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 17 00:25:52.932041 kubelet[2823]: I0117 00:25:52.931870 2823 state_mem.go:35] "Initializing new in-memory state store" Jan 17 00:25:52.932129 kubelet[2823]: I0117 00:25:52.932051 2823 state_mem.go:75] "Updated machine memory state" Jan 17 00:25:52.933849 kubelet[2823]: I0117 00:25:52.933830 2823 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 17 00:25:52.937203 kubelet[2823]: E0117 00:25:52.937135 2823 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 17 00:25:52.938334 kubelet[2823]: I0117 00:25:52.938256 2823 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 17 00:25:52.938334 kubelet[2823]: I0117 00:25:52.938285 2823 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 17 00:25:52.938939 kubelet[2823]: I0117 00:25:52.938724 2823 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 00:25:52.951157 kubelet[2823]: E0117 00:25:52.949329 2823 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 17 00:25:53.064826 kubelet[2823]: I0117 00:25:53.064468 2823 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 17 00:25:53.112988 kubelet[2823]: I0117 00:25:53.104577 2823 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jan 17 00:25:53.112988 kubelet[2823]: I0117 00:25:53.104736 2823 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 17 00:25:53.168174 kubelet[2823]: I0117 00:25:53.165619 2823 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 17 00:25:53.168174 kubelet[2823]: I0117 00:25:53.165131 2823 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 17 00:25:53.168174 kubelet[2823]: I0117 00:25:53.166658 2823 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 17 00:25:53.205536 kubelet[2823]: I0117 00:25:53.204682 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0b8273f45c576ca70f8db6fe540c065c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0b8273f45c576ca70f8db6fe540c065c\") " pod="kube-system/kube-scheduler-localhost" Jan 17 00:25:53.205536 kubelet[2823]: I0117 00:25:53.205009 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1cdd34d3db4f94e625e766f0973d3c65-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"1cdd34d3db4f94e625e766f0973d3c65\") " pod="kube-system/kube-apiserver-localhost" Jan 17 00:25:53.205536 kubelet[2823]: I0117 00:25:53.205036 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1cdd34d3db4f94e625e766f0973d3c65-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"1cdd34d3db4f94e625e766f0973d3c65\") " pod="kube-system/kube-apiserver-localhost" Jan 17 00:25:53.205536 kubelet[2823]: I0117 00:25:53.205059 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1cdd34d3db4f94e625e766f0973d3c65-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"1cdd34d3db4f94e625e766f0973d3c65\") " pod="kube-system/kube-apiserver-localhost" Jan 17 00:25:53.205536 kubelet[2823]: I0117 00:25:53.205154 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 00:25:53.237874 kubelet[2823]: I0117 00:25:53.205180 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 00:25:53.237874 kubelet[2823]: I0117 00:25:53.205281 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 00:25:53.237874 kubelet[2823]: I0117 00:25:53.205306 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 00:25:53.237874 kubelet[2823]: I0117 00:25:53.205328 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 00:25:53.258137 kubelet[2823]: E0117 00:25:53.256464 2823 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jan 17 00:25:53.258137 kubelet[2823]: E0117 00:25:53.257236 2823 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 17 00:25:53.599543 kubelet[2823]: E0117 00:25:53.594731 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:25:53.599543 kubelet[2823]: E0117 00:25:53.597507 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:25:53.605181 kubelet[2823]: E0117 00:25:53.604890 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:25:53.779460 kubelet[2823]: I0117 00:25:53.779400 2823 apiserver.go:52] "Watching apiserver" Jan 17 00:25:53.801160 kubelet[2823]: I0117 00:25:53.801077 2823 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 17 00:25:53.884568 kubelet[2823]: E0117 00:25:53.884374 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:25:53.890131 kubelet[2823]: E0117 00:25:53.885309 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:25:53.890131 kubelet[2823]: E0117 00:25:53.886373 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:25:54.043957 kubelet[2823]: I0117 00:25:54.042648 2823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.042624966 podStartE2EDuration="1.042624966s" podCreationTimestamp="2026-01-17 00:25:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 
UTC" observedRunningTime="2026-01-17 00:25:54.016076973 +0000 UTC m=+1.490270832" watchObservedRunningTime="2026-01-17 00:25:54.042624966 +0000 UTC m=+1.516818765" Jan 17 00:25:54.260946 kubelet[2823]: I0117 00:25:54.258572 2823 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 17 00:25:54.260946 kubelet[2823]: I0117 00:25:54.260172 2823 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 17 00:25:54.261159 containerd[1592]: time="2026-01-17T00:25:54.259524971Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 17 00:25:54.695593 sudo[2840]: pam_unix(sudo:session): session closed for user root Jan 17 00:25:54.970132 kubelet[2823]: E0117 00:25:54.966590 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:25:54.991258 kubelet[2823]: E0117 00:25:54.987495 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:25:55.679468 kubelet[2823]: I0117 00:25:55.679325 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/81effb2b-bbb6-482e-9299-5f377832d37d-lib-modules\") pod \"kube-proxy-9dd46\" (UID: \"81effb2b-bbb6-482e-9299-5f377832d37d\") " pod="kube-system/kube-proxy-9dd46" Jan 17 00:25:55.681752 kubelet[2823]: I0117 00:25:55.680804 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/81effb2b-bbb6-482e-9299-5f377832d37d-kube-proxy\") pod \"kube-proxy-9dd46\" (UID: \"81effb2b-bbb6-482e-9299-5f377832d37d\") " pod="kube-system/kube-proxy-9dd46" Jan 17 00:25:55.681752 kubelet[2823]: I0117 00:25:55.681676 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/81effb2b-bbb6-482e-9299-5f377832d37d-xtables-lock\") pod \"kube-proxy-9dd46\" (UID: \"81effb2b-bbb6-482e-9299-5f377832d37d\") " pod="kube-system/kube-proxy-9dd46" Jan 17 00:25:55.682373 kubelet[2823]: I0117 00:25:55.682226 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djqnf\" (UniqueName: \"kubernetes.io/projected/81effb2b-bbb6-482e-9299-5f377832d37d-kube-api-access-djqnf\") pod \"kube-proxy-9dd46\" (UID: \"81effb2b-bbb6-482e-9299-5f377832d37d\") " pod="kube-system/kube-proxy-9dd46" Jan 17 00:25:55.891970 kubelet[2823]: E0117 00:25:55.891853 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:25:55.893172 containerd[1592]: time="2026-01-17T00:25:55.892783215Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9dd46,Uid:81effb2b-bbb6-482e-9299-5f377832d37d,Namespace:kube-system,Attempt:0,}" Jan 17 00:25:55.965964 kubelet[2823]: E0117 00:25:55.963319 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:25:56.161706 containerd[1592]: time="2026-01-17T00:25:56.161017785Z" 
level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:25:56.164170 containerd[1592]: time="2026-01-17T00:25:56.161191098Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:25:56.164170 containerd[1592]: time="2026-01-17T00:25:56.161217343Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:25:56.164170 containerd[1592]: time="2026-01-17T00:25:56.161585011Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:25:56.366975 containerd[1592]: time="2026-01-17T00:25:56.365568021Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9dd46,Uid:81effb2b-bbb6-482e-9299-5f377832d37d,Namespace:kube-system,Attempt:0,} returns sandbox id \"8462a877cf1fdece807000e19c87cf48ff8df927962769e309fc3c3915eb1080\"" Jan 17 00:25:56.372239 kubelet[2823]: E0117 00:25:56.370274 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:25:56.377817 containerd[1592]: time="2026-01-17T00:25:56.377787947Z" level=info msg="CreateContainer within sandbox \"8462a877cf1fdece807000e19c87cf48ff8df927962769e309fc3c3915eb1080\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 17 00:25:56.422686 containerd[1592]: time="2026-01-17T00:25:56.422556154Z" level=info msg="CreateContainer within sandbox \"8462a877cf1fdece807000e19c87cf48ff8df927962769e309fc3c3915eb1080\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"90721e84ee19e12abbc131cfea7e5bf310f27e32ebf9703a51469278dda1fc74\"" Jan 17 00:25:56.426325 containerd[1592]: time="2026-01-17T00:25:56.425982447Z" level=info msg="StartContainer for \"90721e84ee19e12abbc131cfea7e5bf310f27e32ebf9703a51469278dda1fc74\"" Jan 17 00:25:56.888883 containerd[1592]: time="2026-01-17T00:25:56.885279271Z" level=info msg="StartContainer for \"90721e84ee19e12abbc131cfea7e5bf310f27e32ebf9703a51469278dda1fc74\" returns successfully" Jan 17 00:25:56.990855 kubelet[2823]: E0117 00:25:56.990356 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:25:57.078283 kubelet[2823]: I0117 00:25:57.074728 2823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-9dd46" podStartSLOduration=2.074634028 podStartE2EDuration="2.074634028s" podCreationTimestamp="2026-01-17 00:25:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:25:57.029455815 +0000 UTC m=+4.503649614" watchObservedRunningTime="2026-01-17 00:25:57.074634028 +0000 UTC m=+4.548827836" Jan 17 00:25:57.324713 kubelet[2823]: I0117 00:25:57.323342 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9bf2e48e-9f69-4d62-8db6-7ff6bdf596cd-host-proc-sys-net\") pod \"cilium-wdzrv\" (UID: \"9bf2e48e-9f69-4d62-8db6-7ff6bdf596cd\") " pod="kube-system/cilium-wdzrv" Jan 17 00:25:57.324962 kubelet[2823]: I0117 00:25:57.324842 2823 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9bf2e48e-9f69-4d62-8db6-7ff6bdf596cd-hostproc\") pod \"cilium-wdzrv\" (UID: \"9bf2e48e-9f69-4d62-8db6-7ff6bdf596cd\") " pod="kube-system/cilium-wdzrv" Jan 17 00:25:57.324962 kubelet[2823]: I0117 00:25:57.324882 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9bf2e48e-9f69-4d62-8db6-7ff6bdf596cd-etc-cni-netd\") pod \"cilium-wdzrv\" (UID: \"9bf2e48e-9f69-4d62-8db6-7ff6bdf596cd\") " pod="kube-system/cilium-wdzrv" Jan 17 00:25:57.325074 kubelet[2823]: I0117 00:25:57.324984 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9bf2e48e-9f69-4d62-8db6-7ff6bdf596cd-clustermesh-secrets\") pod \"cilium-wdzrv\" (UID: \"9bf2e48e-9f69-4d62-8db6-7ff6bdf596cd\") " pod="kube-system/cilium-wdzrv" Jan 17 00:25:57.325074 kubelet[2823]: I0117 00:25:57.325033 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3bb28647-22cf-468a-9608-1a80b7f73111-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-zfdkf\" (UID: \"3bb28647-22cf-468a-9608-1a80b7f73111\") " pod="kube-system/cilium-operator-6c4d7847fc-zfdkf" Jan 17 00:25:57.325074 kubelet[2823]: I0117 00:25:57.325068 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9bf2e48e-9f69-4d62-8db6-7ff6bdf596cd-cilium-cgroup\") pod \"cilium-wdzrv\" (UID: \"9bf2e48e-9f69-4d62-8db6-7ff6bdf596cd\") " pod="kube-system/cilium-wdzrv" Jan 17 00:25:57.325227 kubelet[2823]: I0117 00:25:57.325092 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9bf2e48e-9f69-4d62-8db6-7ff6bdf596cd-cilium-config-path\") pod \"cilium-wdzrv\" (UID: \"9bf2e48e-9f69-4d62-8db6-7ff6bdf596cd\") " pod="kube-system/cilium-wdzrv" Jan 17 00:25:57.325227 kubelet[2823]: I0117 00:25:57.325114 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9bf2e48e-9f69-4d62-8db6-7ff6bdf596cd-hubble-tls\") pod \"cilium-wdzrv\" (UID: \"9bf2e48e-9f69-4d62-8db6-7ff6bdf596cd\") " pod="kube-system/cilium-wdzrv" Jan 17 00:25:57.325227 kubelet[2823]: I0117 00:25:57.325135 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znmqf\" (UniqueName: \"kubernetes.io/projected/3bb28647-22cf-468a-9608-1a80b7f73111-kube-api-access-znmqf\") pod \"cilium-operator-6c4d7847fc-zfdkf\" (UID: \"3bb28647-22cf-468a-9608-1a80b7f73111\") " pod="kube-system/cilium-operator-6c4d7847fc-zfdkf" Jan 17 00:25:57.325227 kubelet[2823]: I0117 00:25:57.325161 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9bf2e48e-9f69-4d62-8db6-7ff6bdf596cd-cni-path\") pod \"cilium-wdzrv\" (UID: \"9bf2e48e-9f69-4d62-8db6-7ff6bdf596cd\") " pod="kube-system/cilium-wdzrv" Jan 17 00:25:57.325227 kubelet[2823]: I0117 00:25:57.325188 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" 
(UniqueName: \"kubernetes.io/host-path/9bf2e48e-9f69-4d62-8db6-7ff6bdf596cd-host-proc-sys-kernel\") pod \"cilium-wdzrv\" (UID: \"9bf2e48e-9f69-4d62-8db6-7ff6bdf596cd\") " pod="kube-system/cilium-wdzrv" Jan 17 00:25:57.325574 kubelet[2823]: I0117 00:25:57.325217 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9bf2e48e-9f69-4d62-8db6-7ff6bdf596cd-cilium-run\") pod \"cilium-wdzrv\" (UID: \"9bf2e48e-9f69-4d62-8db6-7ff6bdf596cd\") " pod="kube-system/cilium-wdzrv" Jan 17 00:25:57.325574 kubelet[2823]: I0117 00:25:57.325236 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9bf2e48e-9f69-4d62-8db6-7ff6bdf596cd-bpf-maps\") pod \"cilium-wdzrv\" (UID: \"9bf2e48e-9f69-4d62-8db6-7ff6bdf596cd\") " pod="kube-system/cilium-wdzrv" Jan 17 00:25:57.325574 kubelet[2823]: I0117 00:25:57.325259 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9bf2e48e-9f69-4d62-8db6-7ff6bdf596cd-xtables-lock\") pod \"cilium-wdzrv\" (UID: \"9bf2e48e-9f69-4d62-8db6-7ff6bdf596cd\") " pod="kube-system/cilium-wdzrv" Jan 17 00:25:57.325574 kubelet[2823]: I0117 00:25:57.325294 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9bf2e48e-9f69-4d62-8db6-7ff6bdf596cd-lib-modules\") pod \"cilium-wdzrv\" (UID: \"9bf2e48e-9f69-4d62-8db6-7ff6bdf596cd\") " pod="kube-system/cilium-wdzrv" Jan 17 00:25:57.325574 kubelet[2823]: I0117 00:25:57.325325 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zmvj\" (UniqueName: \"kubernetes.io/projected/9bf2e48e-9f69-4d62-8db6-7ff6bdf596cd-kube-api-access-4zmvj\") pod \"cilium-wdzrv\" (UID: \"9bf2e48e-9f69-4d62-8db6-7ff6bdf596cd\") " pod="kube-system/cilium-wdzrv" Jan 17 00:25:57.815683 kubelet[2823]: E0117 00:25:57.815541 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:25:57.818244 containerd[1592]: time="2026-01-17T00:25:57.817602463Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wdzrv,Uid:9bf2e48e-9f69-4d62-8db6-7ff6bdf596cd,Namespace:kube-system,Attempt:0,}" Jan 17 00:25:57.842382 kubelet[2823]: E0117 00:25:57.842240 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:25:57.879837 containerd[1592]: time="2026-01-17T00:25:57.875754244Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-zfdkf,Uid:3bb28647-22cf-468a-9608-1a80b7f73111,Namespace:kube-system,Attempt:0,}" Jan 17 00:25:58.138009 containerd[1592]: time="2026-01-17T00:25:58.129335413Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:25:58.138009 containerd[1592]: time="2026-01-17T00:25:58.132817683Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:25:58.138009 containerd[1592]: time="2026-01-17T00:25:58.132837443Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:25:58.138009 containerd[1592]: time="2026-01-17T00:25:58.133331639Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:25:58.150514 containerd[1592]: time="2026-01-17T00:25:58.141721911Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:25:58.150514 containerd[1592]: time="2026-01-17T00:25:58.142870962Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:25:58.150514 containerd[1592]: time="2026-01-17T00:25:58.142891444Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:25:58.150514 containerd[1592]: time="2026-01-17T00:25:58.146159772Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:25:58.498013 containerd[1592]: time="2026-01-17T00:25:58.497158142Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wdzrv,Uid:9bf2e48e-9f69-4d62-8db6-7ff6bdf596cd,Namespace:kube-system,Attempt:0,} returns sandbox id \"f3ad264b9ffcd9fe2a9821a5db578f30bc33ffd729cfdedfbd417bbe0d52e896\"" Jan 17 00:25:58.512673 kubelet[2823]: E0117 00:25:58.507241 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:25:58.517641 containerd[1592]: time="2026-01-17T00:25:58.511487619Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 17 00:25:58.576375 containerd[1592]: time="2026-01-17T00:25:58.575391927Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-zfdkf,Uid:3bb28647-22cf-468a-9608-1a80b7f73111,Namespace:kube-system,Attempt:0,} returns sandbox id \"1d6e8bed99cad12524cdb9175177ce76bd42d5ed852312ad5737cf21be2ed836\"" Jan 17 00:25:58.578310 kubelet[2823]: E0117 00:25:58.576749 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:25:59.353555 kubelet[2823]: E0117 00:25:59.352630 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:26:00.033649 kubelet[2823]: E0117 00:26:00.033538 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:26:01.040257 kubelet[2823]: E0117 00:26:01.038085 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:26:01.528195 kubelet[2823]: E0117 00:26:01.523333 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver 
line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:26:01.923357 kubelet[2823]: E0117 00:26:01.922063 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:26:02.041485 kubelet[2823]: E0117 00:26:02.041402 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:26:02.044474 kubelet[2823]: E0117 00:26:02.042158 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:26:12.915635 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2453241633.mount: Deactivated successfully. Jan 17 00:26:21.595527 containerd[1592]: time="2026-01-17T00:26:21.594106058Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:26:21.603779 containerd[1592]: time="2026-01-17T00:26:21.603511287Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jan 17 00:26:21.608876 containerd[1592]: time="2026-01-17T00:26:21.608699243Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:26:21.619256 containerd[1592]: time="2026-01-17T00:26:21.617362695Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 23.105821475s" Jan 17 00:26:21.619256 containerd[1592]: time="2026-01-17T00:26:21.617407147Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jan 17 00:26:21.637224 containerd[1592]: time="2026-01-17T00:26:21.637171339Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 17 00:26:21.696228 containerd[1592]: time="2026-01-17T00:26:21.696125879Z" level=info msg="CreateContainer within sandbox \"f3ad264b9ffcd9fe2a9821a5db578f30bc33ffd729cfdedfbd417bbe0d52e896\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 17 00:26:21.747862 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3969788282.mount: Deactivated successfully. 
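For scale, the pull record above reports 166,730,503 bytes read over 23.105821475 s for the cilium image. A quick Python check of the implied transfer rate, using only figures copied from the log:

    # Effective pull rate for the quay.io/cilium/cilium image logged above.
    bytes_read = 166_730_503      # "active requests=0, bytes read=166730503"
    duration_s = 23.105821475     # "... in 23.105821475s"
    print(f"~{bytes_read / duration_s / 2**20:.1f} MiB/s")  # roughly 6.9 MiB/s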
Jan 17 00:26:21.756754 containerd[1592]: time="2026-01-17T00:26:21.756509345Z" level=info msg="CreateContainer within sandbox \"f3ad264b9ffcd9fe2a9821a5db578f30bc33ffd729cfdedfbd417bbe0d52e896\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f34d913463f2ab051cfb7f2905bbe4b6a1043d826b2c76465988fb9b346cac3e\"" Jan 17 00:26:21.765571 containerd[1592]: time="2026-01-17T00:26:21.757462457Z" level=info msg="StartContainer for \"f34d913463f2ab051cfb7f2905bbe4b6a1043d826b2c76465988fb9b346cac3e\"" Jan 17 00:26:21.967408 containerd[1592]: time="2026-01-17T00:26:21.967263153Z" level=info msg="StartContainer for \"f34d913463f2ab051cfb7f2905bbe4b6a1043d826b2c76465988fb9b346cac3e\" returns successfully" Jan 17 00:26:22.309466 kubelet[2823]: E0117 00:26:22.309016 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:26:22.412988 containerd[1592]: time="2026-01-17T00:26:22.412534469Z" level=info msg="shim disconnected" id=f34d913463f2ab051cfb7f2905bbe4b6a1043d826b2c76465988fb9b346cac3e namespace=k8s.io Jan 17 00:26:22.412988 containerd[1592]: time="2026-01-17T00:26:22.412661294Z" level=warning msg="cleaning up after shim disconnected" id=f34d913463f2ab051cfb7f2905bbe4b6a1043d826b2c76465988fb9b346cac3e namespace=k8s.io Jan 17 00:26:22.412988 containerd[1592]: time="2026-01-17T00:26:22.412697378Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:26:22.736765 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f34d913463f2ab051cfb7f2905bbe4b6a1043d826b2c76465988fb9b346cac3e-rootfs.mount: Deactivated successfully. Jan 17 00:26:23.318560 kubelet[2823]: E0117 00:26:23.318125 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:26:23.328649 containerd[1592]: time="2026-01-17T00:26:23.328543363Z" level=info msg="CreateContainer within sandbox \"f3ad264b9ffcd9fe2a9821a5db578f30bc33ffd729cfdedfbd417bbe0d52e896\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 17 00:26:23.359306 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1489308556.mount: Deactivated successfully. Jan 17 00:26:23.479680 containerd[1592]: time="2026-01-17T00:26:23.479287255Z" level=info msg="CreateContainer within sandbox \"f3ad264b9ffcd9fe2a9821a5db578f30bc33ffd729cfdedfbd417bbe0d52e896\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"f482004310e2c29152df1a5cbd4d8ec138f5214fe5c6809880bad7d03a16f43a\"" Jan 17 00:26:23.481058 containerd[1592]: time="2026-01-17T00:26:23.480267314Z" level=info msg="StartContainer for \"f482004310e2c29152df1a5cbd4d8ec138f5214fe5c6809880bad7d03a16f43a\"" Jan 17 00:26:23.693090 containerd[1592]: time="2026-01-17T00:26:23.692429979Z" level=info msg="StartContainer for \"f482004310e2c29152df1a5cbd4d8ec138f5214fe5c6809880bad7d03a16f43a\" returns successfully" Jan 17 00:26:23.742010 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2464318034.mount: Deactivated successfully. Jan 17 00:26:23.759647 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 17 00:26:23.760772 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 17 00:26:23.760877 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... 
Jan 17 00:26:23.777461 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 17 00:26:23.832751 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f482004310e2c29152df1a5cbd4d8ec138f5214fe5c6809880bad7d03a16f43a-rootfs.mount: Deactivated successfully. Jan 17 00:26:23.836850 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 17 00:26:23.885123 containerd[1592]: time="2026-01-17T00:26:23.884885058Z" level=info msg="shim disconnected" id=f482004310e2c29152df1a5cbd4d8ec138f5214fe5c6809880bad7d03a16f43a namespace=k8s.io Jan 17 00:26:23.885123 containerd[1592]: time="2026-01-17T00:26:23.885002875Z" level=warning msg="cleaning up after shim disconnected" id=f482004310e2c29152df1a5cbd4d8ec138f5214fe5c6809880bad7d03a16f43a namespace=k8s.io Jan 17 00:26:23.885123 containerd[1592]: time="2026-01-17T00:26:23.885012415Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:26:24.337090 kubelet[2823]: E0117 00:26:24.337018 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:26:24.341089 containerd[1592]: time="2026-01-17T00:26:24.340982971Z" level=info msg="CreateContainer within sandbox \"f3ad264b9ffcd9fe2a9821a5db578f30bc33ffd729cfdedfbd417bbe0d52e896\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 17 00:26:24.411489 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1882511919.mount: Deactivated successfully. Jan 17 00:26:24.429174 containerd[1592]: time="2026-01-17T00:26:24.429055424Z" level=info msg="CreateContainer within sandbox \"f3ad264b9ffcd9fe2a9821a5db578f30bc33ffd729cfdedfbd417bbe0d52e896\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"a1bd9dfd2290cef26d89b6f3f08a764f3639afef80fb62d78b7faa7df7ec1804\"" Jan 17 00:26:24.433618 containerd[1592]: time="2026-01-17T00:26:24.431730542Z" level=info msg="StartContainer for \"a1bd9dfd2290cef26d89b6f3f08a764f3639afef80fb62d78b7faa7df7ec1804\"" Jan 17 00:26:24.622164 containerd[1592]: time="2026-01-17T00:26:24.621847271Z" level=info msg="StartContainer for \"a1bd9dfd2290cef26d89b6f3f08a764f3639afef80fb62d78b7faa7df7ec1804\" returns successfully" Jan 17 00:26:24.732891 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a1bd9dfd2290cef26d89b6f3f08a764f3639afef80fb62d78b7faa7df7ec1804-rootfs.mount: Deactivated successfully. 
Jan 17 00:26:24.742586 containerd[1592]: time="2026-01-17T00:26:24.742493263Z" level=info msg="shim disconnected" id=a1bd9dfd2290cef26d89b6f3f08a764f3639afef80fb62d78b7faa7df7ec1804 namespace=k8s.io Jan 17 00:26:24.742586 containerd[1592]: time="2026-01-17T00:26:24.742560324Z" level=warning msg="cleaning up after shim disconnected" id=a1bd9dfd2290cef26d89b6f3f08a764f3639afef80fb62d78b7faa7df7ec1804 namespace=k8s.io Jan 17 00:26:24.742586 containerd[1592]: time="2026-01-17T00:26:24.742572420Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:26:25.354281 kubelet[2823]: E0117 00:26:25.353220 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:26:25.359830 containerd[1592]: time="2026-01-17T00:26:25.357162266Z" level=info msg="CreateContainer within sandbox \"f3ad264b9ffcd9fe2a9821a5db578f30bc33ffd729cfdedfbd417bbe0d52e896\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 17 00:26:25.455248 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount39831565.mount: Deactivated successfully. Jan 17 00:26:25.519095 containerd[1592]: time="2026-01-17T00:26:25.517104947Z" level=info msg="CreateContainer within sandbox \"f3ad264b9ffcd9fe2a9821a5db578f30bc33ffd729cfdedfbd417bbe0d52e896\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"212f1cf7e3988e5a7e7883fac88c475c998172623df6044d7c2641f917ef2205\"" Jan 17 00:26:25.525984 containerd[1592]: time="2026-01-17T00:26:25.524667123Z" level=info msg="StartContainer for \"212f1cf7e3988e5a7e7883fac88c475c998172623df6044d7c2641f917ef2205\"" Jan 17 00:26:25.654510 containerd[1592]: time="2026-01-17T00:26:25.654333622Z" level=info msg="StartContainer for \"212f1cf7e3988e5a7e7883fac88c475c998172623df6044d7c2641f917ef2205\" returns successfully" Jan 17 00:26:25.737006 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-212f1cf7e3988e5a7e7883fac88c475c998172623df6044d7c2641f917ef2205-rootfs.mount: Deactivated successfully. 
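The records above repeat one containerd lifecycle per cilium init container (mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state): CreateContainer returns an id, StartContainer succeeds, the runc shim disconnects once the container exits, and systemd cleans up its rootfs mount. A small Python sketch of pulling that lifecycle out of journal text; the sample lines and regexes are simplified assumptions about the message wording, not an exact parser for this journal.

    import re
    from collections import defaultdict

    # Toy lines shaped like the containerd records above
    # (ids truncated, quote escaping simplified).
    JOURNAL = [
        'containerd: CreateContainer ... returns container id "f34d91"',
        'containerd: StartContainer for "f34d91" returns successfully',
        'containerd: shim disconnected id=f34d91 namespace=k8s.io',
    ]

    PATTERNS = [
        ("created", re.compile(r'returns container id "([0-9a-f]+)"')),
        ("started", re.compile(r'StartContainer for "([0-9a-f]+)" returns successfully')),
        ("exited",  re.compile(r'shim disconnected id=([0-9a-f]+)')),
    ]

    lifecycle = defaultdict(list)
    for line in JOURNAL:
        for state, pattern in PATTERNS:
            match = pattern.search(line)
            if match:
                lifecycle[match.group(1)].append(state)
                break

    for cid, states in lifecycle.items():
        print(cid, "->", " -> ".join(states))  # f34d91 -> created -> started -> exited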
Jan 17 00:26:25.751012 containerd[1592]: time="2026-01-17T00:26:25.750633976Z" level=info msg="shim disconnected" id=212f1cf7e3988e5a7e7883fac88c475c998172623df6044d7c2641f917ef2205 namespace=k8s.io Jan 17 00:26:25.751012 containerd[1592]: time="2026-01-17T00:26:25.750703263Z" level=warning msg="cleaning up after shim disconnected" id=212f1cf7e3988e5a7e7883fac88c475c998172623df6044d7c2641f917ef2205 namespace=k8s.io Jan 17 00:26:25.751012 containerd[1592]: time="2026-01-17T00:26:25.750718285Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:26:26.396070 kubelet[2823]: E0117 00:26:26.396011 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:26:26.406852 containerd[1592]: time="2026-01-17T00:26:26.406669765Z" level=info msg="CreateContainer within sandbox \"f3ad264b9ffcd9fe2a9821a5db578f30bc33ffd729cfdedfbd417bbe0d52e896\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 17 00:26:26.468348 containerd[1592]: time="2026-01-17T00:26:26.467494100Z" level=info msg="CreateContainer within sandbox \"f3ad264b9ffcd9fe2a9821a5db578f30bc33ffd729cfdedfbd417bbe0d52e896\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"ba0b723658460384d9cc397a85b4ffcf1fd2b4f1256f165beb3027fa8fd691b7\"" Jan 17 00:26:26.471683 containerd[1592]: time="2026-01-17T00:26:26.471653671Z" level=info msg="StartContainer for \"ba0b723658460384d9cc397a85b4ffcf1fd2b4f1256f165beb3027fa8fd691b7\"" Jan 17 00:26:26.676536 containerd[1592]: time="2026-01-17T00:26:26.676161588Z" level=info msg="StartContainer for \"ba0b723658460384d9cc397a85b4ffcf1fd2b4f1256f165beb3027fa8fd691b7\" returns successfully" Jan 17 00:26:26.776658 containerd[1592]: time="2026-01-17T00:26:26.775215415Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:26:26.776658 containerd[1592]: time="2026-01-17T00:26:26.776293185Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jan 17 00:26:26.783239 containerd[1592]: time="2026-01-17T00:26:26.783190773Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:26:26.791121 containerd[1592]: time="2026-01-17T00:26:26.790745756Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 5.153197046s" Jan 17 00:26:26.791121 containerd[1592]: time="2026-01-17T00:26:26.790804079Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jan 17 00:26:26.820361 containerd[1592]: time="2026-01-17T00:26:26.819779176Z" level=info msg="CreateContainer within sandbox 
\"1d6e8bed99cad12524cdb9175177ce76bd42d5ed852312ad5737cf21be2ed836\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 17 00:26:26.858635 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1073349567.mount: Deactivated successfully. Jan 17 00:26:26.870365 containerd[1592]: time="2026-01-17T00:26:26.870141389Z" level=info msg="CreateContainer within sandbox \"1d6e8bed99cad12524cdb9175177ce76bd42d5ed852312ad5737cf21be2ed836\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"8bf76a5582b761ce8455a0148d7cda6f5a63a006e2b35759591682acd77c82af\"" Jan 17 00:26:26.871295 containerd[1592]: time="2026-01-17T00:26:26.871080529Z" level=info msg="StartContainer for \"8bf76a5582b761ce8455a0148d7cda6f5a63a006e2b35759591682acd77c82af\"" Jan 17 00:26:26.927729 kubelet[2823]: I0117 00:26:26.926463 2823 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 17 00:26:27.073426 containerd[1592]: time="2026-01-17T00:26:27.073379920Z" level=info msg="StartContainer for \"8bf76a5582b761ce8455a0148d7cda6f5a63a006e2b35759591682acd77c82af\" returns successfully" Jan 17 00:26:27.133395 kubelet[2823]: I0117 00:26:27.132769 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4485h\" (UniqueName: \"kubernetes.io/projected/6610212c-185f-4c7c-ae49-985d38631da7-kube-api-access-4485h\") pod \"coredns-668d6bf9bc-72cvs\" (UID: \"6610212c-185f-4c7c-ae49-985d38631da7\") " pod="kube-system/coredns-668d6bf9bc-72cvs" Jan 17 00:26:27.133395 kubelet[2823]: I0117 00:26:27.132827 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/385c24be-6d08-4646-8749-901ee4100e4f-config-volume\") pod \"coredns-668d6bf9bc-g5r6n\" (UID: \"385c24be-6d08-4646-8749-901ee4100e4f\") " pod="kube-system/coredns-668d6bf9bc-g5r6n" Jan 17 00:26:27.133395 kubelet[2823]: I0117 00:26:27.132862 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6610212c-185f-4c7c-ae49-985d38631da7-config-volume\") pod \"coredns-668d6bf9bc-72cvs\" (UID: \"6610212c-185f-4c7c-ae49-985d38631da7\") " pod="kube-system/coredns-668d6bf9bc-72cvs" Jan 17 00:26:27.133395 kubelet[2823]: I0117 00:26:27.132891 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zt5s\" (UniqueName: \"kubernetes.io/projected/385c24be-6d08-4646-8749-901ee4100e4f-kube-api-access-9zt5s\") pod \"coredns-668d6bf9bc-g5r6n\" (UID: \"385c24be-6d08-4646-8749-901ee4100e4f\") " pod="kube-system/coredns-668d6bf9bc-g5r6n" Jan 17 00:26:27.339536 kubelet[2823]: E0117 00:26:27.337203 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:26:27.342049 kubelet[2823]: E0117 00:26:27.342017 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:26:27.349718 containerd[1592]: time="2026-01-17T00:26:27.348217833Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-g5r6n,Uid:385c24be-6d08-4646-8749-901ee4100e4f,Namespace:kube-system,Attempt:0,}" Jan 17 00:26:27.354596 containerd[1592]: time="2026-01-17T00:26:27.353821064Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-72cvs,Uid:6610212c-185f-4c7c-ae49-985d38631da7,Namespace:kube-system,Attempt:0,}" Jan 17 00:26:27.405793 kubelet[2823]: E0117 00:26:27.404835 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:26:27.448724 kubelet[2823]: E0117 00:26:27.446776 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:26:27.713225 kubelet[2823]: I0117 00:26:27.710879 2823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-zfdkf" podStartSLOduration=2.490839633 podStartE2EDuration="30.710854303s" podCreationTimestamp="2026-01-17 00:25:57 +0000 UTC" firstStartedPulling="2026-01-17 00:25:58.579417071 +0000 UTC m=+6.053610860" lastFinishedPulling="2026-01-17 00:26:26.799431741 +0000 UTC m=+34.273625530" observedRunningTime="2026-01-17 00:26:27.548335334 +0000 UTC m=+35.022529133" watchObservedRunningTime="2026-01-17 00:26:27.710854303 +0000 UTC m=+35.185048092" Jan 17 00:26:28.455517 kubelet[2823]: E0117 00:26:28.453809 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:26:28.455517 kubelet[2823]: E0117 00:26:28.454658 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:26:28.843266 systemd-journald[1176]: Under memory pressure, flushing caches. Jan 17 00:26:28.829430 systemd-resolved[1477]: Under memory pressure, flushing caches. Jan 17 00:26:28.829501 systemd-resolved[1477]: Flushed all caches. Jan 17 00:26:29.473741 kubelet[2823]: E0117 00:26:29.471808 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:26:31.355160 systemd-networkd[1254]: cilium_host: Link UP Jan 17 00:26:31.357665 systemd-networkd[1254]: cilium_net: Link UP Jan 17 00:26:31.358181 systemd-networkd[1254]: cilium_net: Gained carrier Jan 17 00:26:31.358472 systemd-networkd[1254]: cilium_host: Gained carrier Jan 17 00:26:31.742087 systemd-networkd[1254]: cilium_vxlan: Link UP Jan 17 00:26:31.742350 systemd-networkd[1254]: cilium_vxlan: Gained carrier Jan 17 00:26:31.967055 systemd-networkd[1254]: cilium_net: Gained IPv6LL Jan 17 00:26:32.152584 systemd-networkd[1254]: cilium_host: Gained IPv6LL Jan 17 00:26:32.349983 systemd[1]: run-containerd-runc-k8s.io-ba0b723658460384d9cc397a85b4ffcf1fd2b4f1256f165beb3027fa8fd691b7-runc.xKRg2l.mount: Deactivated successfully. 
Jan 17 00:26:32.358036 kernel: NET: Registered PF_ALG protocol family Jan 17 00:26:33.496245 systemd-networkd[1254]: cilium_vxlan: Gained IPv6LL Jan 17 00:26:34.442558 systemd-networkd[1254]: lxc_health: Link UP Jan 17 00:26:34.510310 systemd-networkd[1254]: lxc_health: Gained carrier Jan 17 00:26:34.750957 systemd-networkd[1254]: lxc8d5c6c24b0a5: Link UP Jan 17 00:26:34.774986 kernel: eth0: renamed from tmpe7b38 Jan 17 00:26:34.790602 systemd-networkd[1254]: lxc8d5c6c24b0a5: Gained carrier Jan 17 00:26:35.228336 systemd-networkd[1254]: lxc6bf3531fc0f0: Link UP Jan 17 00:26:35.245749 kernel: eth0: renamed from tmp8b8a1 Jan 17 00:26:35.258001 systemd-networkd[1254]: lxc6bf3531fc0f0: Gained carrier Jan 17 00:26:35.822232 kubelet[2823]: E0117 00:26:35.820005 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:26:35.895406 kubelet[2823]: I0117 00:26:35.894357 2823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-wdzrv" podStartSLOduration=15.769039095 podStartE2EDuration="38.894335879s" podCreationTimestamp="2026-01-17 00:25:57 +0000 UTC" firstStartedPulling="2026-01-17 00:25:58.510848743 +0000 UTC m=+5.985042531" lastFinishedPulling="2026-01-17 00:26:21.636145527 +0000 UTC m=+29.110339315" observedRunningTime="2026-01-17 00:26:27.71846667 +0000 UTC m=+35.192660469" watchObservedRunningTime="2026-01-17 00:26:35.894335879 +0000 UTC m=+43.368529738" Jan 17 00:26:36.441064 systemd-networkd[1254]: lxc_health: Gained IPv6LL Jan 17 00:26:36.538999 kubelet[2823]: E0117 00:26:36.537422 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:26:36.571500 systemd-networkd[1254]: lxc6bf3531fc0f0: Gained IPv6LL Jan 17 00:26:36.760066 systemd-networkd[1254]: lxc8d5c6c24b0a5: Gained IPv6LL Jan 17 00:26:37.539738 kubelet[2823]: E0117 00:26:37.539637 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:26:42.157754 sudo[1800]: pam_unix(sudo:session): session closed for user root Jan 17 00:26:42.184349 sshd[1793]: pam_unix(sshd:session): session closed for user core Jan 17 00:26:42.197461 systemd[1]: sshd@8-10.0.0.56:22-10.0.0.1:38518.service: Deactivated successfully. Jan 17 00:26:42.210050 systemd-logind[1570]: Session 9 logged out. Waiting for processes to exit. Jan 17 00:26:42.211228 systemd[1]: session-9.scope: Deactivated successfully. Jan 17 00:26:42.219568 systemd-logind[1570]: Removed session 9. Jan 17 00:26:44.361182 containerd[1592]: time="2026-01-17T00:26:44.360777213Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:26:44.361182 containerd[1592]: time="2026-01-17T00:26:44.361034879Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:26:44.361182 containerd[1592]: time="2026-01-17T00:26:44.361056026Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:26:44.363169 containerd[1592]: time="2026-01-17T00:26:44.361170493Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:26:44.404001 containerd[1592]: time="2026-01-17T00:26:44.403380609Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:26:44.404001 containerd[1592]: time="2026-01-17T00:26:44.403484381Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:26:44.404001 containerd[1592]: time="2026-01-17T00:26:44.403505799Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:26:44.404001 containerd[1592]: time="2026-01-17T00:26:44.403735122Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:26:44.493111 systemd-resolved[1477]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 17 00:26:44.498628 systemd-resolved[1477]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 17 00:26:44.597843 containerd[1592]: time="2026-01-17T00:26:44.595508595Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-72cvs,Uid:6610212c-185f-4c7c-ae49-985d38631da7,Namespace:kube-system,Attempt:0,} returns sandbox id \"8b8a1830260f6879a076c952d0338fce02f22db383fb230946a4b24ca935ea85\"" Jan 17 00:26:44.600647 kubelet[2823]: E0117 00:26:44.600473 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:26:44.608076 containerd[1592]: time="2026-01-17T00:26:44.605653979Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-g5r6n,Uid:385c24be-6d08-4646-8749-901ee4100e4f,Namespace:kube-system,Attempt:0,} returns sandbox id \"e7b38cdbebaa6f09a449d6d2477311ea382da40e6053d36124f8848d758ca94a\"" Jan 17 00:26:44.608076 containerd[1592]: time="2026-01-17T00:26:44.606478994Z" level=info msg="CreateContainer within sandbox \"8b8a1830260f6879a076c952d0338fce02f22db383fb230946a4b24ca935ea85\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 00:26:44.609301 kubelet[2823]: E0117 00:26:44.608829 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:26:44.614314 containerd[1592]: time="2026-01-17T00:26:44.613252781Z" level=info msg="CreateContainer within sandbox \"e7b38cdbebaa6f09a449d6d2477311ea382da40e6053d36124f8848d758ca94a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 00:26:44.734674 containerd[1592]: time="2026-01-17T00:26:44.731493714Z" level=info msg="CreateContainer within sandbox \"e7b38cdbebaa6f09a449d6d2477311ea382da40e6053d36124f8848d758ca94a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bbd229798b7a50c500f7d28f1f9f2f38792a0c4714993be572122a567c035363\"" Jan 17 00:26:44.734674 containerd[1592]: time="2026-01-17T00:26:44.733391890Z" level=info msg="StartContainer for \"bbd229798b7a50c500f7d28f1f9f2f38792a0c4714993be572122a567c035363\"" Jan 17 00:26:44.786524 containerd[1592]: time="2026-01-17T00:26:44.786372655Z" level=info msg="CreateContainer within sandbox \"8b8a1830260f6879a076c952d0338fce02f22db383fb230946a4b24ca935ea85\" for 
&ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d117dbb0e5dccb1affb86ed736af51a675695da11448214c91f81deac9c28da5\"" Jan 17 00:26:44.788629 containerd[1592]: time="2026-01-17T00:26:44.787535802Z" level=info msg="StartContainer for \"d117dbb0e5dccb1affb86ed736af51a675695da11448214c91f81deac9c28da5\"" Jan 17 00:26:44.985470 containerd[1592]: time="2026-01-17T00:26:44.985431195Z" level=info msg="StartContainer for \"bbd229798b7a50c500f7d28f1f9f2f38792a0c4714993be572122a567c035363\" returns successfully" Jan 17 00:26:45.053781 containerd[1592]: time="2026-01-17T00:26:45.053512034Z" level=info msg="StartContainer for \"d117dbb0e5dccb1affb86ed736af51a675695da11448214c91f81deac9c28da5\" returns successfully" Jan 17 00:26:45.393637 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4066659945.mount: Deactivated successfully. Jan 17 00:26:45.633245 kubelet[2823]: E0117 00:26:45.633208 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:26:45.658023 kubelet[2823]: E0117 00:26:45.657673 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:26:45.755289 kubelet[2823]: I0117 00:26:45.755178 2823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-72cvs" podStartSLOduration=50.755154087 podStartE2EDuration="50.755154087s" podCreationTimestamp="2026-01-17 00:25:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:26:45.74752611 +0000 UTC m=+53.221719919" watchObservedRunningTime="2026-01-17 00:26:45.755154087 +0000 UTC m=+53.229347876" Jan 17 00:26:45.755545 kubelet[2823]: I0117 00:26:45.755367 2823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-g5r6n" podStartSLOduration=50.755359177 podStartE2EDuration="50.755359177s" podCreationTimestamp="2026-01-17 00:25:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:26:45.705504716 +0000 UTC m=+53.179698515" watchObservedRunningTime="2026-01-17 00:26:45.755359177 +0000 UTC m=+53.229552966" Jan 17 00:26:46.671819 kubelet[2823]: E0117 00:26:46.666554 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:26:46.671819 kubelet[2823]: E0117 00:26:46.667226 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:26:47.686312 kubelet[2823]: E0117 00:26:47.685717 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:26:57.762405 update_engine[1573]: I20260117 00:26:57.761562 1573 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jan 17 00:26:57.762405 update_engine[1573]: I20260117 00:26:57.761683 1573 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jan 17 00:26:57.766111 update_engine[1573]: I20260117 
00:26:57.765220 1573 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jan 17 00:26:57.768548 update_engine[1573]: I20260117 00:26:57.767307 1573 omaha_request_params.cc:62] Current group set to lts Jan 17 00:26:57.792280 update_engine[1573]: I20260117 00:26:57.786121 1573 update_attempter.cc:499] Already updated boot flags. Skipping. Jan 17 00:26:57.792280 update_engine[1573]: I20260117 00:26:57.786188 1573 update_attempter.cc:643] Scheduling an action processor start. Jan 17 00:26:57.792280 update_engine[1573]: I20260117 00:26:57.786223 1573 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 17 00:26:57.792280 update_engine[1573]: I20260117 00:26:57.786331 1573 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jan 17 00:26:57.792280 update_engine[1573]: I20260117 00:26:57.786469 1573 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 17 00:26:57.792280 update_engine[1573]: I20260117 00:26:57.786485 1573 omaha_request_action.cc:272] Request: Jan 17 00:26:57.792280 update_engine[1573]: Jan 17 00:26:57.792280 update_engine[1573]: Jan 17 00:26:57.792280 update_engine[1573]: Jan 17 00:26:57.792280 update_engine[1573]: Jan 17 00:26:57.792280 update_engine[1573]: Jan 17 00:26:57.792280 update_engine[1573]: Jan 17 00:26:57.792280 update_engine[1573]: Jan 17 00:26:57.792280 update_engine[1573]: Jan 17 00:26:57.792280 update_engine[1573]: I20260117 00:26:57.786497 1573 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 17 00:26:57.797304 locksmithd[1632]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jan 17 00:26:57.807766 update_engine[1573]: I20260117 00:26:57.807716 1573 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 17 00:26:57.812744 update_engine[1573]: I20260117 00:26:57.812300 1573 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 17 00:26:57.837623 update_engine[1573]: E20260117 00:26:57.837438 1573 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 17 00:26:57.837623 update_engine[1573]: I20260117 00:26:57.837633 1573 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jan 17 00:27:07.754710 update_engine[1573]: I20260117 00:27:07.754434 1573 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 17 00:27:07.757318 update_engine[1573]: I20260117 00:27:07.756419 1573 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 17 00:27:07.758320 update_engine[1573]: I20260117 00:27:07.757407 1573 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 17 00:27:07.776569 update_engine[1573]: E20260117 00:27:07.776191 1573 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 17 00:27:07.776569 update_engine[1573]: I20260117 00:27:07.776372 1573 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jan 17 00:27:07.840114 kubelet[2823]: E0117 00:27:07.839786 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:27:11.840765 kubelet[2823]: E0117 00:27:11.839485 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:27:11.840765 kubelet[2823]: E0117 00:27:11.839581 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:27:17.753741 update_engine[1573]: I20260117 00:27:17.753479 1573 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 17 00:27:17.756489 update_engine[1573]: I20260117 00:27:17.756418 1573 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 17 00:27:17.757636 update_engine[1573]: I20260117 00:27:17.757467 1573 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 17 00:27:17.772741 update_engine[1573]: E20260117 00:27:17.772579 1573 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 17 00:27:17.772741 update_engine[1573]: I20260117 00:27:17.772679 1573 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jan 17 00:27:22.842071 kubelet[2823]: E0117 00:27:22.841672 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:27:27.748475 update_engine[1573]: I20260117 00:27:27.748131 1573 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 17 00:27:27.749154 update_engine[1573]: I20260117 00:27:27.748586 1573 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 17 00:27:27.749154 update_engine[1573]: I20260117 00:27:27.748884 1573 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 17 00:27:27.767296 update_engine[1573]: E20260117 00:27:27.766862 1573 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 17 00:27:27.767296 update_engine[1573]: I20260117 00:27:27.767072 1573 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 17 00:27:27.767296 update_engine[1573]: I20260117 00:27:27.767092 1573 omaha_request_action.cc:617] Omaha request response: Jan 17 00:27:27.767296 update_engine[1573]: E20260117 00:27:27.767263 1573 omaha_request_action.cc:636] Omaha request network transfer failed. Jan 17 00:27:27.767296 update_engine[1573]: I20260117 00:27:27.767294 1573 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Jan 17 00:27:27.767614 update_engine[1573]: I20260117 00:27:27.767305 1573 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 17 00:27:27.767614 update_engine[1573]: I20260117 00:27:27.767313 1573 update_attempter.cc:306] Processing Done. 
Jan 17 00:27:27.767614 update_engine[1573]: E20260117 00:27:27.767333 1573 update_attempter.cc:619] Update failed. Jan 17 00:27:27.767614 update_engine[1573]: I20260117 00:27:27.767343 1573 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jan 17 00:27:27.767614 update_engine[1573]: I20260117 00:27:27.767354 1573 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jan 17 00:27:27.767614 update_engine[1573]: I20260117 00:27:27.767364 1573 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Jan 17 00:27:27.773603 update_engine[1573]: I20260117 00:27:27.769743 1573 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 17 00:27:27.773603 update_engine[1573]: I20260117 00:27:27.769781 1573 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 17 00:27:27.773603 update_engine[1573]: I20260117 00:27:27.769793 1573 omaha_request_action.cc:272] Request: Jan 17 00:27:27.773603 update_engine[1573]: Jan 17 00:27:27.773603 update_engine[1573]: Jan 17 00:27:27.773603 update_engine[1573]: Jan 17 00:27:27.773603 update_engine[1573]: Jan 17 00:27:27.773603 update_engine[1573]: Jan 17 00:27:27.773603 update_engine[1573]: Jan 17 00:27:27.773603 update_engine[1573]: I20260117 00:27:27.769802 1573 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 17 00:27:27.773603 update_engine[1573]: I20260117 00:27:27.770246 1573 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 17 00:27:27.773603 update_engine[1573]: I20260117 00:27:27.771332 1573 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 17 00:27:27.774082 locksmithd[1632]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jan 17 00:27:27.801807 update_engine[1573]: E20260117 00:27:27.800575 1573 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 17 00:27:27.802683 update_engine[1573]: I20260117 00:27:27.802328 1573 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 17 00:27:27.802683 update_engine[1573]: I20260117 00:27:27.802419 1573 omaha_request_action.cc:617] Omaha request response: Jan 17 00:27:27.802683 update_engine[1573]: I20260117 00:27:27.802440 1573 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 17 00:27:27.802683 update_engine[1573]: I20260117 00:27:27.802451 1573 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 17 00:27:27.802683 update_engine[1573]: I20260117 00:27:27.802462 1573 update_attempter.cc:306] Processing Done. Jan 17 00:27:27.802683 update_engine[1573]: I20260117 00:27:27.802474 1573 update_attempter.cc:310] Error event sent. 
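The update_engine entries above show an Omaha update check being posted to a server literally named "disabled", each transfer failing DNS resolution ("Could not resolve host: disabled"), being retried a few times roughly ten seconds apart, and finally reporting the failure before a later check is scheduled. The unresolvable host name suggests the update server has been deliberately switched off, so the repeated "No HTTP response, retry N" lines look expected rather than a network fault. Below is a rough Python sketch of that retry-then-reschedule shape; the URL, delay, and retry count are assumptions for illustration and are not update_engine's configuration or code.

```python
# Hypothetical sketch of a retry-then-report loop similar in shape to the
# update_engine behaviour logged above; names and intervals are assumptions.
import time
import urllib.error
import urllib.request

RETRY_DELAY_S = 10   # the log shows roughly ten seconds between attempts
MAX_RETRIES = 3

def check_for_update(url: str) -> bool:
    """Try an update check a few times; return True if any attempt gets HTTP 200."""
    for attempt in range(1, MAX_RETRIES + 1):
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                return resp.status == 200
        except (urllib.error.URLError, OSError) as exc:
            print(f"no HTTP response, retry {attempt}: {exc}")
            if attempt < MAX_RETRIES:
                time.sleep(RETRY_DELAY_S)
    return False

if __name__ == "__main__":
    # "disabled" is not a resolvable host, mirroring the failures logged above.
    if not check_for_update("http://disabled/"):
        print("update check failed; a later check would be scheduled")
```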
Jan 17 00:27:27.802683 update_engine[1573]: I20260117 00:27:27.802523 1573 update_check_scheduler.cc:74] Next update check in 44m26s Jan 17 00:27:27.807665 locksmithd[1632]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jan 17 00:27:29.841059 kubelet[2823]: E0117 00:27:29.840286 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:27:49.845373 kubelet[2823]: E0117 00:27:49.840103 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:27:51.840960 kubelet[2823]: E0117 00:27:51.838474 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:27:59.840387 kubelet[2823]: E0117 00:27:59.840003 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:28:11.973564 systemd[1]: Started sshd@9-10.0.0.56:22-10.0.0.1:52794.service - OpenSSH per-connection server daemon (10.0.0.1:52794). Jan 17 00:28:12.047999 sshd[4356]: Accepted publickey for core from 10.0.0.1 port 52794 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:28:12.051147 sshd[4356]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:28:12.062533 systemd-logind[1570]: New session 10 of user core. Jan 17 00:28:12.074586 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 17 00:28:12.325477 sshd[4356]: pam_unix(sshd:session): session closed for user core Jan 17 00:28:12.333215 systemd[1]: sshd@9-10.0.0.56:22-10.0.0.1:52794.service: Deactivated successfully. Jan 17 00:28:12.342298 systemd-logind[1570]: Session 10 logged out. Waiting for processes to exit. Jan 17 00:28:12.343684 systemd[1]: session-10.scope: Deactivated successfully. Jan 17 00:28:12.346236 systemd-logind[1570]: Removed session 10. Jan 17 00:28:14.842741 kubelet[2823]: E0117 00:28:14.842629 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:28:17.408343 systemd[1]: Started sshd@10-10.0.0.56:22-10.0.0.1:60430.service - OpenSSH per-connection server daemon (10.0.0.1:60430). Jan 17 00:28:17.570232 sshd[4373]: Accepted publickey for core from 10.0.0.1 port 60430 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:28:17.596417 sshd[4373]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:28:17.643097 systemd-logind[1570]: New session 11 of user core. Jan 17 00:28:17.664755 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 17 00:28:18.855259 sshd[4373]: pam_unix(sshd:session): session closed for user core Jan 17 00:28:18.862306 systemd[1]: sshd@10-10.0.0.56:22-10.0.0.1:60430.service: Deactivated successfully. Jan 17 00:28:18.869788 systemd-logind[1570]: Session 11 logged out. Waiting for processes to exit. Jan 17 00:28:18.869848 systemd[1]: session-11.scope: Deactivated successfully. Jan 17 00:28:18.902038 systemd-logind[1570]: Removed session 11. 
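The recurring kubelet "Nameserver limits exceeded" warnings indicate the node's resolv.conf lists more nameservers than can be applied, so only the first three (1.1.1.1, 1.0.0.1, 8.8.8.8) are kept and the rest are omitted. A minimal sketch of that trimming follows, assuming the classic three-entry resolver limit; it is an illustration, not kubelet's implementation.

```python
# Hypothetical sketch: keep only the first three nameservers from a
# resolv.conf-style file, mirroring the limit behind the kubelet warning above.
MAX_NAMESERVERS = 3  # assumed classic glibc resolver limit

def applied_nameservers(resolv_conf_text: str) -> list[str]:
    """Return the nameservers that would actually be applied."""
    servers = []
    for line in resolv_conf_text.splitlines():
        parts = line.split()
        if len(parts) >= 2 and parts[0] == "nameserver":
            servers.append(parts[1])
    return servers[:MAX_NAMESERVERS]

if __name__ == "__main__":
    sample = """\
nameserver 1.1.1.1
nameserver 1.0.0.1
nameserver 8.8.8.8
nameserver 8.8.4.4
"""
    print(applied_nameservers(sample))  # ['1.1.1.1', '1.0.0.1', '8.8.8.8']
```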
Jan 17 00:28:23.893434 systemd[1]: Started sshd@11-10.0.0.56:22-10.0.0.1:52674.service - OpenSSH per-connection server daemon (10.0.0.1:52674). Jan 17 00:28:23.958971 sshd[4390]: Accepted publickey for core from 10.0.0.1 port 52674 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:28:23.963849 sshd[4390]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:28:23.997188 systemd-logind[1570]: New session 12 of user core. Jan 17 00:28:24.012821 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 17 00:28:24.330252 sshd[4390]: pam_unix(sshd:session): session closed for user core Jan 17 00:28:24.356408 systemd[1]: sshd@11-10.0.0.56:22-10.0.0.1:52674.service: Deactivated successfully. Jan 17 00:28:24.361804 systemd-logind[1570]: Session 12 logged out. Waiting for processes to exit. Jan 17 00:28:24.362136 systemd[1]: session-12.scope: Deactivated successfully. Jan 17 00:28:24.364547 systemd-logind[1570]: Removed session 12. Jan 17 00:28:24.838796 kubelet[2823]: E0117 00:28:24.838740 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:28:26.856402 kubelet[2823]: E0117 00:28:26.855818 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:28:29.353304 systemd[1]: Started sshd@12-10.0.0.56:22-10.0.0.1:52686.service - OpenSSH per-connection server daemon (10.0.0.1:52686). Jan 17 00:28:29.444147 sshd[4408]: Accepted publickey for core from 10.0.0.1 port 52686 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:28:29.446055 sshd[4408]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:28:29.465723 systemd-logind[1570]: New session 13 of user core. Jan 17 00:28:29.480433 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 17 00:28:29.708675 sshd[4408]: pam_unix(sshd:session): session closed for user core Jan 17 00:28:29.715161 systemd[1]: sshd@12-10.0.0.56:22-10.0.0.1:52686.service: Deactivated successfully. Jan 17 00:28:29.723705 systemd[1]: session-13.scope: Deactivated successfully. Jan 17 00:28:29.742400 systemd-logind[1570]: Session 13 logged out. Waiting for processes to exit. Jan 17 00:28:29.747114 systemd-logind[1570]: Removed session 13. Jan 17 00:28:31.838832 kubelet[2823]: E0117 00:28:31.838658 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:28:34.737277 systemd[1]: Started sshd@13-10.0.0.56:22-10.0.0.1:50000.service - OpenSSH per-connection server daemon (10.0.0.1:50000). Jan 17 00:28:34.783331 sshd[4425]: Accepted publickey for core from 10.0.0.1 port 50000 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:28:34.785271 sshd[4425]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:28:34.799967 systemd-logind[1570]: New session 14 of user core. Jan 17 00:28:34.814224 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 17 00:28:35.053540 sshd[4425]: pam_unix(sshd:session): session closed for user core Jan 17 00:28:35.084768 systemd[1]: sshd@13-10.0.0.56:22-10.0.0.1:50000.service: Deactivated successfully. Jan 17 00:28:35.093747 systemd-logind[1570]: Session 14 logged out. 
Waiting for processes to exit. Jan 17 00:28:35.095096 systemd[1]: session-14.scope: Deactivated successfully. Jan 17 00:28:35.097980 systemd-logind[1570]: Removed session 14. Jan 17 00:28:37.839808 kubelet[2823]: E0117 00:28:37.839543 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:28:40.160489 systemd[1]: Started sshd@14-10.0.0.56:22-10.0.0.1:50006.service - OpenSSH per-connection server daemon (10.0.0.1:50006). Jan 17 00:28:40.340038 sshd[4442]: Accepted publickey for core from 10.0.0.1 port 50006 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:28:40.421401 sshd[4442]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:28:41.073375 systemd-logind[1570]: New session 15 of user core. Jan 17 00:28:41.201781 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 17 00:28:43.514244 sshd[4442]: pam_unix(sshd:session): session closed for user core Jan 17 00:28:43.534713 systemd-logind[1570]: Session 15 logged out. Waiting for processes to exit. Jan 17 00:28:43.569550 systemd[1]: Started sshd@15-10.0.0.56:22-10.0.0.1:53056.service - OpenSSH per-connection server daemon (10.0.0.1:53056). Jan 17 00:28:43.570405 systemd[1]: sshd@14-10.0.0.56:22-10.0.0.1:50006.service: Deactivated successfully. Jan 17 00:28:43.578531 systemd[1]: session-15.scope: Deactivated successfully. Jan 17 00:28:43.586635 systemd-logind[1570]: Removed session 15. Jan 17 00:28:43.640318 sshd[4458]: Accepted publickey for core from 10.0.0.1 port 53056 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:28:43.644520 sshd[4458]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:28:43.657312 systemd-logind[1570]: New session 16 of user core. Jan 17 00:28:43.675166 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 17 00:28:44.126143 sshd[4458]: pam_unix(sshd:session): session closed for user core Jan 17 00:28:44.143191 systemd[1]: Started sshd@16-10.0.0.56:22-10.0.0.1:53062.service - OpenSSH per-connection server daemon (10.0.0.1:53062). Jan 17 00:28:44.144816 systemd[1]: sshd@15-10.0.0.56:22-10.0.0.1:53056.service: Deactivated successfully. Jan 17 00:28:44.164165 systemd[1]: session-16.scope: Deactivated successfully. Jan 17 00:28:44.174353 systemd-logind[1570]: Session 16 logged out. Waiting for processes to exit. Jan 17 00:28:44.178021 systemd-logind[1570]: Removed session 16. Jan 17 00:28:44.246477 sshd[4473]: Accepted publickey for core from 10.0.0.1 port 53062 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:28:44.249778 sshd[4473]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:28:44.262351 systemd-logind[1570]: New session 17 of user core. Jan 17 00:28:44.274276 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 17 00:28:44.567087 sshd[4473]: pam_unix(sshd:session): session closed for user core Jan 17 00:28:44.578117 systemd[1]: sshd@16-10.0.0.56:22-10.0.0.1:53062.service: Deactivated successfully. Jan 17 00:28:44.584585 systemd-logind[1570]: Session 17 logged out. Waiting for processes to exit. Jan 17 00:28:44.585263 systemd[1]: session-17.scope: Deactivated successfully. Jan 17 00:28:44.591083 systemd-logind[1570]: Removed session 17. 
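Each SSH connection in this log follows the same shape: sshd accepts the public key, pam_unix opens a session for core, systemd-logind allocates a numbered session backed by a session-N.scope unit, and the scope is deactivated when the connection closes. Below is a small, hypothetical analysis sketch that pairs the "New session N" and "Removed session N" lines to measure how long each numbered session lasted; the regexes and the hard-coded year are assumptions based on this excerpt's timestamp format.

```python
# Hypothetical log-analysis sketch: pair "New session N" / "Removed session N"
# journal lines to compute per-session durations. Timestamp format and regexes
# are inferred from the excerpt above, not a stable interface.
import re
from datetime import datetime

NEW_RE = re.compile(r"^(\w+ \d+ [\d:.]+) .*New session (\d+) of user")
REMOVED_RE = re.compile(r"^(\w+ \d+ [\d:.]+) .*Removed session (\d+)\.")

def parse_ts(stamp: str) -> datetime:
    # Journal short timestamps omit the year; assume the year seen in this log.
    return datetime.strptime(f"2026 {stamp}", "%Y %b %d %H:%M:%S.%f")

def session_durations(lines):
    opened, durations = {}, {}
    for line in lines:
        if m := NEW_RE.match(line):
            opened[m.group(2)] = parse_ts(m.group(1))
        elif (m := REMOVED_RE.match(line)) and m.group(2) in opened:
            durations[m.group(2)] = parse_ts(m.group(1)) - opened.pop(m.group(2))
    return durations

if __name__ == "__main__":
    sample = [
        "Jan 17 00:28:12.062533 systemd-logind[1570]: New session 10 of user core.",
        "Jan 17 00:28:12.346236 systemd-logind[1570]: Removed session 10.",
    ]
    print(session_durations(sample))  # {'10': timedelta(...)}
```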
Jan 17 00:28:49.611549 systemd[1]: Started sshd@17-10.0.0.56:22-10.0.0.1:53064.service - OpenSSH per-connection server daemon (10.0.0.1:53064). Jan 17 00:28:49.677665 sshd[4491]: Accepted publickey for core from 10.0.0.1 port 53064 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:28:49.680770 sshd[4491]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:28:49.704073 systemd-logind[1570]: New session 18 of user core. Jan 17 00:28:49.715360 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 17 00:28:49.979368 sshd[4491]: pam_unix(sshd:session): session closed for user core Jan 17 00:28:50.000263 systemd[1]: sshd@17-10.0.0.56:22-10.0.0.1:53064.service: Deactivated successfully. Jan 17 00:28:50.005450 systemd[1]: session-18.scope: Deactivated successfully. Jan 17 00:28:50.009120 systemd-logind[1570]: Session 18 logged out. Waiting for processes to exit. Jan 17 00:28:50.011604 systemd-logind[1570]: Removed session 18. Jan 17 00:28:55.022573 systemd[1]: Started sshd@18-10.0.0.56:22-10.0.0.1:43286.service - OpenSSH per-connection server daemon (10.0.0.1:43286). Jan 17 00:28:55.092425 sshd[4512]: Accepted publickey for core from 10.0.0.1 port 43286 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:28:55.096261 sshd[4512]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:28:55.107653 systemd-logind[1570]: New session 19 of user core. Jan 17 00:28:55.118547 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 17 00:28:55.437830 sshd[4512]: pam_unix(sshd:session): session closed for user core Jan 17 00:28:55.452222 systemd[1]: sshd@18-10.0.0.56:22-10.0.0.1:43286.service: Deactivated successfully. Jan 17 00:28:55.459149 systemd-logind[1570]: Session 19 logged out. Waiting for processes to exit. Jan 17 00:28:55.460542 systemd[1]: session-19.scope: Deactivated successfully. Jan 17 00:28:55.482391 systemd-logind[1570]: Removed session 19. Jan 17 00:29:00.474414 systemd[1]: Started sshd@19-10.0.0.56:22-10.0.0.1:43298.service - OpenSSH per-connection server daemon (10.0.0.1:43298). Jan 17 00:29:00.594457 sshd[4529]: Accepted publickey for core from 10.0.0.1 port 43298 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:29:00.614255 sshd[4529]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:29:00.638703 systemd-logind[1570]: New session 20 of user core. Jan 17 00:29:00.650623 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 17 00:29:01.001601 sshd[4529]: pam_unix(sshd:session): session closed for user core Jan 17 00:29:01.017644 systemd[1]: sshd@19-10.0.0.56:22-10.0.0.1:43298.service: Deactivated successfully. Jan 17 00:29:01.034290 systemd[1]: session-20.scope: Deactivated successfully. Jan 17 00:29:01.041071 systemd-logind[1570]: Session 20 logged out. Waiting for processes to exit. Jan 17 00:29:01.044727 systemd-logind[1570]: Removed session 20. Jan 17 00:29:05.838797 kubelet[2823]: E0117 00:29:05.838728 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:29:06.032391 systemd[1]: Started sshd@20-10.0.0.56:22-10.0.0.1:39910.service - OpenSSH per-connection server daemon (10.0.0.1:39910). 
Jan 17 00:29:06.097785 sshd[4545]: Accepted publickey for core from 10.0.0.1 port 39910 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:29:06.104049 sshd[4545]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:29:06.125007 systemd-logind[1570]: New session 21 of user core. Jan 17 00:29:06.139130 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 17 00:29:06.459066 sshd[4545]: pam_unix(sshd:session): session closed for user core Jan 17 00:29:06.467827 systemd[1]: sshd@20-10.0.0.56:22-10.0.0.1:39910.service: Deactivated successfully. Jan 17 00:29:06.481173 systemd-logind[1570]: Session 21 logged out. Waiting for processes to exit. Jan 17 00:29:06.482234 systemd[1]: session-21.scope: Deactivated successfully. Jan 17 00:29:06.485394 systemd-logind[1570]: Removed session 21. Jan 17 00:29:11.481456 systemd[1]: Started sshd@21-10.0.0.56:22-10.0.0.1:39920.service - OpenSSH per-connection server daemon (10.0.0.1:39920). Jan 17 00:29:11.570718 sshd[4560]: Accepted publickey for core from 10.0.0.1 port 39920 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:29:11.574316 sshd[4560]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:29:11.605411 systemd-logind[1570]: New session 22 of user core. Jan 17 00:29:11.627757 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 17 00:29:11.964389 sshd[4560]: pam_unix(sshd:session): session closed for user core Jan 17 00:29:11.974062 systemd-logind[1570]: Session 22 logged out. Waiting for processes to exit. Jan 17 00:29:11.977684 systemd[1]: sshd@21-10.0.0.56:22-10.0.0.1:39920.service: Deactivated successfully. Jan 17 00:29:11.991707 systemd[1]: session-22.scope: Deactivated successfully. Jan 17 00:29:11.998484 systemd-logind[1570]: Removed session 22. Jan 17 00:29:16.985015 systemd[1]: Started sshd@22-10.0.0.56:22-10.0.0.1:58524.service - OpenSSH per-connection server daemon (10.0.0.1:58524). Jan 17 00:29:17.061415 sshd[4575]: Accepted publickey for core from 10.0.0.1 port 58524 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:29:17.064270 sshd[4575]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:29:17.082262 systemd-logind[1570]: New session 23 of user core. Jan 17 00:29:17.102069 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 17 00:29:17.339398 sshd[4575]: pam_unix(sshd:session): session closed for user core Jan 17 00:29:17.357271 systemd[1]: sshd@22-10.0.0.56:22-10.0.0.1:58524.service: Deactivated successfully. Jan 17 00:29:17.365815 systemd[1]: session-23.scope: Deactivated successfully. Jan 17 00:29:17.368374 systemd-logind[1570]: Session 23 logged out. Waiting for processes to exit. Jan 17 00:29:17.374694 systemd-logind[1570]: Removed session 23. Jan 17 00:29:18.841791 kubelet[2823]: E0117 00:29:18.841312 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:29:21.844638 kubelet[2823]: E0117 00:29:21.843622 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:29:22.370530 systemd[1]: Started sshd@23-10.0.0.56:22-10.0.0.1:58526.service - OpenSSH per-connection server daemon (10.0.0.1:58526). 
Jan 17 00:29:22.468367 sshd[4590]: Accepted publickey for core from 10.0.0.1 port 58526 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:29:22.470266 sshd[4590]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:29:22.482480 systemd-logind[1570]: New session 24 of user core. Jan 17 00:29:22.524861 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 17 00:29:22.886003 sshd[4590]: pam_unix(sshd:session): session closed for user core Jan 17 00:29:22.920279 systemd[1]: sshd@23-10.0.0.56:22-10.0.0.1:58526.service: Deactivated successfully. Jan 17 00:29:22.931283 systemd[1]: session-24.scope: Deactivated successfully. Jan 17 00:29:22.934178 systemd-logind[1570]: Session 24 logged out. Waiting for processes to exit. Jan 17 00:29:22.947185 systemd[1]: Started sshd@24-10.0.0.56:22-10.0.0.1:46566.service - OpenSSH per-connection server daemon (10.0.0.1:46566). Jan 17 00:29:22.951215 systemd-logind[1570]: Removed session 24. Jan 17 00:29:23.013847 sshd[4605]: Accepted publickey for core from 10.0.0.1 port 46566 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:29:23.015316 sshd[4605]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:29:23.026834 systemd-logind[1570]: New session 25 of user core. Jan 17 00:29:23.032284 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 17 00:29:23.863068 sshd[4605]: pam_unix(sshd:session): session closed for user core Jan 17 00:29:23.886844 systemd[1]: Started sshd@25-10.0.0.56:22-10.0.0.1:46568.service - OpenSSH per-connection server daemon (10.0.0.1:46568). Jan 17 00:29:23.888256 systemd[1]: sshd@24-10.0.0.56:22-10.0.0.1:46566.service: Deactivated successfully. Jan 17 00:29:23.895314 systemd[1]: session-25.scope: Deactivated successfully. Jan 17 00:29:23.900790 systemd-logind[1570]: Session 25 logged out. Waiting for processes to exit. Jan 17 00:29:23.907451 systemd-logind[1570]: Removed session 25. Jan 17 00:29:23.968452 sshd[4621]: Accepted publickey for core from 10.0.0.1 port 46568 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:29:23.971460 sshd[4621]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:29:24.009549 systemd-logind[1570]: New session 26 of user core. Jan 17 00:29:24.019560 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 17 00:29:25.244380 sshd[4621]: pam_unix(sshd:session): session closed for user core Jan 17 00:29:25.269690 systemd[1]: Started sshd@26-10.0.0.56:22-10.0.0.1:46576.service - OpenSSH per-connection server daemon (10.0.0.1:46576). Jan 17 00:29:25.270711 systemd[1]: sshd@25-10.0.0.56:22-10.0.0.1:46568.service: Deactivated successfully. Jan 17 00:29:25.278035 systemd[1]: session-26.scope: Deactivated successfully. Jan 17 00:29:25.279839 systemd-logind[1570]: Session 26 logged out. Waiting for processes to exit. Jan 17 00:29:25.284063 systemd-logind[1570]: Removed session 26. Jan 17 00:29:25.362657 sshd[4647]: Accepted publickey for core from 10.0.0.1 port 46576 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:29:25.368495 sshd[4647]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:29:25.383053 systemd-logind[1570]: New session 27 of user core. Jan 17 00:29:25.397524 systemd[1]: Started session-27.scope - Session 27 of User core. 
Jan 17 00:29:26.009259 sshd[4647]: pam_unix(sshd:session): session closed for user core Jan 17 00:29:26.030126 systemd[1]: Started sshd@27-10.0.0.56:22-10.0.0.1:46590.service - OpenSSH per-connection server daemon (10.0.0.1:46590). Jan 17 00:29:26.031078 systemd[1]: sshd@26-10.0.0.56:22-10.0.0.1:46576.service: Deactivated successfully. Jan 17 00:29:26.038529 systemd-logind[1570]: Session 27 logged out. Waiting for processes to exit. Jan 17 00:29:26.040358 systemd[1]: session-27.scope: Deactivated successfully. Jan 17 00:29:26.049617 systemd-logind[1570]: Removed session 27. Jan 17 00:29:26.108432 sshd[4660]: Accepted publickey for core from 10.0.0.1 port 46590 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:29:26.110457 sshd[4660]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:29:26.122263 systemd-logind[1570]: New session 28 of user core. Jan 17 00:29:26.132474 systemd[1]: Started session-28.scope - Session 28 of User core. Jan 17 00:29:26.352205 sshd[4660]: pam_unix(sshd:session): session closed for user core Jan 17 00:29:26.362060 systemd[1]: sshd@27-10.0.0.56:22-10.0.0.1:46590.service: Deactivated successfully. Jan 17 00:29:26.366642 systemd[1]: session-28.scope: Deactivated successfully. Jan 17 00:29:26.369447 systemd-logind[1570]: Session 28 logged out. Waiting for processes to exit. Jan 17 00:29:26.372040 systemd-logind[1570]: Removed session 28. Jan 17 00:29:31.383051 systemd[1]: Started sshd@28-10.0.0.56:22-10.0.0.1:46602.service - OpenSSH per-connection server daemon (10.0.0.1:46602). Jan 17 00:29:31.469505 sshd[4681]: Accepted publickey for core from 10.0.0.1 port 46602 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:29:31.473309 sshd[4681]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:29:31.496011 systemd-logind[1570]: New session 29 of user core. Jan 17 00:29:31.513592 systemd[1]: Started session-29.scope - Session 29 of User core. Jan 17 00:29:31.811400 sshd[4681]: pam_unix(sshd:session): session closed for user core Jan 17 00:29:31.821705 systemd[1]: sshd@28-10.0.0.56:22-10.0.0.1:46602.service: Deactivated successfully. Jan 17 00:29:31.827276 systemd-logind[1570]: Session 29 logged out. Waiting for processes to exit. Jan 17 00:29:31.827396 systemd[1]: session-29.scope: Deactivated successfully. Jan 17 00:29:31.833456 systemd-logind[1570]: Removed session 29. Jan 17 00:29:36.831470 systemd[1]: Started sshd@29-10.0.0.56:22-10.0.0.1:52302.service - OpenSSH per-connection server daemon (10.0.0.1:52302). Jan 17 00:29:36.931241 sshd[4697]: Accepted publickey for core from 10.0.0.1 port 52302 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:29:36.930770 sshd[4697]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:29:36.957241 systemd-logind[1570]: New session 30 of user core. Jan 17 00:29:36.970431 systemd[1]: Started session-30.scope - Session 30 of User core. Jan 17 00:29:37.205712 sshd[4697]: pam_unix(sshd:session): session closed for user core Jan 17 00:29:37.220339 systemd[1]: sshd@29-10.0.0.56:22-10.0.0.1:52302.service: Deactivated successfully. Jan 17 00:29:37.224816 systemd-logind[1570]: Session 30 logged out. Waiting for processes to exit. Jan 17 00:29:37.225757 systemd[1]: session-30.scope: Deactivated successfully. Jan 17 00:29:37.231411 systemd-logind[1570]: Removed session 30. 
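systemd starts one sshd@... unit per inbound connection, and the instance name encodes a sequence number plus the listener and peer socket addresses, as in sshd@27-10.0.0.56:22-10.0.0.1:46590.service above. The tiny sketch below splits such a name into its parts; the naming scheme is inferred from these entries only and is not treated as a stable interface.

```python
# Hypothetical sketch: split the per-connection sshd unit names seen above
# into sequence number, local socket, and remote socket. The format is an
# assumption inferred from this log excerpt.
import re

UNIT_RE = re.compile(
    r"^sshd@(?P<seq>\d+)-(?P<local>[\d.]+:\d+)-(?P<remote>[\d.]+:\d+)\.service$"
)

def parse_sshd_unit(unit: str) -> dict:
    m = UNIT_RE.match(unit)
    if not m:
        raise ValueError(f"unexpected unit name: {unit}")
    return m.groupdict()

if __name__ == "__main__":
    print(parse_sshd_unit("sshd@27-10.0.0.56:22-10.0.0.1:46590.service"))
    # {'seq': '27', 'local': '10.0.0.56:22', 'remote': '10.0.0.1:46590'}
```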
Jan 17 00:29:40.839620 kubelet[2823]: E0117 00:29:40.839239 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:29:42.221418 systemd[1]: Started sshd@30-10.0.0.56:22-10.0.0.1:52306.service - OpenSSH per-connection server daemon (10.0.0.1:52306). Jan 17 00:29:42.319234 sshd[4712]: Accepted publickey for core from 10.0.0.1 port 52306 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:29:42.322467 sshd[4712]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:29:42.345236 systemd-logind[1570]: New session 31 of user core. Jan 17 00:29:42.360276 systemd[1]: Started session-31.scope - Session 31 of User core. Jan 17 00:29:42.578405 sshd[4712]: pam_unix(sshd:session): session closed for user core Jan 17 00:29:42.600878 systemd[1]: sshd@30-10.0.0.56:22-10.0.0.1:52306.service: Deactivated successfully. Jan 17 00:29:42.608488 systemd[1]: session-31.scope: Deactivated successfully. Jan 17 00:29:42.608539 systemd-logind[1570]: Session 31 logged out. Waiting for processes to exit. Jan 17 00:29:42.611726 systemd-logind[1570]: Removed session 31. Jan 17 00:29:47.608411 systemd[1]: Started sshd@31-10.0.0.56:22-10.0.0.1:51290.service - OpenSSH per-connection server daemon (10.0.0.1:51290). Jan 17 00:29:47.659082 sshd[4731]: Accepted publickey for core from 10.0.0.1 port 51290 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:29:47.663287 sshd[4731]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:29:47.670554 systemd-logind[1570]: New session 32 of user core. Jan 17 00:29:47.682768 systemd[1]: Started session-32.scope - Session 32 of User core. Jan 17 00:29:47.840009 kubelet[2823]: E0117 00:29:47.839103 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:29:47.902184 sshd[4731]: pam_unix(sshd:session): session closed for user core Jan 17 00:29:47.910184 systemd[1]: sshd@31-10.0.0.56:22-10.0.0.1:51290.service: Deactivated successfully. Jan 17 00:29:47.918101 systemd[1]: session-32.scope: Deactivated successfully. Jan 17 00:29:47.919990 systemd-logind[1570]: Session 32 logged out. Waiting for processes to exit. Jan 17 00:29:47.922044 systemd-logind[1570]: Removed session 32. Jan 17 00:29:48.838054 kubelet[2823]: E0117 00:29:48.837871 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:29:51.841382 kubelet[2823]: E0117 00:29:51.838965 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:29:52.926416 systemd[1]: Started sshd@32-10.0.0.56:22-10.0.0.1:49166.service - OpenSSH per-connection server daemon (10.0.0.1:49166). Jan 17 00:29:52.986126 sshd[4749]: Accepted publickey for core from 10.0.0.1 port 49166 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:29:52.988836 sshd[4749]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:29:53.007020 systemd-logind[1570]: New session 33 of user core. Jan 17 00:29:53.014787 systemd[1]: Started session-33.scope - Session 33 of User core. 
Jan 17 00:29:53.310316 sshd[4749]: pam_unix(sshd:session): session closed for user core Jan 17 00:29:53.320675 systemd[1]: sshd@32-10.0.0.56:22-10.0.0.1:49166.service: Deactivated successfully. Jan 17 00:29:53.323780 systemd[1]: session-33.scope: Deactivated successfully. Jan 17 00:29:53.329620 systemd-logind[1570]: Session 33 logged out. Waiting for processes to exit. Jan 17 00:29:53.332173 systemd-logind[1570]: Removed session 33. Jan 17 00:29:57.837934 kubelet[2823]: E0117 00:29:57.837698 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:29:58.332462 systemd[1]: Started sshd@33-10.0.0.56:22-10.0.0.1:49178.service - OpenSSH per-connection server daemon (10.0.0.1:49178). Jan 17 00:29:58.398512 sshd[4766]: Accepted publickey for core from 10.0.0.1 port 49178 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:29:58.402682 sshd[4766]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:29:58.427161 systemd-logind[1570]: New session 34 of user core. Jan 17 00:29:58.439684 systemd[1]: Started session-34.scope - Session 34 of User core. Jan 17 00:29:58.731178 sshd[4766]: pam_unix(sshd:session): session closed for user core Jan 17 00:29:58.756346 systemd[1]: Started sshd@34-10.0.0.56:22-10.0.0.1:49180.service - OpenSSH per-connection server daemon (10.0.0.1:49180). Jan 17 00:29:58.757106 systemd[1]: sshd@33-10.0.0.56:22-10.0.0.1:49178.service: Deactivated successfully. Jan 17 00:29:58.765751 systemd-logind[1570]: Session 34 logged out. Waiting for processes to exit. Jan 17 00:29:58.768200 systemd[1]: session-34.scope: Deactivated successfully. Jan 17 00:29:58.776340 systemd-logind[1570]: Removed session 34. Jan 17 00:29:58.873391 sshd[4778]: Accepted publickey for core from 10.0.0.1 port 49180 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:29:58.896440 sshd[4778]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:29:58.919672 systemd-logind[1570]: New session 35 of user core. Jan 17 00:29:58.933702 systemd[1]: Started session-35.scope - Session 35 of User core. 
Jan 17 00:30:00.764015 containerd[1592]: time="2026-01-17T00:30:00.762554871Z" level=info msg="StopContainer for \"8bf76a5582b761ce8455a0148d7cda6f5a63a006e2b35759591682acd77c82af\" with timeout 30 (s)" Jan 17 00:30:00.764015 containerd[1592]: time="2026-01-17T00:30:00.763289682Z" level=info msg="Stop container \"8bf76a5582b761ce8455a0148d7cda6f5a63a006e2b35759591682acd77c82af\" with signal terminated" Jan 17 00:30:00.843978 containerd[1592]: time="2026-01-17T00:30:00.843622798Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 17 00:30:00.859151 containerd[1592]: time="2026-01-17T00:30:00.858958023Z" level=info msg="StopContainer for \"ba0b723658460384d9cc397a85b4ffcf1fd2b4f1256f165beb3027fa8fd691b7\" with timeout 2 (s)" Jan 17 00:30:00.859493 containerd[1592]: time="2026-01-17T00:30:00.859455561Z" level=info msg="Stop container \"ba0b723658460384d9cc397a85b4ffcf1fd2b4f1256f165beb3027fa8fd691b7\" with signal terminated" Jan 17 00:30:00.877595 systemd-networkd[1254]: lxc_health: Link DOWN Jan 17 00:30:00.877606 systemd-networkd[1254]: lxc_health: Lost carrier Jan 17 00:30:00.889131 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8bf76a5582b761ce8455a0148d7cda6f5a63a006e2b35759591682acd77c82af-rootfs.mount: Deactivated successfully. Jan 17 00:30:00.910317 containerd[1592]: time="2026-01-17T00:30:00.910235368Z" level=info msg="shim disconnected" id=8bf76a5582b761ce8455a0148d7cda6f5a63a006e2b35759591682acd77c82af namespace=k8s.io Jan 17 00:30:00.910317 containerd[1592]: time="2026-01-17T00:30:00.910304250Z" level=warning msg="cleaning up after shim disconnected" id=8bf76a5582b761ce8455a0148d7cda6f5a63a006e2b35759591682acd77c82af namespace=k8s.io Jan 17 00:30:00.910317 containerd[1592]: time="2026-01-17T00:30:00.910321761Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:30:00.965288 containerd[1592]: time="2026-01-17T00:30:00.965228198Z" level=info msg="StopContainer for \"8bf76a5582b761ce8455a0148d7cda6f5a63a006e2b35759591682acd77c82af\" returns successfully" Jan 17 00:30:00.965503 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ba0b723658460384d9cc397a85b4ffcf1fd2b4f1256f165beb3027fa8fd691b7-rootfs.mount: Deactivated successfully. Jan 17 00:30:00.974265 containerd[1592]: time="2026-01-17T00:30:00.974142985Z" level=info msg="StopPodSandbox for \"1d6e8bed99cad12524cdb9175177ce76bd42d5ed852312ad5737cf21be2ed836\"" Jan 17 00:30:00.974265 containerd[1592]: time="2026-01-17T00:30:00.974230290Z" level=info msg="Container to stop \"8bf76a5582b761ce8455a0148d7cda6f5a63a006e2b35759591682acd77c82af\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 00:30:00.978677 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1d6e8bed99cad12524cdb9175177ce76bd42d5ed852312ad5737cf21be2ed836-shm.mount: Deactivated successfully. 
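The teardown above asks containerd to stop containers "with timeout 30" and "with signal terminated": send SIGTERM first and escalate only if the container has not exited when the timeout expires. A generic sketch of that terminate-then-kill pattern applied to an ordinary child process follows; it illustrates the pattern only and is not containerd's stop path.

```python
# Hypothetical sketch of the terminate-then-kill pattern implied by the
# "StopContainer ... with timeout 30" / "signal terminated" entries above.
# This manages a plain subprocess, not a container runtime.
import subprocess

def stop_with_timeout(proc: subprocess.Popen, timeout_s: float = 30.0) -> int:
    """Send SIGTERM, wait up to timeout_s, then SIGKILL if still running."""
    proc.terminate()                      # SIGTERM: ask the process to exit
    try:
        return proc.wait(timeout=timeout_s)
    except subprocess.TimeoutExpired:
        proc.kill()                       # SIGKILL: force it to stop
        return proc.wait()

if __name__ == "__main__":
    # Requires a POSIX `sleep` binary; exit status is negative on signal death.
    child = subprocess.Popen(["sleep", "300"])
    print("exit status:", stop_with_timeout(child, timeout_s=5.0))
```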
Jan 17 00:30:00.991752 containerd[1592]: time="2026-01-17T00:30:00.991649966Z" level=info msg="shim disconnected" id=ba0b723658460384d9cc397a85b4ffcf1fd2b4f1256f165beb3027fa8fd691b7 namespace=k8s.io Jan 17 00:30:00.991752 containerd[1592]: time="2026-01-17T00:30:00.991734567Z" level=warning msg="cleaning up after shim disconnected" id=ba0b723658460384d9cc397a85b4ffcf1fd2b4f1256f165beb3027fa8fd691b7 namespace=k8s.io Jan 17 00:30:00.991752 containerd[1592]: time="2026-01-17T00:30:00.991748362Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:30:01.034519 containerd[1592]: time="2026-01-17T00:30:01.034107440Z" level=info msg="StopContainer for \"ba0b723658460384d9cc397a85b4ffcf1fd2b4f1256f165beb3027fa8fd691b7\" returns successfully" Jan 17 00:30:01.034871 containerd[1592]: time="2026-01-17T00:30:01.034825573Z" level=info msg="StopPodSandbox for \"f3ad264b9ffcd9fe2a9821a5db578f30bc33ffd729cfdedfbd417bbe0d52e896\"" Jan 17 00:30:01.035085 containerd[1592]: time="2026-01-17T00:30:01.034869041Z" level=info msg="Container to stop \"f482004310e2c29152df1a5cbd4d8ec138f5214fe5c6809880bad7d03a16f43a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 00:30:01.035085 containerd[1592]: time="2026-01-17T00:30:01.034888235Z" level=info msg="Container to stop \"a1bd9dfd2290cef26d89b6f3f08a764f3639afef80fb62d78b7faa7df7ec1804\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 00:30:01.035085 containerd[1592]: time="2026-01-17T00:30:01.034987372Z" level=info msg="Container to stop \"ba0b723658460384d9cc397a85b4ffcf1fd2b4f1256f165beb3027fa8fd691b7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 00:30:01.035085 containerd[1592]: time="2026-01-17T00:30:01.035005375Z" level=info msg="Container to stop \"f34d913463f2ab051cfb7f2905bbe4b6a1043d826b2c76465988fb9b346cac3e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 00:30:01.035085 containerd[1592]: time="2026-01-17T00:30:01.035061915Z" level=info msg="Container to stop \"212f1cf7e3988e5a7e7883fac88c475c998172623df6044d7c2641f917ef2205\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 00:30:01.073211 containerd[1592]: time="2026-01-17T00:30:01.072812761Z" level=info msg="shim disconnected" id=1d6e8bed99cad12524cdb9175177ce76bd42d5ed852312ad5737cf21be2ed836 namespace=k8s.io Jan 17 00:30:01.073211 containerd[1592]: time="2026-01-17T00:30:01.072873340Z" level=warning msg="cleaning up after shim disconnected" id=1d6e8bed99cad12524cdb9175177ce76bd42d5ed852312ad5737cf21be2ed836 namespace=k8s.io Jan 17 00:30:01.073211 containerd[1592]: time="2026-01-17T00:30:01.072887806Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:30:01.116216 containerd[1592]: time="2026-01-17T00:30:01.113893616Z" level=info msg="TearDown network for sandbox \"1d6e8bed99cad12524cdb9175177ce76bd42d5ed852312ad5737cf21be2ed836\" successfully" Jan 17 00:30:01.116216 containerd[1592]: time="2026-01-17T00:30:01.116152103Z" level=info msg="StopPodSandbox for \"1d6e8bed99cad12524cdb9175177ce76bd42d5ed852312ad5737cf21be2ed836\" returns successfully" Jan 17 00:30:01.133315 containerd[1592]: time="2026-01-17T00:30:01.132885380Z" level=info msg="shim disconnected" id=f3ad264b9ffcd9fe2a9821a5db578f30bc33ffd729cfdedfbd417bbe0d52e896 namespace=k8s.io Jan 17 00:30:01.133315 containerd[1592]: time="2026-01-17T00:30:01.133080880Z" level=warning msg="cleaning up after shim disconnected" 
id=f3ad264b9ffcd9fe2a9821a5db578f30bc33ffd729cfdedfbd417bbe0d52e896 namespace=k8s.io Jan 17 00:30:01.133315 containerd[1592]: time="2026-01-17T00:30:01.133151707Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:30:01.190671 containerd[1592]: time="2026-01-17T00:30:01.190560151Z" level=info msg="TearDown network for sandbox \"f3ad264b9ffcd9fe2a9821a5db578f30bc33ffd729cfdedfbd417bbe0d52e896\" successfully" Jan 17 00:30:01.190671 containerd[1592]: time="2026-01-17T00:30:01.190644823Z" level=info msg="StopPodSandbox for \"f3ad264b9ffcd9fe2a9821a5db578f30bc33ffd729cfdedfbd417bbe0d52e896\" returns successfully" Jan 17 00:30:01.228686 kubelet[2823]: I0117 00:30:01.227766 2823 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-znmqf\" (UniqueName: \"kubernetes.io/projected/3bb28647-22cf-468a-9608-1a80b7f73111-kube-api-access-znmqf\") pod \"3bb28647-22cf-468a-9608-1a80b7f73111\" (UID: \"3bb28647-22cf-468a-9608-1a80b7f73111\") " Jan 17 00:30:01.228686 kubelet[2823]: I0117 00:30:01.227839 2823 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3bb28647-22cf-468a-9608-1a80b7f73111-cilium-config-path\") pod \"3bb28647-22cf-468a-9608-1a80b7f73111\" (UID: \"3bb28647-22cf-468a-9608-1a80b7f73111\") " Jan 17 00:30:01.241579 kubelet[2823]: I0117 00:30:01.241392 2823 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3bb28647-22cf-468a-9608-1a80b7f73111-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "3bb28647-22cf-468a-9608-1a80b7f73111" (UID: "3bb28647-22cf-468a-9608-1a80b7f73111"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 17 00:30:01.247353 kubelet[2823]: I0117 00:30:01.247225 2823 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bb28647-22cf-468a-9608-1a80b7f73111-kube-api-access-znmqf" (OuterVolumeSpecName: "kube-api-access-znmqf") pod "3bb28647-22cf-468a-9608-1a80b7f73111" (UID: "3bb28647-22cf-468a-9608-1a80b7f73111"). InnerVolumeSpecName "kube-api-access-znmqf". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 17 00:30:01.329790 kubelet[2823]: I0117 00:30:01.329562 2823 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9bf2e48e-9f69-4d62-8db6-7ff6bdf596cd-clustermesh-secrets\") pod \"9bf2e48e-9f69-4d62-8db6-7ff6bdf596cd\" (UID: \"9bf2e48e-9f69-4d62-8db6-7ff6bdf596cd\") " Jan 17 00:30:01.329790 kubelet[2823]: I0117 00:30:01.329657 2823 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9bf2e48e-9f69-4d62-8db6-7ff6bdf596cd-hubble-tls\") pod \"9bf2e48e-9f69-4d62-8db6-7ff6bdf596cd\" (UID: \"9bf2e48e-9f69-4d62-8db6-7ff6bdf596cd\") " Jan 17 00:30:01.329790 kubelet[2823]: I0117 00:30:01.329686 2823 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9bf2e48e-9f69-4d62-8db6-7ff6bdf596cd-lib-modules\") pod \"9bf2e48e-9f69-4d62-8db6-7ff6bdf596cd\" (UID: \"9bf2e48e-9f69-4d62-8db6-7ff6bdf596cd\") " Jan 17 00:30:01.329790 kubelet[2823]: I0117 00:30:01.329709 2823 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9bf2e48e-9f69-4d62-8db6-7ff6bdf596cd-host-proc-sys-net\") pod \"9bf2e48e-9f69-4d62-8db6-7ff6bdf596cd\" (UID: \"9bf2e48e-9f69-4d62-8db6-7ff6bdf596cd\") " Jan 17 00:30:01.329790 kubelet[2823]: I0117 00:30:01.329732 2823 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9bf2e48e-9f69-4d62-8db6-7ff6bdf596cd-cilium-run\") pod \"9bf2e48e-9f69-4d62-8db6-7ff6bdf596cd\" (UID: \"9bf2e48e-9f69-4d62-8db6-7ff6bdf596cd\") " Jan 17 00:30:01.329790 kubelet[2823]: I0117 00:30:01.329760 2823 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4zmvj\" (UniqueName: \"kubernetes.io/projected/9bf2e48e-9f69-4d62-8db6-7ff6bdf596cd-kube-api-access-4zmvj\") pod \"9bf2e48e-9f69-4d62-8db6-7ff6bdf596cd\" (UID: \"9bf2e48e-9f69-4d62-8db6-7ff6bdf596cd\") " Jan 17 00:30:01.330226 kubelet[2823]: I0117 00:30:01.329780 2823 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9bf2e48e-9f69-4d62-8db6-7ff6bdf596cd-hostproc\") pod \"9bf2e48e-9f69-4d62-8db6-7ff6bdf596cd\" (UID: \"9bf2e48e-9f69-4d62-8db6-7ff6bdf596cd\") " Jan 17 00:30:01.330226 kubelet[2823]: I0117 00:30:01.329803 2823 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9bf2e48e-9f69-4d62-8db6-7ff6bdf596cd-cilium-config-path\") pod \"9bf2e48e-9f69-4d62-8db6-7ff6bdf596cd\" (UID: \"9bf2e48e-9f69-4d62-8db6-7ff6bdf596cd\") " Jan 17 00:30:01.330226 kubelet[2823]: I0117 00:30:01.329823 2823 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9bf2e48e-9f69-4d62-8db6-7ff6bdf596cd-bpf-maps\") pod \"9bf2e48e-9f69-4d62-8db6-7ff6bdf596cd\" (UID: \"9bf2e48e-9f69-4d62-8db6-7ff6bdf596cd\") " Jan 17 00:30:01.330226 kubelet[2823]: I0117 00:30:01.329842 2823 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9bf2e48e-9f69-4d62-8db6-7ff6bdf596cd-etc-cni-netd\") pod \"9bf2e48e-9f69-4d62-8db6-7ff6bdf596cd\" (UID: \"9bf2e48e-9f69-4d62-8db6-7ff6bdf596cd\") " Jan 
17 00:30:01.330226 kubelet[2823]: I0117 00:30:01.329860 2823 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9bf2e48e-9f69-4d62-8db6-7ff6bdf596cd-cni-path\") pod \"9bf2e48e-9f69-4d62-8db6-7ff6bdf596cd\" (UID: \"9bf2e48e-9f69-4d62-8db6-7ff6bdf596cd\") " Jan 17 00:30:01.330226 kubelet[2823]: I0117 00:30:01.329882 2823 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9bf2e48e-9f69-4d62-8db6-7ff6bdf596cd-xtables-lock\") pod \"9bf2e48e-9f69-4d62-8db6-7ff6bdf596cd\" (UID: \"9bf2e48e-9f69-4d62-8db6-7ff6bdf596cd\") " Jan 17 00:30:01.330360 kubelet[2823]: I0117 00:30:01.330127 2823 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9bf2e48e-9f69-4d62-8db6-7ff6bdf596cd-cilium-cgroup\") pod \"9bf2e48e-9f69-4d62-8db6-7ff6bdf596cd\" (UID: \"9bf2e48e-9f69-4d62-8db6-7ff6bdf596cd\") " Jan 17 00:30:01.330360 kubelet[2823]: I0117 00:30:01.330154 2823 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9bf2e48e-9f69-4d62-8db6-7ff6bdf596cd-host-proc-sys-kernel\") pod \"9bf2e48e-9f69-4d62-8db6-7ff6bdf596cd\" (UID: \"9bf2e48e-9f69-4d62-8db6-7ff6bdf596cd\") " Jan 17 00:30:01.330360 kubelet[2823]: I0117 00:30:01.330212 2823 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-znmqf\" (UniqueName: \"kubernetes.io/projected/3bb28647-22cf-468a-9608-1a80b7f73111-kube-api-access-znmqf\") on node \"localhost\" DevicePath \"\"" Jan 17 00:30:01.330360 kubelet[2823]: I0117 00:30:01.330228 2823 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3bb28647-22cf-468a-9608-1a80b7f73111-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jan 17 00:30:01.330360 kubelet[2823]: I0117 00:30:01.330236 2823 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9bf2e48e-9f69-4d62-8db6-7ff6bdf596cd-hostproc" (OuterVolumeSpecName: "hostproc") pod "9bf2e48e-9f69-4d62-8db6-7ff6bdf596cd" (UID: "9bf2e48e-9f69-4d62-8db6-7ff6bdf596cd"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 17 00:30:01.330360 kubelet[2823]: I0117 00:30:01.330261 2823 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9bf2e48e-9f69-4d62-8db6-7ff6bdf596cd-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "9bf2e48e-9f69-4d62-8db6-7ff6bdf596cd" (UID: "9bf2e48e-9f69-4d62-8db6-7ff6bdf596cd"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 17 00:30:01.330489 kubelet[2823]: I0117 00:30:01.330285 2823 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9bf2e48e-9f69-4d62-8db6-7ff6bdf596cd-cni-path" (OuterVolumeSpecName: "cni-path") pod "9bf2e48e-9f69-4d62-8db6-7ff6bdf596cd" (UID: "9bf2e48e-9f69-4d62-8db6-7ff6bdf596cd"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 17 00:30:01.330489 kubelet[2823]: I0117 00:30:01.330306 2823 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9bf2e48e-9f69-4d62-8db6-7ff6bdf596cd-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "9bf2e48e-9f69-4d62-8db6-7ff6bdf596cd" (UID: "9bf2e48e-9f69-4d62-8db6-7ff6bdf596cd"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 17 00:30:01.330489 kubelet[2823]: I0117 00:30:01.330325 2823 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9bf2e48e-9f69-4d62-8db6-7ff6bdf596cd-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "9bf2e48e-9f69-4d62-8db6-7ff6bdf596cd" (UID: "9bf2e48e-9f69-4d62-8db6-7ff6bdf596cd"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 17 00:30:01.330489 kubelet[2823]: I0117 00:30:01.330346 2823 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9bf2e48e-9f69-4d62-8db6-7ff6bdf596cd-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "9bf2e48e-9f69-4d62-8db6-7ff6bdf596cd" (UID: "9bf2e48e-9f69-4d62-8db6-7ff6bdf596cd"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 17 00:30:01.330489 kubelet[2823]: I0117 00:30:01.330404 2823 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9bf2e48e-9f69-4d62-8db6-7ff6bdf596cd-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "9bf2e48e-9f69-4d62-8db6-7ff6bdf596cd" (UID: "9bf2e48e-9f69-4d62-8db6-7ff6bdf596cd"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 17 00:30:01.331714 kubelet[2823]: I0117 00:30:01.331338 2823 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9bf2e48e-9f69-4d62-8db6-7ff6bdf596cd-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "9bf2e48e-9f69-4d62-8db6-7ff6bdf596cd" (UID: "9bf2e48e-9f69-4d62-8db6-7ff6bdf596cd"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 17 00:30:01.339207 kubelet[2823]: I0117 00:30:01.338374 2823 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9bf2e48e-9f69-4d62-8db6-7ff6bdf596cd-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "9bf2e48e-9f69-4d62-8db6-7ff6bdf596cd" (UID: "9bf2e48e-9f69-4d62-8db6-7ff6bdf596cd"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 17 00:30:01.339207 kubelet[2823]: I0117 00:30:01.338563 2823 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9bf2e48e-9f69-4d62-8db6-7ff6bdf596cd-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "9bf2e48e-9f69-4d62-8db6-7ff6bdf596cd" (UID: "9bf2e48e-9f69-4d62-8db6-7ff6bdf596cd"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 17 00:30:01.341423 kubelet[2823]: I0117 00:30:01.341387 2823 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bf2e48e-9f69-4d62-8db6-7ff6bdf596cd-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "9bf2e48e-9f69-4d62-8db6-7ff6bdf596cd" (UID: "9bf2e48e-9f69-4d62-8db6-7ff6bdf596cd"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 17 00:30:01.348262 kubelet[2823]: I0117 00:30:01.348215 2823 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9bf2e48e-9f69-4d62-8db6-7ff6bdf596cd-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "9bf2e48e-9f69-4d62-8db6-7ff6bdf596cd" (UID: "9bf2e48e-9f69-4d62-8db6-7ff6bdf596cd"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 17 00:30:01.349041 kubelet[2823]: I0117 00:30:01.348845 2823 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9bf2e48e-9f69-4d62-8db6-7ff6bdf596cd-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "9bf2e48e-9f69-4d62-8db6-7ff6bdf596cd" (UID: "9bf2e48e-9f69-4d62-8db6-7ff6bdf596cd"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 17 00:30:01.349041 kubelet[2823]: I0117 00:30:01.348982 2823 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9bf2e48e-9f69-4d62-8db6-7ff6bdf596cd-kube-api-access-4zmvj" (OuterVolumeSpecName: "kube-api-access-4zmvj") pod "9bf2e48e-9f69-4d62-8db6-7ff6bdf596cd" (UID: "9bf2e48e-9f69-4d62-8db6-7ff6bdf596cd"). InnerVolumeSpecName "kube-api-access-4zmvj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 17 00:30:01.433205 kubelet[2823]: I0117 00:30:01.431607 2823 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9bf2e48e-9f69-4d62-8db6-7ff6bdf596cd-cni-path\") on node \"localhost\" DevicePath \"\"" Jan 17 00:30:01.433205 kubelet[2823]: I0117 00:30:01.431687 2823 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9bf2e48e-9f69-4d62-8db6-7ff6bdf596cd-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jan 17 00:30:01.433205 kubelet[2823]: I0117 00:30:01.431712 2823 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9bf2e48e-9f69-4d62-8db6-7ff6bdf596cd-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jan 17 00:30:01.433205 kubelet[2823]: I0117 00:30:01.431730 2823 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9bf2e48e-9f69-4d62-8db6-7ff6bdf596cd-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jan 17 00:30:01.433205 kubelet[2823]: I0117 00:30:01.431749 2823 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9bf2e48e-9f69-4d62-8db6-7ff6bdf596cd-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jan 17 00:30:01.433205 kubelet[2823]: I0117 00:30:01.431765 2823 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9bf2e48e-9f69-4d62-8db6-7ff6bdf596cd-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jan 17 00:30:01.433205 kubelet[2823]: I0117 00:30:01.431780 2823 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9bf2e48e-9f69-4d62-8db6-7ff6bdf596cd-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jan 17 00:30:01.433205 kubelet[2823]: I0117 00:30:01.431794 2823 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9bf2e48e-9f69-4d62-8db6-7ff6bdf596cd-lib-modules\") on node \"localhost\" DevicePath \"\"" Jan 17 
00:30:01.433609 kubelet[2823]: I0117 00:30:01.431809 2823 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9bf2e48e-9f69-4d62-8db6-7ff6bdf596cd-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jan 17 00:30:01.433609 kubelet[2823]: I0117 00:30:01.431821 2823 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9bf2e48e-9f69-4d62-8db6-7ff6bdf596cd-cilium-run\") on node \"localhost\" DevicePath \"\"" Jan 17 00:30:01.433609 kubelet[2823]: I0117 00:30:01.431834 2823 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9bf2e48e-9f69-4d62-8db6-7ff6bdf596cd-hostproc\") on node \"localhost\" DevicePath \"\"" Jan 17 00:30:01.433609 kubelet[2823]: I0117 00:30:01.431848 2823 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9bf2e48e-9f69-4d62-8db6-7ff6bdf596cd-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jan 17 00:30:01.433609 kubelet[2823]: I0117 00:30:01.431862 2823 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9bf2e48e-9f69-4d62-8db6-7ff6bdf596cd-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jan 17 00:30:01.433609 kubelet[2823]: I0117 00:30:01.431874 2823 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4zmvj\" (UniqueName: \"kubernetes.io/projected/9bf2e48e-9f69-4d62-8db6-7ff6bdf596cd-kube-api-access-4zmvj\") on node \"localhost\" DevicePath \"\"" Jan 17 00:30:01.811327 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f3ad264b9ffcd9fe2a9821a5db578f30bc33ffd729cfdedfbd417bbe0d52e896-rootfs.mount: Deactivated successfully. Jan 17 00:30:01.811636 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1d6e8bed99cad12524cdb9175177ce76bd42d5ed852312ad5737cf21be2ed836-rootfs.mount: Deactivated successfully. Jan 17 00:30:01.811834 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f3ad264b9ffcd9fe2a9821a5db578f30bc33ffd729cfdedfbd417bbe0d52e896-shm.mount: Deactivated successfully. Jan 17 00:30:01.812440 systemd[1]: var-lib-kubelet-pods-3bb28647\x2d22cf\x2d468a\x2d9608\x2d1a80b7f73111-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dznmqf.mount: Deactivated successfully. Jan 17 00:30:01.813545 systemd[1]: var-lib-kubelet-pods-9bf2e48e\x2d9f69\x2d4d62\x2d8db6\x2d7ff6bdf596cd-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4zmvj.mount: Deactivated successfully. Jan 17 00:30:01.815158 systemd[1]: var-lib-kubelet-pods-9bf2e48e\x2d9f69\x2d4d62\x2d8db6\x2d7ff6bdf596cd-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 17 00:30:01.815356 systemd[1]: var-lib-kubelet-pods-9bf2e48e\x2d9f69\x2d4d62\x2d8db6\x2d7ff6bdf596cd-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
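[Editor's note] The var-lib-kubelet-pods-… mount units deactivated just above carry systemd's unit-name escaping, which is why every '-' inside the pod UID shows up as \x2d and the '~' in kubernetes.io~projected shows up as \x7e. Below is a rough, self-contained Go sketch of that mapping, simplified to the path-to-unit-name case seen in these entries (it skips edge cases such as a leading dot, which systemd-escape --path also handles); the function name escapePath is illustrative and is not systemd or kubelet code.

package main

import (
	"fmt"
	"strings"
)

// escapePath approximates the systemd path escaping visible in the mount unit
// names above: drop the leading '/', turn the remaining '/' separators into
// '-', and hex-escape every other byte outside [A-Za-z0-9_.] as \xXX.
func escapePath(p string) string {
	trimmed := strings.TrimPrefix(p, "/")
	var b strings.Builder
	for i := 0; i < len(trimmed); i++ {
		c := trimmed[i]
		switch {
		case c == '/':
			b.WriteByte('-')
		case (c >= 'a' && c <= 'z') || (c >= 'A' && c <= 'Z') ||
			(c >= '0' && c <= '9') || c == '_' || c == '.':
			b.WriteByte(c)
		default:
			fmt.Fprintf(&b, `\x%02x`, c)
		}
	}
	return b.String()
}

func main() {
	// Reconstructs one of the unit names from the journal entries above.
	fmt.Println(escapePath("/var/lib/kubelet/pods/9bf2e48e-9f69-4d62-8db6-7ff6bdf596cd/volumes/kubernetes.io~projected/hubble-tls") + ".mount")
}

Run as written, this prints var-lib-kubelet-pods-9bf2e48e\x2d9f69\x2d4d62\x2d8db6\x2d7ff6bdf596cd-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount, matching the unit name in the journal entry above.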
Jan 17 00:30:01.825100 kubelet[2823]: I0117 00:30:01.821000 2823 scope.go:117] "RemoveContainer" containerID="8bf76a5582b761ce8455a0148d7cda6f5a63a006e2b35759591682acd77c82af" Jan 17 00:30:01.831814 containerd[1592]: time="2026-01-17T00:30:01.831437511Z" level=info msg="RemoveContainer for \"8bf76a5582b761ce8455a0148d7cda6f5a63a006e2b35759591682acd77c82af\"" Jan 17 00:30:01.852357 containerd[1592]: time="2026-01-17T00:30:01.850592686Z" level=info msg="RemoveContainer for \"8bf76a5582b761ce8455a0148d7cda6f5a63a006e2b35759591682acd77c82af\" returns successfully" Jan 17 00:30:01.858522 kubelet[2823]: I0117 00:30:01.858491 2823 scope.go:117] "RemoveContainer" containerID="8bf76a5582b761ce8455a0148d7cda6f5a63a006e2b35759591682acd77c82af" Jan 17 00:30:01.859167 containerd[1592]: time="2026-01-17T00:30:01.859077956Z" level=error msg="ContainerStatus for \"8bf76a5582b761ce8455a0148d7cda6f5a63a006e2b35759591682acd77c82af\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8bf76a5582b761ce8455a0148d7cda6f5a63a006e2b35759591682acd77c82af\": not found" Jan 17 00:30:01.860626 kubelet[2823]: E0117 00:30:01.859661 2823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8bf76a5582b761ce8455a0148d7cda6f5a63a006e2b35759591682acd77c82af\": not found" containerID="8bf76a5582b761ce8455a0148d7cda6f5a63a006e2b35759591682acd77c82af" Jan 17 00:30:01.860626 kubelet[2823]: I0117 00:30:01.859705 2823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8bf76a5582b761ce8455a0148d7cda6f5a63a006e2b35759591682acd77c82af"} err="failed to get container status \"8bf76a5582b761ce8455a0148d7cda6f5a63a006e2b35759591682acd77c82af\": rpc error: code = NotFound desc = an error occurred when try to find container \"8bf76a5582b761ce8455a0148d7cda6f5a63a006e2b35759591682acd77c82af\": not found" Jan 17 00:30:01.860626 kubelet[2823]: I0117 00:30:01.859806 2823 scope.go:117] "RemoveContainer" containerID="ba0b723658460384d9cc397a85b4ffcf1fd2b4f1256f165beb3027fa8fd691b7" Jan 17 00:30:01.864504 containerd[1592]: time="2026-01-17T00:30:01.864340443Z" level=info msg="RemoveContainer for \"ba0b723658460384d9cc397a85b4ffcf1fd2b4f1256f165beb3027fa8fd691b7\"" Jan 17 00:30:01.876349 containerd[1592]: time="2026-01-17T00:30:01.876217518Z" level=info msg="RemoveContainer for \"ba0b723658460384d9cc397a85b4ffcf1fd2b4f1256f165beb3027fa8fd691b7\" returns successfully" Jan 17 00:30:01.876669 kubelet[2823]: I0117 00:30:01.876586 2823 scope.go:117] "RemoveContainer" containerID="212f1cf7e3988e5a7e7883fac88c475c998172623df6044d7c2641f917ef2205" Jan 17 00:30:01.884793 containerd[1592]: time="2026-01-17T00:30:01.880301346Z" level=info msg="RemoveContainer for \"212f1cf7e3988e5a7e7883fac88c475c998172623df6044d7c2641f917ef2205\"" Jan 17 00:30:01.898099 containerd[1592]: time="2026-01-17T00:30:01.895781515Z" level=info msg="RemoveContainer for \"212f1cf7e3988e5a7e7883fac88c475c998172623df6044d7c2641f917ef2205\" returns successfully" Jan 17 00:30:01.899673 kubelet[2823]: I0117 00:30:01.899034 2823 scope.go:117] "RemoveContainer" containerID="a1bd9dfd2290cef26d89b6f3f08a764f3639afef80fb62d78b7faa7df7ec1804" Jan 17 00:30:01.903022 containerd[1592]: time="2026-01-17T00:30:01.902559572Z" level=info msg="RemoveContainer for \"a1bd9dfd2290cef26d89b6f3f08a764f3639afef80fb62d78b7faa7df7ec1804\"" Jan 17 00:30:01.919055 containerd[1592]: time="2026-01-17T00:30:01.918891565Z" level=info 
msg="RemoveContainer for \"a1bd9dfd2290cef26d89b6f3f08a764f3639afef80fb62d78b7faa7df7ec1804\" returns successfully" Jan 17 00:30:01.919205 kubelet[2823]: I0117 00:30:01.919181 2823 scope.go:117] "RemoveContainer" containerID="f482004310e2c29152df1a5cbd4d8ec138f5214fe5c6809880bad7d03a16f43a" Jan 17 00:30:01.924986 containerd[1592]: time="2026-01-17T00:30:01.924420543Z" level=info msg="RemoveContainer for \"f482004310e2c29152df1a5cbd4d8ec138f5214fe5c6809880bad7d03a16f43a\"" Jan 17 00:30:01.934027 containerd[1592]: time="2026-01-17T00:30:01.933507935Z" level=info msg="RemoveContainer for \"f482004310e2c29152df1a5cbd4d8ec138f5214fe5c6809880bad7d03a16f43a\" returns successfully" Jan 17 00:30:01.934091 kubelet[2823]: I0117 00:30:01.933725 2823 scope.go:117] "RemoveContainer" containerID="f34d913463f2ab051cfb7f2905bbe4b6a1043d826b2c76465988fb9b346cac3e" Jan 17 00:30:01.935315 containerd[1592]: time="2026-01-17T00:30:01.935174064Z" level=info msg="RemoveContainer for \"f34d913463f2ab051cfb7f2905bbe4b6a1043d826b2c76465988fb9b346cac3e\"" Jan 17 00:30:01.957018 containerd[1592]: time="2026-01-17T00:30:01.956789259Z" level=info msg="RemoveContainer for \"f34d913463f2ab051cfb7f2905bbe4b6a1043d826b2c76465988fb9b346cac3e\" returns successfully" Jan 17 00:30:01.957424 kubelet[2823]: I0117 00:30:01.957295 2823 scope.go:117] "RemoveContainer" containerID="ba0b723658460384d9cc397a85b4ffcf1fd2b4f1256f165beb3027fa8fd691b7" Jan 17 00:30:01.958349 containerd[1592]: time="2026-01-17T00:30:01.958241577Z" level=error msg="ContainerStatus for \"ba0b723658460384d9cc397a85b4ffcf1fd2b4f1256f165beb3027fa8fd691b7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ba0b723658460384d9cc397a85b4ffcf1fd2b4f1256f165beb3027fa8fd691b7\": not found" Jan 17 00:30:01.958549 kubelet[2823]: E0117 00:30:01.958514 2823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ba0b723658460384d9cc397a85b4ffcf1fd2b4f1256f165beb3027fa8fd691b7\": not found" containerID="ba0b723658460384d9cc397a85b4ffcf1fd2b4f1256f165beb3027fa8fd691b7" Jan 17 00:30:01.959211 kubelet[2823]: I0117 00:30:01.958553 2823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ba0b723658460384d9cc397a85b4ffcf1fd2b4f1256f165beb3027fa8fd691b7"} err="failed to get container status \"ba0b723658460384d9cc397a85b4ffcf1fd2b4f1256f165beb3027fa8fd691b7\": rpc error: code = NotFound desc = an error occurred when try to find container \"ba0b723658460384d9cc397a85b4ffcf1fd2b4f1256f165beb3027fa8fd691b7\": not found" Jan 17 00:30:01.959211 kubelet[2823]: I0117 00:30:01.958588 2823 scope.go:117] "RemoveContainer" containerID="212f1cf7e3988e5a7e7883fac88c475c998172623df6044d7c2641f917ef2205" Jan 17 00:30:01.960233 kubelet[2823]: E0117 00:30:01.959599 2823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"212f1cf7e3988e5a7e7883fac88c475c998172623df6044d7c2641f917ef2205\": not found" containerID="212f1cf7e3988e5a7e7883fac88c475c998172623df6044d7c2641f917ef2205" Jan 17 00:30:01.960233 kubelet[2823]: I0117 00:30:01.959635 2823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"212f1cf7e3988e5a7e7883fac88c475c998172623df6044d7c2641f917ef2205"} err="failed to get container status \"212f1cf7e3988e5a7e7883fac88c475c998172623df6044d7c2641f917ef2205\": rpc error: code = 
NotFound desc = an error occurred when try to find container \"212f1cf7e3988e5a7e7883fac88c475c998172623df6044d7c2641f917ef2205\": not found" Jan 17 00:30:01.960233 kubelet[2823]: I0117 00:30:01.959660 2823 scope.go:117] "RemoveContainer" containerID="a1bd9dfd2290cef26d89b6f3f08a764f3639afef80fb62d78b7faa7df7ec1804" Jan 17 00:30:01.960591 containerd[1592]: time="2026-01-17T00:30:01.959231776Z" level=error msg="ContainerStatus for \"212f1cf7e3988e5a7e7883fac88c475c998172623df6044d7c2641f917ef2205\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"212f1cf7e3988e5a7e7883fac88c475c998172623df6044d7c2641f917ef2205\": not found" Jan 17 00:30:01.960591 containerd[1592]: time="2026-01-17T00:30:01.960087665Z" level=error msg="ContainerStatus for \"a1bd9dfd2290cef26d89b6f3f08a764f3639afef80fb62d78b7faa7df7ec1804\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a1bd9dfd2290cef26d89b6f3f08a764f3639afef80fb62d78b7faa7df7ec1804\": not found" Jan 17 00:30:01.960692 kubelet[2823]: E0117 00:30:01.960355 2823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a1bd9dfd2290cef26d89b6f3f08a764f3639afef80fb62d78b7faa7df7ec1804\": not found" containerID="a1bd9dfd2290cef26d89b6f3f08a764f3639afef80fb62d78b7faa7df7ec1804" Jan 17 00:30:01.960692 kubelet[2823]: I0117 00:30:01.960403 2823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a1bd9dfd2290cef26d89b6f3f08a764f3639afef80fb62d78b7faa7df7ec1804"} err="failed to get container status \"a1bd9dfd2290cef26d89b6f3f08a764f3639afef80fb62d78b7faa7df7ec1804\": rpc error: code = NotFound desc = an error occurred when try to find container \"a1bd9dfd2290cef26d89b6f3f08a764f3639afef80fb62d78b7faa7df7ec1804\": not found" Jan 17 00:30:01.960692 kubelet[2823]: I0117 00:30:01.960439 2823 scope.go:117] "RemoveContainer" containerID="f482004310e2c29152df1a5cbd4d8ec138f5214fe5c6809880bad7d03a16f43a" Jan 17 00:30:01.960822 containerd[1592]: time="2026-01-17T00:30:01.960716367Z" level=error msg="ContainerStatus for \"f482004310e2c29152df1a5cbd4d8ec138f5214fe5c6809880bad7d03a16f43a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f482004310e2c29152df1a5cbd4d8ec138f5214fe5c6809880bad7d03a16f43a\": not found" Jan 17 00:30:01.962412 kubelet[2823]: E0117 00:30:01.961788 2823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f482004310e2c29152df1a5cbd4d8ec138f5214fe5c6809880bad7d03a16f43a\": not found" containerID="f482004310e2c29152df1a5cbd4d8ec138f5214fe5c6809880bad7d03a16f43a" Jan 17 00:30:01.962753 kubelet[2823]: I0117 00:30:01.962428 2823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f482004310e2c29152df1a5cbd4d8ec138f5214fe5c6809880bad7d03a16f43a"} err="failed to get container status \"f482004310e2c29152df1a5cbd4d8ec138f5214fe5c6809880bad7d03a16f43a\": rpc error: code = NotFound desc = an error occurred when try to find container \"f482004310e2c29152df1a5cbd4d8ec138f5214fe5c6809880bad7d03a16f43a\": not found" Jan 17 00:30:01.962867 kubelet[2823]: I0117 00:30:01.962762 2823 scope.go:117] "RemoveContainer" containerID="f34d913463f2ab051cfb7f2905bbe4b6a1043d826b2c76465988fb9b346cac3e" Jan 17 00:30:01.964197 containerd[1592]: 
time="2026-01-17T00:30:01.964104091Z" level=error msg="ContainerStatus for \"f34d913463f2ab051cfb7f2905bbe4b6a1043d826b2c76465988fb9b346cac3e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f34d913463f2ab051cfb7f2905bbe4b6a1043d826b2c76465988fb9b346cac3e\": not found" Jan 17 00:30:01.965462 kubelet[2823]: E0117 00:30:01.965055 2823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f34d913463f2ab051cfb7f2905bbe4b6a1043d826b2c76465988fb9b346cac3e\": not found" containerID="f34d913463f2ab051cfb7f2905bbe4b6a1043d826b2c76465988fb9b346cac3e" Jan 17 00:30:01.965462 kubelet[2823]: I0117 00:30:01.965169 2823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f34d913463f2ab051cfb7f2905bbe4b6a1043d826b2c76465988fb9b346cac3e"} err="failed to get container status \"f34d913463f2ab051cfb7f2905bbe4b6a1043d826b2c76465988fb9b346cac3e\": rpc error: code = NotFound desc = an error occurred when try to find container \"f34d913463f2ab051cfb7f2905bbe4b6a1043d826b2c76465988fb9b346cac3e\": not found" Jan 17 00:30:02.693042 sshd[4778]: pam_unix(sshd:session): session closed for user core Jan 17 00:30:02.705432 systemd[1]: Started sshd@35-10.0.0.56:22-10.0.0.1:43060.service - OpenSSH per-connection server daemon (10.0.0.1:43060). Jan 17 00:30:02.707161 systemd[1]: sshd@34-10.0.0.56:22-10.0.0.1:49180.service: Deactivated successfully. Jan 17 00:30:02.712221 systemd-logind[1570]: Session 35 logged out. Waiting for processes to exit. Jan 17 00:30:02.713392 systemd[1]: session-35.scope: Deactivated successfully. Jan 17 00:30:02.716630 systemd-logind[1570]: Removed session 35. Jan 17 00:30:02.773623 sshd[4951]: Accepted publickey for core from 10.0.0.1 port 43060 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:30:02.775818 sshd[4951]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:30:02.791518 systemd-logind[1570]: New session 36 of user core. Jan 17 00:30:02.803815 systemd[1]: Started session-36.scope - Session 36 of User core. Jan 17 00:30:02.842516 kubelet[2823]: I0117 00:30:02.842413 2823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3bb28647-22cf-468a-9608-1a80b7f73111" path="/var/lib/kubelet/pods/3bb28647-22cf-468a-9608-1a80b7f73111/volumes" Jan 17 00:30:02.843520 kubelet[2823]: I0117 00:30:02.843382 2823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9bf2e48e-9f69-4d62-8db6-7ff6bdf596cd" path="/var/lib/kubelet/pods/9bf2e48e-9f69-4d62-8db6-7ff6bdf596cd/volumes" Jan 17 00:30:03.281740 kubelet[2823]: E0117 00:30:03.280461 2823 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 17 00:30:03.463646 sshd[4951]: pam_unix(sshd:session): session closed for user core Jan 17 00:30:03.486885 systemd[1]: Started sshd@36-10.0.0.56:22-10.0.0.1:43064.service - OpenSSH per-connection server daemon (10.0.0.1:43064). Jan 17 00:30:03.488186 systemd[1]: sshd@35-10.0.0.56:22-10.0.0.1:43060.service: Deactivated successfully. Jan 17 00:30:03.494343 systemd[1]: session-36.scope: Deactivated successfully. Jan 17 00:30:03.498196 systemd-logind[1570]: Session 36 logged out. Waiting for processes to exit. Jan 17 00:30:03.505021 systemd-logind[1570]: Removed session 36. 
Jan 17 00:30:03.520431 kubelet[2823]: I0117 00:30:03.519616 2823 memory_manager.go:355] "RemoveStaleState removing state" podUID="9bf2e48e-9f69-4d62-8db6-7ff6bdf596cd" containerName="cilium-agent" Jan 17 00:30:03.520431 kubelet[2823]: I0117 00:30:03.519892 2823 memory_manager.go:355] "RemoveStaleState removing state" podUID="3bb28647-22cf-468a-9608-1a80b7f73111" containerName="cilium-operator" Jan 17 00:30:03.574328 sshd[4965]: Accepted publickey for core from 10.0.0.1 port 43064 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:30:03.576639 sshd[4965]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:30:03.604433 systemd-logind[1570]: New session 37 of user core. Jan 17 00:30:03.612277 systemd[1]: Started session-37.scope - Session 37 of User core. Jan 17 00:30:03.656877 kubelet[2823]: I0117 00:30:03.656783 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/4f1d0b19-6c0f-4d65-be9d-16900b903200-cilium-ipsec-secrets\") pod \"cilium-2n52g\" (UID: \"4f1d0b19-6c0f-4d65-be9d-16900b903200\") " pod="kube-system/cilium-2n52g" Jan 17 00:30:03.657308 kubelet[2823]: I0117 00:30:03.657232 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4f1d0b19-6c0f-4d65-be9d-16900b903200-host-proc-sys-kernel\") pod \"cilium-2n52g\" (UID: \"4f1d0b19-6c0f-4d65-be9d-16900b903200\") " pod="kube-system/cilium-2n52g" Jan 17 00:30:03.657308 kubelet[2823]: I0117 00:30:03.657283 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4f1d0b19-6c0f-4d65-be9d-16900b903200-host-proc-sys-net\") pod \"cilium-2n52g\" (UID: \"4f1d0b19-6c0f-4d65-be9d-16900b903200\") " pod="kube-system/cilium-2n52g" Jan 17 00:30:03.658206 kubelet[2823]: I0117 00:30:03.658096 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4f1d0b19-6c0f-4d65-be9d-16900b903200-cilium-config-path\") pod \"cilium-2n52g\" (UID: \"4f1d0b19-6c0f-4d65-be9d-16900b903200\") " pod="kube-system/cilium-2n52g" Jan 17 00:30:03.658206 kubelet[2823]: I0117 00:30:03.658156 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4f1d0b19-6c0f-4d65-be9d-16900b903200-clustermesh-secrets\") pod \"cilium-2n52g\" (UID: \"4f1d0b19-6c0f-4d65-be9d-16900b903200\") " pod="kube-system/cilium-2n52g" Jan 17 00:30:03.658206 kubelet[2823]: I0117 00:30:03.658190 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2d7v\" (UniqueName: \"kubernetes.io/projected/4f1d0b19-6c0f-4d65-be9d-16900b903200-kube-api-access-g2d7v\") pod \"cilium-2n52g\" (UID: \"4f1d0b19-6c0f-4d65-be9d-16900b903200\") " pod="kube-system/cilium-2n52g" Jan 17 00:30:03.658340 kubelet[2823]: I0117 00:30:03.658220 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4f1d0b19-6c0f-4d65-be9d-16900b903200-cilium-run\") pod \"cilium-2n52g\" (UID: \"4f1d0b19-6c0f-4d65-be9d-16900b903200\") " pod="kube-system/cilium-2n52g" Jan 17 00:30:03.658340 kubelet[2823]: I0117 00:30:03.658243 
2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4f1d0b19-6c0f-4d65-be9d-16900b903200-cilium-cgroup\") pod \"cilium-2n52g\" (UID: \"4f1d0b19-6c0f-4d65-be9d-16900b903200\") " pod="kube-system/cilium-2n52g" Jan 17 00:30:03.658340 kubelet[2823]: I0117 00:30:03.658263 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4f1d0b19-6c0f-4d65-be9d-16900b903200-etc-cni-netd\") pod \"cilium-2n52g\" (UID: \"4f1d0b19-6c0f-4d65-be9d-16900b903200\") " pod="kube-system/cilium-2n52g" Jan 17 00:30:03.658340 kubelet[2823]: I0117 00:30:03.658308 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4f1d0b19-6c0f-4d65-be9d-16900b903200-hostproc\") pod \"cilium-2n52g\" (UID: \"4f1d0b19-6c0f-4d65-be9d-16900b903200\") " pod="kube-system/cilium-2n52g" Jan 17 00:30:03.658340 kubelet[2823]: I0117 00:30:03.658331 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4f1d0b19-6c0f-4d65-be9d-16900b903200-bpf-maps\") pod \"cilium-2n52g\" (UID: \"4f1d0b19-6c0f-4d65-be9d-16900b903200\") " pod="kube-system/cilium-2n52g" Jan 17 00:30:03.658583 kubelet[2823]: I0117 00:30:03.658356 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4f1d0b19-6c0f-4d65-be9d-16900b903200-cni-path\") pod \"cilium-2n52g\" (UID: \"4f1d0b19-6c0f-4d65-be9d-16900b903200\") " pod="kube-system/cilium-2n52g" Jan 17 00:30:03.658583 kubelet[2823]: I0117 00:30:03.658378 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4f1d0b19-6c0f-4d65-be9d-16900b903200-lib-modules\") pod \"cilium-2n52g\" (UID: \"4f1d0b19-6c0f-4d65-be9d-16900b903200\") " pod="kube-system/cilium-2n52g" Jan 17 00:30:03.658583 kubelet[2823]: I0117 00:30:03.658401 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4f1d0b19-6c0f-4d65-be9d-16900b903200-xtables-lock\") pod \"cilium-2n52g\" (UID: \"4f1d0b19-6c0f-4d65-be9d-16900b903200\") " pod="kube-system/cilium-2n52g" Jan 17 00:30:03.658583 kubelet[2823]: I0117 00:30:03.658429 2823 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4f1d0b19-6c0f-4d65-be9d-16900b903200-hubble-tls\") pod \"cilium-2n52g\" (UID: \"4f1d0b19-6c0f-4d65-be9d-16900b903200\") " pod="kube-system/cilium-2n52g" Jan 17 00:30:03.670779 sshd[4965]: pam_unix(sshd:session): session closed for user core Jan 17 00:30:03.683635 systemd[1]: Started sshd@37-10.0.0.56:22-10.0.0.1:43066.service - OpenSSH per-connection server daemon (10.0.0.1:43066). Jan 17 00:30:03.684487 systemd[1]: sshd@36-10.0.0.56:22-10.0.0.1:43064.service: Deactivated successfully. Jan 17 00:30:03.689778 systemd-logind[1570]: Session 37 logged out. Waiting for processes to exit. Jan 17 00:30:03.691357 systemd[1]: session-37.scope: Deactivated successfully. Jan 17 00:30:03.693858 systemd-logind[1570]: Removed session 37. 
Jan 17 00:30:03.725340 sshd[4974]: Accepted publickey for core from 10.0.0.1 port 43066 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:30:03.729306 sshd[4974]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:30:03.738324 systemd-logind[1570]: New session 38 of user core. Jan 17 00:30:03.744470 systemd[1]: Started session-38.scope - Session 38 of User core. Jan 17 00:30:03.832751 kubelet[2823]: E0117 00:30:03.832506 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:30:03.834411 containerd[1592]: time="2026-01-17T00:30:03.833892134Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2n52g,Uid:4f1d0b19-6c0f-4d65-be9d-16900b903200,Namespace:kube-system,Attempt:0,}" Jan 17 00:30:03.904439 containerd[1592]: time="2026-01-17T00:30:03.903407206Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:30:03.904439 containerd[1592]: time="2026-01-17T00:30:03.903852590Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:30:03.904439 containerd[1592]: time="2026-01-17T00:30:03.903958750Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:30:03.904439 containerd[1592]: time="2026-01-17T00:30:03.904172194Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:30:03.996677 containerd[1592]: time="2026-01-17T00:30:03.996637776Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2n52g,Uid:4f1d0b19-6c0f-4d65-be9d-16900b903200,Namespace:kube-system,Attempt:0,} returns sandbox id \"9f32ddee575d42feb6e92f975339f6d73936a584e242e7b9c8854afa60a1cfd4\"" Jan 17 00:30:03.999471 kubelet[2823]: E0117 00:30:03.998195 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:30:04.002836 containerd[1592]: time="2026-01-17T00:30:04.002613169Z" level=info msg="CreateContainer within sandbox \"9f32ddee575d42feb6e92f975339f6d73936a584e242e7b9c8854afa60a1cfd4\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 17 00:30:04.026399 containerd[1592]: time="2026-01-17T00:30:04.026250966Z" level=info msg="CreateContainer within sandbox \"9f32ddee575d42feb6e92f975339f6d73936a584e242e7b9c8854afa60a1cfd4\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"cb067782e304c4e1ef7c931c3cbe7f696e14797e8291c2ab3aa343a639d9a5d4\"" Jan 17 00:30:04.027965 containerd[1592]: time="2026-01-17T00:30:04.027842931Z" level=info msg="StartContainer for \"cb067782e304c4e1ef7c931c3cbe7f696e14797e8291c2ab3aa343a639d9a5d4\"" Jan 17 00:30:04.136487 containerd[1592]: time="2026-01-17T00:30:04.136199082Z" level=info msg="StartContainer for \"cb067782e304c4e1ef7c931c3cbe7f696e14797e8291c2ab3aa343a639d9a5d4\" returns successfully" Jan 17 00:30:04.215301 containerd[1592]: time="2026-01-17T00:30:04.212404859Z" level=info msg="shim disconnected" id=cb067782e304c4e1ef7c931c3cbe7f696e14797e8291c2ab3aa343a639d9a5d4 namespace=k8s.io Jan 17 00:30:04.215301 containerd[1592]: time="2026-01-17T00:30:04.212474734Z" level=warning 
msg="cleaning up after shim disconnected" id=cb067782e304c4e1ef7c931c3cbe7f696e14797e8291c2ab3aa343a639d9a5d4 namespace=k8s.io Jan 17 00:30:04.215301 containerd[1592]: time="2026-01-17T00:30:04.212487979Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:30:04.861057 kubelet[2823]: E0117 00:30:04.861005 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:30:04.870081 containerd[1592]: time="2026-01-17T00:30:04.869971365Z" level=info msg="CreateContainer within sandbox \"9f32ddee575d42feb6e92f975339f6d73936a584e242e7b9c8854afa60a1cfd4\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 17 00:30:04.930517 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1454365865.mount: Deactivated successfully. Jan 17 00:30:04.948286 containerd[1592]: time="2026-01-17T00:30:04.947982499Z" level=info msg="CreateContainer within sandbox \"9f32ddee575d42feb6e92f975339f6d73936a584e242e7b9c8854afa60a1cfd4\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"331380ab9bc15cca436147e9cb1e136edb84d4b313e069d7ff1cf05eaf09e0e5\"" Jan 17 00:30:04.952953 containerd[1592]: time="2026-01-17T00:30:04.951237773Z" level=info msg="StartContainer for \"331380ab9bc15cca436147e9cb1e136edb84d4b313e069d7ff1cf05eaf09e0e5\"" Jan 17 00:30:05.078607 containerd[1592]: time="2026-01-17T00:30:05.078464961Z" level=info msg="StartContainer for \"331380ab9bc15cca436147e9cb1e136edb84d4b313e069d7ff1cf05eaf09e0e5\" returns successfully" Jan 17 00:30:05.142696 containerd[1592]: time="2026-01-17T00:30:05.142341844Z" level=info msg="shim disconnected" id=331380ab9bc15cca436147e9cb1e136edb84d4b313e069d7ff1cf05eaf09e0e5 namespace=k8s.io Jan 17 00:30:05.142696 containerd[1592]: time="2026-01-17T00:30:05.142409516Z" level=warning msg="cleaning up after shim disconnected" id=331380ab9bc15cca436147e9cb1e136edb84d4b313e069d7ff1cf05eaf09e0e5 namespace=k8s.io Jan 17 00:30:05.142696 containerd[1592]: time="2026-01-17T00:30:05.142418823Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:30:05.777386 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-331380ab9bc15cca436147e9cb1e136edb84d4b313e069d7ff1cf05eaf09e0e5-rootfs.mount: Deactivated successfully. 
Jan 17 00:30:05.876371 kubelet[2823]: E0117 00:30:05.876088 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:30:05.879609 containerd[1592]: time="2026-01-17T00:30:05.879502608Z" level=info msg="CreateContainer within sandbox \"9f32ddee575d42feb6e92f975339f6d73936a584e242e7b9c8854afa60a1cfd4\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 17 00:30:06.019307 containerd[1592]: time="2026-01-17T00:30:06.018199529Z" level=info msg="CreateContainer within sandbox \"9f32ddee575d42feb6e92f975339f6d73936a584e242e7b9c8854afa60a1cfd4\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"aedca74c1de106506e47f4a8a3ddde0d153b361a4ed55f2bfefacfa929f78b7b\"" Jan 17 00:30:06.020415 containerd[1592]: time="2026-01-17T00:30:06.019993619Z" level=info msg="StartContainer for \"aedca74c1de106506e47f4a8a3ddde0d153b361a4ed55f2bfefacfa929f78b7b\"" Jan 17 00:30:06.306287 containerd[1592]: time="2026-01-17T00:30:06.306131415Z" level=info msg="StartContainer for \"aedca74c1de106506e47f4a8a3ddde0d153b361a4ed55f2bfefacfa929f78b7b\" returns successfully" Jan 17 00:30:06.364977 containerd[1592]: time="2026-01-17T00:30:06.364724343Z" level=info msg="shim disconnected" id=aedca74c1de106506e47f4a8a3ddde0d153b361a4ed55f2bfefacfa929f78b7b namespace=k8s.io Jan 17 00:30:06.364977 containerd[1592]: time="2026-01-17T00:30:06.364809387Z" level=warning msg="cleaning up after shim disconnected" id=aedca74c1de106506e47f4a8a3ddde0d153b361a4ed55f2bfefacfa929f78b7b namespace=k8s.io Jan 17 00:30:06.364977 containerd[1592]: time="2026-01-17T00:30:06.364821248Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:30:06.777286 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aedca74c1de106506e47f4a8a3ddde0d153b361a4ed55f2bfefacfa929f78b7b-rootfs.mount: Deactivated successfully. 
Jan 17 00:30:06.899961 kubelet[2823]: E0117 00:30:06.899490 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:30:06.918345 containerd[1592]: time="2026-01-17T00:30:06.908418050Z" level=info msg="CreateContainer within sandbox \"9f32ddee575d42feb6e92f975339f6d73936a584e242e7b9c8854afa60a1cfd4\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 17 00:30:06.960777 containerd[1592]: time="2026-01-17T00:30:06.960369219Z" level=info msg="CreateContainer within sandbox \"9f32ddee575d42feb6e92f975339f6d73936a584e242e7b9c8854afa60a1cfd4\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"9f2a2d66e4ac929d011c79c41dc9069198c209a9a1cdbf2d4a2addee85023909\"" Jan 17 00:30:06.964596 containerd[1592]: time="2026-01-17T00:30:06.964020850Z" level=info msg="StartContainer for \"9f2a2d66e4ac929d011c79c41dc9069198c209a9a1cdbf2d4a2addee85023909\"" Jan 17 00:30:07.066463 containerd[1592]: time="2026-01-17T00:30:07.065723924Z" level=info msg="StartContainer for \"9f2a2d66e4ac929d011c79c41dc9069198c209a9a1cdbf2d4a2addee85023909\" returns successfully" Jan 17 00:30:07.103258 containerd[1592]: time="2026-01-17T00:30:07.103160977Z" level=info msg="shim disconnected" id=9f2a2d66e4ac929d011c79c41dc9069198c209a9a1cdbf2d4a2addee85023909 namespace=k8s.io Jan 17 00:30:07.103258 containerd[1592]: time="2026-01-17T00:30:07.103228480Z" level=warning msg="cleaning up after shim disconnected" id=9f2a2d66e4ac929d011c79c41dc9069198c209a9a1cdbf2d4a2addee85023909 namespace=k8s.io Jan 17 00:30:07.103258 containerd[1592]: time="2026-01-17T00:30:07.103241143Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:30:07.771985 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9f2a2d66e4ac929d011c79c41dc9069198c209a9a1cdbf2d4a2addee85023909-rootfs.mount: Deactivated successfully. 
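[Editor's note] The recurring dns.go "Nameserver limits exceeded" errors reflect the kubelet capping a pod's resolv.conf at three nameservers: the node's own resolv.conf evidently lists more, so the extras are dropped and only 1.1.1.1, 1.0.0.1, and 8.8.8.8 are applied, as the entries above and below show. A minimal sketch of that truncation, assuming the usual /etc/resolv.conf location (illustrative only, not the kubelet's implementation):

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// maxNameservers mirrors the kubelet's limit of three nameservers per
// resolv.conf; anything beyond that is dropped with a warning, which is what
// the "Nameserver limits exceeded" journal entries report.
const maxNameservers = 3

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	var nameservers []string
	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		fields := strings.Fields(scanner.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			nameservers = append(nameservers, fields[1])
		}
	}
	if len(nameservers) > maxNameservers {
		fmt.Printf("omitting %d nameserver(s), applying: %s\n",
			len(nameservers)-maxNameservers,
			strings.Join(nameservers[:maxNameservers], " "))
	} else {
		fmt.Printf("nameservers: %s\n", strings.Join(nameservers, " "))
	}
}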
Jan 17 00:30:07.814046 kubelet[2823]: I0117 00:30:07.813837 2823 setters.go:602] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-17T00:30:07Z","lastTransitionTime":"2026-01-17T00:30:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jan 17 00:30:07.911309 kubelet[2823]: E0117 00:30:07.911222 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:30:07.914094 containerd[1592]: time="2026-01-17T00:30:07.914038085Z" level=info msg="CreateContainer within sandbox \"9f32ddee575d42feb6e92f975339f6d73936a584e242e7b9c8854afa60a1cfd4\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 17 00:30:07.943620 containerd[1592]: time="2026-01-17T00:30:07.943459962Z" level=info msg="CreateContainer within sandbox \"9f32ddee575d42feb6e92f975339f6d73936a584e242e7b9c8854afa60a1cfd4\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"4cd869b01f9d056dab43ba9cfaaf5f32c2a9d2aae1ddb7c6b65ab4cfcde3737a\"" Jan 17 00:30:07.944657 containerd[1592]: time="2026-01-17T00:30:07.944622007Z" level=info msg="StartContainer for \"4cd869b01f9d056dab43ba9cfaaf5f32c2a9d2aae1ddb7c6b65ab4cfcde3737a\"" Jan 17 00:30:08.075110 containerd[1592]: time="2026-01-17T00:30:08.074775360Z" level=info msg="StartContainer for \"4cd869b01f9d056dab43ba9cfaaf5f32c2a9d2aae1ddb7c6b65ab4cfcde3737a\" returns successfully" Jan 17 00:30:08.930201 kubelet[2823]: E0117 00:30:08.930138 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:30:08.973769 kubelet[2823]: I0117 00:30:08.973690 2823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-2n52g" podStartSLOduration=5.9736684570000005 podStartE2EDuration="5.973668457s" podCreationTimestamp="2026-01-17 00:30:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:30:08.970404351 +0000 UTC m=+256.444598160" watchObservedRunningTime="2026-01-17 00:30:08.973668457 +0000 UTC m=+256.447862246" Jan 17 00:30:09.004572 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Jan 17 00:30:09.936817 kubelet[2823]: E0117 00:30:09.936204 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:30:12.846653 kubelet[2823]: E0117 00:30:12.845105 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:30:15.734818 systemd-networkd[1254]: lxc_health: Link UP Jan 17 00:30:15.850603 kubelet[2823]: E0117 00:30:15.841115 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:30:15.867151 systemd-networkd[1254]: lxc_health: Gained carrier Jan 17 00:30:16.044819 kubelet[2823]: E0117 00:30:16.034864 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:30:17.030421 kubelet[2823]: E0117 00:30:17.029786 2823 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:30:17.626035 systemd-networkd[1254]: lxc_health: Gained IPv6LL Jan 17 00:30:23.517431 sshd[4974]: pam_unix(sshd:session): session closed for user core Jan 17 00:30:23.527194 systemd[1]: sshd@37-10.0.0.56:22-10.0.0.1:43066.service: Deactivated successfully. Jan 17 00:30:23.533820 systemd[1]: session-38.scope: Deactivated successfully. Jan 17 00:30:23.537227 systemd-logind[1570]: Session 38 logged out. Waiting for processes to exit. Jan 17 00:30:23.539235 systemd-logind[1570]: Removed session 38.