Jan 23 19:17:05.692940 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 23 16:02:29 -00 2026
Jan 23 19:17:05.693061 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=e498a861432c458392bc8ae0919597d8f4554cdcc46b00c7f3d7a634c3492c81
Jan 23 19:17:05.693073 kernel: BIOS-provided physical RAM map:
Jan 23 19:17:05.693083 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 23 19:17:05.693089 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 23 19:17:05.693094 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 23 19:17:05.693101 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Jan 23 19:17:05.693107 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Jan 23 19:17:05.693113 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 23 19:17:05.693118 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jan 23 19:17:05.693124 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 23 19:17:05.693132 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 23 19:17:05.693138 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 23 19:17:05.693144 kernel: NX (Execute Disable) protection: active
Jan 23 19:17:05.693151 kernel: APIC: Static calls initialized
Jan 23 19:17:05.693157 kernel: SMBIOS 2.8 present.
Jan 23 19:17:05.693165 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Jan 23 19:17:05.693172 kernel: DMI: Memory slots populated: 1/1 Jan 23 19:17:05.693178 kernel: Hypervisor detected: KVM Jan 23 19:17:05.693184 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Jan 23 19:17:05.693190 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 23 19:17:05.693196 kernel: kvm-clock: using sched offset of 10917770562 cycles Jan 23 19:17:05.693203 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 23 19:17:05.693210 kernel: tsc: Detected 2445.426 MHz processor Jan 23 19:17:05.693216 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 23 19:17:05.693223 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 23 19:17:05.693232 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Jan 23 19:17:05.693238 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jan 23 19:17:05.693244 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 23 19:17:05.693251 kernel: Using GB pages for direct mapping Jan 23 19:17:05.693257 kernel: ACPI: Early table checksum verification disabled Jan 23 19:17:05.693263 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) Jan 23 19:17:05.693270 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 23 19:17:05.693276 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 23 19:17:05.693282 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 23 19:17:05.693291 kernel: ACPI: FACS 0x000000009CFE0000 000040 Jan 23 19:17:05.693298 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 23 19:17:05.693304 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 23 19:17:05.693310 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 23 19:17:05.693317 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 23 19:17:05.693326 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed] Jan 23 19:17:05.693629 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9] Jan 23 19:17:05.693640 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Jan 23 19:17:05.693648 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d] Jan 23 19:17:05.693654 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5] Jan 23 19:17:05.693661 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1] Jan 23 19:17:05.693668 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419] Jan 23 19:17:05.693674 kernel: No NUMA configuration found Jan 23 19:17:05.693681 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] Jan 23 19:17:05.693691 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff] Jan 23 19:17:05.693698 kernel: Zone ranges: Jan 23 19:17:05.693704 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 23 19:17:05.693711 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] Jan 23 19:17:05.693717 kernel: Normal empty Jan 23 19:17:05.693724 kernel: Device empty Jan 23 19:17:05.693730 kernel: Movable zone start for each node Jan 23 19:17:05.693737 kernel: Early memory node ranges Jan 23 19:17:05.693744 kernel: node 0: [mem 
0x0000000000001000-0x000000000009efff] Jan 23 19:17:05.693750 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] Jan 23 19:17:05.693759 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff] Jan 23 19:17:05.693766 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 23 19:17:05.693773 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jan 23 19:17:05.693779 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Jan 23 19:17:05.693786 kernel: ACPI: PM-Timer IO Port: 0x608 Jan 23 19:17:05.693792 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 23 19:17:05.693799 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 23 19:17:05.693805 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 23 19:17:05.693812 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 23 19:17:05.693821 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 23 19:17:05.693827 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 23 19:17:05.693834 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 23 19:17:05.693840 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 23 19:17:05.693847 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jan 23 19:17:05.693853 kernel: TSC deadline timer available Jan 23 19:17:05.693860 kernel: CPU topo: Max. logical packages: 1 Jan 23 19:17:05.693866 kernel: CPU topo: Max. logical dies: 1 Jan 23 19:17:05.693873 kernel: CPU topo: Max. dies per package: 1 Jan 23 19:17:05.693881 kernel: CPU topo: Max. threads per core: 1 Jan 23 19:17:05.693888 kernel: CPU topo: Num. cores per package: 4 Jan 23 19:17:05.693894 kernel: CPU topo: Num. threads per package: 4 Jan 23 19:17:05.693900 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs Jan 23 19:17:05.693907 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 23 19:17:05.693914 kernel: kvm-guest: KVM setup pv remote TLB flush Jan 23 19:17:05.693920 kernel: kvm-guest: setup PV sched yield Jan 23 19:17:05.693927 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Jan 23 19:17:05.693933 kernel: Booting paravirtualized kernel on KVM Jan 23 19:17:05.693942 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 23 19:17:05.693949 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Jan 23 19:17:05.693956 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288 Jan 23 19:17:05.693962 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152 Jan 23 19:17:05.693969 kernel: pcpu-alloc: [0] 0 1 2 3 Jan 23 19:17:05.693975 kernel: kvm-guest: PV spinlocks enabled Jan 23 19:17:05.693982 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 23 19:17:05.693990 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=e498a861432c458392bc8ae0919597d8f4554cdcc46b00c7f3d7a634c3492c81 Jan 23 19:17:05.693999 kernel: random: crng init done Jan 23 19:17:05.694005 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 23 19:17:05.694012 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 23 
19:17:05.694018 kernel: Fallback order for Node 0: 0 Jan 23 19:17:05.694025 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938 Jan 23 19:17:05.694031 kernel: Policy zone: DMA32 Jan 23 19:17:05.694038 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 23 19:17:05.694045 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jan 23 19:17:05.694051 kernel: ftrace: allocating 40097 entries in 157 pages Jan 23 19:17:05.694060 kernel: ftrace: allocated 157 pages with 5 groups Jan 23 19:17:05.694066 kernel: Dynamic Preempt: voluntary Jan 23 19:17:05.694073 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 23 19:17:05.694080 kernel: rcu: RCU event tracing is enabled. Jan 23 19:17:05.694087 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jan 23 19:17:05.694094 kernel: Trampoline variant of Tasks RCU enabled. Jan 23 19:17:05.694100 kernel: Rude variant of Tasks RCU enabled. Jan 23 19:17:05.694107 kernel: Tracing variant of Tasks RCU enabled. Jan 23 19:17:05.694113 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 23 19:17:05.694120 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jan 23 19:17:05.694129 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 23 19:17:05.694136 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 23 19:17:05.694142 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 23 19:17:05.694149 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Jan 23 19:17:05.694156 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 23 19:17:05.694169 kernel: Console: colour VGA+ 80x25 Jan 23 19:17:05.694178 kernel: printk: legacy console [ttyS0] enabled Jan 23 19:17:05.694185 kernel: ACPI: Core revision 20240827 Jan 23 19:17:05.694192 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jan 23 19:17:05.694199 kernel: APIC: Switch to symmetric I/O mode setup Jan 23 19:17:05.694206 kernel: x2apic enabled Jan 23 19:17:05.694215 kernel: APIC: Switched APIC routing to: physical x2apic Jan 23 19:17:05.694222 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Jan 23 19:17:05.694229 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Jan 23 19:17:05.694236 kernel: kvm-guest: setup PV IPIs Jan 23 19:17:05.694243 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jan 23 19:17:05.694252 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns Jan 23 19:17:05.694259 kernel: Calibrating delay loop (skipped) preset value.. 
4890.85 BogoMIPS (lpj=2445426) Jan 23 19:17:05.694265 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jan 23 19:17:05.694273 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Jan 23 19:17:05.694279 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Jan 23 19:17:05.694286 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 23 19:17:05.694293 kernel: Spectre V2 : Mitigation: Retpolines Jan 23 19:17:05.694300 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jan 23 19:17:05.694307 kernel: Speculative Store Bypass: Vulnerable Jan 23 19:17:05.694316 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Jan 23 19:17:05.694324 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Jan 23 19:17:05.694330 kernel: active return thunk: srso_alias_return_thunk Jan 23 19:17:05.694562 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Jan 23 19:17:05.694572 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM Jan 23 19:17:05.694579 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode Jan 23 19:17:05.694586 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 23 19:17:05.694593 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 23 19:17:05.694603 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 23 19:17:05.694610 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 23 19:17:05.694617 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Jan 23 19:17:05.694624 kernel: Freeing SMP alternatives memory: 32K Jan 23 19:17:05.694631 kernel: pid_max: default: 32768 minimum: 301 Jan 23 19:17:05.694638 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jan 23 19:17:05.694644 kernel: landlock: Up and running. Jan 23 19:17:05.694651 kernel: SELinux: Initializing. Jan 23 19:17:05.694658 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 23 19:17:05.694667 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 23 19:17:05.694673 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1) Jan 23 19:17:05.694680 kernel: Performance Events: PMU not available due to virtualization, using software events only. Jan 23 19:17:05.694687 kernel: signal: max sigframe size: 1776 Jan 23 19:17:05.694694 kernel: rcu: Hierarchical SRCU implementation. Jan 23 19:17:05.694701 kernel: rcu: Max phase no-delay instances is 400. Jan 23 19:17:05.694708 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jan 23 19:17:05.694715 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 23 19:17:05.694722 kernel: smp: Bringing up secondary CPUs ... Jan 23 19:17:05.694731 kernel: smpboot: x86: Booting SMP configuration: Jan 23 19:17:05.694738 kernel: .... 
node #0, CPUs: #1 #2 #3 Jan 23 19:17:05.694744 kernel: smp: Brought up 1 node, 4 CPUs Jan 23 19:17:05.694751 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS) Jan 23 19:17:05.694758 kernel: Memory: 2420716K/2571752K available (14336K kernel code, 2445K rwdata, 26064K rodata, 46200K init, 2560K bss, 145096K reserved, 0K cma-reserved) Jan 23 19:17:05.694765 kernel: devtmpfs: initialized Jan 23 19:17:05.694772 kernel: x86/mm: Memory block size: 128MB Jan 23 19:17:05.694779 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 23 19:17:05.694786 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jan 23 19:17:05.694795 kernel: pinctrl core: initialized pinctrl subsystem Jan 23 19:17:05.694802 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 23 19:17:05.694809 kernel: audit: initializing netlink subsys (disabled) Jan 23 19:17:05.694816 kernel: audit: type=2000 audit(1769195812.480:1): state=initialized audit_enabled=0 res=1 Jan 23 19:17:05.694822 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 23 19:17:05.694829 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 23 19:17:05.694836 kernel: cpuidle: using governor menu Jan 23 19:17:05.694843 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 23 19:17:05.694850 kernel: dca service started, version 1.12.1 Jan 23 19:17:05.694859 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff] Jan 23 19:17:05.694866 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry Jan 23 19:17:05.694873 kernel: PCI: Using configuration type 1 for base access Jan 23 19:17:05.694880 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 23 19:17:05.694886 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 23 19:17:05.694893 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 23 19:17:05.694900 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 23 19:17:05.694907 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 23 19:17:05.694914 kernel: ACPI: Added _OSI(Module Device) Jan 23 19:17:05.694923 kernel: ACPI: Added _OSI(Processor Device) Jan 23 19:17:05.694929 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 23 19:17:05.694936 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 23 19:17:05.694943 kernel: ACPI: Interpreter enabled Jan 23 19:17:05.694949 kernel: ACPI: PM: (supports S0 S3 S5) Jan 23 19:17:05.694956 kernel: ACPI: Using IOAPIC for interrupt routing Jan 23 19:17:05.694963 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 23 19:17:05.694970 kernel: PCI: Using E820 reservations for host bridge windows Jan 23 19:17:05.694977 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jan 23 19:17:05.694986 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 23 19:17:05.695253 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 23 19:17:05.695663 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Jan 23 19:17:05.695789 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Jan 23 19:17:05.695799 kernel: PCI host bridge to bus 0000:00 Jan 23 19:17:05.696004 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 23 19:17:05.696124 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 23 19:17:05.696232 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 23 19:17:05.696559 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Jan 23 19:17:05.696767 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jan 23 19:17:05.696881 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Jan 23 19:17:05.696988 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 23 19:17:05.697214 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint Jan 23 19:17:05.697569 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint Jan 23 19:17:05.697696 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref] Jan 23 19:17:05.697813 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff] Jan 23 19:17:05.697928 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref] Jan 23 19:17:05.698072 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 23 19:17:05.698231 kernel: pci 0000:00:01.0: pci_fixup_video+0x0/0x100 took 11718 usecs Jan 23 19:17:05.698753 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Jan 23 19:17:05.698925 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df] Jan 23 19:17:05.699095 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff] Jan 23 19:17:05.699220 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref] Jan 23 19:17:05.699611 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint Jan 23 19:17:05.699793 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f] Jan 23 19:17:05.699962 kernel: pci 0000:00:03.0: BAR 1 [mem 
0xfebd2000-0xfebd2fff] Jan 23 19:17:05.700135 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref] Jan 23 19:17:05.700326 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Jan 23 19:17:05.700749 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff] Jan 23 19:17:05.700921 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff] Jan 23 19:17:05.701097 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref] Jan 23 19:17:05.701272 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref] Jan 23 19:17:05.701780 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint Jan 23 19:17:05.701954 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jan 23 19:17:05.702200 kernel: pci 0000:00:1f.0: quirk_ich7_lpc+0x0/0xc0 took 37109 usecs Jan 23 19:17:05.702927 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint Jan 23 19:17:05.703110 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f] Jan 23 19:17:05.703285 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff] Jan 23 19:17:05.703713 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint Jan 23 19:17:05.703877 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f] Jan 23 19:17:05.703902 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 23 19:17:05.703913 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 23 19:17:05.703923 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 23 19:17:05.703932 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 23 19:17:05.703942 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jan 23 19:17:05.703955 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jan 23 19:17:05.703965 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jan 23 19:17:05.704065 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jan 23 19:17:05.704077 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jan 23 19:17:05.704095 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Jan 23 19:17:05.704105 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jan 23 19:17:05.704114 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jan 23 19:17:05.704124 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jan 23 19:17:05.704134 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jan 23 19:17:05.704145 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jan 23 19:17:05.704158 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jan 23 19:17:05.704167 kernel: iommu: Default domain type: Translated Jan 23 19:17:05.704177 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 23 19:17:05.704190 kernel: PCI: Using ACPI for IRQ routing Jan 23 19:17:05.704202 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 23 19:17:05.704213 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jan 23 19:17:05.704223 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] Jan 23 19:17:05.704638 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Jan 23 19:17:05.704815 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Jan 23 19:17:05.705085 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 23 19:17:05.705102 kernel: vgaarb: loaded Jan 23 19:17:05.705118 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jan 23 19:17:05.705128 kernel: hpet0: 
3 comparators, 64-bit 100.000000 MHz counter Jan 23 19:17:05.705140 kernel: clocksource: Switched to clocksource kvm-clock Jan 23 19:17:05.705153 kernel: VFS: Disk quotas dquot_6.6.0 Jan 23 19:17:05.705165 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 23 19:17:05.705174 kernel: pnp: PnP ACPI init Jan 23 19:17:05.705613 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Jan 23 19:17:05.705635 kernel: pnp: PnP ACPI: found 6 devices Jan 23 19:17:05.705645 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 23 19:17:05.705659 kernel: NET: Registered PF_INET protocol family Jan 23 19:17:05.705669 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 23 19:17:05.705681 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 23 19:17:05.705692 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 23 19:17:05.705702 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 23 19:17:05.705711 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 23 19:17:05.705721 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 23 19:17:05.705733 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 23 19:17:05.705750 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 23 19:17:05.705759 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 23 19:17:05.705769 kernel: NET: Registered PF_XDP protocol family Jan 23 19:17:05.706110 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 23 19:17:05.706275 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 23 19:17:05.706757 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 23 19:17:05.706910 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Jan 23 19:17:05.707060 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Jan 23 19:17:05.707217 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Jan 23 19:17:05.707237 kernel: PCI: CLS 0 bytes, default 64 Jan 23 19:17:05.707248 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns Jan 23 19:17:05.707260 kernel: Initialise system trusted keyrings Jan 23 19:17:05.707273 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 23 19:17:05.707283 kernel: Key type asymmetric registered Jan 23 19:17:05.707293 kernel: Asymmetric key parser 'x509' registered Jan 23 19:17:05.707302 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jan 23 19:17:05.707312 kernel: io scheduler mq-deadline registered Jan 23 19:17:05.707325 kernel: io scheduler kyber registered Jan 23 19:17:05.707587 kernel: io scheduler bfq registered Jan 23 19:17:05.707599 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 23 19:17:05.707614 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jan 23 19:17:05.707624 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jan 23 19:17:05.707634 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jan 23 19:17:05.707644 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 23 19:17:05.707653 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 23 19:17:05.707665 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 23 19:17:05.707676 kernel: serio: 
i8042 KBD port at 0x60,0x64 irq 1
Jan 23 19:17:05.707690 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 23 19:17:05.707699 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 23 19:17:05.707877 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 23 19:17:05.708000 kernel: rtc_cmos 00:04: registered as rtc0
Jan 23 19:17:05.708126 kernel: rtc_cmos 00:04: setting system clock to 2026-01-23T19:17:03 UTC (1769195823)
Jan 23 19:17:05.708289 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jan 23 19:17:05.708305 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 23 19:17:05.708324 kernel: NET: Registered PF_INET6 protocol family
Jan 23 19:17:05.708590 kernel: Segment Routing with IPv6
Jan 23 19:17:05.708605 kernel: In-situ OAM (IOAM) with IPv6
Jan 23 19:17:05.708615 kernel: NET: Registered PF_PACKET protocol family
Jan 23 19:17:05.708625 kernel: Key type dns_resolver registered
Jan 23 19:17:05.708636 kernel: IPI shorthand broadcast: enabled
Jan 23 19:17:05.708649 kernel: sched_clock: Marking stable (9282103351, 986623512)->(11905484324, -1636757461)
Jan 23 19:17:05.708659 kernel: registered taskstats version 1
Jan 23 19:17:05.708668 kernel: Loading compiled-in X.509 certificates
Jan 23 19:17:05.708678 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: 2aec04a968f0111235eb989789145bc2b989f0c6'
Jan 23 19:17:05.708694 kernel: Demotion targets for Node 0: null
Jan 23 19:17:05.708706 kernel: Key type .fscrypt registered
Jan 23 19:17:05.708717 kernel: Key type fscrypt-provisioning registered
Jan 23 19:17:05.708727 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 23 19:17:05.708737 kernel: ima: Allocated hash algorithm: sha1
Jan 23 19:17:05.708746 kernel: ima: No architecture policies found
Jan 23 19:17:05.708758 kernel: clk: Disabling unused clocks
Jan 23 19:17:05.708769 kernel: Warning: unable to open an initial console.
Jan 23 19:17:05.708783 kernel: Freeing unused kernel image (initmem) memory: 46200K
Jan 23 19:17:05.708792 kernel: Write protecting the kernel read-only data: 40960k
Jan 23 19:17:05.708803 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K
Jan 23 19:17:05.708814 kernel: Run /init as init process
Jan 23 19:17:05.708825 kernel: with arguments:
Jan 23 19:17:05.708836 kernel: /init
Jan 23 19:17:05.708847 kernel: with environment:
Jan 23 19:17:05.708859 kernel: HOME=/
Jan 23 19:17:05.708870 kernel: TERM=linux
Jan 23 19:17:05.708883 systemd[1]: Successfully made /usr/ read-only.
Jan 23 19:17:05.708903 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 23 19:17:05.708914 systemd[1]: Detected virtualization kvm.
Jan 23 19:17:05.708923 systemd[1]: Detected architecture x86-64.
Jan 23 19:17:05.708933 systemd[1]: Running in initrd.
Jan 23 19:17:05.709041 systemd[1]: No hostname configured, using default hostname.
Jan 23 19:17:05.709052 systemd[1]: Hostname set to .
Jan 23 19:17:05.709066 systemd[1]: Initializing machine ID from VM UUID.
Jan 23 19:17:05.709093 systemd[1]: Queued start job for default target initrd.target.
Jan 23 19:17:05.709109 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 19:17:05.709120 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 19:17:05.709131 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 23 19:17:05.709142 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 23 19:17:05.709158 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 23 19:17:05.709172 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 23 19:17:05.709183 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 23 19:17:05.709194 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 23 19:17:05.709205 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 19:17:05.709218 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 23 19:17:05.709229 systemd[1]: Reached target paths.target - Path Units. Jan 23 19:17:05.709242 systemd[1]: Reached target slices.target - Slice Units. Jan 23 19:17:05.709254 systemd[1]: Reached target swap.target - Swaps. Jan 23 19:17:05.709268 systemd[1]: Reached target timers.target - Timer Units. Jan 23 19:17:05.709279 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 23 19:17:05.709289 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 23 19:17:05.709300 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 23 19:17:05.709313 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jan 23 19:17:05.709325 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 23 19:17:05.709578 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 23 19:17:05.709599 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 19:17:05.709612 systemd[1]: Reached target sockets.target - Socket Units. Jan 23 19:17:05.709622 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 23 19:17:05.709632 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 23 19:17:05.709643 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 23 19:17:05.709656 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jan 23 19:17:05.709670 systemd[1]: Starting systemd-fsck-usr.service... Jan 23 19:17:05.709681 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 23 19:17:05.709696 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 23 19:17:05.709706 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 19:17:05.709843 systemd-journald[201]: Collecting audit messages is disabled. Jan 23 19:17:05.709879 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 23 19:17:05.709891 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. 
Jan 23 19:17:05.710155 systemd-journald[201]: Journal started Jan 23 19:17:05.710184 systemd-journald[201]: Runtime Journal (/run/log/journal/51ac57fc8b2c401994d1b0a5db76ded3) is 6M, max 48.3M, 42.2M free. Jan 23 19:17:05.757952 systemd-modules-load[204]: Inserted module 'overlay' Jan 23 19:17:05.779285 systemd[1]: Started systemd-journald.service - Journal Service. Jan 23 19:17:05.783323 systemd[1]: Finished systemd-fsck-usr.service. Jan 23 19:17:05.802270 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 23 19:17:05.820693 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 23 19:17:05.925054 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 23 19:17:05.935955 kernel: Bridge firewalling registered Jan 23 19:17:05.935904 systemd-modules-load[204]: Inserted module 'br_netfilter' Jan 23 19:17:05.949602 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 23 19:17:05.994091 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 23 19:17:06.012833 systemd-tmpfiles[215]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jan 23 19:17:06.034108 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 19:17:06.975638 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 23 19:17:06.987135 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 19:17:07.016872 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 23 19:17:07.070077 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 23 19:17:07.089218 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 23 19:17:07.100685 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 23 19:17:07.166711 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 19:17:07.217344 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 23 19:17:07.247128 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 23 19:17:07.264309 systemd-resolved[235]: Positive Trust Anchors: Jan 23 19:17:07.264324 systemd-resolved[235]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 23 19:17:07.264651 systemd-resolved[235]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 23 19:17:07.272904 systemd-resolved[235]: Defaulting to hostname 'linux'. Jan 23 19:17:07.277025 systemd[1]: Started systemd-resolved.service - Network Name Resolution. 
Jan 23 19:17:07.344911 dracut-cmdline[245]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=e498a861432c458392bc8ae0919597d8f4554cdcc46b00c7f3d7a634c3492c81 Jan 23 19:17:07.489870 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 23 19:17:07.956110 kernel: SCSI subsystem initialized Jan 23 19:17:07.986680 kernel: Loading iSCSI transport class v2.0-870. Jan 23 19:17:08.053055 kernel: iscsi: registered transport (tcp) Jan 23 19:17:08.179665 kernel: iscsi: registered transport (qla4xxx) Jan 23 19:17:08.179813 kernel: QLogic iSCSI HBA Driver Jan 23 19:17:08.323869 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 23 19:17:08.394660 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 23 19:17:08.428197 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 23 19:17:08.817043 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 23 19:17:08.833057 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 23 19:17:08.992829 kernel: raid6: avx2x4 gen() 21618 MB/s Jan 23 19:17:09.015952 kernel: raid6: avx2x2 gen() 19410 MB/s Jan 23 19:17:09.046589 kernel: raid6: avx2x1 gen() 10532 MB/s Jan 23 19:17:09.046743 kernel: raid6: using algorithm avx2x4 gen() 21618 MB/s Jan 23 19:17:09.074081 kernel: raid6: .... xor() 4124 MB/s, rmw enabled Jan 23 19:17:09.074223 kernel: raid6: using avx2x2 recovery algorithm Jan 23 19:17:09.169962 kernel: xor: automatically using best checksumming function avx Jan 23 19:17:09.881183 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 23 19:17:09.914942 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 23 19:17:09.931088 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 19:17:10.221271 systemd-udevd[454]: Using default interface naming scheme 'v255'. Jan 23 19:17:10.231741 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 19:17:10.265679 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 23 19:17:10.352950 dracut-pre-trigger[465]: rd.md=0: removing MD RAID activation Jan 23 19:17:10.506197 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 23 19:17:10.521185 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 23 19:17:10.718694 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 19:17:10.726944 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 23 19:17:10.956816 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jan 23 19:17:10.979092 kernel: cryptd: max_cpu_qlen set to 1000 Jan 23 19:17:11.103703 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Jan 23 19:17:11.103773 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jan 23 19:17:11.107291 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 19:17:11.210194 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. 
Jan 23 19:17:11.210231 kernel: GPT:9289727 != 19775487 Jan 23 19:17:11.210246 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 23 19:17:11.210260 kernel: GPT:9289727 != 19775487 Jan 23 19:17:11.210273 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 23 19:17:11.210287 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 23 19:17:11.107673 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 19:17:11.194868 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 19:17:11.216963 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 19:17:11.249251 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jan 23 19:17:11.359782 kernel: libata version 3.00 loaded. Jan 23 19:17:11.404586 kernel: ahci 0000:00:1f.2: version 3.0 Jan 23 19:17:11.404925 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 23 19:17:11.430782 kernel: AES CTR mode by8 optimization enabled Jan 23 19:17:11.430851 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Jan 23 19:17:11.447914 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Jan 23 19:17:11.448240 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 23 19:17:11.473853 kernel: scsi host0: ahci Jan 23 19:17:11.490569 kernel: scsi host1: ahci Jan 23 19:17:11.621605 kernel: scsi host2: ahci Jan 23 19:17:11.633849 kernel: scsi host3: ahci Jan 23 19:17:11.650632 kernel: scsi host4: ahci Jan 23 19:17:11.658737 kernel: scsi host5: ahci Jan 23 19:17:11.711362 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 23 19:17:11.752742 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 lpm-pol 1 Jan 23 19:17:11.752777 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 lpm-pol 1 Jan 23 19:17:11.752791 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 lpm-pol 1 Jan 23 19:17:11.752804 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 lpm-pol 1 Jan 23 19:17:11.752817 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 lpm-pol 1 Jan 23 19:17:11.752833 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 lpm-pol 1 Jan 23 19:17:11.764718 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. 
Jan 23 19:17:12.960065 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 23 19:17:12.960117 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 23 19:17:12.960134 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 23 19:17:12.960147 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 23 19:17:12.960161 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jan 23 19:17:12.960174 kernel: ata3.00: LPM support broken, forcing max_power Jan 23 19:17:12.960202 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jan 23 19:17:12.960216 kernel: ata3.00: applying bridge limits Jan 23 19:17:12.960229 kernel: ata3.00: LPM support broken, forcing max_power Jan 23 19:17:12.960245 kernel: ata3.00: configured for UDMA/100 Jan 23 19:17:12.960259 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 23 19:17:12.960934 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 23 19:17:12.960951 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jan 23 19:17:12.961155 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 23 19:17:12.961179 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jan 23 19:17:12.961251 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 23 19:17:12.991221 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 19:17:13.017076 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 23 19:17:13.053642 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 23 19:17:13.099142 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 23 19:17:13.109712 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 23 19:17:13.128882 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 19:17:13.150747 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 23 19:17:13.175032 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 23 19:17:13.189891 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 23 19:17:13.265288 disk-uuid[631]: Primary Header is updated. Jan 23 19:17:13.265288 disk-uuid[631]: Secondary Entries is updated. Jan 23 19:17:13.265288 disk-uuid[631]: Secondary Header is updated. Jan 23 19:17:13.300294 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 23 19:17:13.303789 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 23 19:17:14.317699 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 23 19:17:14.327086 disk-uuid[637]: The operation has completed successfully. Jan 23 19:17:14.446930 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 23 19:17:14.447887 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 23 19:17:14.550050 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 23 19:17:14.612078 sh[650]: Success Jan 23 19:17:14.715930 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Jan 23 19:17:14.716157 kernel: device-mapper: uevent: version 1.0.3 Jan 23 19:17:14.730789 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jan 23 19:17:14.796823 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Jan 23 19:17:14.934703 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 23 19:17:14.951640 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 23 19:17:15.271604 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 23 19:17:15.320286 kernel: BTRFS: device fsid 4711e7dc-9586-49d4-8dcc-466f082e7841 devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (662) Jan 23 19:17:15.320327 kernel: BTRFS info (device dm-0): first mount of filesystem 4711e7dc-9586-49d4-8dcc-466f082e7841 Jan 23 19:17:15.320356 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 23 19:17:15.429030 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 23 19:17:15.429275 kernel: BTRFS info (device dm-0): enabling free space tree Jan 23 19:17:15.434890 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 23 19:17:15.459168 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jan 23 19:17:15.472060 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 23 19:17:15.474898 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 23 19:17:15.553150 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 23 19:17:15.705721 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (685) Jan 23 19:17:15.723630 kernel: BTRFS info (device vda6): first mount of filesystem a15cc984-6718-480b-8520-c0d724ebf6fe Jan 23 19:17:15.723699 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 23 19:17:15.794061 kernel: BTRFS info (device vda6): turning on async discard Jan 23 19:17:15.794138 kernel: BTRFS info (device vda6): enabling free space tree Jan 23 19:17:15.835816 kernel: BTRFS info (device vda6): last unmount of filesystem a15cc984-6718-480b-8520-c0d724ebf6fe Jan 23 19:17:15.856982 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 23 19:17:15.889998 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 23 19:17:17.117962 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 23 19:17:17.170688 ignition[748]: Ignition 2.22.0 Jan 23 19:17:17.170790 ignition[748]: Stage: fetch-offline Jan 23 19:17:17.170937 ignition[748]: no configs at "/usr/lib/ignition/base.d" Jan 23 19:17:17.170953 ignition[748]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 23 19:17:17.184873 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Jan 23 19:17:17.171874 ignition[748]: parsed url from cmdline: ""
Jan 23 19:17:17.171881 ignition[748]: no config URL provided
Jan 23 19:17:17.171889 ignition[748]: reading system config file "/usr/lib/ignition/user.ign"
Jan 23 19:17:17.171907 ignition[748]: no config at "/usr/lib/ignition/user.ign"
Jan 23 19:17:17.171942 ignition[748]: op(1): [started] loading QEMU firmware config module
Jan 23 19:17:17.171949 ignition[748]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jan 23 19:17:17.382806 ignition[748]: op(1): [finished] loading QEMU firmware config module
Jan 23 19:17:17.409036 ignition[748]: parsing config with SHA512: 209e7d1f570cbfdfccfb38eb4901dd0b524d5df2f2986138ff17443c0aa74f6cbb097961a6001a9665bf5da156fc57596fb8c78914b27fe6cbfeb6e7108c0fab
Jan 23 19:17:17.465002 unknown[748]: fetched base config from "system"
Jan 23 19:17:17.465089 unknown[748]: fetched user config from "qemu"
Jan 23 19:17:17.471912 ignition[748]: fetch-offline: fetch-offline passed
Jan 23 19:17:17.481056 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 23 19:17:17.473927 ignition[748]: Ignition finished successfully
Jan 23 19:17:17.485390 systemd-networkd[837]: lo: Link UP
Jan 23 19:17:17.485397 systemd-networkd[837]: lo: Gained carrier
Jan 23 19:17:17.490278 systemd-networkd[837]: Enumeration completed
Jan 23 19:17:17.493996 systemd-networkd[837]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 23 19:17:17.494003 systemd-networkd[837]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 23 19:17:17.499775 systemd-networkd[837]: eth0: Link UP
Jan 23 19:17:17.499950 systemd-networkd[837]: eth0: Gained carrier
Jan 23 19:17:17.499960 systemd-networkd[837]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 23 19:17:17.501316 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 23 19:17:17.515275 systemd[1]: Reached target network.target - Network.
Jan 23 19:17:17.539716 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jan 23 19:17:17.754942 kernel: hrtimer: interrupt took 4931084 ns
Jan 23 19:17:17.564198 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 23 19:17:17.586746 systemd-networkd[837]: eth0: DHCPv4 address 10.0.0.101/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 23 19:17:19.212156 systemd-networkd[837]: eth0: Gained IPv6LL
Jan 23 19:17:19.405056 ignition[843]: Ignition 2.22.0
Jan 23 19:17:19.405366 ignition[843]: Stage: kargs
Jan 23 19:17:19.446152 ignition[843]: no configs at "/usr/lib/ignition/base.d"
Jan 23 19:17:19.450159 ignition[843]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 23 19:17:19.471882 ignition[843]: kargs: kargs passed
Jan 23 19:17:19.472356 ignition[843]: Ignition finished successfully
Jan 23 19:17:19.513042 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 23 19:17:19.560344 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 23 19:17:19.863910 ignition[853]: Ignition 2.22.0 Jan 23 19:17:19.864011 ignition[853]: Stage: disks Jan 23 19:17:19.864274 ignition[853]: no configs at "/usr/lib/ignition/base.d" Jan 23 19:17:19.864288 ignition[853]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 23 19:17:19.866206 ignition[853]: disks: disks passed Jan 23 19:17:19.866251 ignition[853]: Ignition finished successfully Jan 23 19:17:19.916304 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 23 19:17:19.919023 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 23 19:17:19.939351 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 23 19:17:19.962743 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 23 19:17:19.995041 systemd[1]: Reached target sysinit.target - System Initialization. Jan 23 19:17:20.018291 systemd[1]: Reached target basic.target - Basic System. Jan 23 19:17:20.051265 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 23 19:17:20.256882 systemd-fsck[863]: ROOT: clean, 15/553520 files, 52789/553472 blocks Jan 23 19:17:20.277960 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 23 19:17:20.320397 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 23 19:17:21.082025 kernel: EXT4-fs (vda9): mounted filesystem dcb97a38-a4f5-43e7-bcb0-85a5c1e2a446 r/w with ordered data mode. Quota mode: none. Jan 23 19:17:21.088923 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 23 19:17:21.103094 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 23 19:17:21.125103 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 23 19:17:21.168801 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 23 19:17:21.193134 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 23 19:17:21.270318 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (872) Jan 23 19:17:21.270362 kernel: BTRFS info (device vda6): first mount of filesystem a15cc984-6718-480b-8520-c0d724ebf6fe Jan 23 19:17:21.270389 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 23 19:17:21.193314 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 23 19:17:21.193355 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 23 19:17:21.351360 kernel: BTRFS info (device vda6): turning on async discard Jan 23 19:17:21.351783 kernel: BTRFS info (device vda6): enabling free space tree Jan 23 19:17:21.361298 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 23 19:17:21.388803 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 23 19:17:21.424347 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 23 19:17:21.719056 initrd-setup-root[896]: cut: /sysroot/etc/passwd: No such file or directory Jan 23 19:17:21.794221 initrd-setup-root[903]: cut: /sysroot/etc/group: No such file or directory Jan 23 19:17:21.890144 initrd-setup-root[910]: cut: /sysroot/etc/shadow: No such file or directory Jan 23 19:17:21.954260 initrd-setup-root[917]: cut: /sysroot/etc/gshadow: No such file or directory Jan 23 19:17:22.982165 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. 
Jan 23 19:17:23.010897 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 23 19:17:23.055380 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 23 19:17:23.085821 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 23 19:17:23.108759 kernel: BTRFS info (device vda6): last unmount of filesystem a15cc984-6718-480b-8520-c0d724ebf6fe Jan 23 19:17:23.183360 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 23 19:17:23.724860 ignition[986]: INFO : Ignition 2.22.0 Jan 23 19:17:23.724860 ignition[986]: INFO : Stage: mount Jan 23 19:17:23.765055 ignition[986]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 19:17:23.765055 ignition[986]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 23 19:17:23.765055 ignition[986]: INFO : mount: mount passed Jan 23 19:17:23.871046 ignition[986]: INFO : Ignition finished successfully Jan 23 19:17:23.800151 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 23 19:17:23.870192 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 23 19:17:24.002065 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 23 19:17:24.116084 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (998) Jan 23 19:17:24.146142 kernel: BTRFS info (device vda6): first mount of filesystem a15cc984-6718-480b-8520-c0d724ebf6fe Jan 23 19:17:24.146228 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 23 19:17:24.200646 kernel: BTRFS info (device vda6): turning on async discard Jan 23 19:17:24.200820 kernel: BTRFS info (device vda6): enabling free space tree Jan 23 19:17:24.208067 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 23 19:17:24.672648 ignition[1015]: INFO : Ignition 2.22.0 Jan 23 19:17:24.672648 ignition[1015]: INFO : Stage: files Jan 23 19:17:24.694170 ignition[1015]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 19:17:24.694170 ignition[1015]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 23 19:17:24.694170 ignition[1015]: DEBUG : files: compiled without relabeling support, skipping Jan 23 19:17:24.694170 ignition[1015]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 23 19:17:24.694170 ignition[1015]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 23 19:17:24.775030 ignition[1015]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 23 19:17:24.775030 ignition[1015]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 23 19:17:24.807371 ignition[1015]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 23 19:17:24.798934 unknown[1015]: wrote ssh authorized keys file for user: core Jan 23 19:17:24.832790 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Jan 23 19:17:24.863181 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Jan 23 19:17:24.863181 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 23 19:17:24.863181 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 23 19:17:24.863181 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing 
link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 23 19:17:24.954283 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 23 19:17:24.954283 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 23 19:17:24.954283 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Jan 23 19:17:25.615882 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Jan 23 19:17:36.508568 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 23 19:17:36.508568 ignition[1015]: INFO : files: op(7): [started] processing unit "coreos-metadata.service" Jan 23 19:17:36.579859 ignition[1015]: INFO : files: op(7): op(8): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 23 19:17:36.579859 ignition[1015]: INFO : files: op(7): op(8): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 23 19:17:36.579859 ignition[1015]: INFO : files: op(7): [finished] processing unit "coreos-metadata.service" Jan 23 19:17:36.579859 ignition[1015]: INFO : files: op(9): [started] setting preset to disabled for "coreos-metadata.service" Jan 23 19:17:37.199292 ignition[1015]: INFO : files: op(9): op(a): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 23 19:17:37.378714 ignition[1015]: INFO : files: op(9): op(a): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 23 19:17:37.410969 ignition[1015]: INFO : files: op(9): [finished] setting preset to disabled for "coreos-metadata.service" Jan 23 19:17:37.431258 ignition[1015]: INFO : files: createResultFile: createFiles: op(b): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 23 19:17:37.431258 ignition[1015]: INFO : files: createResultFile: createFiles: op(b): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 23 19:17:37.431258 ignition[1015]: INFO : files: files passed Jan 23 19:17:37.431258 ignition[1015]: INFO : Ignition finished successfully Jan 23 19:17:37.439331 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 23 19:17:37.514056 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 23 19:17:37.574167 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 23 19:17:37.610180 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 23 19:17:37.681341 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Jan 23 19:17:37.858901 initrd-setup-root-after-ignition[1044]: grep: /sysroot/oem/oem-release: No such file or directory Jan 23 19:17:37.927997 initrd-setup-root-after-ignition[1050]: grep: Jan 23 19:17:37.927997 initrd-setup-root-after-ignition[1046]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 23 19:17:37.977191 initrd-setup-root-after-ignition[1050]: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 23 19:17:37.998808 initrd-setup-root-after-ignition[1046]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 23 19:17:37.980770 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 23 19:17:38.037186 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 23 19:17:38.067735 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 23 19:17:38.334074 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 23 19:17:38.362374 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 23 19:17:38.398755 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 23 19:17:38.401777 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 23 19:17:38.425310 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 23 19:17:38.482956 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 23 19:17:38.615218 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 23 19:17:38.628283 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 23 19:17:38.771961 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 23 19:17:38.827216 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 19:17:38.868165 systemd[1]: Stopped target timers.target - Timer Units. Jan 23 19:17:38.885918 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 23 19:17:38.886234 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 23 19:17:38.941953 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 23 19:17:38.976028 systemd[1]: Stopped target basic.target - Basic System. Jan 23 19:17:39.004014 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 23 19:17:39.025584 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 23 19:17:39.059379 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 23 19:17:39.070854 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jan 23 19:17:39.153134 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 23 19:17:39.173955 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 23 19:17:39.252999 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 23 19:17:39.264558 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 23 19:17:39.265959 systemd[1]: Stopped target swap.target - Swaps. Jan 23 19:17:39.313354 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 23 19:17:39.313871 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 23 19:17:39.391831 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. 
Jan 23 19:17:39.405210 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 19:17:39.464138 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 23 19:17:39.465241 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 19:17:39.496064 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 23 19:17:39.496246 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 23 19:17:39.567864 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 23 19:17:39.568393 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 23 19:17:39.581784 systemd[1]: Stopped target paths.target - Path Units. Jan 23 19:17:39.634917 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 23 19:17:39.650074 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 19:17:39.661956 systemd[1]: Stopped target slices.target - Slice Units. Jan 23 19:17:39.704838 systemd[1]: Stopped target sockets.target - Socket Units. Jan 23 19:17:39.721851 systemd[1]: iscsid.socket: Deactivated successfully. Jan 23 19:17:39.722204 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 23 19:17:39.747750 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 23 19:17:39.747888 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 23 19:17:39.779148 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 23 19:17:39.779857 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 23 19:17:39.799795 systemd[1]: ignition-files.service: Deactivated successfully. Jan 23 19:17:39.800098 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 23 19:17:39.804180 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 23 19:17:39.858154 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 23 19:17:39.867291 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 23 19:17:39.868015 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 19:17:39.925811 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 23 19:17:39.925987 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 23 19:17:39.986362 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 23 19:17:40.050971 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 23 19:17:40.112921 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 23 19:17:40.145602 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 23 19:17:40.145928 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 23 19:17:40.223597 ignition[1070]: INFO : Ignition 2.22.0 Jan 23 19:17:40.223597 ignition[1070]: INFO : Stage: umount Jan 23 19:17:40.263949 ignition[1070]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 19:17:40.263949 ignition[1070]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 23 19:17:40.263949 ignition[1070]: INFO : umount: umount passed Jan 23 19:17:40.263949 ignition[1070]: INFO : Ignition finished successfully Jan 23 19:17:40.267735 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 23 19:17:40.268015 systemd[1]: Stopped ignition-mount.service - Ignition (mount). 
Jan 23 19:17:40.293248 systemd[1]: Stopped target network.target - Network. Jan 23 19:17:40.302044 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 23 19:17:40.302178 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 23 19:17:40.376365 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 23 19:17:40.376791 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 23 19:17:40.547015 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 23 19:17:40.560087 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 23 19:17:40.579943 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 23 19:17:40.580121 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 23 19:17:40.618137 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 23 19:17:40.619823 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 23 19:17:40.656334 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 23 19:17:40.695367 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 23 19:17:40.740758 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 23 19:17:40.741051 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 23 19:17:40.806080 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jan 23 19:17:40.806836 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 23 19:17:40.807169 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 23 19:17:40.849319 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jan 23 19:17:40.851142 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 23 19:17:40.886313 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 23 19:17:40.886585 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 23 19:17:40.972111 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 23 19:17:41.006173 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 23 19:17:41.006301 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 23 19:17:41.052122 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 23 19:17:41.052362 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 23 19:17:41.210105 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 23 19:17:41.210363 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 23 19:17:41.218009 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 23 19:17:41.218100 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 19:17:41.281822 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 19:17:41.333952 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jan 23 19:17:41.334065 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jan 23 19:17:41.403159 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 23 19:17:41.416054 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 19:17:41.436150 systemd[1]: network-cleanup.service: Deactivated successfully. 
Jan 23 19:17:41.436578 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 23 19:17:41.463219 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 23 19:17:41.463298 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 23 19:17:41.563171 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 23 19:17:41.563397 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 19:17:41.575901 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 23 19:17:41.578016 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 23 19:17:41.629611 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 23 19:17:41.629826 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 23 19:17:41.670037 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 23 19:17:41.670167 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 23 19:17:41.715031 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 23 19:17:41.735988 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 23 19:17:41.736205 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 23 19:17:41.801344 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 23 19:17:41.802086 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 19:17:41.854071 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 23 19:17:41.854263 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 23 19:17:41.904073 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 23 19:17:41.904172 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 19:17:41.972825 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 19:17:41.973007 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 19:17:42.010087 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Jan 23 19:17:42.010270 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. Jan 23 19:17:42.010339 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jan 23 19:17:42.011051 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jan 23 19:17:42.011740 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 23 19:17:42.012000 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 23 19:17:42.063871 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 23 19:17:42.088039 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 23 19:17:42.190811 systemd[1]: Switching root. Jan 23 19:17:42.478245 systemd-journald[201]: Journal stopped Jan 23 19:17:48.885349 systemd-journald[201]: Received SIGTERM from PID 1 (systemd). 
Jan 23 19:17:48.885629 kernel: SELinux: policy capability network_peer_controls=1 Jan 23 19:17:48.885662 kernel: SELinux: policy capability open_perms=1 Jan 23 19:17:48.885682 kernel: SELinux: policy capability extended_socket_class=1 Jan 23 19:17:48.885802 kernel: SELinux: policy capability always_check_network=0 Jan 23 19:17:48.885823 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 23 19:17:48.885851 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 23 19:17:48.885876 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 23 19:17:48.885903 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 23 19:17:48.885921 kernel: SELinux: policy capability userspace_initial_context=0 Jan 23 19:17:48.885938 kernel: audit: type=1403 audit(1769195863.196:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 23 19:17:48.885965 systemd[1]: Successfully loaded SELinux policy in 314.213ms. Jan 23 19:17:48.886002 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 37.925ms. Jan 23 19:17:48.886024 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 23 19:17:48.886045 systemd[1]: Detected virtualization kvm. Jan 23 19:17:48.886185 systemd[1]: Detected architecture x86-64. Jan 23 19:17:48.886302 systemd[1]: Detected first boot. Jan 23 19:17:48.886327 systemd[1]: Initializing machine ID from VM UUID. Jan 23 19:17:48.886345 kernel: Guest personality initialized and is inactive Jan 23 19:17:48.886364 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Jan 23 19:17:48.886379 kernel: Initialized host personality Jan 23 19:17:48.886395 zram_generator::config[1115]: No configuration found. Jan 23 19:17:48.886576 kernel: NET: Registered PF_VSOCK protocol family Jan 23 19:17:48.886593 systemd[1]: Populated /etc with preset unit settings. Jan 23 19:17:48.886611 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jan 23 19:17:48.886622 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 23 19:17:48.886632 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 23 19:17:48.886643 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 23 19:17:48.886654 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 23 19:17:48.886664 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 23 19:17:48.886675 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 23 19:17:48.886685 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 23 19:17:48.886812 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 23 19:17:48.886833 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 23 19:17:48.886853 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 23 19:17:48.886868 systemd[1]: Created slice user.slice - User and Session Slice. Jan 23 19:17:48.886993 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Jan 23 19:17:48.887006 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 19:17:48.887017 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 23 19:17:48.887027 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 23 19:17:48.887039 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 23 19:17:48.887054 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 23 19:17:48.887065 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 23 19:17:48.887075 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 19:17:48.887087 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 23 19:17:48.887098 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 23 19:17:48.887108 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 23 19:17:48.887119 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 23 19:17:48.887130 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 23 19:17:48.887143 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 19:17:48.887169 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 23 19:17:48.887186 systemd[1]: Reached target slices.target - Slice Units. Jan 23 19:17:48.887204 systemd[1]: Reached target swap.target - Swaps. Jan 23 19:17:48.887221 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 23 19:17:48.887237 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 23 19:17:48.887253 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 23 19:17:48.887269 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 23 19:17:48.887380 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 23 19:17:48.887396 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 19:17:48.887618 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 23 19:17:48.887631 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 23 19:17:48.887642 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 23 19:17:48.887653 systemd[1]: Mounting media.mount - External Media Directory... Jan 23 19:17:48.887664 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 19:17:48.887674 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 23 19:17:48.887685 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 23 19:17:48.887777 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 23 19:17:48.887795 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 23 19:17:48.887806 systemd[1]: Reached target machines.target - Containers. Jan 23 19:17:48.887817 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... 
Jan 23 19:17:48.887827 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 19:17:48.887838 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 23 19:17:48.887849 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 23 19:17:48.887859 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 23 19:17:48.887870 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 23 19:17:48.887884 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 23 19:17:48.887895 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 23 19:17:48.887905 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 23 19:17:48.888000 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 23 19:17:48.888011 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 23 19:17:48.888022 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 23 19:17:48.888033 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 23 19:17:48.888043 systemd[1]: Stopped systemd-fsck-usr.service. Jan 23 19:17:48.888054 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 23 19:17:48.888069 kernel: fuse: init (API version 7.41) Jan 23 19:17:48.888079 kernel: ACPI: bus type drm_connector registered Jan 23 19:17:48.888090 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 23 19:17:48.888100 kernel: loop: module loaded Jan 23 19:17:48.888111 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 23 19:17:48.888122 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 23 19:17:48.888133 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 23 19:17:48.888144 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 23 19:17:48.888155 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 23 19:17:48.888319 systemd-journald[1200]: Collecting audit messages is disabled. Jan 23 19:17:48.888359 systemd-journald[1200]: Journal started Jan 23 19:17:48.888390 systemd-journald[1200]: Runtime Journal (/run/log/journal/51ac57fc8b2c401994d1b0a5db76ded3) is 6M, max 48.3M, 42.2M free. Jan 23 19:17:46.058152 systemd[1]: Queued start job for default target multi-user.target. Jan 23 19:17:46.093173 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 23 19:17:46.098826 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 23 19:17:46.100262 systemd[1]: systemd-journald.service: Consumed 4.320s CPU time. Jan 23 19:17:48.915693 systemd[1]: verity-setup.service: Deactivated successfully. Jan 23 19:17:48.915851 systemd[1]: Stopped verity-setup.service. Jan 23 19:17:48.924968 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 19:17:48.989260 systemd[1]: Started systemd-journald.service - Journal Service. 
Jan 23 19:17:49.077305 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 23 19:17:49.103674 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 23 19:17:49.126149 systemd[1]: Mounted media.mount - External Media Directory. Jan 23 19:17:49.197908 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 23 19:17:49.248117 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 23 19:17:49.263387 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 23 19:17:49.276096 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 23 19:17:49.290157 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 19:17:49.306097 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 23 19:17:49.306960 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 23 19:17:49.326156 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 23 19:17:49.326911 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 23 19:17:49.350881 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 23 19:17:49.351358 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 23 19:17:49.366869 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 23 19:17:49.368065 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 23 19:17:49.384027 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 23 19:17:49.384663 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 23 19:17:49.399362 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 23 19:17:49.400818 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 23 19:17:49.417019 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 23 19:17:49.436366 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 23 19:17:49.468013 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 23 19:17:49.489018 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 23 19:17:49.509232 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 19:17:49.581252 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 23 19:17:49.615001 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 23 19:17:49.650203 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 23 19:17:49.664873 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 23 19:17:49.665278 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 23 19:17:49.687173 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 23 19:17:49.732204 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 23 19:17:49.752220 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 19:17:49.758272 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 23 19:17:49.804135 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... 
Jan 23 19:17:49.826631 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 23 19:17:49.838952 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 23 19:17:49.857630 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 23 19:17:49.872940 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 23 19:17:49.906021 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 23 19:17:49.925174 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 23 19:17:49.953342 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 23 19:17:49.966678 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 23 19:17:50.081988 systemd-journald[1200]: Time spent on flushing to /var/log/journal/51ac57fc8b2c401994d1b0a5db76ded3 is 145.254ms for 968 entries. Jan 23 19:17:50.081988 systemd-journald[1200]: System Journal (/var/log/journal/51ac57fc8b2c401994d1b0a5db76ded3) is 8M, max 195.6M, 187.6M free. Jan 23 19:17:50.895933 systemd-journald[1200]: Received client request to flush runtime journal. Jan 23 19:17:50.907217 kernel: loop0: detected capacity change from 0 to 110984 Jan 23 19:17:50.324396 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 23 19:17:50.352281 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 23 19:17:50.376182 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 23 19:17:50.434633 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 23 19:17:50.918652 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 23 19:17:50.995592 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 23 19:17:51.005282 systemd-tmpfiles[1236]: ACLs are not supported, ignoring. Jan 23 19:17:51.005306 systemd-tmpfiles[1236]: ACLs are not supported, ignoring. Jan 23 19:17:51.038057 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 23 19:17:51.073830 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 23 19:17:51.077831 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 23 19:17:51.082364 kernel: loop1: detected capacity change from 0 to 128560 Jan 23 19:17:51.103866 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 23 19:17:51.222274 kernel: loop2: detected capacity change from 0 to 224512 Jan 23 19:17:51.360954 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 23 19:17:51.378029 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 23 19:17:51.426945 kernel: loop3: detected capacity change from 0 to 110984 Jan 23 19:17:51.885610 kernel: loop4: detected capacity change from 0 to 128560 Jan 23 19:17:52.039852 kernel: loop5: detected capacity change from 0 to 224512 Jan 23 19:17:52.351393 systemd-tmpfiles[1259]: ACLs are not supported, ignoring. Jan 23 19:17:52.351703 systemd-tmpfiles[1259]: ACLs are not supported, ignoring. Jan 23 19:17:52.385661 (sd-merge)[1260]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. 
Jan 23 19:17:52.387701 (sd-merge)[1260]: Merged extensions into '/usr'. Jan 23 19:17:52.416282 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 19:17:52.459078 systemd[1]: Reload requested from client PID 1235 ('systemd-sysext') (unit systemd-sysext.service)... Jan 23 19:17:52.460293 systemd[1]: Reloading... Jan 23 19:17:53.105844 zram_generator::config[1285]: No configuration found. Jan 23 19:17:54.587048 systemd[1]: Reloading finished in 2125 ms. Jan 23 19:17:54.674749 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 23 19:17:54.758959 systemd[1]: Starting ensure-sysext.service... Jan 23 19:17:54.787983 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 23 19:17:54.898050 systemd[1]: Reload requested from client PID 1325 ('systemctl') (unit ensure-sysext.service)... Jan 23 19:17:54.898165 systemd[1]: Reloading... Jan 23 19:17:54.909613 ldconfig[1230]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 23 19:17:55.664177 systemd-tmpfiles[1326]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 23 19:17:55.668628 systemd-tmpfiles[1326]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 23 19:17:55.670251 systemd-tmpfiles[1326]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 23 19:17:55.671218 systemd-tmpfiles[1326]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 23 19:17:55.673111 systemd-tmpfiles[1326]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 23 19:17:55.673637 systemd-tmpfiles[1326]: ACLs are not supported, ignoring. Jan 23 19:17:55.673885 systemd-tmpfiles[1326]: ACLs are not supported, ignoring. Jan 23 19:17:55.689761 systemd-tmpfiles[1326]: Detected autofs mount point /boot during canonicalization of boot. Jan 23 19:17:55.689899 systemd-tmpfiles[1326]: Skipping /boot Jan 23 19:17:55.760139 systemd-tmpfiles[1326]: Detected autofs mount point /boot during canonicalization of boot. Jan 23 19:17:55.760315 systemd-tmpfiles[1326]: Skipping /boot Jan 23 19:17:55.805690 zram_generator::config[1362]: No configuration found. Jan 23 19:17:56.437308 systemd[1]: Reloading finished in 1538 ms. Jan 23 19:17:56.467249 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 23 19:17:56.485032 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 23 19:17:56.498020 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 19:17:56.552367 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 23 19:17:56.569618 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 23 19:17:56.588270 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 23 19:17:56.732308 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 23 19:17:56.917136 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 19:17:57.085298 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... 
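Merging the containerd-flatcar, docker-flatcar, and kubernetes extensions into /usr triggers two full systemd daemon reloads, logged above as "Reloading finished in 2125 ms" and "Reloading finished in 1538 ms". A minimal sketch, assuming the captured console output has been saved to a plain-text file (the name boot.log is hypothetical), that collects every such duration with a regular expression:

```python
import re

# Hypothetical capture file holding the console log shown above.
LOG_PATH = "boot.log"

# Matches systemd's "Reloading finished in <N> ms." messages.
pattern = re.compile(r"Reloading finished in (\d+) ms")

with open(LOG_PATH, encoding="utf-8", errors="replace") as fh:
    durations = [int(m.group(1)) for m in pattern.finditer(fh.read())]

for i, ms in enumerate(durations, start=1):
    print(f"daemon reload #{i}: {ms} ms")
# For the log above this reports 2125 ms and 1538 ms.
```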
Jan 23 19:17:57.163755 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 19:17:57.164135 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 19:17:57.174670 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 23 19:17:57.200090 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 23 19:17:57.224773 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 23 19:17:57.240255 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 19:17:57.240690 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 23 19:17:57.241189 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 19:17:57.277647 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 23 19:17:57.294196 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 23 19:17:57.295123 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 23 19:17:57.332090 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 23 19:17:57.370352 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 23 19:17:57.380183 systemd-udevd[1403]: Using default interface naming scheme 'v255'. Jan 23 19:17:57.389884 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 23 19:17:57.390223 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 23 19:17:57.396325 augenrules[1423]: No rules Jan 23 19:17:57.407761 systemd[1]: audit-rules.service: Deactivated successfully. Jan 23 19:17:57.408211 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 23 19:17:57.421370 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 23 19:17:57.422276 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 23 19:17:57.458275 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 19:17:57.461953 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 23 19:17:57.473225 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 19:17:57.476766 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 23 19:17:57.559110 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 23 19:17:57.576932 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 23 19:17:57.597661 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 23 19:17:57.610064 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 19:17:57.610254 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). 
Jan 23 19:17:57.618745 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 23 19:17:57.641346 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 19:17:57.656086 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 23 19:17:57.672655 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 19:17:57.692070 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 23 19:17:57.710883 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 23 19:17:57.711933 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 23 19:17:57.720582 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 23 19:17:57.721161 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 23 19:17:57.741576 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 23 19:17:57.746191 augenrules[1433]: /sbin/augenrules: No change Jan 23 19:17:57.763960 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 23 19:17:57.801734 systemd[1]: Finished ensure-sysext.service. Jan 23 19:17:57.816733 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 23 19:17:57.819354 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 23 19:17:57.841682 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 23 19:17:57.889372 augenrules[1486]: No rules Jan 23 19:17:57.895231 systemd[1]: audit-rules.service: Deactivated successfully. Jan 23 19:17:57.896189 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 23 19:17:57.922351 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 23 19:17:57.942979 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 23 19:17:57.943092 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 23 19:17:57.949160 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 23 19:17:57.963680 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 23 19:17:57.971725 systemd-resolved[1402]: Positive Trust Anchors: Jan 23 19:17:57.971745 systemd-resolved[1402]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 23 19:17:57.971787 systemd-resolved[1402]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 23 19:17:58.010331 systemd-resolved[1402]: Defaulting to hostname 'linux'. Jan 23 19:17:58.024209 systemd[1]: Started systemd-resolved.service - Network Name Resolution. 
Jan 23 19:17:58.055997 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 23 19:17:58.263119 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 23 19:17:58.265348 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 23 19:17:58.279728 systemd[1]: Reached target sysinit.target - System Initialization. Jan 23 19:17:58.292956 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 23 19:17:58.306678 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 23 19:17:58.320965 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jan 23 19:17:58.335951 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 23 19:17:58.342277 systemd-networkd[1500]: lo: Link UP Jan 23 19:17:58.342284 systemd-networkd[1500]: lo: Gained carrier Jan 23 19:17:58.347119 systemd-networkd[1500]: Enumeration completed Jan 23 19:17:58.387738 kernel: mousedev: PS/2 mouse device common for all mice Jan 23 19:17:58.352337 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 23 19:17:58.352395 systemd[1]: Reached target paths.target - Path Units. Jan 23 19:17:58.364944 systemd[1]: Reached target time-set.target - System Time Set. Jan 23 19:17:58.387647 systemd-networkd[1500]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 19:17:58.387655 systemd-networkd[1500]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 23 19:17:58.388253 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 23 19:17:58.394203 systemd-networkd[1500]: eth0: Link UP Jan 23 19:17:58.396029 systemd-networkd[1500]: eth0: Gained carrier Jan 23 19:17:58.396154 systemd-networkd[1500]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 19:17:58.406065 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 23 19:17:58.425904 systemd[1]: Reached target timers.target - Timer Units. Jan 23 19:17:58.445675 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 23 19:17:58.461075 kernel: ACPI: button: Power Button [PWRF] Jan 23 19:17:58.468061 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 23 19:17:58.489948 systemd-networkd[1500]: eth0: DHCPv4 address 10.0.0.101/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 23 19:17:58.492170 systemd-timesyncd[1501]: Network configuration changed, trying to establish connection. Jan 23 19:17:58.500097 systemd-timesyncd[1501]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 23 19:17:58.500301 systemd-timesyncd[1501]: Initial clock synchronization to Fri 2026-01-23 19:17:58.861214 UTC. Jan 23 19:17:58.504620 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 23 19:17:58.521689 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 23 19:17:58.535199 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 23 19:17:58.587334 systemd[1]: Reached target ssh-access.target - SSH Access Available. 
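systemd-networkd brings eth0 up with a DHCPv4 lease of 10.0.0.101/16 from 10.0.0.1, and systemd-timesyncd then reaches the same host on port 123. As a small illustration of what that lease implies, a standard-library sketch of the subnet arithmetic; the address, prefix, and gateway are copied from the log, nothing else is assumed:

```python
import ipaddress

# Lease details as logged by systemd-networkd for eth0.
lease = ipaddress.ip_interface("10.0.0.101/16")
gateway = ipaddress.ip_address("10.0.0.1")

print(lease.network)                # 10.0.0.0/16
print(lease.network.num_addresses)  # 65536 addresses in the /16
print(gateway in lease.network)     # True: the gateway is on-link
```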
Jan 23 19:17:59.498030 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 23 19:17:59.515754 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 23 19:17:59.542896 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 23 19:17:59.555352 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 23 19:17:59.595245 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 23 19:17:59.617921 systemd[1]: Reached target network.target - Network. Jan 23 19:17:59.627846 systemd[1]: Reached target sockets.target - Socket Units. Jan 23 19:17:59.639991 systemd[1]: Reached target basic.target - Basic System. Jan 23 19:17:59.649978 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 23 19:17:59.650274 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 23 19:17:59.654960 systemd[1]: Starting containerd.service - containerd container runtime... Jan 23 19:17:59.691322 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 23 19:17:59.706162 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 23 19:17:59.724197 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 23 19:17:59.744750 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 23 19:17:59.754893 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 23 19:17:59.764286 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jan 23 19:17:59.804062 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 23 19:17:59.830222 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 23 19:17:59.849102 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 23 19:17:59.854762 systemd-networkd[1500]: eth0: Gained IPv6LL Jan 23 19:17:59.869431 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 23 19:17:59.901868 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 23 19:17:59.922948 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 23 19:17:59.945844 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 23 19:17:59.951901 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 23 19:17:59.953877 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 23 19:17:59.958724 systemd[1]: Starting update-engine.service - Update Engine... Jan 23 19:17:59.968812 google_oslogin_nss_cache[1535]: oslogin_cache_refresh[1535]: Refreshing passwd entry cache Jan 23 19:17:59.963201 oslogin_cache_refresh[1535]: Refreshing passwd entry cache Jan 23 19:17:59.975774 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 23 19:18:00.006987 jq[1533]: false Jan 23 19:18:00.010015 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
Jan 23 19:18:00.010631 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 23 19:18:00.026662 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 23 19:18:00.027299 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 23 19:18:00.052774 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 23 19:18:00.778061 google_oslogin_nss_cache[1535]: oslogin_cache_refresh[1535]: Failure getting users, quitting Jan 23 19:18:00.778174 oslogin_cache_refresh[1535]: Failure getting users, quitting Jan 23 19:18:00.778881 google_oslogin_nss_cache[1535]: oslogin_cache_refresh[1535]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 23 19:18:00.791642 oslogin_cache_refresh[1535]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 23 19:18:00.793706 google_oslogin_nss_cache[1535]: oslogin_cache_refresh[1535]: Refreshing group entry cache Jan 23 19:18:00.791918 oslogin_cache_refresh[1535]: Refreshing group entry cache Jan 23 19:18:00.829067 oslogin_cache_refresh[1535]: Failure getting groups, quitting Jan 23 19:18:00.812284 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 23 19:18:00.841044 google_oslogin_nss_cache[1535]: oslogin_cache_refresh[1535]: Failure getting groups, quitting Jan 23 19:18:00.841044 google_oslogin_nss_cache[1535]: oslogin_cache_refresh[1535]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 23 19:18:00.829091 oslogin_cache_refresh[1535]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 23 19:18:00.884026 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jan 23 19:18:00.885144 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jan 23 19:18:00.937300 extend-filesystems[1534]: Found /dev/vda6 Jan 23 19:18:00.948017 update_engine[1544]: I20260123 19:18:00.938043 1544 main.cc:92] Flatcar Update Engine starting Jan 23 19:18:00.967373 jq[1546]: true Jan 23 19:18:01.002223 extend-filesystems[1534]: Found /dev/vda9 Jan 23 19:18:01.017780 (ntainerd)[1564]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 23 19:18:01.118227 extend-filesystems[1534]: Checking size of /dev/vda9 Jan 23 19:18:01.140749 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 23 19:18:01.162739 systemd[1]: Reached target network-online.target - Network is Online. Jan 23 19:18:01.209257 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 23 19:18:01.212974 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 23 19:18:01.220854 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 23 19:18:01.260851 jq[1568]: true Jan 23 19:18:01.244866 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 19:18:01.267141 dbus-daemon[1530]: [system] SELinux support is enabled Jan 23 19:18:01.272892 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 23 19:18:01.290187 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 23 19:18:01.318183 systemd[1]: motdgen.service: Deactivated successfully. Jan 23 19:18:01.320898 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Jan 23 19:18:01.346921 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 23 19:18:01.407881 update_engine[1544]: I20260123 19:18:01.364611 1544 update_check_scheduler.cc:74] Next update check in 3m35s Jan 23 19:18:01.378073 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 23 19:18:01.408085 extend-filesystems[1534]: Resized partition /dev/vda9 Jan 23 19:18:01.378114 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 23 19:18:01.477296 extend-filesystems[1587]: resize2fs 1.47.3 (8-Jul-2025) Jan 23 19:18:01.394340 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 23 19:18:01.394657 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 23 19:18:01.654660 systemd[1]: Started update-engine.service - Update Engine. Jan 23 19:18:01.739387 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 23 19:18:01.797957 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 23 19:18:01.832007 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 23 19:18:01.916169 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 19:18:01.931709 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 23 19:18:01.932081 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 23 19:18:01.947766 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 23 19:18:02.155702 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 23 19:18:02.211644 extend-filesystems[1587]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 23 19:18:02.211644 extend-filesystems[1587]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 23 19:18:02.211644 extend-filesystems[1587]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jan 23 19:18:02.296818 extend-filesystems[1534]: Resized filesystem in /dev/vda9 Jan 23 19:18:02.221725 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 23 19:18:02.309256 bash[1608]: Updated "/home/core/.ssh/authorized_keys" Jan 23 19:18:02.222310 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 23 19:18:02.249605 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 23 19:18:02.270121 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 23 19:18:03.114161 systemd-logind[1540]: Watching system buttons on /dev/input/event2 (Power Button) Jan 23 19:18:03.116628 systemd-logind[1540]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 23 19:18:03.118645 systemd-logind[1540]: New seat seat0. Jan 23 19:18:03.122674 systemd[1]: Started systemd-logind.service - User Login Management. Jan 23 19:18:03.757337 sshd_keygen[1552]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 23 19:18:04.519756 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. 
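extend-filesystems grows the root filesystem on /dev/vda9 online from 553472 to 1864699 blocks. A quick sketch of the size arithmetic, using only the block counts and the "(4k)" block size reported above:

```python
BLOCK_SIZE = 4096          # "(4k) blocks" per the resize output above
OLD_BLOCKS = 553_472       # size before the online resize
NEW_BLOCKS = 1_864_699     # size after resizing /dev/vda9

def gib(blocks: int) -> float:
    """Convert a 4 KiB block count to GiB."""
    return blocks * BLOCK_SIZE / 2**30

print(f"before: {gib(OLD_BLOCKS):.2f} GiB")   # ~2.11 GiB
print(f"after:  {gib(NEW_BLOCKS):.2f} GiB")   # ~7.11 GiB
print(f"growth: {gib(NEW_BLOCKS) - gib(OLD_BLOCKS):.2f} GiB")
```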
Jan 23 19:18:04.758133 locksmithd[1594]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 23 19:18:05.596608 kernel: kvm_amd: TSC scaling supported Jan 23 19:18:05.596782 kernel: kvm_amd: Nested Virtualization enabled Jan 23 19:18:05.596814 kernel: kvm_amd: Nested Paging enabled Jan 23 19:18:05.596869 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jan 23 19:18:05.596891 kernel: kvm_amd: PMU virtualization is disabled Jan 23 19:18:07.098039 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 23 19:18:07.207631 kernel: EDAC MC: Ver: 3.0.0 Jan 23 19:18:08.221872 containerd[1564]: time="2026-01-23T19:18:08Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 23 19:18:08.229737 containerd[1564]: time="2026-01-23T19:18:08.229013161Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Jan 23 19:18:08.388806 containerd[1564]: time="2026-01-23T19:18:08.388351441Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="262.828µs" Jan 23 19:18:08.388806 containerd[1564]: time="2026-01-23T19:18:08.388771851Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 23 19:18:08.388992 containerd[1564]: time="2026-01-23T19:18:08.388895278Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 23 19:18:08.390107 containerd[1564]: time="2026-01-23T19:18:08.389954455Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 23 19:18:08.390107 containerd[1564]: time="2026-01-23T19:18:08.390086544Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 23 19:18:08.390360 containerd[1564]: time="2026-01-23T19:18:08.390232082Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 23 19:18:08.392120 containerd[1564]: time="2026-01-23T19:18:08.391822979Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 23 19:18:08.392120 containerd[1564]: time="2026-01-23T19:18:08.391944355Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 23 19:18:08.392969 containerd[1564]: time="2026-01-23T19:18:08.392822916Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 23 19:18:08.392969 containerd[1564]: time="2026-01-23T19:18:08.392945215Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 23 19:18:08.392969 containerd[1564]: time="2026-01-23T19:18:08.392966552Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 23 19:18:08.393069 containerd[1564]: time="2026-01-23T19:18:08.392980630Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 23 19:18:08.393989 
containerd[1564]: time="2026-01-23T19:18:08.393699365Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 23 19:18:08.396228 containerd[1564]: time="2026-01-23T19:18:08.395819425Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 23 19:18:08.396228 containerd[1564]: time="2026-01-23T19:18:08.395957354Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 23 19:18:08.396228 containerd[1564]: time="2026-01-23T19:18:08.395973188Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 23 19:18:08.397118 containerd[1564]: time="2026-01-23T19:18:08.396954822Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 23 19:18:08.403333 containerd[1564]: time="2026-01-23T19:18:08.403184761Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 23 19:18:08.403833 containerd[1564]: time="2026-01-23T19:18:08.403629282Z" level=info msg="metadata content store policy set" policy=shared Jan 23 19:18:08.431537 containerd[1564]: time="2026-01-23T19:18:08.430926919Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 23 19:18:08.432239 containerd[1564]: time="2026-01-23T19:18:08.431863501Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 23 19:18:08.432239 containerd[1564]: time="2026-01-23T19:18:08.431994850Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 23 19:18:08.432239 containerd[1564]: time="2026-01-23T19:18:08.432023154Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 23 19:18:08.432366 containerd[1564]: time="2026-01-23T19:18:08.432241358Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 23 19:18:08.432366 containerd[1564]: time="2026-01-23T19:18:08.432271856Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 23 19:18:08.432599 containerd[1564]: time="2026-01-23T19:18:08.432394948Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 23 19:18:08.432632 containerd[1564]: time="2026-01-23T19:18:08.432605443Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 23 19:18:08.432682 containerd[1564]: time="2026-01-23T19:18:08.432627238Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 23 19:18:08.432682 containerd[1564]: time="2026-01-23T19:18:08.432648484Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 23 19:18:08.432682 containerd[1564]: time="2026-01-23T19:18:08.432670137Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 23 19:18:08.432789 containerd[1564]: time="2026-01-23T19:18:08.432694694Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 23 19:18:08.433827 containerd[1564]: 
time="2026-01-23T19:18:08.432910166Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 23 19:18:08.433827 containerd[1564]: time="2026-01-23T19:18:08.433317780Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 23 19:18:08.433827 containerd[1564]: time="2026-01-23T19:18:08.433352717Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 23 19:18:08.433827 containerd[1564]: time="2026-01-23T19:18:08.433377549Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 23 19:18:08.435059 containerd[1564]: time="2026-01-23T19:18:08.434647535Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 23 19:18:08.435059 containerd[1564]: time="2026-01-23T19:18:08.434763883Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 23 19:18:08.435059 containerd[1564]: time="2026-01-23T19:18:08.434787283Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 23 19:18:08.435059 containerd[1564]: time="2026-01-23T19:18:08.434803359Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 23 19:18:08.435059 containerd[1564]: time="2026-01-23T19:18:08.434820259Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 23 19:18:08.435059 containerd[1564]: time="2026-01-23T19:18:08.434834529Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 23 19:18:08.435059 containerd[1564]: time="2026-01-23T19:18:08.434848300Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 23 19:18:08.436041 containerd[1564]: time="2026-01-23T19:18:08.435620598Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 23 19:18:08.439093 containerd[1564]: time="2026-01-23T19:18:08.438594987Z" level=info msg="Start snapshots syncer" Jan 23 19:18:08.439137 containerd[1564]: time="2026-01-23T19:18:08.439097449Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 23 19:18:08.445235 containerd[1564]: time="2026-01-23T19:18:08.445078756Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 23 19:18:08.445235 containerd[1564]: time="2026-01-23T19:18:08.445156663Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 23 19:18:08.446911 containerd[1564]: time="2026-01-23T19:18:08.446843912Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 23 19:18:08.447651 containerd[1564]: time="2026-01-23T19:18:08.447055472Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 23 19:18:08.447651 containerd[1564]: time="2026-01-23T19:18:08.447201548Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 23 19:18:08.447651 containerd[1564]: time="2026-01-23T19:18:08.447221769Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 23 19:18:08.447724 containerd[1564]: time="2026-01-23T19:18:08.447656327Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 23 19:18:08.447724 containerd[1564]: time="2026-01-23T19:18:08.447681493Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 23 19:18:08.447724 containerd[1564]: time="2026-01-23T19:18:08.447697530Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 23 19:18:08.447724 containerd[1564]: time="2026-01-23T19:18:08.447715090Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 23 19:18:08.448174 containerd[1564]: time="2026-01-23T19:18:08.447860515Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 23 19:18:08.448174 containerd[1564]: 
time="2026-01-23T19:18:08.447990166Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 23 19:18:08.448174 containerd[1564]: time="2026-01-23T19:18:08.448011262Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 23 19:18:08.450653 containerd[1564]: time="2026-01-23T19:18:08.450027261Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 23 19:18:08.450653 containerd[1564]: time="2026-01-23T19:18:08.450155625Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 23 19:18:08.450653 containerd[1564]: time="2026-01-23T19:18:08.450171671Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 23 19:18:08.450653 containerd[1564]: time="2026-01-23T19:18:08.450188357Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 23 19:18:08.450653 containerd[1564]: time="2026-01-23T19:18:08.450199570Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 23 19:18:08.450653 containerd[1564]: time="2026-01-23T19:18:08.450213199Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 23 19:18:08.450653 containerd[1564]: time="2026-01-23T19:18:08.450238843Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 23 19:18:08.450653 containerd[1564]: time="2026-01-23T19:18:08.450638779Z" level=info msg="runtime interface created" Jan 23 19:18:08.450653 containerd[1564]: time="2026-01-23T19:18:08.450653972Z" level=info msg="created NRI interface" Jan 23 19:18:08.454669 containerd[1564]: time="2026-01-23T19:18:08.450669349Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 23 19:18:08.454669 containerd[1564]: time="2026-01-23T19:18:08.450690007Z" level=info msg="Connect containerd service" Jan 23 19:18:08.454669 containerd[1564]: time="2026-01-23T19:18:08.450825122Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 23 19:18:08.461396 containerd[1564]: time="2026-01-23T19:18:08.461095055Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 23 19:18:08.878074 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 23 19:18:10.135289 systemd[1]: Started sshd@0-10.0.0.101:22-10.0.0.1:47712.service - OpenSSH per-connection server daemon (10.0.0.1:47712). 
Jan 23 19:18:11.422630 containerd[1564]: time="2026-01-23T19:18:11.422195223Z" level=info msg="Start subscribing containerd event" Jan 23 19:18:11.435729 containerd[1564]: time="2026-01-23T19:18:11.425278904Z" level=info msg="Start recovering state" Jan 23 19:18:11.441653 containerd[1564]: time="2026-01-23T19:18:11.440855686Z" level=info msg="Start event monitor" Jan 23 19:18:11.441653 containerd[1564]: time="2026-01-23T19:18:11.441153719Z" level=info msg="Start cni network conf syncer for default" Jan 23 19:18:11.443009 containerd[1564]: time="2026-01-23T19:18:11.442235100Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 23 19:18:11.445549 containerd[1564]: time="2026-01-23T19:18:11.444658898Z" level=info msg="Start streaming server" Jan 23 19:18:11.445549 containerd[1564]: time="2026-01-23T19:18:11.444932605Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 23 19:18:11.445549 containerd[1564]: time="2026-01-23T19:18:11.444952442Z" level=info msg="runtime interface starting up..." Jan 23 19:18:11.445549 containerd[1564]: time="2026-01-23T19:18:11.445054217Z" level=info msg="starting plugins..." Jan 23 19:18:11.445549 containerd[1564]: time="2026-01-23T19:18:11.445081718Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 23 19:18:11.446023 containerd[1564]: time="2026-01-23T19:18:11.445991033Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 23 19:18:11.449926 containerd[1564]: time="2026-01-23T19:18:11.449898541Z" level=info msg="containerd successfully booted in 3.233056s" Jan 23 19:18:11.817358 systemd[1]: Started containerd.service - containerd container runtime. Jan 23 19:18:12.537224 systemd[1]: issuegen.service: Deactivated successfully. Jan 23 19:18:12.538226 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 23 19:18:13.062845 kernel: clocksource: Long readout interval, skipping watchdog check: cs_nsec: 1001002431 wd_nsec: 1001001855 Jan 23 19:18:13.210537 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 19:18:13.270152 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 23 19:18:13.310537 sshd[1660]: Access denied for user core by PAM account configuration [preauth] Jan 23 19:18:13.334134 systemd[1]: sshd@0-10.0.0.101:22-10.0.0.1:47712.service: Deactivated successfully. Jan 23 19:18:13.870403 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 23 19:18:13.901291 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 23 19:18:13.921083 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 23 19:18:13.933946 systemd[1]: Reached target getty.target - Login Prompts. Jan 23 19:18:18.623078 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 19:18:18.644343 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 23 19:18:18.674195 systemd[1]: Startup finished in 9.704s (kernel) + 38.772s (initrd) + 35.760s (userspace) = 1min 24.236s. Jan 23 19:18:18.680297 (kubelet)[1683]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 19:18:23.437118 systemd[1]: Started sshd@1-10.0.0.101:22-10.0.0.1:42746.service - OpenSSH per-connection server daemon (10.0.0.1:42746). 
Jan 23 19:18:24.317123 sshd[1696]: Accepted publickey for core from 10.0.0.1 port 42746 ssh2: RSA SHA256:/r4j6Suw6o9GS3TlgTXA1bJK9h5rovJ7hazGEcBv9cY Jan 23 19:18:24.334237 sshd-session[1696]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:18:24.399230 kubelet[1683]: E0123 19:18:24.398807 1683 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 19:18:24.418237 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 19:18:24.419980 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 19:18:24.421109 systemd[1]: kubelet.service: Consumed 14.233s CPU time, 265.4M memory peak. Jan 23 19:18:24.455047 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 23 19:18:24.793071 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 23 19:18:24.803827 systemd-logind[1540]: New session 1 of user core. Jan 23 19:18:24.911108 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 23 19:18:24.923495 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 23 19:18:25.115056 (systemd)[1702]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 23 19:18:25.139887 systemd-logind[1540]: New session c1 of user core. Jan 23 19:18:25.840321 systemd[1702]: Queued start job for default target default.target. Jan 23 19:18:25.938877 systemd[1702]: Created slice app.slice - User Application Slice. Jan 23 19:18:25.938968 systemd[1702]: Reached target paths.target - Paths. Jan 23 19:18:25.939097 systemd[1702]: Reached target timers.target - Timers. Jan 23 19:18:25.942971 systemd[1702]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 23 19:18:26.026646 systemd[1702]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 23 19:18:26.026875 systemd[1702]: Reached target sockets.target - Sockets. Jan 23 19:18:26.026932 systemd[1702]: Reached target basic.target - Basic System. Jan 23 19:18:26.026978 systemd[1702]: Reached target default.target - Main User Target. Jan 23 19:18:26.027016 systemd[1702]: Startup finished in 861ms. Jan 23 19:18:26.028316 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 23 19:18:26.053959 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 23 19:18:26.135771 systemd[1]: Started sshd@2-10.0.0.101:22-10.0.0.1:40620.service - OpenSSH per-connection server daemon (10.0.0.1:40620). Jan 23 19:18:26.344715 sshd[1713]: Accepted publickey for core from 10.0.0.1 port 40620 ssh2: RSA SHA256:/r4j6Suw6o9GS3TlgTXA1bJK9h5rovJ7hazGEcBv9cY Jan 23 19:18:26.348299 sshd-session[1713]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:18:26.403628 systemd-logind[1540]: New session 2 of user core. Jan 23 19:18:26.421586 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 23 19:18:26.625387 sshd[1716]: Connection closed by 10.0.0.1 port 40620 Jan 23 19:18:26.627304 sshd-session[1713]: pam_unix(sshd:session): session closed for user core Jan 23 19:18:26.749984 systemd[1]: sshd@2-10.0.0.101:22-10.0.0.1:40620.service: Deactivated successfully. Jan 23 19:18:26.771118 systemd[1]: session-2.scope: Deactivated successfully. 
Jan 23 19:18:26.820282 systemd-logind[1540]: Session 2 logged out. Waiting for processes to exit. Jan 23 19:18:26.843066 systemd[1]: Started sshd@3-10.0.0.101:22-10.0.0.1:40630.service - OpenSSH per-connection server daemon (10.0.0.1:40630). Jan 23 19:18:26.858349 systemd-logind[1540]: Removed session 2. Jan 23 19:18:27.048763 sshd[1722]: Accepted publickey for core from 10.0.0.1 port 40630 ssh2: RSA SHA256:/r4j6Suw6o9GS3TlgTXA1bJK9h5rovJ7hazGEcBv9cY Jan 23 19:18:27.054009 sshd-session[1722]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:18:27.103550 systemd-logind[1540]: New session 3 of user core. Jan 23 19:18:27.121720 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 23 19:18:27.229965 sshd[1725]: Connection closed by 10.0.0.1 port 40630 Jan 23 19:18:27.229113 sshd-session[1722]: pam_unix(sshd:session): session closed for user core Jan 23 19:18:27.253098 systemd[1]: sshd@3-10.0.0.101:22-10.0.0.1:40630.service: Deactivated successfully. Jan 23 19:18:27.259712 systemd[1]: session-3.scope: Deactivated successfully. Jan 23 19:18:27.270238 systemd-logind[1540]: Session 3 logged out. Waiting for processes to exit. Jan 23 19:18:27.280642 systemd[1]: Started sshd@4-10.0.0.101:22-10.0.0.1:40640.service - OpenSSH per-connection server daemon (10.0.0.1:40640). Jan 23 19:18:27.288344 systemd-logind[1540]: Removed session 3. Jan 23 19:18:27.430954 sshd[1731]: Accepted publickey for core from 10.0.0.1 port 40640 ssh2: RSA SHA256:/r4j6Suw6o9GS3TlgTXA1bJK9h5rovJ7hazGEcBv9cY Jan 23 19:18:27.444764 sshd-session[1731]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:18:27.540549 systemd-logind[1540]: New session 4 of user core. Jan 23 19:18:27.549213 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 23 19:18:27.808245 sshd[1734]: Connection closed by 10.0.0.1 port 40640 Jan 23 19:18:27.809266 sshd-session[1731]: pam_unix(sshd:session): session closed for user core Jan 23 19:18:27.873383 systemd[1]: sshd@4-10.0.0.101:22-10.0.0.1:40640.service: Deactivated successfully. Jan 23 19:18:27.940947 systemd[1]: session-4.scope: Deactivated successfully. Jan 23 19:18:27.947851 systemd-logind[1540]: Session 4 logged out. Waiting for processes to exit. Jan 23 19:18:27.964792 systemd[1]: Started sshd@5-10.0.0.101:22-10.0.0.1:40642.service - OpenSSH per-connection server daemon (10.0.0.1:40642). Jan 23 19:18:27.966322 systemd-logind[1540]: Removed session 4. Jan 23 19:18:28.128640 sshd[1740]: Accepted publickey for core from 10.0.0.1 port 40642 ssh2: RSA SHA256:/r4j6Suw6o9GS3TlgTXA1bJK9h5rovJ7hazGEcBv9cY Jan 23 19:18:28.130701 sshd-session[1740]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:18:28.146551 systemd-logind[1540]: New session 5 of user core. Jan 23 19:18:28.172035 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 23 19:18:28.324568 sudo[1744]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 23 19:18:28.325255 sudo[1744]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 19:18:28.367494 sudo[1744]: pam_unix(sudo:session): session closed for user root Jan 23 19:18:28.372176 sshd[1743]: Connection closed by 10.0.0.1 port 40642 Jan 23 19:18:28.373307 sshd-session[1740]: pam_unix(sshd:session): session closed for user core Jan 23 19:18:28.399873 systemd[1]: sshd@5-10.0.0.101:22-10.0.0.1:40642.service: Deactivated successfully. 
Jan 23 19:18:28.406231 systemd[1]: session-5.scope: Deactivated successfully. Jan 23 19:18:28.408236 systemd-logind[1540]: Session 5 logged out. Waiting for processes to exit. Jan 23 19:18:28.413273 systemd[1]: Started sshd@6-10.0.0.101:22-10.0.0.1:40654.service - OpenSSH per-connection server daemon (10.0.0.1:40654). Jan 23 19:18:28.416869 systemd-logind[1540]: Removed session 5. Jan 23 19:18:28.838028 sshd[1750]: Accepted publickey for core from 10.0.0.1 port 40654 ssh2: RSA SHA256:/r4j6Suw6o9GS3TlgTXA1bJK9h5rovJ7hazGEcBv9cY Jan 23 19:18:28.846250 sshd-session[1750]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:18:28.885794 systemd-logind[1540]: New session 6 of user core. Jan 23 19:18:28.911276 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 23 19:18:29.029019 sudo[1755]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 23 19:18:29.031327 sudo[1755]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 19:18:29.128024 sudo[1755]: pam_unix(sudo:session): session closed for user root Jan 23 19:18:29.151370 sudo[1754]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 23 19:18:29.153936 sudo[1754]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 19:18:29.365310 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 23 19:18:29.644542 augenrules[1777]: No rules Jan 23 19:18:29.646673 systemd[1]: audit-rules.service: Deactivated successfully. Jan 23 19:18:29.647321 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 23 19:18:29.653972 sudo[1754]: pam_unix(sudo:session): session closed for user root Jan 23 19:18:29.662198 sshd[1753]: Connection closed by 10.0.0.1 port 40654 Jan 23 19:18:29.662018 sshd-session[1750]: pam_unix(sshd:session): session closed for user core Jan 23 19:18:29.707580 systemd[1]: sshd@6-10.0.0.101:22-10.0.0.1:40654.service: Deactivated successfully. Jan 23 19:18:29.719027 systemd[1]: session-6.scope: Deactivated successfully. Jan 23 19:18:29.721277 systemd-logind[1540]: Session 6 logged out. Waiting for processes to exit. Jan 23 19:18:29.726676 systemd[1]: Started sshd@7-10.0.0.101:22-10.0.0.1:40670.service - OpenSSH per-connection server daemon (10.0.0.1:40670). Jan 23 19:18:29.729612 systemd-logind[1540]: Removed session 6. Jan 23 19:18:29.961555 sshd[1786]: Accepted publickey for core from 10.0.0.1 port 40670 ssh2: RSA SHA256:/r4j6Suw6o9GS3TlgTXA1bJK9h5rovJ7hazGEcBv9cY Jan 23 19:18:29.966380 sshd-session[1786]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:18:30.010650 systemd-logind[1540]: New session 7 of user core. Jan 23 19:18:30.026075 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 23 19:18:30.129106 sudo[1790]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 23 19:18:30.129988 sudo[1790]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 19:18:30.169509 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 23 19:18:30.280621 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 23 19:18:30.281108 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 23 19:18:34.518390 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
Jan 23 19:18:34.586142 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 19:18:36.050281 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 23 19:18:36.050665 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 23 19:18:36.051808 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 19:18:36.053247 systemd[1]: kubelet.service: Consumed 1.177s CPU time, 30M memory peak. Jan 23 19:18:36.059640 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 19:18:36.125857 systemd[1]: Reload requested from client PID 1833 ('systemctl') (unit session-7.scope)... Jan 23 19:18:36.125932 systemd[1]: Reloading... Jan 23 19:18:36.720675 zram_generator::config[1876]: No configuration found. Jan 23 19:18:37.125103 systemd[1]: Reloading finished in 998 ms. Jan 23 19:18:37.223925 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 23 19:18:37.224253 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 23 19:18:37.227662 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 19:18:37.228247 systemd[1]: kubelet.service: Consumed 920ms CPU time, 82.3M memory peak. Jan 23 19:18:37.234032 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 19:18:38.343350 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 19:18:38.370548 (kubelet)[1920]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 23 19:18:38.779581 kubelet[1920]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 19:18:38.779581 kubelet[1920]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 23 19:18:38.779581 kubelet[1920]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 19:18:38.780610 kubelet[1920]: I0123 19:18:38.779727 1920 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 19:18:39.069944 kubelet[1920]: I0123 19:18:39.069709 1920 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 23 19:18:39.069944 kubelet[1920]: I0123 19:18:39.069790 1920 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 19:18:39.071768 kubelet[1920]: I0123 19:18:39.071693 1920 server.go:954] "Client rotation is on, will bootstrap in background" Jan 23 19:18:39.116367 kubelet[1920]: I0123 19:18:39.116204 1920 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 23 19:18:39.137850 kubelet[1920]: I0123 19:18:39.137746 1920 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 23 19:18:39.150931 kubelet[1920]: I0123 19:18:39.150806 1920 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 23 19:18:39.152093 kubelet[1920]: I0123 19:18:39.151903 1920 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 23 19:18:39.152876 kubelet[1920]: I0123 19:18:39.152047 1920 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.0.0.101","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 23 19:18:39.153067 kubelet[1920]: I0123 19:18:39.152942 1920 topology_manager.go:138] "Creating topology manager with none policy" Jan 23 19:18:39.153067 kubelet[1920]: I0123 19:18:39.152959 1920 container_manager_linux.go:304] "Creating device plugin manager" Jan 23 19:18:39.153809 kubelet[1920]: I0123 19:18:39.153546 1920 state_mem.go:36] "Initialized new in-memory state store" Jan 23 19:18:39.236128 kubelet[1920]: I0123 19:18:39.235919 1920 kubelet.go:446] "Attempting to sync node with API server" Jan 23 19:18:39.236310 kubelet[1920]: I0123 19:18:39.236167 1920 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 23 19:18:39.236563 kubelet[1920]: I0123 19:18:39.236316 1920 kubelet.go:352] "Adding apiserver pod source" Jan 23 19:18:39.236802 kubelet[1920]: I0123 19:18:39.236602 1920 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 23 19:18:39.237244 kubelet[1920]: E0123 19:18:39.237097 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:18:39.237302 kubelet[1920]: E0123 19:18:39.237267 1920 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:18:39.246179 kubelet[1920]: I0123 19:18:39.246092 1920 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Jan 23 19:18:39.247818 kubelet[1920]: I0123 19:18:39.247686 1920 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 23 19:18:39.248189 kubelet[1920]: W0123 19:18:39.248051 1920 probe.go:272] 
Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 23 19:18:39.255549 kubelet[1920]: I0123 19:18:39.255320 1920 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 23 19:18:39.255783 kubelet[1920]: I0123 19:18:39.255765 1920 server.go:1287] "Started kubelet" Jan 23 19:18:39.256130 kubelet[1920]: W0123 19:18:39.255907 1920 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "10.0.0.101" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Jan 23 19:18:39.256251 kubelet[1920]: E0123 19:18:39.256181 1920 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"10.0.0.101\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Jan 23 19:18:39.256656 kubelet[1920]: I0123 19:18:39.256310 1920 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 23 19:18:39.261040 kubelet[1920]: I0123 19:18:39.260927 1920 server.go:479] "Adding debug handlers to kubelet server" Jan 23 19:18:39.261536 kubelet[1920]: W0123 19:18:39.261265 1920 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Jan 23 19:18:39.261695 kubelet[1920]: E0123 19:18:39.261532 1920 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Jan 23 19:18:39.261695 kubelet[1920]: I0123 19:18:39.258331 1920 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 23 19:18:39.262541 kubelet[1920]: I0123 19:18:39.262034 1920 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 23 19:18:39.264815 kubelet[1920]: I0123 19:18:39.263134 1920 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 23 19:18:39.265608 kubelet[1920]: I0123 19:18:39.265046 1920 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 23 19:18:39.265608 kubelet[1920]: I0123 19:18:39.265204 1920 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 23 19:18:39.265608 kubelet[1920]: I0123 19:18:39.265572 1920 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 23 19:18:39.265880 kubelet[1920]: I0123 19:18:39.265698 1920 reconciler.go:26] "Reconciler: start to sync state" Jan 23 19:18:39.267739 kubelet[1920]: E0123 19:18:39.267518 1920 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.101\" not found" Jan 23 19:18:39.268794 kubelet[1920]: I0123 19:18:39.268197 1920 factory.go:221] Registration of the systemd container factory successfully Jan 23 19:18:39.268794 kubelet[1920]: I0123 19:18:39.268588 1920 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 23 19:18:39.275261 kubelet[1920]: I0123 19:18:39.275054 1920 
factory.go:221] Registration of the containerd container factory successfully Jan 23 19:18:39.277608 kubelet[1920]: E0123 19:18:39.277343 1920 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 23 19:18:39.295549 kubelet[1920]: I0123 19:18:39.294695 1920 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 23 19:18:39.295549 kubelet[1920]: I0123 19:18:39.294725 1920 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 23 19:18:39.295549 kubelet[1920]: I0123 19:18:39.294854 1920 state_mem.go:36] "Initialized new in-memory state store" Jan 23 19:18:39.295549 kubelet[1920]: E0123 19:18:39.295017 1920 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.101\" not found" node="10.0.0.101" Jan 23 19:18:39.299527 kubelet[1920]: I0123 19:18:39.299281 1920 policy_none.go:49] "None policy: Start" Jan 23 19:18:39.299601 kubelet[1920]: I0123 19:18:39.299579 1920 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 23 19:18:39.301715 kubelet[1920]: I0123 19:18:39.299749 1920 state_mem.go:35] "Initializing new in-memory state store" Jan 23 19:18:39.316042 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 23 19:18:39.336552 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 23 19:18:39.344683 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 23 19:18:39.352751 kubelet[1920]: I0123 19:18:39.352676 1920 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 23 19:18:39.353217 kubelet[1920]: I0123 19:18:39.353155 1920 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 19:18:39.353274 kubelet[1920]: I0123 19:18:39.353214 1920 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 19:18:39.357077 kubelet[1920]: E0123 19:18:39.356977 1920 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 23 19:18:39.357317 kubelet[1920]: E0123 19:18:39.357199 1920 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.101\" not found" Jan 23 19:18:39.358700 kubelet[1920]: I0123 19:18:39.358568 1920 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 19:18:39.408845 kubelet[1920]: I0123 19:18:39.408681 1920 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 23 19:18:39.412691 kubelet[1920]: I0123 19:18:39.412610 1920 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 23 19:18:39.412874 kubelet[1920]: I0123 19:18:39.412794 1920 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 23 19:18:39.413257 kubelet[1920]: I0123 19:18:39.413092 1920 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 23 19:18:39.413257 kubelet[1920]: I0123 19:18:39.413244 1920 kubelet.go:2382] "Starting kubelet main sync loop" Jan 23 19:18:39.413843 kubelet[1920]: E0123 19:18:39.413726 1920 kubelet.go:2406] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Jan 23 19:18:39.455258 kubelet[1920]: I0123 19:18:39.455101 1920 kubelet_node_status.go:75] "Attempting to register node" node="10.0.0.101" Jan 23 19:18:39.471082 kubelet[1920]: I0123 19:18:39.470883 1920 kubelet_node_status.go:78] "Successfully registered node" node="10.0.0.101" Jan 23 19:18:39.471082 kubelet[1920]: E0123 19:18:39.470969 1920 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"10.0.0.101\": node \"10.0.0.101\" not found" Jan 23 19:18:39.519126 kubelet[1920]: E0123 19:18:39.519064 1920 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.101\" not found" Jan 23 19:18:39.619886 kubelet[1920]: E0123 19:18:39.619533 1920 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.101\" not found" Jan 23 19:18:39.721119 kubelet[1920]: E0123 19:18:39.720904 1920 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.101\" not found" Jan 23 19:18:39.821937 kubelet[1920]: E0123 19:18:39.821601 1920 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.101\" not found" Jan 23 19:18:39.910831 sudo[1790]: pam_unix(sudo:session): session closed for user root Jan 23 19:18:39.914344 sshd[1789]: Connection closed by 10.0.0.1 port 40670 Jan 23 19:18:39.915232 sshd-session[1786]: pam_unix(sshd:session): session closed for user core Jan 23 19:18:39.921577 systemd[1]: sshd@7-10.0.0.101:22-10.0.0.1:40670.service: Deactivated successfully. Jan 23 19:18:39.922037 kubelet[1920]: E0123 19:18:39.921959 1920 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.101\" not found" Jan 23 19:18:39.925622 systemd[1]: session-7.scope: Deactivated successfully. Jan 23 19:18:39.926145 systemd[1]: session-7.scope: Consumed 5.828s CPU time, 79.7M memory peak. Jan 23 19:18:39.929601 systemd-logind[1540]: Session 7 logged out. Waiting for processes to exit. Jan 23 19:18:39.932867 systemd-logind[1540]: Removed session 7. 
Jan 23 19:18:40.023331 kubelet[1920]: E0123 19:18:40.023154 1920 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.101\" not found" Jan 23 19:18:40.079021 kubelet[1920]: I0123 19:18:40.078734 1920 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 23 19:18:40.079656 kubelet[1920]: W0123 19:18:40.079574 1920 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 23 19:18:40.079754 kubelet[1920]: W0123 19:18:40.079667 1920 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 23 19:18:40.124100 kubelet[1920]: E0123 19:18:40.123774 1920 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.101\" not found" Jan 23 19:18:40.225245 kubelet[1920]: E0123 19:18:40.225056 1920 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.101\" not found" Jan 23 19:18:40.238578 kubelet[1920]: E0123 19:18:40.238207 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:18:40.328236 kubelet[1920]: I0123 19:18:40.328082 1920 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Jan 23 19:18:40.329235 containerd[1564]: time="2026-01-23T19:18:40.329117499Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 23 19:18:40.329861 kubelet[1920]: I0123 19:18:40.329637 1920 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Jan 23 19:18:41.238935 kubelet[1920]: I0123 19:18:41.238329 1920 apiserver.go:52] "Watching apiserver" Jan 23 19:18:41.238935 kubelet[1920]: E0123 19:18:41.238671 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:18:41.263991 systemd[1]: Created slice kubepods-besteffort-pod48efd2bd_e4d0_41ae_96b3_83143880ca2b.slice - libcontainer container kubepods-besteffort-pod48efd2bd_e4d0_41ae_96b3_83143880ca2b.slice. 
Jan 23 19:18:41.268225 kubelet[1920]: I0123 19:18:41.268146 1920 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 23 19:18:41.283688 kubelet[1920]: I0123 19:18:41.282709 1920 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/10a6e129-a524-4227-9d26-57b0e408224e-cilium-run\") pod \"cilium-ck9tr\" (UID: \"10a6e129-a524-4227-9d26-57b0e408224e\") " pod="kube-system/cilium-ck9tr" Jan 23 19:18:41.283688 kubelet[1920]: I0123 19:18:41.282783 1920 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/10a6e129-a524-4227-9d26-57b0e408224e-lib-modules\") pod \"cilium-ck9tr\" (UID: \"10a6e129-a524-4227-9d26-57b0e408224e\") " pod="kube-system/cilium-ck9tr" Jan 23 19:18:41.283688 kubelet[1920]: I0123 19:18:41.282820 1920 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/48efd2bd-e4d0-41ae-96b3-83143880ca2b-xtables-lock\") pod \"kube-proxy-zz7r4\" (UID: \"48efd2bd-e4d0-41ae-96b3-83143880ca2b\") " pod="kube-system/kube-proxy-zz7r4" Jan 23 19:18:41.283688 kubelet[1920]: I0123 19:18:41.282844 1920 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/48efd2bd-e4d0-41ae-96b3-83143880ca2b-lib-modules\") pod \"kube-proxy-zz7r4\" (UID: \"48efd2bd-e4d0-41ae-96b3-83143880ca2b\") " pod="kube-system/kube-proxy-zz7r4" Jan 23 19:18:41.283688 kubelet[1920]: I0123 19:18:41.282872 1920 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/10a6e129-a524-4227-9d26-57b0e408224e-hostproc\") pod \"cilium-ck9tr\" (UID: \"10a6e129-a524-4227-9d26-57b0e408224e\") " pod="kube-system/cilium-ck9tr" Jan 23 19:18:41.283688 kubelet[1920]: I0123 19:18:41.282897 1920 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/10a6e129-a524-4227-9d26-57b0e408224e-cni-path\") pod \"cilium-ck9tr\" (UID: \"10a6e129-a524-4227-9d26-57b0e408224e\") " pod="kube-system/cilium-ck9tr" Jan 23 19:18:41.284045 kubelet[1920]: I0123 19:18:41.282920 1920 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/10a6e129-a524-4227-9d26-57b0e408224e-hubble-tls\") pod \"cilium-ck9tr\" (UID: \"10a6e129-a524-4227-9d26-57b0e408224e\") " pod="kube-system/cilium-ck9tr" Jan 23 19:18:41.284045 kubelet[1920]: I0123 19:18:41.283026 1920 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cclfb\" (UniqueName: \"kubernetes.io/projected/10a6e129-a524-4227-9d26-57b0e408224e-kube-api-access-cclfb\") pod \"cilium-ck9tr\" (UID: \"10a6e129-a524-4227-9d26-57b0e408224e\") " pod="kube-system/cilium-ck9tr" Jan 23 19:18:41.284045 kubelet[1920]: I0123 19:18:41.283060 1920 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/48efd2bd-e4d0-41ae-96b3-83143880ca2b-kube-proxy\") pod \"kube-proxy-zz7r4\" (UID: \"48efd2bd-e4d0-41ae-96b3-83143880ca2b\") " pod="kube-system/kube-proxy-zz7r4" Jan 23 19:18:41.284045 kubelet[1920]: I0123 
19:18:41.283084 1920 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/10a6e129-a524-4227-9d26-57b0e408224e-etc-cni-netd\") pod \"cilium-ck9tr\" (UID: \"10a6e129-a524-4227-9d26-57b0e408224e\") " pod="kube-system/cilium-ck9tr" Jan 23 19:18:41.284045 kubelet[1920]: I0123 19:18:41.283110 1920 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/10a6e129-a524-4227-9d26-57b0e408224e-clustermesh-secrets\") pod \"cilium-ck9tr\" (UID: \"10a6e129-a524-4227-9d26-57b0e408224e\") " pod="kube-system/cilium-ck9tr" Jan 23 19:18:41.284045 kubelet[1920]: I0123 19:18:41.283133 1920 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/10a6e129-a524-4227-9d26-57b0e408224e-host-proc-sys-net\") pod \"cilium-ck9tr\" (UID: \"10a6e129-a524-4227-9d26-57b0e408224e\") " pod="kube-system/cilium-ck9tr" Jan 23 19:18:41.284345 kubelet[1920]: I0123 19:18:41.283156 1920 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/10a6e129-a524-4227-9d26-57b0e408224e-bpf-maps\") pod \"cilium-ck9tr\" (UID: \"10a6e129-a524-4227-9d26-57b0e408224e\") " pod="kube-system/cilium-ck9tr" Jan 23 19:18:41.284345 kubelet[1920]: I0123 19:18:41.283179 1920 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/10a6e129-a524-4227-9d26-57b0e408224e-cilium-cgroup\") pod \"cilium-ck9tr\" (UID: \"10a6e129-a524-4227-9d26-57b0e408224e\") " pod="kube-system/cilium-ck9tr" Jan 23 19:18:41.284345 kubelet[1920]: I0123 19:18:41.283213 1920 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/10a6e129-a524-4227-9d26-57b0e408224e-xtables-lock\") pod \"cilium-ck9tr\" (UID: \"10a6e129-a524-4227-9d26-57b0e408224e\") " pod="kube-system/cilium-ck9tr" Jan 23 19:18:41.284345 kubelet[1920]: I0123 19:18:41.283235 1920 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/10a6e129-a524-4227-9d26-57b0e408224e-cilium-config-path\") pod \"cilium-ck9tr\" (UID: \"10a6e129-a524-4227-9d26-57b0e408224e\") " pod="kube-system/cilium-ck9tr" Jan 23 19:18:41.284345 kubelet[1920]: I0123 19:18:41.283339 1920 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/10a6e129-a524-4227-9d26-57b0e408224e-host-proc-sys-kernel\") pod \"cilium-ck9tr\" (UID: \"10a6e129-a524-4227-9d26-57b0e408224e\") " pod="kube-system/cilium-ck9tr" Jan 23 19:18:41.284822 kubelet[1920]: I0123 19:18:41.283370 1920 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6b666\" (UniqueName: \"kubernetes.io/projected/48efd2bd-e4d0-41ae-96b3-83143880ca2b-kube-api-access-6b666\") pod \"kube-proxy-zz7r4\" (UID: \"48efd2bd-e4d0-41ae-96b3-83143880ca2b\") " pod="kube-system/kube-proxy-zz7r4" Jan 23 19:18:41.289770 systemd[1]: Created slice kubepods-burstable-pod10a6e129_a524_4227_9d26_57b0e408224e.slice - libcontainer container 
kubepods-burstable-pod10a6e129_a524_4227_9d26_57b0e408224e.slice. Jan 23 19:18:41.589548 containerd[1564]: time="2026-01-23T19:18:41.588857275Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zz7r4,Uid:48efd2bd-e4d0-41ae-96b3-83143880ca2b,Namespace:kube-system,Attempt:0,}" Jan 23 19:18:41.615070 containerd[1564]: time="2026-01-23T19:18:41.614920077Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ck9tr,Uid:10a6e129-a524-4227-9d26-57b0e408224e,Namespace:kube-system,Attempt:0,}" Jan 23 19:18:42.240143 kubelet[1920]: E0123 19:18:42.240025 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:18:42.258547 containerd[1564]: time="2026-01-23T19:18:42.258219832Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 19:18:42.259098 containerd[1564]: time="2026-01-23T19:18:42.258776516Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jan 23 19:18:42.260775 containerd[1564]: time="2026-01-23T19:18:42.260601640Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 19:18:42.262788 containerd[1564]: time="2026-01-23T19:18:42.262679219Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 23 19:18:42.264454 containerd[1564]: time="2026-01-23T19:18:42.264306579Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 19:18:42.268616 containerd[1564]: time="2026-01-23T19:18:42.268378256Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 19:18:42.269747 containerd[1564]: time="2026-01-23T19:18:42.269613927Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 648.411185ms" Jan 23 19:18:42.272713 containerd[1564]: time="2026-01-23T19:18:42.272528448Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 668.551234ms" Jan 23 19:18:42.334714 containerd[1564]: time="2026-01-23T19:18:42.334582032Z" level=info msg="connecting to shim 6c3db959be5a1bdcd7643e32f3abd0917975e12ed443dcda6e7b538525984ce0" address="unix:///run/containerd/s/77d070313bd4716dfab63b03d3b2db6deae38c25e1711e2dc8bbaaceb8d85799" namespace=k8s.io protocol=ttrpc version=3 Jan 23 19:18:42.334904 containerd[1564]: time="2026-01-23T19:18:42.334839285Z" level=info msg="connecting to shim 
e982ffa51f1971bdb15875adf8b37d3727a0c8d05fdd6334ba4e82ac48dfeea8" address="unix:///run/containerd/s/bd20e9b7331bdf04b19d4153619e3f4f4f81af8af93b9614cf72f0021045654e" namespace=k8s.io protocol=ttrpc version=3 Jan 23 19:18:42.400588 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2557673746.mount: Deactivated successfully. Jan 23 19:18:42.422824 systemd[1]: Started cri-containerd-6c3db959be5a1bdcd7643e32f3abd0917975e12ed443dcda6e7b538525984ce0.scope - libcontainer container 6c3db959be5a1bdcd7643e32f3abd0917975e12ed443dcda6e7b538525984ce0. Jan 23 19:18:42.428203 systemd[1]: Started cri-containerd-e982ffa51f1971bdb15875adf8b37d3727a0c8d05fdd6334ba4e82ac48dfeea8.scope - libcontainer container e982ffa51f1971bdb15875adf8b37d3727a0c8d05fdd6334ba4e82ac48dfeea8. Jan 23 19:18:42.504702 containerd[1564]: time="2026-01-23T19:18:42.504552474Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ck9tr,Uid:10a6e129-a524-4227-9d26-57b0e408224e,Namespace:kube-system,Attempt:0,} returns sandbox id \"6c3db959be5a1bdcd7643e32f3abd0917975e12ed443dcda6e7b538525984ce0\"" Jan 23 19:18:42.515199 containerd[1564]: time="2026-01-23T19:18:42.513757990Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 23 19:18:42.523577 containerd[1564]: time="2026-01-23T19:18:42.523343669Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zz7r4,Uid:48efd2bd-e4d0-41ae-96b3-83143880ca2b,Namespace:kube-system,Attempt:0,} returns sandbox id \"e982ffa51f1971bdb15875adf8b37d3727a0c8d05fdd6334ba4e82ac48dfeea8\"" Jan 23 19:18:43.244202 kubelet[1920]: E0123 19:18:43.242917 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:18:44.695830 kubelet[1920]: E0123 19:18:44.683917 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:18:45.856983 kubelet[1920]: E0123 19:18:45.837375 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:18:46.782383 update_engine[1544]: I20260123 19:18:46.491139 1544 update_attempter.cc:509] Updating boot flags... 
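The entries above show the CRI plugin pulling the sandbox ("pause") image registry.k8s.io/pause:3.10 and creating the two pod sandboxes, with the shims addressed under /run/containerd and namespace=k8s.io. A minimal sketch of reproducing that pull with the containerd Go client, assuming the containerd 1.x module path and the default socket implied by the shim addresses above:

```go
package main

import (
	"context"
	"fmt"
	"log"

	containerd "github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Same containerd instance the kubelet's CRI runtime is using.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI-managed images and containers live in the "k8s.io" namespace,
	// matching namespace=k8s.io in the "connecting to shim" entries.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Pull and unpack the sandbox image the log shows being fetched.
	img, err := client.Pull(ctx, "registry.k8s.io/pause:3.10", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	size, _ := img.Size(ctx)
	fmt.Printf("pulled %s, %d bytes of content\n", img.Name(), size)
}
```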
Jan 23 19:18:50.233762 kernel: clocksource: Long readout interval, skipping watchdog check: cs_nsec: 2879213848 wd_nsec: 2879212537 Jan 23 19:18:50.236760 kubelet[1920]: E0123 19:18:47.757572 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:18:50.271913 kubelet[1920]: E0123 19:18:50.271728 1920 kubelet.go:2573] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.403s" Jan 23 19:18:51.954671 kubelet[1920]: E0123 19:18:51.938222 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:18:52.956806 kubelet[1920]: E0123 19:18:52.953396 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:18:53.394491 kubelet[1920]: E0123 19:18:53.393025 1920 kubelet.go:2573] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.195s" Jan 23 19:18:53.995375 kubelet[1920]: E0123 19:18:53.979728 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:18:55.017394 kubelet[1920]: E0123 19:18:55.016328 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:18:56.018643 kubelet[1920]: E0123 19:18:56.017939 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:18:57.024253 kubelet[1920]: E0123 19:18:57.023059 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:18:58.029349 kubelet[1920]: E0123 19:18:58.028694 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:18:59.030745 kubelet[1920]: E0123 19:18:59.029952 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:18:59.257289 kubelet[1920]: E0123 19:18:59.255190 1920 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:19:00.042266 kubelet[1920]: E0123 19:19:00.041216 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:19:01.074002 kubelet[1920]: E0123 19:19:01.045901 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:19:02.050070 kubelet[1920]: E0123 19:19:02.048314 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:19:03.054045 kubelet[1920]: E0123 19:19:03.052260 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:19:04.061668 kubelet[1920]: E0123 19:19:04.060317 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:19:05.063985 kubelet[1920]: E0123 19:19:05.063250 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:19:06.065914 kubelet[1920]: E0123 19:19:06.065670 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Jan 23 19:19:06.374397 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2675493249.mount: Deactivated successfully. Jan 23 19:19:07.076930 kubelet[1920]: E0123 19:19:07.075202 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:19:08.078200 kubelet[1920]: E0123 19:19:08.077672 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:19:09.079606 kubelet[1920]: E0123 19:19:09.079057 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:19:10.085387 kubelet[1920]: E0123 19:19:10.083735 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:19:11.101251 kubelet[1920]: E0123 19:19:11.094964 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:19:12.121013 kubelet[1920]: E0123 19:19:12.120606 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:19:13.144245 kubelet[1920]: E0123 19:19:13.134355 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:19:14.158926 kubelet[1920]: E0123 19:19:14.158171 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:19:15.159042 kubelet[1920]: E0123 19:19:15.158594 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:19:16.349539 kubelet[1920]: E0123 19:19:16.348699 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:19:17.350668 kubelet[1920]: E0123 19:19:17.350173 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:19:18.354734 kubelet[1920]: E0123 19:19:18.354050 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:19:19.244262 kubelet[1920]: E0123 19:19:19.240337 1920 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:19:19.354726 kubelet[1920]: E0123 19:19:19.354364 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:19:20.358963 kubelet[1920]: E0123 19:19:20.357730 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:19:20.872244 containerd[1564]: time="2026-01-23T19:19:20.870971991Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:19:20.874128 containerd[1564]: time="2026-01-23T19:19:20.870842947Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jan 23 19:19:20.876838 containerd[1564]: time="2026-01-23T19:19:20.876342285Z" level=info msg="ImageCreate event 
name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:19:20.880862 containerd[1564]: time="2026-01-23T19:19:20.880542603Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 38.366570202s" Jan 23 19:19:20.880862 containerd[1564]: time="2026-01-23T19:19:20.880782944Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jan 23 19:19:20.889667 containerd[1564]: time="2026-01-23T19:19:20.888583379Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\"" Jan 23 19:19:20.899272 containerd[1564]: time="2026-01-23T19:19:20.899097217Z" level=info msg="CreateContainer within sandbox \"6c3db959be5a1bdcd7643e32f3abd0917975e12ed443dcda6e7b538525984ce0\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 23 19:19:20.950587 containerd[1564]: time="2026-01-23T19:19:20.950198567Z" level=info msg="Container e8e0e99d387e62fdabc5b0682c3d8bd3a63d51d4407e18800019aa0f1b298faf: CDI devices from CRI Config.CDIDevices: []" Jan 23 19:19:21.170074 containerd[1564]: time="2026-01-23T19:19:21.164596130Z" level=info msg="CreateContainer within sandbox \"6c3db959be5a1bdcd7643e32f3abd0917975e12ed443dcda6e7b538525984ce0\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e8e0e99d387e62fdabc5b0682c3d8bd3a63d51d4407e18800019aa0f1b298faf\"" Jan 23 19:19:21.185257 containerd[1564]: time="2026-01-23T19:19:21.182917071Z" level=info msg="StartContainer for \"e8e0e99d387e62fdabc5b0682c3d8bd3a63d51d4407e18800019aa0f1b298faf\"" Jan 23 19:19:21.196294 containerd[1564]: time="2026-01-23T19:19:21.195035676Z" level=info msg="connecting to shim e8e0e99d387e62fdabc5b0682c3d8bd3a63d51d4407e18800019aa0f1b298faf" address="unix:///run/containerd/s/77d070313bd4716dfab63b03d3b2db6deae38c25e1711e2dc8bbaaceb8d85799" protocol=ttrpc version=3 Jan 23 19:19:21.365126 kubelet[1920]: E0123 19:19:21.359045 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:19:21.770258 systemd[1]: Started cri-containerd-e8e0e99d387e62fdabc5b0682c3d8bd3a63d51d4407e18800019aa0f1b298faf.scope - libcontainer container e8e0e99d387e62fdabc5b0682c3d8bd3a63d51d4407e18800019aa0f1b298faf. Jan 23 19:19:22.196655 containerd[1564]: time="2026-01-23T19:19:22.194774950Z" level=info msg="StartContainer for \"e8e0e99d387e62fdabc5b0682c3d8bd3a63d51d4407e18800019aa0f1b298faf\" returns successfully" Jan 23 19:19:22.249989 systemd[1]: cri-containerd-e8e0e99d387e62fdabc5b0682c3d8bd3a63d51d4407e18800019aa0f1b298faf.scope: Deactivated successfully. 
Jan 23 19:19:22.271631 containerd[1564]: time="2026-01-23T19:19:22.271105815Z" level=info msg="received container exit event container_id:\"e8e0e99d387e62fdabc5b0682c3d8bd3a63d51d4407e18800019aa0f1b298faf\" id:\"e8e0e99d387e62fdabc5b0682c3d8bd3a63d51d4407e18800019aa0f1b298faf\" pid:2123 exited_at:{seconds:1769195962 nanos:265196949}" Jan 23 19:19:22.369535 kubelet[1920]: E0123 19:19:22.365228 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:19:23.367272 kubelet[1920]: E0123 19:19:23.367042 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:19:23.386680 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e8e0e99d387e62fdabc5b0682c3d8bd3a63d51d4407e18800019aa0f1b298faf-rootfs.mount: Deactivated successfully. Jan 23 19:19:24.344666 containerd[1564]: time="2026-01-23T19:19:24.343621649Z" level=info msg="CreateContainer within sandbox \"6c3db959be5a1bdcd7643e32f3abd0917975e12ed443dcda6e7b538525984ce0\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 23 19:19:24.367561 kubelet[1920]: E0123 19:19:24.367263 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:19:24.383695 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount47965940.mount: Deactivated successfully. Jan 23 19:19:24.395359 containerd[1564]: time="2026-01-23T19:19:24.391860150Z" level=info msg="Container d74d067b6ee4e845aee2a5be22d617d2fccbf9c73618ba16e5dcb89214365cbc: CDI devices from CRI Config.CDIDevices: []" Jan 23 19:19:24.394332 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2284860463.mount: Deactivated successfully. Jan 23 19:19:24.442549 containerd[1564]: time="2026-01-23T19:19:24.441562116Z" level=info msg="CreateContainer within sandbox \"6c3db959be5a1bdcd7643e32f3abd0917975e12ed443dcda6e7b538525984ce0\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d74d067b6ee4e845aee2a5be22d617d2fccbf9c73618ba16e5dcb89214365cbc\"" Jan 23 19:19:24.443527 containerd[1564]: time="2026-01-23T19:19:24.443364275Z" level=info msg="StartContainer for \"d74d067b6ee4e845aee2a5be22d617d2fccbf9c73618ba16e5dcb89214365cbc\"" Jan 23 19:19:24.447189 containerd[1564]: time="2026-01-23T19:19:24.446878754Z" level=info msg="connecting to shim d74d067b6ee4e845aee2a5be22d617d2fccbf9c73618ba16e5dcb89214365cbc" address="unix:///run/containerd/s/77d070313bd4716dfab63b03d3b2db6deae38c25e1711e2dc8bbaaceb8d85799" protocol=ttrpc version=3 Jan 23 19:19:24.676028 systemd[1]: Started cri-containerd-d74d067b6ee4e845aee2a5be22d617d2fccbf9c73618ba16e5dcb89214365cbc.scope - libcontainer container d74d067b6ee4e845aee2a5be22d617d2fccbf9c73618ba16e5dcb89214365cbc. Jan 23 19:19:25.173974 containerd[1564]: time="2026-01-23T19:19:25.173920685Z" level=info msg="StartContainer for \"d74d067b6ee4e845aee2a5be22d617d2fccbf9c73618ba16e5dcb89214365cbc\" returns successfully" Jan 23 19:19:25.238860 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 23 19:19:25.239154 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 23 19:19:25.240083 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 23 19:19:25.245961 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 23 19:19:25.249800 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. 
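The "received container exit event" entries carry the exit time as a protobuf-style seconds/nanos pair. A small standard-library sketch (numbers copied from the mount-cgroup exit event above) shows it lines up with the surrounding journal timestamps:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// exited_at:{seconds:1769195962 nanos:265196949} from the exit event above.
	exitedAt := time.Unix(1769195962, 265196949).UTC()
	// Prints 2026-01-23T19:19:22.265196949Z, i.e. between the scope
	// deactivation (19:19:22.249989) and the journal entry (19:19:22.271631).
	fmt.Println(exitedAt.Format(time.RFC3339Nano))
}
```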
Jan 23 19:19:25.253887 systemd[1]: cri-containerd-d74d067b6ee4e845aee2a5be22d617d2fccbf9c73618ba16e5dcb89214365cbc.scope: Deactivated successfully. Jan 23 19:19:25.259845 containerd[1564]: time="2026-01-23T19:19:25.259790148Z" level=info msg="received container exit event container_id:\"d74d067b6ee4e845aee2a5be22d617d2fccbf9c73618ba16e5dcb89214365cbc\" id:\"d74d067b6ee4e845aee2a5be22d617d2fccbf9c73618ba16e5dcb89214365cbc\" pid:2172 exited_at:{seconds:1769195965 nanos:256368677}" Jan 23 19:19:25.458988 kubelet[1920]: E0123 19:19:25.458951 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:19:25.579315 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 23 19:19:26.330978 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d74d067b6ee4e845aee2a5be22d617d2fccbf9c73618ba16e5dcb89214365cbc-rootfs.mount: Deactivated successfully. Jan 23 19:19:26.544898 kubelet[1920]: E0123 19:19:26.533235 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:19:27.536130 kubelet[1920]: E0123 19:19:27.533905 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:19:27.582725 containerd[1564]: time="2026-01-23T19:19:27.582670713Z" level=info msg="CreateContainer within sandbox \"6c3db959be5a1bdcd7643e32f3abd0917975e12ed443dcda6e7b538525984ce0\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 23 19:19:27.655164 containerd[1564]: time="2026-01-23T19:19:27.654846179Z" level=info msg="Container 8f98131618db994d997e7cdbb515e0f3e4eb9b2d6ae62f1e78172d171dc48fab: CDI devices from CRI Config.CDIDevices: []" Jan 23 19:19:27.675303 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount157345851.mount: Deactivated successfully. Jan 23 19:19:27.754281 containerd[1564]: time="2026-01-23T19:19:27.754086147Z" level=info msg="CreateContainer within sandbox \"6c3db959be5a1bdcd7643e32f3abd0917975e12ed443dcda6e7b538525984ce0\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"8f98131618db994d997e7cdbb515e0f3e4eb9b2d6ae62f1e78172d171dc48fab\"" Jan 23 19:19:27.756188 containerd[1564]: time="2026-01-23T19:19:27.755901430Z" level=info msg="StartContainer for \"8f98131618db994d997e7cdbb515e0f3e4eb9b2d6ae62f1e78172d171dc48fab\"" Jan 23 19:19:27.760861 containerd[1564]: time="2026-01-23T19:19:27.760814733Z" level=info msg="connecting to shim 8f98131618db994d997e7cdbb515e0f3e4eb9b2d6ae62f1e78172d171dc48fab" address="unix:///run/containerd/s/77d070313bd4716dfab63b03d3b2db6deae38c25e1711e2dc8bbaaceb8d85799" protocol=ttrpc version=3 Jan 23 19:19:27.890143 systemd[1]: Started cri-containerd-8f98131618db994d997e7cdbb515e0f3e4eb9b2d6ae62f1e78172d171dc48fab.scope - libcontainer container 8f98131618db994d997e7cdbb515e0f3e4eb9b2d6ae62f1e78172d171dc48fab. Jan 23 19:19:28.097825 containerd[1564]: time="2026-01-23T19:19:28.097656651Z" level=info msg="StartContainer for \"8f98131618db994d997e7cdbb515e0f3e4eb9b2d6ae62f1e78172d171dc48fab\" returns successfully" Jan 23 19:19:28.102068 systemd[1]: cri-containerd-8f98131618db994d997e7cdbb515e0f3e4eb9b2d6ae62f1e78172d171dc48fab.scope: Deactivated successfully. 
Jan 23 19:19:28.110984 containerd[1564]: time="2026-01-23T19:19:28.110717290Z" level=info msg="received container exit event container_id:\"8f98131618db994d997e7cdbb515e0f3e4eb9b2d6ae62f1e78172d171dc48fab\" id:\"8f98131618db994d997e7cdbb515e0f3e4eb9b2d6ae62f1e78172d171dc48fab\" pid:2223 exited_at:{seconds:1769195968 nanos:107208847}" Jan 23 19:19:28.188277 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8f98131618db994d997e7cdbb515e0f3e4eb9b2d6ae62f1e78172d171dc48fab-rootfs.mount: Deactivated successfully. Jan 23 19:19:28.535298 kubelet[1920]: E0123 19:19:28.535211 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:19:28.600576 containerd[1564]: time="2026-01-23T19:19:28.600218794Z" level=info msg="CreateContainer within sandbox \"6c3db959be5a1bdcd7643e32f3abd0917975e12ed443dcda6e7b538525984ce0\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 23 19:19:28.639277 containerd[1564]: time="2026-01-23T19:19:28.639036088Z" level=info msg="Container 497032e6a25f07baee8b90cd6801b7a0c7235a2ae361412f8e0f292ac79aac57: CDI devices from CRI Config.CDIDevices: []" Jan 23 19:19:28.677330 containerd[1564]: time="2026-01-23T19:19:28.677040610Z" level=info msg="CreateContainer within sandbox \"6c3db959be5a1bdcd7643e32f3abd0917975e12ed443dcda6e7b538525984ce0\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"497032e6a25f07baee8b90cd6801b7a0c7235a2ae361412f8e0f292ac79aac57\"" Jan 23 19:19:28.678699 containerd[1564]: time="2026-01-23T19:19:28.678552870Z" level=info msg="StartContainer for \"497032e6a25f07baee8b90cd6801b7a0c7235a2ae361412f8e0f292ac79aac57\"" Jan 23 19:19:28.683129 containerd[1564]: time="2026-01-23T19:19:28.683067263Z" level=info msg="connecting to shim 497032e6a25f07baee8b90cd6801b7a0c7235a2ae361412f8e0f292ac79aac57" address="unix:///run/containerd/s/77d070313bd4716dfab63b03d3b2db6deae38c25e1711e2dc8bbaaceb8d85799" protocol=ttrpc version=3 Jan 23 19:19:28.800394 systemd[1]: Started cri-containerd-497032e6a25f07baee8b90cd6801b7a0c7235a2ae361412f8e0f292ac79aac57.scope - libcontainer container 497032e6a25f07baee8b90cd6801b7a0c7235a2ae361412f8e0f292ac79aac57. Jan 23 19:19:29.585376 kubelet[1920]: E0123 19:19:29.582235 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:19:30.539214 systemd[1]: cri-containerd-497032e6a25f07baee8b90cd6801b7a0c7235a2ae361412f8e0f292ac79aac57.scope: Deactivated successfully. 
Jan 23 19:19:30.583046 containerd[1564]: time="2026-01-23T19:19:30.580741141Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod10a6e129_a524_4227_9d26_57b0e408224e.slice/cri-containerd-497032e6a25f07baee8b90cd6801b7a0c7235a2ae361412f8e0f292ac79aac57.scope/memory.events\": no such file or directory" Jan 23 19:19:30.583815 kubelet[1920]: E0123 19:19:30.583305 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:19:30.591176 containerd[1564]: time="2026-01-23T19:19:30.591132158Z" level=info msg="received container exit event container_id:\"497032e6a25f07baee8b90cd6801b7a0c7235a2ae361412f8e0f292ac79aac57\" id:\"497032e6a25f07baee8b90cd6801b7a0c7235a2ae361412f8e0f292ac79aac57\" pid:2263 exited_at:{seconds:1769195970 nanos:577973110}" Jan 23 19:19:30.598983 containerd[1564]: time="2026-01-23T19:19:30.598780817Z" level=info msg="StartContainer for \"497032e6a25f07baee8b90cd6801b7a0c7235a2ae361412f8e0f292ac79aac57\" returns successfully" Jan 23 19:19:30.816068 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-497032e6a25f07baee8b90cd6801b7a0c7235a2ae361412f8e0f292ac79aac57-rootfs.mount: Deactivated successfully. Jan 23 19:19:30.955607 containerd[1564]: time="2026-01-23T19:19:30.953623877Z" level=error msg="collecting metrics for 497032e6a25f07baee8b90cd6801b7a0c7235a2ae361412f8e0f292ac79aac57" error="ttrpc: closed" Jan 23 19:19:30.986063 containerd[1564]: time="2026-01-23T19:19:30.956396100Z" level=error msg="failed sending message on channel" error="write unix /run/containerd/s/77d070313bd4716dfab63b03d3b2db6deae38c25e1711e2dc8bbaaceb8d85799->@: write: broken pipe" runtime=io.containerd.runc.v2 Jan 23 19:19:31.627957 kubelet[1920]: E0123 19:19:31.627202 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:19:31.662726 containerd[1564]: time="2026-01-23T19:19:31.662672673Z" level=info msg="CreateContainer within sandbox \"6c3db959be5a1bdcd7643e32f3abd0917975e12ed443dcda6e7b538525984ce0\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 23 19:19:31.748686 containerd[1564]: time="2026-01-23T19:19:31.748642250Z" level=info msg="Container fd043be33e75f91293e4dd7df52bd0c4b7c3bc2865625c1cd777842dc44ca5c3: CDI devices from CRI Config.CDIDevices: []" Jan 23 19:19:31.822979 containerd[1564]: time="2026-01-23T19:19:31.822297005Z" level=info msg="CreateContainer within sandbox \"6c3db959be5a1bdcd7643e32f3abd0917975e12ed443dcda6e7b538525984ce0\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"fd043be33e75f91293e4dd7df52bd0c4b7c3bc2865625c1cd777842dc44ca5c3\"" Jan 23 19:19:31.830891 containerd[1564]: time="2026-01-23T19:19:31.830783641Z" level=info msg="StartContainer for \"fd043be33e75f91293e4dd7df52bd0c4b7c3bc2865625c1cd777842dc44ca5c3\"" Jan 23 19:19:31.846280 containerd[1564]: time="2026-01-23T19:19:31.840357089Z" level=info msg="connecting to shim fd043be33e75f91293e4dd7df52bd0c4b7c3bc2865625c1cd777842dc44ca5c3" address="unix:///run/containerd/s/77d070313bd4716dfab63b03d3b2db6deae38c25e1711e2dc8bbaaceb8d85799" protocol=ttrpc version=3 Jan 23 19:19:32.192918 systemd[1]: Started cri-containerd-fd043be33e75f91293e4dd7df52bd0c4b7c3bc2865625c1cd777842dc44ca5c3.scope - libcontainer container fd043be33e75f91293e4dd7df52bd0c4b7c3bc2865625c1cd777842dc44ca5c3. 
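By this point every Cilium init container seen above (mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state) has run to completion inside the same sandbox, and the main cilium-agent container is being started. A hedged client-go sketch for inspecting that init-container sequence from the API server; the kubeconfig path is an assumption, the pod and namespace names come from the log, and a recent client-go (context-taking Get) is assumed:

```go
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig location; adjust for your node.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(), "cilium-ck9tr", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, st := range pod.Status.InitContainerStatuses {
		fmt.Printf("init %-25s ready=%v restarts=%d\n", st.Name, st.Ready, st.RestartCount)
	}
	for _, st := range pod.Status.ContainerStatuses {
		fmt.Printf("main %-25s ready=%v\n", st.Name, st.Ready)
	}
}
```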
Jan 23 19:19:32.802569 kubelet[1920]: E0123 19:19:32.792798 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:19:32.957265 containerd[1564]: time="2026-01-23T19:19:32.956784040Z" level=info msg="StartContainer for \"fd043be33e75f91293e4dd7df52bd0c4b7c3bc2865625c1cd777842dc44ca5c3\" returns successfully" Jan 23 19:19:33.080695 containerd[1564]: time="2026-01-23T19:19:33.079160435Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:19:33.086354 containerd[1564]: time="2026-01-23T19:19:33.086066247Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=31161899" Jan 23 19:19:33.092569 containerd[1564]: time="2026-01-23T19:19:33.092055465Z" level=info msg="ImageCreate event name:\"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:19:33.127551 containerd[1564]: time="2026-01-23T19:19:33.125136119Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:19:33.133850 containerd[1564]: time="2026-01-23T19:19:33.133717102Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"31160918\" in 12.244877533s" Jan 23 19:19:33.134786 containerd[1564]: time="2026-01-23T19:19:33.133853182Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\"" Jan 23 19:19:33.195981 containerd[1564]: time="2026-01-23T19:19:33.194988735Z" level=info msg="CreateContainer within sandbox \"e982ffa51f1971bdb15875adf8b37d3727a0c8d05fdd6334ba4e82ac48dfeea8\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 23 19:19:33.254811 containerd[1564]: time="2026-01-23T19:19:33.254636699Z" level=info msg="Container e90b2924ed090999d47ad79f30ce5c7c2ed741e4f5c95ffd646fb2ed4470b55c: CDI devices from CRI Config.CDIDevices: []" Jan 23 19:19:33.277965 containerd[1564]: time="2026-01-23T19:19:33.277856310Z" level=info msg="CreateContainer within sandbox \"e982ffa51f1971bdb15875adf8b37d3727a0c8d05fdd6334ba4e82ac48dfeea8\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e90b2924ed090999d47ad79f30ce5c7c2ed741e4f5c95ffd646fb2ed4470b55c\"" Jan 23 19:19:33.279707 containerd[1564]: time="2026-01-23T19:19:33.279624939Z" level=info msg="StartContainer for \"e90b2924ed090999d47ad79f30ce5c7c2ed741e4f5c95ffd646fb2ed4470b55c\"" Jan 23 19:19:33.283890 containerd[1564]: time="2026-01-23T19:19:33.283083631Z" level=info msg="connecting to shim e90b2924ed090999d47ad79f30ce5c7c2ed741e4f5c95ffd646fb2ed4470b55c" address="unix:///run/containerd/s/bd20e9b7331bdf04b19d4153619e3f4f4f81af8af93b9614cf72f0021045654e" protocol=ttrpc version=3 Jan 23 19:19:33.899723 kubelet[1920]: E0123 19:19:33.797948 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:19:34.312290 systemd[1]: Started 
cri-containerd-e90b2924ed090999d47ad79f30ce5c7c2ed741e4f5c95ffd646fb2ed4470b55c.scope - libcontainer container e90b2924ed090999d47ad79f30ce5c7c2ed741e4f5c95ffd646fb2ed4470b55c. Jan 23 19:19:34.753996 kubelet[1920]: I0123 19:19:34.753333 1920 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 23 19:19:34.876086 kubelet[1920]: E0123 19:19:34.846818 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:19:35.553246 containerd[1564]: time="2026-01-23T19:19:35.551764571Z" level=info msg="StartContainer for \"e90b2924ed090999d47ad79f30ce5c7c2ed741e4f5c95ffd646fb2ed4470b55c\" returns successfully" Jan 23 19:19:35.872391 kubelet[1920]: E0123 19:19:35.869088 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:19:35.933285 kubelet[1920]: I0123 19:19:35.932934 1920 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-zz7r4" podStartSLOduration=6.320403271 podStartE2EDuration="56.932627497s" podCreationTimestamp="2026-01-23 19:18:39 +0000 UTC" firstStartedPulling="2026-01-23 19:18:42.525807564 +0000 UTC m=+3.856436100" lastFinishedPulling="2026-01-23 19:19:33.138031788 +0000 UTC m=+54.468660326" observedRunningTime="2026-01-23 19:19:35.907006869 +0000 UTC m=+57.237635406" watchObservedRunningTime="2026-01-23 19:19:35.932627497 +0000 UTC m=+57.263256054" Jan 23 19:19:36.007739 kubelet[1920]: I0123 19:19:36.007314 1920 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-ck9tr" podStartSLOduration=18.633929029 podStartE2EDuration="57.007287795s" podCreationTimestamp="2026-01-23 19:18:39 +0000 UTC" firstStartedPulling="2026-01-23 19:18:42.512812351 +0000 UTC m=+3.843440888" lastFinishedPulling="2026-01-23 19:19:20.886171096 +0000 UTC m=+42.216799654" observedRunningTime="2026-01-23 19:19:36.006923745 +0000 UTC m=+57.337552312" watchObservedRunningTime="2026-01-23 19:19:36.007287795 +0000 UTC m=+57.337916332" Jan 23 19:19:36.871950 kubelet[1920]: E0123 19:19:36.871293 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:19:37.876746 kubelet[1920]: E0123 19:19:37.875650 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:19:38.880279 kubelet[1920]: E0123 19:19:38.879347 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:19:39.238755 kubelet[1920]: E0123 19:19:39.238051 1920 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:19:39.767921 systemd[1]: Created slice kubepods-besteffort-pod93f6a3b2_9a16_4e7c_aa71_9b4e50f93794.slice - libcontainer container kubepods-besteffort-pod93f6a3b2_9a16_4e7c_aa71_9b4e50f93794.slice. 
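The pod_startup_latency_tracker entries above report two figures per pod: podStartE2EDuration (watch-observed running time minus the pod creation timestamp) and podStartSLOduration (the same span with the image-pull window removed). The relationship can be checked directly from the kube-proxy-zz7r4 values; a small arithmetic sketch with Go's time package (the kubelet's own numbers use monotonic offsets, so the last digits differ by a couple of nanoseconds):

```go
package main

import (
	"fmt"
	"log"
	"time"
)

func mustParse(v string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", v)
	if err != nil {
		log.Fatal(err)
	}
	return t
}

func main() {
	// Timestamps copied from the kube-proxy-zz7r4 latency entry above.
	created := mustParse("2026-01-23 19:18:39 +0000 UTC")
	firstPull := mustParse("2026-01-23 19:18:42.525807564 +0000 UTC")
	lastPull := mustParse("2026-01-23 19:19:33.138031788 +0000 UTC")
	observed := mustParse("2026-01-23 19:19:35.932627497 +0000 UTC")

	e2e := observed.Sub(created)         // ~56.93s, the reported podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull) // ~6.32s, the reported podStartSLOduration
	fmt.Println(e2e, slo)
}
```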
Jan 23 19:19:39.795994 kubelet[1920]: I0123 19:19:39.795633 1920 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4b549\" (UniqueName: \"kubernetes.io/projected/93f6a3b2-9a16-4e7c-aa71-9b4e50f93794-kube-api-access-4b549\") pod \"nginx-deployment-7fcdb87857-j8h2c\" (UID: \"93f6a3b2-9a16-4e7c-aa71-9b4e50f93794\") " pod="default/nginx-deployment-7fcdb87857-j8h2c" Jan 23 19:19:39.885855 kubelet[1920]: E0123 19:19:39.884273 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:19:40.113198 containerd[1564]: time="2026-01-23T19:19:40.103603745Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-j8h2c,Uid:93f6a3b2-9a16-4e7c-aa71-9b4e50f93794,Namespace:default,Attempt:0,}" Jan 23 19:19:40.890825 kubelet[1920]: E0123 19:19:40.889510 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:19:41.894323 kubelet[1920]: E0123 19:19:41.892656 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:19:42.912085 kubelet[1920]: E0123 19:19:42.904121 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:19:43.908844 kubelet[1920]: E0123 19:19:43.906094 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:19:44.914288 kubelet[1920]: E0123 19:19:44.913818 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:19:45.915945 kubelet[1920]: E0123 19:19:45.915171 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:19:46.931991 kubelet[1920]: E0123 19:19:46.916361 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:19:47.932561 kubelet[1920]: E0123 19:19:47.931943 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:19:48.937220 kubelet[1920]: E0123 19:19:48.935913 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:19:49.942070 kubelet[1920]: E0123 19:19:49.938642 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:19:50.942579 kubelet[1920]: E0123 19:19:50.940698 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:19:51.941111 kubelet[1920]: E0123 19:19:51.941014 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:19:52.943698 kubelet[1920]: E0123 19:19:52.943249 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:19:53.950025 kubelet[1920]: E0123 19:19:53.948548 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:19:54.950141 kubelet[1920]: E0123 19:19:54.949709 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Jan 23 19:19:55.952315 kubelet[1920]: E0123 19:19:55.951210 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:19:56.955726 kubelet[1920]: E0123 19:19:56.955175 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:19:57.957161 kubelet[1920]: E0123 19:19:57.955697 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:19:58.957704 kubelet[1920]: E0123 19:19:58.956794 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:19:59.240189 kubelet[1920]: E0123 19:19:59.239708 1920 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:19:59.961078 kubelet[1920]: E0123 19:19:59.958253 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:20:00.959761 kubelet[1920]: E0123 19:20:00.959061 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:20:01.961835 kubelet[1920]: E0123 19:20:01.961131 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:20:02.962285 kubelet[1920]: E0123 19:20:02.962206 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:20:03.964994 kubelet[1920]: E0123 19:20:03.963787 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:20:04.964625 kubelet[1920]: E0123 19:20:04.964279 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:20:05.967252 kubelet[1920]: E0123 19:20:05.967007 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:20:06.969523 kubelet[1920]: E0123 19:20:06.969194 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:20:07.971672 kubelet[1920]: E0123 19:20:07.970758 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:20:08.976178 kubelet[1920]: E0123 19:20:08.974199 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:20:09.976815 kubelet[1920]: E0123 19:20:09.976287 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:20:10.978071 kubelet[1920]: E0123 19:20:10.977002 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:20:11.775688 kernel: Initializing XFRM netlink socket Jan 23 19:20:11.982111 kubelet[1920]: E0123 19:20:11.982039 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:20:12.982657 kubelet[1920]: E0123 19:20:12.982575 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:20:13.828236 
systemd-networkd[1500]: cilium_host: Link UP Jan 23 19:20:13.828613 systemd-networkd[1500]: cilium_net: Link UP Jan 23 19:20:13.829192 systemd-networkd[1500]: cilium_host: Gained carrier Jan 23 19:20:13.829593 systemd-networkd[1500]: cilium_net: Gained carrier Jan 23 19:20:13.984311 kubelet[1920]: E0123 19:20:13.984066 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:20:14.056743 systemd-networkd[1500]: cilium_host: Gained IPv6LL Jan 23 19:20:14.079396 systemd-networkd[1500]: cilium_vxlan: Link UP Jan 23 19:20:14.079735 systemd-networkd[1500]: cilium_vxlan: Gained carrier Jan 23 19:20:14.570232 systemd-networkd[1500]: cilium_net: Gained IPv6LL Jan 23 19:20:14.890594 kernel: NET: Registered PF_ALG protocol family Jan 23 19:20:14.986164 kubelet[1920]: E0123 19:20:14.985337 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:20:15.785389 systemd-networkd[1500]: cilium_vxlan: Gained IPv6LL Jan 23 19:20:15.991227 kubelet[1920]: E0123 19:20:15.990727 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:20:16.571761 systemd-networkd[1500]: lxc_health: Link UP Jan 23 19:20:16.593691 systemd-networkd[1500]: lxc_health: Gained carrier Jan 23 19:20:16.935388 containerd[1564]: level=warning msg="Errors encountered while deleting endpoint" error="[DELETE /endpoint/{id}][404] deleteEndpointIdNotFound " subsys=cilium-cni Jan 23 19:20:16.940913 systemd[1]: run-netns-cni\x2d2b58246f\x2db956\x2d234d\x2d1757\x2d0140643a9053.mount: Deactivated successfully. Jan 23 19:20:16.978622 containerd[1564]: time="2026-01-23T19:20:16.977996184Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-j8h2c,Uid:93f6a3b2-9a16-4e7c-aa71-9b4e50f93794,Namespace:default,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a67a3ef5c0332a731d780728869c50f3c9ec3bf15dfbc9340bd63f081926023\": plugin type=\"cilium-cni\" name=\"cilium\" failed (add): unable to connect to Cilium daemon: failed to create cilium agent client after 30.000000 seconds timeout: Get \"http:///var/run/cilium/cilium.sock/v1/config\": dial unix /var/run/cilium/cilium.sock: connect: no such file or directory\nIs the agent running?" Jan 23 19:20:16.979229 kubelet[1920]: E0123 19:20:16.979061 1920 log.go:32] "RunPodSandbox from runtime service failed" err=< Jan 23 19:20:16.979229 kubelet[1920]: rpc error: code = Unknown desc = failed to setup network for sandbox "3a67a3ef5c0332a731d780728869c50f3c9ec3bf15dfbc9340bd63f081926023": plugin type="cilium-cni" name="cilium" failed (add): unable to connect to Cilium daemon: failed to create cilium agent client after 30.000000 seconds timeout: Get "http:///var/run/cilium/cilium.sock/v1/config": dial unix /var/run/cilium/cilium.sock: connect: no such file or directory Jan 23 19:20:16.979229 kubelet[1920]: Is the agent running? 
Jan 23 19:20:16.979229 kubelet[1920]: > Jan 23 19:20:16.979616 kubelet[1920]: E0123 19:20:16.979273 1920 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Jan 23 19:20:16.979616 kubelet[1920]: rpc error: code = Unknown desc = failed to setup network for sandbox "3a67a3ef5c0332a731d780728869c50f3c9ec3bf15dfbc9340bd63f081926023": plugin type="cilium-cni" name="cilium" failed (add): unable to connect to Cilium daemon: failed to create cilium agent client after 30.000000 seconds timeout: Get "http:///var/run/cilium/cilium.sock/v1/config": dial unix /var/run/cilium/cilium.sock: connect: no such file or directory Jan 23 19:20:16.979616 kubelet[1920]: Is the agent running? Jan 23 19:20:16.979616 kubelet[1920]: > pod="default/nginx-deployment-7fcdb87857-j8h2c" Jan 23 19:20:16.979616 kubelet[1920]: E0123 19:20:16.979516 1920 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err=< Jan 23 19:20:16.979616 kubelet[1920]: rpc error: code = Unknown desc = failed to setup network for sandbox "3a67a3ef5c0332a731d780728869c50f3c9ec3bf15dfbc9340bd63f081926023": plugin type="cilium-cni" name="cilium" failed (add): unable to connect to Cilium daemon: failed to create cilium agent client after 30.000000 seconds timeout: Get "http:///var/run/cilium/cilium.sock/v1/config": dial unix /var/run/cilium/cilium.sock: connect: no such file or directory Jan 23 19:20:16.979616 kubelet[1920]: Is the agent running? Jan 23 19:20:16.979616 kubelet[1920]: > pod="default/nginx-deployment-7fcdb87857-j8h2c" Jan 23 19:20:16.979931 kubelet[1920]: E0123 19:20:16.979691 1920 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-7fcdb87857-j8h2c_default(93f6a3b2-9a16-4e7c-aa71-9b4e50f93794)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-7fcdb87857-j8h2c_default(93f6a3b2-9a16-4e7c-aa71-9b4e50f93794)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3a67a3ef5c0332a731d780728869c50f3c9ec3bf15dfbc9340bd63f081926023\\\": plugin type=\\\"cilium-cni\\\" name=\\\"cilium\\\" failed (add): unable to connect to Cilium daemon: failed to create cilium agent client after 30.000000 seconds timeout: Get \\\"http:///var/run/cilium/cilium.sock/v1/config\\\": dial unix /var/run/cilium/cilium.sock: connect: no such file or directory\\nIs the agent running?\"" pod="default/nginx-deployment-7fcdb87857-j8h2c" podUID="93f6a3b2-9a16-4e7c-aa71-9b4e50f93794" Jan 23 19:20:16.991578 kubelet[1920]: E0123 19:20:16.991219 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:20:17.991982 kubelet[1920]: E0123 19:20:17.991900 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:20:18.089854 systemd-networkd[1500]: lxc_health: Gained IPv6LL Jan 23 19:20:18.993310 kubelet[1920]: E0123 19:20:18.992322 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:20:19.237062 kubelet[1920]: E0123 19:20:19.236836 1920 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:20:19.993980 kubelet[1920]: E0123 19:20:19.993641 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:20:20.995299 kubelet[1920]: E0123 19:20:20.994912 1920 file_linux.go:61] "Unable to read 
config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:20:21.997033 kubelet[1920]: E0123 19:20:21.996126 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:20:22.999092 kubelet[1920]: E0123 19:20:22.998141 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:20:24.003859 kubelet[1920]: E0123 19:20:24.003310 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:20:25.038729 kubelet[1920]: E0123 19:20:25.035784 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:20:26.038931 kubelet[1920]: E0123 19:20:26.038081 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:20:27.045127 kubelet[1920]: E0123 19:20:27.041173 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:20:28.058364 kubelet[1920]: E0123 19:20:28.056929 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:20:29.071106 kubelet[1920]: E0123 19:20:29.063815 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:20:29.584218 containerd[1564]: time="2026-01-23T19:20:29.572066623Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-j8h2c,Uid:93f6a3b2-9a16-4e7c-aa71-9b4e50f93794,Namespace:default,Attempt:0,}" Jan 23 19:20:29.984710 systemd-networkd[1500]: lxcd57287fad8a0: Link UP Jan 23 19:20:30.044872 kernel: eth0: renamed from tmpdf55b Jan 23 19:20:30.062085 systemd-networkd[1500]: lxcd57287fad8a0: Gained carrier Jan 23 19:20:30.106745 kubelet[1920]: E0123 19:20:30.078890 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:20:31.096683 kubelet[1920]: E0123 19:20:31.095808 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:20:31.923397 systemd-networkd[1500]: lxcd57287fad8a0: Gained IPv6LL Jan 23 19:20:32.102879 kubelet[1920]: E0123 19:20:32.102735 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:20:32.159056 containerd[1564]: time="2026-01-23T19:20:32.158762906Z" level=info msg="connecting to shim df55b831ab7938a59987f14484cea4b2a2b82c3e786d59a9a66c97579dbcfa3d" address="unix:///run/containerd/s/a2cad62dd48faed4a0ccf6a697d0ab056eb11bd0246eda94c0d7a18fa6ee63c1" namespace=k8s.io protocol=ttrpc version=3 Jan 23 19:20:32.290235 systemd[1]: Started cri-containerd-df55b831ab7938a59987f14484cea4b2a2b82c3e786d59a9a66c97579dbcfa3d.scope - libcontainer container df55b831ab7938a59987f14484cea4b2a2b82c3e786d59a9a66c97579dbcfa3d. 
Jan 23 19:20:32.360198 systemd-resolved[1402]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 23 19:20:32.492766 containerd[1564]: time="2026-01-23T19:20:32.488387309Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-j8h2c,Uid:93f6a3b2-9a16-4e7c-aa71-9b4e50f93794,Namespace:default,Attempt:0,} returns sandbox id \"df55b831ab7938a59987f14484cea4b2a2b82c3e786d59a9a66c97579dbcfa3d\"" Jan 23 19:20:32.494125 containerd[1564]: time="2026-01-23T19:20:32.493899604Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 23 19:20:33.109582 kubelet[1920]: E0123 19:20:33.109073 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:20:34.111019 kubelet[1920]: E0123 19:20:34.110316 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:20:35.111871 kubelet[1920]: E0123 19:20:35.111820 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:20:35.721853 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1548472250.mount: Deactivated successfully. Jan 23 19:20:36.113150 kubelet[1920]: E0123 19:20:36.112917 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:20:37.113927 kubelet[1920]: E0123 19:20:37.113717 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:20:38.115675 kubelet[1920]: E0123 19:20:38.115617 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:20:38.844377 containerd[1564]: time="2026-01-23T19:20:38.844208579Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:20:38.847683 containerd[1564]: time="2026-01-23T19:20:38.847259224Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=63836480" Jan 23 19:20:38.850051 containerd[1564]: time="2026-01-23T19:20:38.849798675Z" level=info msg="ImageCreate event name:\"sha256:94a30ed18e45363486a55379bd1fbb8c479f36524ae816779f3553afe6b787ed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:20:38.854820 containerd[1564]: time="2026-01-23T19:20:38.854642689Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:32c5137cb8c7cf61e75836f150e983b9be21fecc642ada89fd936c8cd6c0faa0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:20:38.856189 containerd[1564]: time="2026-01-23T19:20:38.856091714Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:94a30ed18e45363486a55379bd1fbb8c479f36524ae816779f3553afe6b787ed\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:32c5137cb8c7cf61e75836f150e983b9be21fecc642ada89fd936c8cd6c0faa0\", size \"63836358\" in 6.36209216s" Jan 23 19:20:38.856189 containerd[1564]: time="2026-01-23T19:20:38.856140989Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:94a30ed18e45363486a55379bd1fbb8c479f36524ae816779f3553afe6b787ed\"" Jan 23 19:20:38.861590 containerd[1564]: time="2026-01-23T19:20:38.861096585Z" level=info msg="CreateContainer within sandbox 
\"df55b831ab7938a59987f14484cea4b2a2b82c3e786d59a9a66c97579dbcfa3d\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Jan 23 19:20:38.892552 containerd[1564]: time="2026-01-23T19:20:38.891694295Z" level=info msg="Container c38a61b5618b3d6c85031b8ba48e0e18472e846b7cec9e792f8bfb4f728b78ee: CDI devices from CRI Config.CDIDevices: []" Jan 23 19:20:38.911541 containerd[1564]: time="2026-01-23T19:20:38.910313561Z" level=info msg="CreateContainer within sandbox \"df55b831ab7938a59987f14484cea4b2a2b82c3e786d59a9a66c97579dbcfa3d\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"c38a61b5618b3d6c85031b8ba48e0e18472e846b7cec9e792f8bfb4f728b78ee\"" Jan 23 19:20:38.913560 containerd[1564]: time="2026-01-23T19:20:38.913312632Z" level=info msg="StartContainer for \"c38a61b5618b3d6c85031b8ba48e0e18472e846b7cec9e792f8bfb4f728b78ee\"" Jan 23 19:20:38.915747 containerd[1564]: time="2026-01-23T19:20:38.915709750Z" level=info msg="connecting to shim c38a61b5618b3d6c85031b8ba48e0e18472e846b7cec9e792f8bfb4f728b78ee" address="unix:///run/containerd/s/a2cad62dd48faed4a0ccf6a697d0ab056eb11bd0246eda94c0d7a18fa6ee63c1" protocol=ttrpc version=3 Jan 23 19:20:38.980805 systemd[1]: Started cri-containerd-c38a61b5618b3d6c85031b8ba48e0e18472e846b7cec9e792f8bfb4f728b78ee.scope - libcontainer container c38a61b5618b3d6c85031b8ba48e0e18472e846b7cec9e792f8bfb4f728b78ee. Jan 23 19:20:39.068268 containerd[1564]: time="2026-01-23T19:20:39.068166056Z" level=info msg="StartContainer for \"c38a61b5618b3d6c85031b8ba48e0e18472e846b7cec9e792f8bfb4f728b78ee\" returns successfully" Jan 23 19:20:39.117609 kubelet[1920]: E0123 19:20:39.117330 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:20:39.238114 kubelet[1920]: E0123 19:20:39.237857 1920 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:20:39.494665 kubelet[1920]: I0123 19:20:39.493777 1920 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-7fcdb87857-j8h2c" podStartSLOduration=54.127283975 podStartE2EDuration="1m0.493755455s" podCreationTimestamp="2026-01-23 19:19:39 +0000 UTC" firstStartedPulling="2026-01-23 19:20:32.491379314 +0000 UTC m=+113.822007851" lastFinishedPulling="2026-01-23 19:20:38.857850795 +0000 UTC m=+120.188479331" observedRunningTime="2026-01-23 19:20:39.493710384 +0000 UTC m=+120.824338921" watchObservedRunningTime="2026-01-23 19:20:39.493755455 +0000 UTC m=+120.824383992" Jan 23 19:20:40.121033 kubelet[1920]: E0123 19:20:40.119730 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:20:41.122818 kubelet[1920]: E0123 19:20:41.122229 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:20:42.123030 kubelet[1920]: E0123 19:20:42.122796 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:20:43.124027 kubelet[1920]: E0123 19:20:43.123837 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:20:44.124517 kubelet[1920]: E0123 19:20:44.124289 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:20:45.125930 kubelet[1920]: E0123 19:20:45.125743 1920 
file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:20:46.127214 kubelet[1920]: E0123 19:20:46.126972 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:20:47.150918 kubelet[1920]: E0123 19:20:47.142719 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:20:47.645108 systemd[1]: Created slice kubepods-besteffort-poda5946326_63c3_4eca_ab80_cffc9271f0b0.slice - libcontainer container kubepods-besteffort-poda5946326_63c3_4eca_ab80_cffc9271f0b0.slice. Jan 23 19:20:47.676740 kubelet[1920]: I0123 19:20:47.674311 1920 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/a5946326-63c3-4eca-ab80-cffc9271f0b0-data\") pod \"nfs-server-provisioner-0\" (UID: \"a5946326-63c3-4eca-ab80-cffc9271f0b0\") " pod="default/nfs-server-provisioner-0" Jan 23 19:20:47.676740 kubelet[1920]: I0123 19:20:47.675933 1920 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zh4vj\" (UniqueName: \"kubernetes.io/projected/a5946326-63c3-4eca-ab80-cffc9271f0b0-kube-api-access-zh4vj\") pod \"nfs-server-provisioner-0\" (UID: \"a5946326-63c3-4eca-ab80-cffc9271f0b0\") " pod="default/nfs-server-provisioner-0" Jan 23 19:20:47.991869 containerd[1564]: time="2026-01-23T19:20:47.990221840Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:a5946326-63c3-4eca-ab80-cffc9271f0b0,Namespace:default,Attempt:0,}" Jan 23 19:20:48.170944 kubelet[1920]: E0123 19:20:48.165838 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:20:48.889160 systemd-networkd[1500]: lxcc3e5e2693d28: Link UP Jan 23 19:20:48.898068 kernel: eth0: renamed from tmp5e5b8 Jan 23 19:20:48.904729 systemd-networkd[1500]: lxcc3e5e2693d28: Gained carrier Jan 23 19:20:49.231313 kubelet[1920]: E0123 19:20:49.230631 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:20:50.352025 kubelet[1920]: E0123 19:20:50.350343 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:20:50.875025 containerd[1564]: time="2026-01-23T19:20:50.872086208Z" level=info msg="connecting to shim 5e5b814f7969250d1ed5d1f3644df9ee590b87c37b011ab3159bc3b45abc65fd" address="unix:///run/containerd/s/0a6584593963ca1d907f2f210a3224a3bb00c0ef98bc631854fa464d7a02fede" namespace=k8s.io protocol=ttrpc version=3 Jan 23 19:20:51.132951 systemd[1]: Started cri-containerd-5e5b814f7969250d1ed5d1f3644df9ee590b87c37b011ab3159bc3b45abc65fd.scope - libcontainer container 5e5b814f7969250d1ed5d1f3644df9ee590b87c37b011ab3159bc3b45abc65fd. 
Jan 23 19:20:51.363750 kubelet[1920]: E0123 19:20:51.360635 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:20:51.365970 systemd-resolved[1402]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 23 19:20:51.377896 systemd-networkd[1500]: lxcc3e5e2693d28: Gained IPv6LL Jan 23 19:20:51.924072 containerd[1564]: time="2026-01-23T19:20:51.919857882Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:a5946326-63c3-4eca-ab80-cffc9271f0b0,Namespace:default,Attempt:0,} returns sandbox id \"5e5b814f7969250d1ed5d1f3644df9ee590b87c37b011ab3159bc3b45abc65fd\"" Jan 23 19:20:52.020093 containerd[1564]: time="2026-01-23T19:20:52.019218087Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Jan 23 19:20:52.370906 kubelet[1920]: E0123 19:20:52.368949 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:20:53.372055 kubelet[1920]: E0123 19:20:53.371302 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:20:54.374678 kubelet[1920]: E0123 19:20:54.373923 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:20:55.375394 kubelet[1920]: E0123 19:20:55.374564 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:20:56.379742 kubelet[1920]: E0123 19:20:56.377344 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:20:57.448755 kubelet[1920]: E0123 19:20:57.384818 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:20:58.390531 kubelet[1920]: E0123 19:20:58.388638 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:20:59.238804 kubelet[1920]: E0123 19:20:59.237300 1920 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:20:59.390526 kubelet[1920]: E0123 19:20:59.390188 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:20:59.537657 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2730559829.mount: Deactivated successfully. 
Jan 23 19:21:00.395128 kubelet[1920]: E0123 19:21:00.394677 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:21:01.399584 kubelet[1920]: E0123 19:21:01.398907 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:21:02.400266 kubelet[1920]: E0123 19:21:02.399908 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:21:03.402020 kubelet[1920]: E0123 19:21:03.401252 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:21:04.404562 kubelet[1920]: E0123 19:21:04.403851 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:21:05.411061 kubelet[1920]: E0123 19:21:05.405387 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:21:06.413168 kubelet[1920]: E0123 19:21:06.413078 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:21:07.416290 kubelet[1920]: E0123 19:21:07.416088 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:21:08.417269 kubelet[1920]: E0123 19:21:08.416591 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:21:09.418507 kubelet[1920]: E0123 19:21:09.417694 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:21:09.447555 containerd[1564]: time="2026-01-23T19:21:09.445545718Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:21:09.448954 containerd[1564]: time="2026-01-23T19:21:09.448877124Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039406" Jan 23 19:21:09.455846 containerd[1564]: time="2026-01-23T19:21:09.455580745Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:21:09.463268 containerd[1564]: time="2026-01-23T19:21:09.463088575Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:21:09.464979 containerd[1564]: time="2026-01-23T19:21:09.464931418Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 17.44520164s" Jan 23 19:21:09.465252 containerd[1564]: time="2026-01-23T19:21:09.465221265Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Jan 
23 19:21:09.480366 containerd[1564]: time="2026-01-23T19:21:09.480017504Z" level=info msg="CreateContainer within sandbox \"5e5b814f7969250d1ed5d1f3644df9ee590b87c37b011ab3159bc3b45abc65fd\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Jan 23 19:21:09.519957 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2404000151.mount: Deactivated successfully. Jan 23 19:21:09.530848 containerd[1564]: time="2026-01-23T19:21:09.530261309Z" level=info msg="Container 41c564b73004384c01a2e650a70b23b61aead96ca5b4cba1be03d47ff55a8114: CDI devices from CRI Config.CDIDevices: []" Jan 23 19:21:09.598130 containerd[1564]: time="2026-01-23T19:21:09.594693606Z" level=info msg="CreateContainer within sandbox \"5e5b814f7969250d1ed5d1f3644df9ee590b87c37b011ab3159bc3b45abc65fd\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"41c564b73004384c01a2e650a70b23b61aead96ca5b4cba1be03d47ff55a8114\"" Jan 23 19:21:09.601332 containerd[1564]: time="2026-01-23T19:21:09.601150238Z" level=info msg="StartContainer for \"41c564b73004384c01a2e650a70b23b61aead96ca5b4cba1be03d47ff55a8114\"" Jan 23 19:21:09.604155 containerd[1564]: time="2026-01-23T19:21:09.603386591Z" level=info msg="connecting to shim 41c564b73004384c01a2e650a70b23b61aead96ca5b4cba1be03d47ff55a8114" address="unix:///run/containerd/s/0a6584593963ca1d907f2f210a3224a3bb00c0ef98bc631854fa464d7a02fede" protocol=ttrpc version=3 Jan 23 19:21:09.788260 systemd[1]: Started cri-containerd-41c564b73004384c01a2e650a70b23b61aead96ca5b4cba1be03d47ff55a8114.scope - libcontainer container 41c564b73004384c01a2e650a70b23b61aead96ca5b4cba1be03d47ff55a8114. Jan 23 19:21:09.961250 containerd[1564]: time="2026-01-23T19:21:09.960870375Z" level=info msg="StartContainer for \"41c564b73004384c01a2e650a70b23b61aead96ca5b4cba1be03d47ff55a8114\" returns successfully" Jan 23 19:21:10.418700 kubelet[1920]: E0123 19:21:10.418640 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:21:11.419107 kubelet[1920]: E0123 19:21:11.419050 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:21:12.420247 kubelet[1920]: E0123 19:21:12.419666 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:21:13.433019 kubelet[1920]: E0123 19:21:13.430808 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:21:14.438859 kubelet[1920]: E0123 19:21:14.438700 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:21:15.441305 kubelet[1920]: E0123 19:21:15.439975 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:21:16.443769 kubelet[1920]: E0123 19:21:16.443687 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:21:17.450112 kubelet[1920]: E0123 19:21:17.449887 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:21:18.450940 kubelet[1920]: E0123 19:21:18.450388 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:21:19.241321 kubelet[1920]: E0123 
19:21:19.240680 1920 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:21:19.453084 kubelet[1920]: E0123 19:21:19.451957 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:21:20.463473 kubelet[1920]: E0123 19:21:20.461801 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:21:20.469161 kubelet[1920]: I0123 19:21:20.466930 1920 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=15.972245445 podStartE2EDuration="33.466907258s" podCreationTimestamp="2026-01-23 19:20:47 +0000 UTC" firstStartedPulling="2026-01-23 19:20:51.977021636 +0000 UTC m=+133.307650172" lastFinishedPulling="2026-01-23 19:21:09.471683448 +0000 UTC m=+150.802311985" observedRunningTime="2026-01-23 19:21:10.158787635 +0000 UTC m=+151.489416193" watchObservedRunningTime="2026-01-23 19:21:20.466907258 +0000 UTC m=+161.797535805" Jan 23 19:21:20.509041 systemd[1]: Created slice kubepods-besteffort-pod5082de4f_0c3f_4498_895b_69bcf0384cc2.slice - libcontainer container kubepods-besteffort-pod5082de4f_0c3f_4498_895b_69bcf0384cc2.slice. Jan 23 19:21:20.595274 kubelet[1920]: I0123 19:21:20.591707 1920 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-94a574bb-c41f-4853-983a-6e75289af11f\" (UniqueName: \"kubernetes.io/nfs/5082de4f-0c3f-4498-895b-69bcf0384cc2-pvc-94a574bb-c41f-4853-983a-6e75289af11f\") pod \"test-pod-1\" (UID: \"5082de4f-0c3f-4498-895b-69bcf0384cc2\") " pod="default/test-pod-1" Jan 23 19:21:20.595274 kubelet[1920]: I0123 19:21:20.591775 1920 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxdsr\" (UniqueName: \"kubernetes.io/projected/5082de4f-0c3f-4498-895b-69bcf0384cc2-kube-api-access-sxdsr\") pod \"test-pod-1\" (UID: \"5082de4f-0c3f-4498-895b-69bcf0384cc2\") " pod="default/test-pod-1" Jan 23 19:21:21.455803 kernel: netfs: FS-Cache loaded Jan 23 19:21:21.467314 kubelet[1920]: E0123 19:21:21.465957 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:21:22.006578 kernel: RPC: Registered named UNIX socket transport module. Jan 23 19:21:22.006833 kernel: RPC: Registered udp transport module. Jan 23 19:21:22.006880 kernel: RPC: Registered tcp transport module. Jan 23 19:21:22.020823 kernel: RPC: Registered tcp-with-tls transport module. Jan 23 19:21:22.020959 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. 
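Note: the pod_startup_latency_tracker entry above for default/nfs-server-provisioner-0 reports both an end-to-end duration and an SLO duration; the SLO figure appears to be the end-to-end time minus the image-pull window. A small sketch, assuming that interpretation, recomputing both numbers from the timestamps printed in the log (the ts helper is ours, and nanoseconds are truncated to datetime's microsecond resolution):

    from datetime import datetime

    def ts(s: str) -> datetime:
        # Parse "2026-01-23 19:21:20.466907258 +0000 UTC", truncating ns -> us.
        date, clock = s.split()[:2]
        if "." in clock:
            hms, frac = clock.split(".")
            return datetime.strptime(f"{date} {hms}.{frac[:6]}", "%Y-%m-%d %H:%M:%S.%f")
        return datetime.strptime(f"{date} {clock}", "%Y-%m-%d %H:%M:%S")

    created  = ts("2026-01-23 19:20:47 +0000 UTC")            # podCreationTimestamp
    pull_beg = ts("2026-01-23 19:20:51.977021636 +0000 UTC")  # firstStartedPulling
    pull_end = ts("2026-01-23 19:21:09.471683448 +0000 UTC")  # lastFinishedPulling
    running  = ts("2026-01-23 19:21:20.466907258 +0000 UTC")  # watchObservedRunningTime

    e2e = (running - created).total_seconds()          # ~33.466907 s ("33.466907258s" logged)
    slo = e2e - (pull_end - pull_beg).total_seconds()  # ~15.972245 s (15.972245445 logged)
    print(f"E2E ~ {e2e:.6f}s, SLO (image pull excluded) ~ {slo:.6f}s")
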
Jan 23 19:21:22.469120 kubelet[1920]: E0123 19:21:22.469055 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:21:23.331050 kernel: NFS: Registering the id_resolver key type Jan 23 19:21:23.331191 kernel: Key type id_resolver registered Jan 23 19:21:23.331222 kernel: Key type id_legacy registered Jan 23 19:21:23.470229 kubelet[1920]: E0123 19:21:23.470143 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:21:23.678661 nfsidmap[3334]: libnfsidmap: Unable to determine the NFSv4 domain; Using 'localdomain' as the NFSv4 domain which means UIDs will be mapped to the 'Nobody-User' user defined in /etc/idmapd.conf Jan 23 19:21:23.688366 nfsidmap[3334]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Jan 23 19:21:23.739042 nfsidmap[3337]: libnfsidmap: Unable to determine the NFSv4 domain; Using 'localdomain' as the NFSv4 domain which means UIDs will be mapped to the 'Nobody-User' user defined in /etc/idmapd.conf Jan 23 19:21:23.739566 nfsidmap[3337]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Jan 23 19:21:23.809674 nfsrahead[3341]: setting /var/lib/kubelet/pods/5082de4f-0c3f-4498-895b-69bcf0384cc2/volumes/kubernetes.io~nfs/pvc-94a574bb-c41f-4853-983a-6e75289af11f readahead to 128 Jan 23 19:21:23.829338 containerd[1564]: time="2026-01-23T19:21:23.828146614Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:5082de4f-0c3f-4498-895b-69bcf0384cc2,Namespace:default,Attempt:0,}" Jan 23 19:21:23.986138 systemd-networkd[1500]: lxcd66a21afb274: Link UP Jan 23 19:21:24.006023 kernel: eth0: renamed from tmpadba5 Jan 23 19:21:24.010742 systemd-networkd[1500]: lxcd66a21afb274: Gained carrier Jan 23 19:21:24.474552 kubelet[1920]: E0123 19:21:24.474322 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:21:25.029302 containerd[1564]: time="2026-01-23T19:21:25.029194118Z" level=info msg="connecting to shim adba566db6c6cf26fb5754e2666ee425406c1274d05b4d48e610de7c4b380b52" address="unix:///run/containerd/s/13b98c52669d2342c3b7f9c958fab47892bf15d40210ad478628916ab1a01b42" namespace=k8s.io protocol=ttrpc version=3 Jan 23 19:21:25.266311 systemd[1]: Started cri-containerd-adba566db6c6cf26fb5754e2666ee425406c1274d05b4d48e610de7c4b380b52.scope - libcontainer container adba566db6c6cf26fb5754e2666ee425406c1274d05b4d48e610de7c4b380b52. 
Jan 23 19:21:25.356391 systemd-resolved[1402]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 23 19:21:25.475397 kubelet[1920]: E0123 19:21:25.475350 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:21:25.478620 containerd[1564]: time="2026-01-23T19:21:25.478104837Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:5082de4f-0c3f-4498-895b-69bcf0384cc2,Namespace:default,Attempt:0,} returns sandbox id \"adba566db6c6cf26fb5754e2666ee425406c1274d05b4d48e610de7c4b380b52\"" Jan 23 19:21:25.481797 containerd[1564]: time="2026-01-23T19:21:25.481350323Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 23 19:21:25.609808 systemd-networkd[1500]: lxcd66a21afb274: Gained IPv6LL Jan 23 19:21:25.707563 containerd[1564]: time="2026-01-23T19:21:25.703193669Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:21:25.707563 containerd[1564]: time="2026-01-23T19:21:25.705091610Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Jan 23 19:21:25.713236 containerd[1564]: time="2026-01-23T19:21:25.712930744Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:94a30ed18e45363486a55379bd1fbb8c479f36524ae816779f3553afe6b787ed\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:32c5137cb8c7cf61e75836f150e983b9be21fecc642ada89fd936c8cd6c0faa0\", size \"63836358\" in 231.187603ms" Jan 23 19:21:25.713236 containerd[1564]: time="2026-01-23T19:21:25.713149937Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:94a30ed18e45363486a55379bd1fbb8c479f36524ae816779f3553afe6b787ed\"" Jan 23 19:21:25.722811 containerd[1564]: time="2026-01-23T19:21:25.720924518Z" level=info msg="CreateContainer within sandbox \"adba566db6c6cf26fb5754e2666ee425406c1274d05b4d48e610de7c4b380b52\" for container &ContainerMetadata{Name:test,Attempt:0,}" Jan 23 19:21:25.753082 containerd[1564]: time="2026-01-23T19:21:25.753025319Z" level=info msg="Container c5b772e2b56a2c18a894d74e1ac8dc79f38bea533549204cf4c0e6516771ec86: CDI devices from CRI Config.CDIDevices: []" Jan 23 19:21:25.780119 containerd[1564]: time="2026-01-23T19:21:25.779821211Z" level=info msg="CreateContainer within sandbox \"adba566db6c6cf26fb5754e2666ee425406c1274d05b4d48e610de7c4b380b52\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"c5b772e2b56a2c18a894d74e1ac8dc79f38bea533549204cf4c0e6516771ec86\"" Jan 23 19:21:25.785293 containerd[1564]: time="2026-01-23T19:21:25.785091601Z" level=info msg="StartContainer for \"c5b772e2b56a2c18a894d74e1ac8dc79f38bea533549204cf4c0e6516771ec86\"" Jan 23 19:21:25.789897 containerd[1564]: time="2026-01-23T19:21:25.789714010Z" level=info msg="connecting to shim c5b772e2b56a2c18a894d74e1ac8dc79f38bea533549204cf4c0e6516771ec86" address="unix:///run/containerd/s/13b98c52669d2342c3b7f9c958fab47892bf15d40210ad478628916ab1a01b42" protocol=ttrpc version=3 Jan 23 19:21:25.856268 systemd[1]: Started cri-containerd-c5b772e2b56a2c18a894d74e1ac8dc79f38bea533549204cf4c0e6516771ec86.scope - libcontainer container c5b772e2b56a2c18a894d74e1ac8dc79f38bea533549204cf4c0e6516771ec86. 
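Note: a back-of-the-envelope comparison of the two pulls logged above, using only the byte counts and durations containerd printed; the nginx numbers are consistent with the image already being present locally, though the log does not say so explicitly.

    # Rough throughput from the containerd pull messages above.
    NFS_IMAGE_BYTES = 91_036_984    # reported size of nfs-provisioner:v4.0.8
    NFS_PULL_SECONDS = 17.44520164  # "... in 17.44520164s"

    print(f"nfs-provisioner pull: ~{NFS_IMAGE_BYTES / NFS_PULL_SECONDS / 2**20:.1f} MiB/s")

    # ghcr.io/flatcar/nginx:latest finished in ~231 ms with only 61 bytes read and
    # an ImageUpdate (not ImageCreate) event -- consistent with the layers already
    # being cached, so only registry metadata was fetched.
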
Jan 23 19:21:25.977875 containerd[1564]: time="2026-01-23T19:21:25.977756578Z" level=info msg="StartContainer for \"c5b772e2b56a2c18a894d74e1ac8dc79f38bea533549204cf4c0e6516771ec86\" returns successfully" Jan 23 19:21:26.479722 kubelet[1920]: E0123 19:21:26.478882 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:21:27.483045 kubelet[1920]: E0123 19:21:27.482300 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:21:28.484336 kubelet[1920]: E0123 19:21:28.483117 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:21:29.486069 kubelet[1920]: E0123 19:21:29.485842 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:21:30.487563 kubelet[1920]: E0123 19:21:30.487246 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:21:31.488040 kubelet[1920]: E0123 19:21:31.487758 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:21:32.488649 kubelet[1920]: E0123 19:21:32.488160 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:21:33.490360 kubelet[1920]: E0123 19:21:33.489693 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:21:34.494234 kubelet[1920]: E0123 19:21:34.492844 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:21:34.498582 kubelet[1920]: I0123 19:21:34.498201 1920 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=46.263201272 podStartE2EDuration="46.498182358s" podCreationTimestamp="2026-01-23 19:20:48 +0000 UTC" firstStartedPulling="2026-01-23 19:21:25.48039414 +0000 UTC m=+166.811022677" lastFinishedPulling="2026-01-23 19:21:25.715375227 +0000 UTC m=+167.046003763" observedRunningTime="2026-01-23 19:21:26.427848144 +0000 UTC m=+167.758476682" watchObservedRunningTime="2026-01-23 19:21:34.498182358 +0000 UTC m=+175.828810896" Jan 23 19:21:34.616517 containerd[1564]: time="2026-01-23T19:21:34.616299559Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 23 19:21:34.644745 containerd[1564]: time="2026-01-23T19:21:34.644276035Z" level=info msg="StopContainer for \"fd043be33e75f91293e4dd7df52bd0c4b7c3bc2865625c1cd777842dc44ca5c3\" with timeout 2 (s)" Jan 23 19:21:34.649292 containerd[1564]: time="2026-01-23T19:21:34.649122806Z" level=info msg="Stop container \"fd043be33e75f91293e4dd7df52bd0c4b7c3bc2865625c1cd777842dc44ca5c3\" with signal terminated" Jan 23 19:21:34.690090 systemd-networkd[1500]: lxc_health: Link DOWN Jan 23 19:21:34.690941 systemd-networkd[1500]: lxc_health: Lost carrier Jan 23 19:21:34.729059 systemd[1]: cri-containerd-fd043be33e75f91293e4dd7df52bd0c4b7c3bc2865625c1cd777842dc44ca5c3.scope: Deactivated successfully. 
Jan 23 19:21:34.729719 systemd[1]: cri-containerd-fd043be33e75f91293e4dd7df52bd0c4b7c3bc2865625c1cd777842dc44ca5c3.scope: Consumed 20.666s CPU time, 131.8M memory peak, 244K read from disk, 13.3M written to disk. Jan 23 19:21:34.736558 containerd[1564]: time="2026-01-23T19:21:34.736064375Z" level=info msg="received container exit event container_id:\"fd043be33e75f91293e4dd7df52bd0c4b7c3bc2865625c1cd777842dc44ca5c3\" id:\"fd043be33e75f91293e4dd7df52bd0c4b7c3bc2865625c1cd777842dc44ca5c3\" pid:2301 exited_at:{seconds:1769196094 nanos:735043594}" Jan 23 19:21:34.833371 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fd043be33e75f91293e4dd7df52bd0c4b7c3bc2865625c1cd777842dc44ca5c3-rootfs.mount: Deactivated successfully. Jan 23 19:21:34.929798 containerd[1564]: time="2026-01-23T19:21:34.929193760Z" level=info msg="StopContainer for \"fd043be33e75f91293e4dd7df52bd0c4b7c3bc2865625c1cd777842dc44ca5c3\" returns successfully" Jan 23 19:21:34.936598 containerd[1564]: time="2026-01-23T19:21:34.935941449Z" level=info msg="StopPodSandbox for \"6c3db959be5a1bdcd7643e32f3abd0917975e12ed443dcda6e7b538525984ce0\"" Jan 23 19:21:34.936598 containerd[1564]: time="2026-01-23T19:21:34.936283360Z" level=info msg="Container to stop \"e8e0e99d387e62fdabc5b0682c3d8bd3a63d51d4407e18800019aa0f1b298faf\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 23 19:21:34.936598 containerd[1564]: time="2026-01-23T19:21:34.936307716Z" level=info msg="Container to stop \"497032e6a25f07baee8b90cd6801b7a0c7235a2ae361412f8e0f292ac79aac57\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 23 19:21:34.936598 containerd[1564]: time="2026-01-23T19:21:34.936330751Z" level=info msg="Container to stop \"d74d067b6ee4e845aee2a5be22d617d2fccbf9c73618ba16e5dcb89214365cbc\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 23 19:21:34.936598 containerd[1564]: time="2026-01-23T19:21:34.936343757Z" level=info msg="Container to stop \"8f98131618db994d997e7cdbb515e0f3e4eb9b2d6ae62f1e78172d171dc48fab\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 23 19:21:34.936598 containerd[1564]: time="2026-01-23T19:21:34.936360007Z" level=info msg="Container to stop \"fd043be33e75f91293e4dd7df52bd0c4b7c3bc2865625c1cd777842dc44ca5c3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 23 19:21:34.990368 systemd[1]: cri-containerd-6c3db959be5a1bdcd7643e32f3abd0917975e12ed443dcda6e7b538525984ce0.scope: Deactivated successfully. Jan 23 19:21:35.007590 containerd[1564]: time="2026-01-23T19:21:35.007313523Z" level=info msg="received sandbox exit event container_id:\"6c3db959be5a1bdcd7643e32f3abd0917975e12ed443dcda6e7b538525984ce0\" id:\"6c3db959be5a1bdcd7643e32f3abd0917975e12ed443dcda6e7b538525984ce0\" exit_status:137 exited_at:{seconds:1769196095 nanos:4834432}" monitor_name=podsandbox Jan 23 19:21:35.158371 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6c3db959be5a1bdcd7643e32f3abd0917975e12ed443dcda6e7b538525984ce0-rootfs.mount: Deactivated successfully. 
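Note: the sandbox exit event above carries exit_status:137, which decodes by the usual 128+signal convention as termination by SIGKILL; that is expected here, where the container was first sent SIGTERM with a 2-second timeout and the sandbox was then torn down. A one-liner sketch of the decoding:

    import signal

    status = 137
    sig = signal.Signals(status - 128) if status > 128 else None
    print(sig.name if sig else f"exited normally with code {status}")   # -> SIGKILL
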
Jan 23 19:21:35.176981 containerd[1564]: time="2026-01-23T19:21:35.176576199Z" level=info msg="shim disconnected" id=6c3db959be5a1bdcd7643e32f3abd0917975e12ed443dcda6e7b538525984ce0 namespace=k8s.io Jan 23 19:21:35.176981 containerd[1564]: time="2026-01-23T19:21:35.176773790Z" level=warning msg="cleaning up after shim disconnected" id=6c3db959be5a1bdcd7643e32f3abd0917975e12ed443dcda6e7b538525984ce0 namespace=k8s.io Jan 23 19:21:35.176981 containerd[1564]: time="2026-01-23T19:21:35.176876900Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 23 19:21:35.222580 containerd[1564]: time="2026-01-23T19:21:35.222322099Z" level=info msg="received sandbox container exit event sandbox_id:\"6c3db959be5a1bdcd7643e32f3abd0917975e12ed443dcda6e7b538525984ce0\" exit_status:137 exited_at:{seconds:1769196095 nanos:4834432}" monitor_name=criService Jan 23 19:21:35.225892 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6c3db959be5a1bdcd7643e32f3abd0917975e12ed443dcda6e7b538525984ce0-shm.mount: Deactivated successfully. Jan 23 19:21:35.227311 containerd[1564]: time="2026-01-23T19:21:35.227199405Z" level=info msg="TearDown network for sandbox \"6c3db959be5a1bdcd7643e32f3abd0917975e12ed443dcda6e7b538525984ce0\" successfully" Jan 23 19:21:35.227311 containerd[1564]: time="2026-01-23T19:21:35.227305380Z" level=info msg="StopPodSandbox for \"6c3db959be5a1bdcd7643e32f3abd0917975e12ed443dcda6e7b538525984ce0\" returns successfully" Jan 23 19:21:35.306541 kubelet[1920]: I0123 19:21:35.306327 1920 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/10a6e129-a524-4227-9d26-57b0e408224e-host-proc-sys-net\") pod \"10a6e129-a524-4227-9d26-57b0e408224e\" (UID: \"10a6e129-a524-4227-9d26-57b0e408224e\") " Jan 23 19:21:35.306945 kubelet[1920]: I0123 19:21:35.306783 1920 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/10a6e129-a524-4227-9d26-57b0e408224e-host-proc-sys-kernel\") pod \"10a6e129-a524-4227-9d26-57b0e408224e\" (UID: \"10a6e129-a524-4227-9d26-57b0e408224e\") " Jan 23 19:21:35.306945 kubelet[1920]: I0123 19:21:35.306825 1920 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/10a6e129-a524-4227-9d26-57b0e408224e-hostproc\") pod \"10a6e129-a524-4227-9d26-57b0e408224e\" (UID: \"10a6e129-a524-4227-9d26-57b0e408224e\") " Jan 23 19:21:35.306945 kubelet[1920]: I0123 19:21:35.306862 1920 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cclfb\" (UniqueName: \"kubernetes.io/projected/10a6e129-a524-4227-9d26-57b0e408224e-kube-api-access-cclfb\") pod \"10a6e129-a524-4227-9d26-57b0e408224e\" (UID: \"10a6e129-a524-4227-9d26-57b0e408224e\") " Jan 23 19:21:35.306945 kubelet[1920]: I0123 19:21:35.306887 1920 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/10a6e129-a524-4227-9d26-57b0e408224e-lib-modules\") pod \"10a6e129-a524-4227-9d26-57b0e408224e\" (UID: \"10a6e129-a524-4227-9d26-57b0e408224e\") " Jan 23 19:21:35.306945 kubelet[1920]: I0123 19:21:35.306914 1920 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/10a6e129-a524-4227-9d26-57b0e408224e-hubble-tls\") pod \"10a6e129-a524-4227-9d26-57b0e408224e\" (UID: 
\"10a6e129-a524-4227-9d26-57b0e408224e\") " Jan 23 19:21:35.306945 kubelet[1920]: I0123 19:21:35.306945 1920 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/10a6e129-a524-4227-9d26-57b0e408224e-clustermesh-secrets\") pod \"10a6e129-a524-4227-9d26-57b0e408224e\" (UID: \"10a6e129-a524-4227-9d26-57b0e408224e\") " Jan 23 19:21:35.307608 kubelet[1920]: I0123 19:21:35.306972 1920 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/10a6e129-a524-4227-9d26-57b0e408224e-cni-path\") pod \"10a6e129-a524-4227-9d26-57b0e408224e\" (UID: \"10a6e129-a524-4227-9d26-57b0e408224e\") " Jan 23 19:21:35.307608 kubelet[1920]: I0123 19:21:35.306887 1920 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/10a6e129-a524-4227-9d26-57b0e408224e-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "10a6e129-a524-4227-9d26-57b0e408224e" (UID: "10a6e129-a524-4227-9d26-57b0e408224e"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 19:21:35.307608 kubelet[1920]: I0123 19:21:35.306994 1920 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/10a6e129-a524-4227-9d26-57b0e408224e-etc-cni-netd\") pod \"10a6e129-a524-4227-9d26-57b0e408224e\" (UID: \"10a6e129-a524-4227-9d26-57b0e408224e\") " Jan 23 19:21:35.307608 kubelet[1920]: I0123 19:21:35.307136 1920 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/10a6e129-a524-4227-9d26-57b0e408224e-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "10a6e129-a524-4227-9d26-57b0e408224e" (UID: "10a6e129-a524-4227-9d26-57b0e408224e"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 19:21:35.307608 kubelet[1920]: I0123 19:21:35.307194 1920 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/10a6e129-a524-4227-9d26-57b0e408224e-cilium-config-path\") pod \"10a6e129-a524-4227-9d26-57b0e408224e\" (UID: \"10a6e129-a524-4227-9d26-57b0e408224e\") " Jan 23 19:21:35.307883 kubelet[1920]: I0123 19:21:35.307226 1920 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/10a6e129-a524-4227-9d26-57b0e408224e-xtables-lock\") pod \"10a6e129-a524-4227-9d26-57b0e408224e\" (UID: \"10a6e129-a524-4227-9d26-57b0e408224e\") " Jan 23 19:21:35.307883 kubelet[1920]: I0123 19:21:35.307380 1920 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/10a6e129-a524-4227-9d26-57b0e408224e-cilium-run\") pod \"10a6e129-a524-4227-9d26-57b0e408224e\" (UID: \"10a6e129-a524-4227-9d26-57b0e408224e\") " Jan 23 19:21:35.307883 kubelet[1920]: I0123 19:21:35.307586 1920 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/10a6e129-a524-4227-9d26-57b0e408224e-bpf-maps\") pod \"10a6e129-a524-4227-9d26-57b0e408224e\" (UID: \"10a6e129-a524-4227-9d26-57b0e408224e\") " Jan 23 19:21:35.307883 kubelet[1920]: I0123 19:21:35.307612 1920 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/10a6e129-a524-4227-9d26-57b0e408224e-cilium-cgroup\") pod \"10a6e129-a524-4227-9d26-57b0e408224e\" (UID: \"10a6e129-a524-4227-9d26-57b0e408224e\") " Jan 23 19:21:35.307883 kubelet[1920]: I0123 19:21:35.307761 1920 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/10a6e129-a524-4227-9d26-57b0e408224e-host-proc-sys-net\") on node \"10.0.0.101\" DevicePath \"\"" Jan 23 19:21:35.307883 kubelet[1920]: I0123 19:21:35.307781 1920 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/10a6e129-a524-4227-9d26-57b0e408224e-etc-cni-netd\") on node \"10.0.0.101\" DevicePath \"\"" Jan 23 19:21:35.309887 kubelet[1920]: I0123 19:21:35.307195 1920 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/10a6e129-a524-4227-9d26-57b0e408224e-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "10a6e129-a524-4227-9d26-57b0e408224e" (UID: "10a6e129-a524-4227-9d26-57b0e408224e"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 19:21:35.309887 kubelet[1920]: I0123 19:21:35.307217 1920 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/10a6e129-a524-4227-9d26-57b0e408224e-hostproc" (OuterVolumeSpecName: "hostproc") pod "10a6e129-a524-4227-9d26-57b0e408224e" (UID: "10a6e129-a524-4227-9d26-57b0e408224e"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 19:21:35.309887 kubelet[1920]: I0123 19:21:35.307820 1920 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/10a6e129-a524-4227-9d26-57b0e408224e-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "10a6e129-a524-4227-9d26-57b0e408224e" (UID: "10a6e129-a524-4227-9d26-57b0e408224e"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 19:21:35.309887 kubelet[1920]: I0123 19:21:35.308764 1920 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/10a6e129-a524-4227-9d26-57b0e408224e-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "10a6e129-a524-4227-9d26-57b0e408224e" (UID: "10a6e129-a524-4227-9d26-57b0e408224e"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 19:21:35.309887 kubelet[1920]: I0123 19:21:35.308982 1920 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/10a6e129-a524-4227-9d26-57b0e408224e-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "10a6e129-a524-4227-9d26-57b0e408224e" (UID: "10a6e129-a524-4227-9d26-57b0e408224e"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 19:21:35.310253 kubelet[1920]: I0123 19:21:35.309015 1920 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/10a6e129-a524-4227-9d26-57b0e408224e-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "10a6e129-a524-4227-9d26-57b0e408224e" (UID: "10a6e129-a524-4227-9d26-57b0e408224e"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 19:21:35.313823 kubelet[1920]: I0123 19:21:35.313759 1920 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/10a6e129-a524-4227-9d26-57b0e408224e-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "10a6e129-a524-4227-9d26-57b0e408224e" (UID: "10a6e129-a524-4227-9d26-57b0e408224e"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 19:21:35.313823 kubelet[1920]: I0123 19:21:35.313773 1920 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/10a6e129-a524-4227-9d26-57b0e408224e-cni-path" (OuterVolumeSpecName: "cni-path") pod "10a6e129-a524-4227-9d26-57b0e408224e" (UID: "10a6e129-a524-4227-9d26-57b0e408224e"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 19:21:35.327019 systemd[1]: var-lib-kubelet-pods-10a6e129\x2da524\x2d4227\x2d9d26\x2d57b0e408224e-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 23 19:21:35.329878 kubelet[1920]: I0123 19:21:35.328996 1920 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10a6e129-a524-4227-9d26-57b0e408224e-kube-api-access-cclfb" (OuterVolumeSpecName: "kube-api-access-cclfb") pod "10a6e129-a524-4227-9d26-57b0e408224e" (UID: "10a6e129-a524-4227-9d26-57b0e408224e"). InnerVolumeSpecName "kube-api-access-cclfb". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 19:21:35.332609 kubelet[1920]: I0123 19:21:35.330289 1920 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10a6e129-a524-4227-9d26-57b0e408224e-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "10a6e129-a524-4227-9d26-57b0e408224e" (UID: "10a6e129-a524-4227-9d26-57b0e408224e"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 19:21:35.332609 kubelet[1920]: I0123 19:21:35.331055 1920 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10a6e129-a524-4227-9d26-57b0e408224e-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "10a6e129-a524-4227-9d26-57b0e408224e" (UID: "10a6e129-a524-4227-9d26-57b0e408224e"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 19:21:35.332297 systemd[1]: var-lib-kubelet-pods-10a6e129\x2da524\x2d4227\x2d9d26\x2d57b0e408224e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dcclfb.mount: Deactivated successfully. Jan 23 19:21:35.333163 kubelet[1920]: I0123 19:21:35.333015 1920 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/10a6e129-a524-4227-9d26-57b0e408224e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "10a6e129-a524-4227-9d26-57b0e408224e" (UID: "10a6e129-a524-4227-9d26-57b0e408224e"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 19:21:35.333219 systemd[1]: var-lib-kubelet-pods-10a6e129\x2da524\x2d4227\x2d9d26\x2d57b0e408224e-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 23 19:21:35.408808 kubelet[1920]: I0123 19:21:35.408339 1920 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/10a6e129-a524-4227-9d26-57b0e408224e-cilium-config-path\") on node \"10.0.0.101\" DevicePath \"\"" Jan 23 19:21:35.408808 kubelet[1920]: I0123 19:21:35.408386 1920 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/10a6e129-a524-4227-9d26-57b0e408224e-cni-path\") on node \"10.0.0.101\" DevicePath \"\"" Jan 23 19:21:35.408808 kubelet[1920]: I0123 19:21:35.408621 1920 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/10a6e129-a524-4227-9d26-57b0e408224e-cilium-cgroup\") on node \"10.0.0.101\" DevicePath \"\"" Jan 23 19:21:35.408808 kubelet[1920]: I0123 19:21:35.408647 1920 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/10a6e129-a524-4227-9d26-57b0e408224e-xtables-lock\") on node \"10.0.0.101\" DevicePath \"\"" Jan 23 19:21:35.408808 kubelet[1920]: I0123 19:21:35.408750 1920 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/10a6e129-a524-4227-9d26-57b0e408224e-cilium-run\") on node \"10.0.0.101\" DevicePath \"\"" Jan 23 19:21:35.408808 kubelet[1920]: I0123 19:21:35.408768 1920 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/10a6e129-a524-4227-9d26-57b0e408224e-bpf-maps\") on node \"10.0.0.101\" DevicePath \"\"" Jan 23 19:21:35.408808 kubelet[1920]: I0123 19:21:35.408780 1920 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-cclfb\" (UniqueName: 
\"kubernetes.io/projected/10a6e129-a524-4227-9d26-57b0e408224e-kube-api-access-cclfb\") on node \"10.0.0.101\" DevicePath \"\"" Jan 23 19:21:35.408808 kubelet[1920]: I0123 19:21:35.408791 1920 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/10a6e129-a524-4227-9d26-57b0e408224e-host-proc-sys-kernel\") on node \"10.0.0.101\" DevicePath \"\"" Jan 23 19:21:35.409774 kubelet[1920]: I0123 19:21:35.408803 1920 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/10a6e129-a524-4227-9d26-57b0e408224e-hostproc\") on node \"10.0.0.101\" DevicePath \"\"" Jan 23 19:21:35.409774 kubelet[1920]: I0123 19:21:35.408815 1920 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/10a6e129-a524-4227-9d26-57b0e408224e-lib-modules\") on node \"10.0.0.101\" DevicePath \"\"" Jan 23 19:21:35.409774 kubelet[1920]: I0123 19:21:35.408827 1920 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/10a6e129-a524-4227-9d26-57b0e408224e-hubble-tls\") on node \"10.0.0.101\" DevicePath \"\"" Jan 23 19:21:35.409774 kubelet[1920]: I0123 19:21:35.408840 1920 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/10a6e129-a524-4227-9d26-57b0e408224e-clustermesh-secrets\") on node \"10.0.0.101\" DevicePath \"\"" Jan 23 19:21:35.440610 systemd[1]: Removed slice kubepods-burstable-pod10a6e129_a524_4227_9d26_57b0e408224e.slice - libcontainer container kubepods-burstable-pod10a6e129_a524_4227_9d26_57b0e408224e.slice. Jan 23 19:21:35.441544 systemd[1]: kubepods-burstable-pod10a6e129_a524_4227_9d26_57b0e408224e.slice: Consumed 21.436s CPU time, 132.1M memory peak, 244K read from disk, 13.3M written to disk. 
Jan 23 19:21:35.477877 kubelet[1920]: I0123 19:21:35.477384 1920 scope.go:117] "RemoveContainer" containerID="fd043be33e75f91293e4dd7df52bd0c4b7c3bc2865625c1cd777842dc44ca5c3" Jan 23 19:21:35.486782 containerd[1564]: time="2026-01-23T19:21:35.485614827Z" level=info msg="RemoveContainer for \"fd043be33e75f91293e4dd7df52bd0c4b7c3bc2865625c1cd777842dc44ca5c3\"" Jan 23 19:21:35.493561 kubelet[1920]: E0123 19:21:35.493216 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:21:35.501297 containerd[1564]: time="2026-01-23T19:21:35.501027023Z" level=info msg="RemoveContainer for \"fd043be33e75f91293e4dd7df52bd0c4b7c3bc2865625c1cd777842dc44ca5c3\" returns successfully" Jan 23 19:21:35.501645 kubelet[1920]: I0123 19:21:35.501576 1920 scope.go:117] "RemoveContainer" containerID="497032e6a25f07baee8b90cd6801b7a0c7235a2ae361412f8e0f292ac79aac57" Jan 23 19:21:35.506660 containerd[1564]: time="2026-01-23T19:21:35.506609781Z" level=info msg="RemoveContainer for \"497032e6a25f07baee8b90cd6801b7a0c7235a2ae361412f8e0f292ac79aac57\"" Jan 23 19:21:35.521659 containerd[1564]: time="2026-01-23T19:21:35.521349366Z" level=info msg="RemoveContainer for \"497032e6a25f07baee8b90cd6801b7a0c7235a2ae361412f8e0f292ac79aac57\" returns successfully" Jan 23 19:21:35.522892 kubelet[1920]: I0123 19:21:35.521939 1920 scope.go:117] "RemoveContainer" containerID="8f98131618db994d997e7cdbb515e0f3e4eb9b2d6ae62f1e78172d171dc48fab" Jan 23 19:21:35.528876 containerd[1564]: time="2026-01-23T19:21:35.527549444Z" level=info msg="RemoveContainer for \"8f98131618db994d997e7cdbb515e0f3e4eb9b2d6ae62f1e78172d171dc48fab\"" Jan 23 19:21:35.537938 containerd[1564]: time="2026-01-23T19:21:35.537904876Z" level=info msg="RemoveContainer for \"8f98131618db994d997e7cdbb515e0f3e4eb9b2d6ae62f1e78172d171dc48fab\" returns successfully" Jan 23 19:21:35.539977 kubelet[1920]: I0123 19:21:35.539771 1920 scope.go:117] "RemoveContainer" containerID="d74d067b6ee4e845aee2a5be22d617d2fccbf9c73618ba16e5dcb89214365cbc" Jan 23 19:21:35.550289 containerd[1564]: time="2026-01-23T19:21:35.549306912Z" level=info msg="RemoveContainer for \"d74d067b6ee4e845aee2a5be22d617d2fccbf9c73618ba16e5dcb89214365cbc\"" Jan 23 19:21:35.562517 containerd[1564]: time="2026-01-23T19:21:35.562320789Z" level=info msg="RemoveContainer for \"d74d067b6ee4e845aee2a5be22d617d2fccbf9c73618ba16e5dcb89214365cbc\" returns successfully" Jan 23 19:21:35.564393 kubelet[1920]: I0123 19:21:35.563575 1920 scope.go:117] "RemoveContainer" containerID="e8e0e99d387e62fdabc5b0682c3d8bd3a63d51d4407e18800019aa0f1b298faf" Jan 23 19:21:35.568138 containerd[1564]: time="2026-01-23T19:21:35.568003321Z" level=info msg="RemoveContainer for \"e8e0e99d387e62fdabc5b0682c3d8bd3a63d51d4407e18800019aa0f1b298faf\"" Jan 23 19:21:35.579525 containerd[1564]: time="2026-01-23T19:21:35.579305865Z" level=info msg="RemoveContainer for \"e8e0e99d387e62fdabc5b0682c3d8bd3a63d51d4407e18800019aa0f1b298faf\" returns successfully" Jan 23 19:21:35.580314 kubelet[1920]: I0123 19:21:35.580288 1920 scope.go:117] "RemoveContainer" containerID="fd043be33e75f91293e4dd7df52bd0c4b7c3bc2865625c1cd777842dc44ca5c3" Jan 23 19:21:35.581237 containerd[1564]: time="2026-01-23T19:21:35.581142566Z" level=error msg="ContainerStatus for \"fd043be33e75f91293e4dd7df52bd0c4b7c3bc2865625c1cd777842dc44ca5c3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fd043be33e75f91293e4dd7df52bd0c4b7c3bc2865625c1cd777842dc44ca5c3\": not found" 
Jan 23 19:21:35.581642 kubelet[1920]: E0123 19:21:35.581614 1920 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fd043be33e75f91293e4dd7df52bd0c4b7c3bc2865625c1cd777842dc44ca5c3\": not found" containerID="fd043be33e75f91293e4dd7df52bd0c4b7c3bc2865625c1cd777842dc44ca5c3" Jan 23 19:21:35.582916 kubelet[1920]: I0123 19:21:35.582159 1920 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fd043be33e75f91293e4dd7df52bd0c4b7c3bc2865625c1cd777842dc44ca5c3"} err="failed to get container status \"fd043be33e75f91293e4dd7df52bd0c4b7c3bc2865625c1cd777842dc44ca5c3\": rpc error: code = NotFound desc = an error occurred when try to find container \"fd043be33e75f91293e4dd7df52bd0c4b7c3bc2865625c1cd777842dc44ca5c3\": not found" Jan 23 19:21:35.582916 kubelet[1920]: I0123 19:21:35.582787 1920 scope.go:117] "RemoveContainer" containerID="497032e6a25f07baee8b90cd6801b7a0c7235a2ae361412f8e0f292ac79aac57" Jan 23 19:21:35.583284 containerd[1564]: time="2026-01-23T19:21:35.583235332Z" level=error msg="ContainerStatus for \"497032e6a25f07baee8b90cd6801b7a0c7235a2ae361412f8e0f292ac79aac57\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"497032e6a25f07baee8b90cd6801b7a0c7235a2ae361412f8e0f292ac79aac57\": not found" Jan 23 19:21:35.584898 kubelet[1920]: E0123 19:21:35.584542 1920 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"497032e6a25f07baee8b90cd6801b7a0c7235a2ae361412f8e0f292ac79aac57\": not found" containerID="497032e6a25f07baee8b90cd6801b7a0c7235a2ae361412f8e0f292ac79aac57" Jan 23 19:21:35.584898 kubelet[1920]: I0123 19:21:35.584645 1920 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"497032e6a25f07baee8b90cd6801b7a0c7235a2ae361412f8e0f292ac79aac57"} err="failed to get container status \"497032e6a25f07baee8b90cd6801b7a0c7235a2ae361412f8e0f292ac79aac57\": rpc error: code = NotFound desc = an error occurred when try to find container \"497032e6a25f07baee8b90cd6801b7a0c7235a2ae361412f8e0f292ac79aac57\": not found" Jan 23 19:21:35.584898 kubelet[1920]: I0123 19:21:35.584667 1920 scope.go:117] "RemoveContainer" containerID="8f98131618db994d997e7cdbb515e0f3e4eb9b2d6ae62f1e78172d171dc48fab" Jan 23 19:21:35.585375 containerd[1564]: time="2026-01-23T19:21:35.585236010Z" level=error msg="ContainerStatus for \"8f98131618db994d997e7cdbb515e0f3e4eb9b2d6ae62f1e78172d171dc48fab\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8f98131618db994d997e7cdbb515e0f3e4eb9b2d6ae62f1e78172d171dc48fab\": not found" Jan 23 19:21:35.587355 kubelet[1920]: E0123 19:21:35.586936 1920 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8f98131618db994d997e7cdbb515e0f3e4eb9b2d6ae62f1e78172d171dc48fab\": not found" containerID="8f98131618db994d997e7cdbb515e0f3e4eb9b2d6ae62f1e78172d171dc48fab" Jan 23 19:21:35.587355 kubelet[1920]: I0123 19:21:35.586995 1920 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8f98131618db994d997e7cdbb515e0f3e4eb9b2d6ae62f1e78172d171dc48fab"} err="failed to get container status \"8f98131618db994d997e7cdbb515e0f3e4eb9b2d6ae62f1e78172d171dc48fab\": rpc error: code = NotFound desc = an error 
occurred when try to find container \"8f98131618db994d997e7cdbb515e0f3e4eb9b2d6ae62f1e78172d171dc48fab\": not found" Jan 23 19:21:35.587355 kubelet[1920]: I0123 19:21:35.587033 1920 scope.go:117] "RemoveContainer" containerID="d74d067b6ee4e845aee2a5be22d617d2fccbf9c73618ba16e5dcb89214365cbc" Jan 23 19:21:35.588979 containerd[1564]: time="2026-01-23T19:21:35.587623827Z" level=error msg="ContainerStatus for \"d74d067b6ee4e845aee2a5be22d617d2fccbf9c73618ba16e5dcb89214365cbc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d74d067b6ee4e845aee2a5be22d617d2fccbf9c73618ba16e5dcb89214365cbc\": not found" Jan 23 19:21:35.588979 containerd[1564]: time="2026-01-23T19:21:35.588622728Z" level=error msg="ContainerStatus for \"e8e0e99d387e62fdabc5b0682c3d8bd3a63d51d4407e18800019aa0f1b298faf\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e8e0e99d387e62fdabc5b0682c3d8bd3a63d51d4407e18800019aa0f1b298faf\": not found" Jan 23 19:21:35.589210 kubelet[1920]: E0123 19:21:35.588125 1920 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d74d067b6ee4e845aee2a5be22d617d2fccbf9c73618ba16e5dcb89214365cbc\": not found" containerID="d74d067b6ee4e845aee2a5be22d617d2fccbf9c73618ba16e5dcb89214365cbc" Jan 23 19:21:35.589210 kubelet[1920]: I0123 19:21:35.588155 1920 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d74d067b6ee4e845aee2a5be22d617d2fccbf9c73618ba16e5dcb89214365cbc"} err="failed to get container status \"d74d067b6ee4e845aee2a5be22d617d2fccbf9c73618ba16e5dcb89214365cbc\": rpc error: code = NotFound desc = an error occurred when try to find container \"d74d067b6ee4e845aee2a5be22d617d2fccbf9c73618ba16e5dcb89214365cbc\": not found" Jan 23 19:21:35.589210 kubelet[1920]: I0123 19:21:35.588175 1920 scope.go:117] "RemoveContainer" containerID="e8e0e99d387e62fdabc5b0682c3d8bd3a63d51d4407e18800019aa0f1b298faf" Jan 23 19:21:35.589210 kubelet[1920]: E0123 19:21:35.589022 1920 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e8e0e99d387e62fdabc5b0682c3d8bd3a63d51d4407e18800019aa0f1b298faf\": not found" containerID="e8e0e99d387e62fdabc5b0682c3d8bd3a63d51d4407e18800019aa0f1b298faf" Jan 23 19:21:35.589210 kubelet[1920]: I0123 19:21:35.589051 1920 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e8e0e99d387e62fdabc5b0682c3d8bd3a63d51d4407e18800019aa0f1b298faf"} err="failed to get container status \"e8e0e99d387e62fdabc5b0682c3d8bd3a63d51d4407e18800019aa0f1b298faf\": rpc error: code = NotFound desc = an error occurred when try to find container \"e8e0e99d387e62fdabc5b0682c3d8bd3a63d51d4407e18800019aa0f1b298faf\": not found" Jan 23 19:21:36.444550 update_engine[1544]: I20260123 19:21:36.443612 1544 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jan 23 19:21:36.446199 update_engine[1544]: I20260123 19:21:36.444674 1544 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jan 23 19:21:36.446713 update_engine[1544]: I20260123 19:21:36.446605 1544 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jan 23 19:21:36.451003 update_engine[1544]: I20260123 19:21:36.449028 1544 omaha_request_params.cc:62] Current group set to stable Jan 23 19:21:36.456065 
update_engine[1544]: I20260123 19:21:36.455112 1544 update_attempter.cc:499] Already updated boot flags. Skipping. Jan 23 19:21:36.456065 update_engine[1544]: I20260123 19:21:36.455210 1544 update_attempter.cc:643] Scheduling an action processor start. Jan 23 19:21:36.456065 update_engine[1544]: I20260123 19:21:36.455240 1544 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 23 19:21:36.460278 update_engine[1544]: I20260123 19:21:36.457353 1544 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jan 23 19:21:36.460278 update_engine[1544]: I20260123 19:21:36.458856 1544 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 23 19:21:36.461134 update_engine[1544]: I20260123 19:21:36.461023 1544 omaha_request_action.cc:272] Request: Jan 23 19:21:36.461134 update_engine[1544]: Jan 23 19:21:36.461134 update_engine[1544]: Jan 23 19:21:36.461134 update_engine[1544]: Jan 23 19:21:36.461134 update_engine[1544]: Jan 23 19:21:36.461134 update_engine[1544]: Jan 23 19:21:36.461134 update_engine[1544]: Jan 23 19:21:36.461134 update_engine[1544]: Jan 23 19:21:36.461134 update_engine[1544]: Jan 23 19:21:36.461654 update_engine[1544]: I20260123 19:21:36.461263 1544 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 23 19:21:36.476515 update_engine[1544]: I20260123 19:21:36.476315 1544 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 23 19:21:36.482886 update_engine[1544]: I20260123 19:21:36.482678 1544 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 23 19:21:36.483905 locksmithd[1594]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jan 23 19:21:36.494045 kubelet[1920]: E0123 19:21:36.493855 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:21:36.501182 update_engine[1544]: E20260123 19:21:36.500627 1544 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 23 19:21:36.501381 update_engine[1544]: I20260123 19:21:36.501281 1544 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jan 23 19:21:37.070734 kubelet[1920]: E0123 19:21:37.070294 1920 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 23 19:21:37.430665 kubelet[1920]: I0123 19:21:37.430350 1920 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10a6e129-a524-4227-9d26-57b0e408224e" path="/var/lib/kubelet/pods/10a6e129-a524-4227-9d26-57b0e408224e/volumes" Jan 23 19:21:37.496766 kubelet[1920]: E0123 19:21:37.496645 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:21:38.498143 kubelet[1920]: E0123 19:21:38.497694 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:21:38.606675 kubelet[1920]: I0123 19:21:38.603030 1920 memory_manager.go:355] "RemoveStaleState removing state" podUID="10a6e129-a524-4227-9d26-57b0e408224e" containerName="cilium-agent" Jan 23 19:21:38.645003 kubelet[1920]: I0123 19:21:38.644169 1920 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0c87d325-d7cb-4d2a-9e04-1b578ebca0a3-host-proc-sys-net\") pod \"cilium-pqb7q\" 
(UID: \"0c87d325-d7cb-4d2a-9e04-1b578ebca0a3\") " pod="kube-system/cilium-pqb7q" Jan 23 19:21:38.645003 kubelet[1920]: I0123 19:21:38.644308 1920 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0c87d325-d7cb-4d2a-9e04-1b578ebca0a3-hubble-tls\") pod \"cilium-pqb7q\" (UID: \"0c87d325-d7cb-4d2a-9e04-1b578ebca0a3\") " pod="kube-system/cilium-pqb7q" Jan 23 19:21:38.645003 kubelet[1920]: I0123 19:21:38.644829 1920 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0c87d325-d7cb-4d2a-9e04-1b578ebca0a3-cilium-run\") pod \"cilium-pqb7q\" (UID: \"0c87d325-d7cb-4d2a-9e04-1b578ebca0a3\") " pod="kube-system/cilium-pqb7q" Jan 23 19:21:38.645003 kubelet[1920]: I0123 19:21:38.644859 1920 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0c87d325-d7cb-4d2a-9e04-1b578ebca0a3-cilium-cgroup\") pod \"cilium-pqb7q\" (UID: \"0c87d325-d7cb-4d2a-9e04-1b578ebca0a3\") " pod="kube-system/cilium-pqb7q" Jan 23 19:21:38.645003 kubelet[1920]: I0123 19:21:38.644890 1920 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0c87d325-d7cb-4d2a-9e04-1b578ebca0a3-lib-modules\") pod \"cilium-pqb7q\" (UID: \"0c87d325-d7cb-4d2a-9e04-1b578ebca0a3\") " pod="kube-system/cilium-pqb7q" Jan 23 19:21:38.645272 kubelet[1920]: I0123 19:21:38.645013 1920 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0c87d325-d7cb-4d2a-9e04-1b578ebca0a3-xtables-lock\") pod \"cilium-pqb7q\" (UID: \"0c87d325-d7cb-4d2a-9e04-1b578ebca0a3\") " pod="kube-system/cilium-pqb7q" Jan 23 19:21:38.646245 kubelet[1920]: I0123 19:21:38.645065 1920 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2s465\" (UniqueName: \"kubernetes.io/projected/0c87d325-d7cb-4d2a-9e04-1b578ebca0a3-kube-api-access-2s465\") pod \"cilium-pqb7q\" (UID: \"0c87d325-d7cb-4d2a-9e04-1b578ebca0a3\") " pod="kube-system/cilium-pqb7q" Jan 23 19:21:38.646245 kubelet[1920]: I0123 19:21:38.645636 1920 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0c87d325-d7cb-4d2a-9e04-1b578ebca0a3-cni-path\") pod \"cilium-pqb7q\" (UID: \"0c87d325-d7cb-4d2a-9e04-1b578ebca0a3\") " pod="kube-system/cilium-pqb7q" Jan 23 19:21:38.646245 kubelet[1920]: I0123 19:21:38.645663 1920 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0c87d325-d7cb-4d2a-9e04-1b578ebca0a3-etc-cni-netd\") pod \"cilium-pqb7q\" (UID: \"0c87d325-d7cb-4d2a-9e04-1b578ebca0a3\") " pod="kube-system/cilium-pqb7q" Jan 23 19:21:38.646245 kubelet[1920]: I0123 19:21:38.645687 1920 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0c87d325-d7cb-4d2a-9e04-1b578ebca0a3-cilium-config-path\") pod \"cilium-pqb7q\" (UID: \"0c87d325-d7cb-4d2a-9e04-1b578ebca0a3\") " pod="kube-system/cilium-pqb7q" Jan 23 19:21:38.646245 kubelet[1920]: I0123 19:21:38.645723 1920 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/0c87d325-d7cb-4d2a-9e04-1b578ebca0a3-cilium-ipsec-secrets\") pod \"cilium-pqb7q\" (UID: \"0c87d325-d7cb-4d2a-9e04-1b578ebca0a3\") " pod="kube-system/cilium-pqb7q" Jan 23 19:21:38.647277 kubelet[1920]: I0123 19:21:38.645748 1920 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5smt\" (UniqueName: \"kubernetes.io/projected/89a12768-dddd-402d-ab9d-ee99885b9adc-kube-api-access-v5smt\") pod \"cilium-operator-6c4d7847fc-lfbwp\" (UID: \"89a12768-dddd-402d-ab9d-ee99885b9adc\") " pod="kube-system/cilium-operator-6c4d7847fc-lfbwp" Jan 23 19:21:38.647277 kubelet[1920]: I0123 19:21:38.645777 1920 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0c87d325-d7cb-4d2a-9e04-1b578ebca0a3-bpf-maps\") pod \"cilium-pqb7q\" (UID: \"0c87d325-d7cb-4d2a-9e04-1b578ebca0a3\") " pod="kube-system/cilium-pqb7q" Jan 23 19:21:38.647277 kubelet[1920]: I0123 19:21:38.645799 1920 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0c87d325-d7cb-4d2a-9e04-1b578ebca0a3-clustermesh-secrets\") pod \"cilium-pqb7q\" (UID: \"0c87d325-d7cb-4d2a-9e04-1b578ebca0a3\") " pod="kube-system/cilium-pqb7q" Jan 23 19:21:38.647277 kubelet[1920]: I0123 19:21:38.645830 1920 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/89a12768-dddd-402d-ab9d-ee99885b9adc-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-lfbwp\" (UID: \"89a12768-dddd-402d-ab9d-ee99885b9adc\") " pod="kube-system/cilium-operator-6c4d7847fc-lfbwp" Jan 23 19:21:38.647277 kubelet[1920]: I0123 19:21:38.645850 1920 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0c87d325-d7cb-4d2a-9e04-1b578ebca0a3-hostproc\") pod \"cilium-pqb7q\" (UID: \"0c87d325-d7cb-4d2a-9e04-1b578ebca0a3\") " pod="kube-system/cilium-pqb7q" Jan 23 19:21:38.646814 systemd[1]: Created slice kubepods-besteffort-pod89a12768_dddd_402d_ab9d_ee99885b9adc.slice - libcontainer container kubepods-besteffort-pod89a12768_dddd_402d_ab9d_ee99885b9adc.slice. Jan 23 19:21:38.649204 kubelet[1920]: I0123 19:21:38.645873 1920 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0c87d325-d7cb-4d2a-9e04-1b578ebca0a3-host-proc-sys-kernel\") pod \"cilium-pqb7q\" (UID: \"0c87d325-d7cb-4d2a-9e04-1b578ebca0a3\") " pod="kube-system/cilium-pqb7q" Jan 23 19:21:38.670907 systemd[1]: Created slice kubepods-burstable-pod0c87d325_d7cb_4d2a_9e04_1b578ebca0a3.slice - libcontainer container kubepods-burstable-pod0c87d325_d7cb_4d2a_9e04_1b578ebca0a3.slice. 
Jan 23 19:21:38.963147 kubelet[1920]: E0123 19:21:38.962305 1920 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:21:38.968026 containerd[1564]: time="2026-01-23T19:21:38.966831603Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-lfbwp,Uid:89a12768-dddd-402d-ab9d-ee99885b9adc,Namespace:kube-system,Attempt:0,}" Jan 23 19:21:39.010677 kubelet[1920]: E0123 19:21:39.005857 1920 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:21:39.011651 containerd[1564]: time="2026-01-23T19:21:39.011606114Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pqb7q,Uid:0c87d325-d7cb-4d2a-9e04-1b578ebca0a3,Namespace:kube-system,Attempt:0,}" Jan 23 19:21:39.079836 containerd[1564]: time="2026-01-23T19:21:39.078816541Z" level=info msg="connecting to shim 16a7f8c7dd8fd9dd114a015ad35a92d620300ea4c70bbb5481f7c0ef8574b43d" address="unix:///run/containerd/s/903db69f9c6b4f20de4743f6e0bacdc239979af36a22110b8a23bac4f26e87b6" namespace=k8s.io protocol=ttrpc version=3 Jan 23 19:21:39.177108 containerd[1564]: time="2026-01-23T19:21:39.170761395Z" level=info msg="connecting to shim 370af6544031ca53792879b135216cac0055814588dc640b4ec0f5a2778cbaea" address="unix:///run/containerd/s/2b9b95330d6938519f474b22be5cdaa1653f5477a85c5a86d2368921f13b190f" namespace=k8s.io protocol=ttrpc version=3 Jan 23 19:21:39.204709 systemd[1]: Started cri-containerd-16a7f8c7dd8fd9dd114a015ad35a92d620300ea4c70bbb5481f7c0ef8574b43d.scope - libcontainer container 16a7f8c7dd8fd9dd114a015ad35a92d620300ea4c70bbb5481f7c0ef8574b43d. Jan 23 19:21:39.236941 kubelet[1920]: E0123 19:21:39.236804 1920 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:21:39.293161 systemd[1]: Started cri-containerd-370af6544031ca53792879b135216cac0055814588dc640b4ec0f5a2778cbaea.scope - libcontainer container 370af6544031ca53792879b135216cac0055814588dc640b4ec0f5a2778cbaea. 
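The dns.go:153 warnings that accompany both sandbox launches mean the node's resolver configuration lists more nameservers than kubelet will hand to pods; the "applied nameserver line" keeps only the first three (1.1.1.1 1.0.0.1 8.8.8.8) and the rest are dropped. A rough sketch of that truncation, assuming a cap of three (as the applied line suggests) and the conventional /etc/resolv.conf path:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

const maxNameservers = 3 // cap suggested by the three-server "applied nameserver line" in the warning

func main() {
	f, err := os.Open("/etc/resolv.conf") // path assumed; the file kubelet derives pod DNS from
	if err != nil {
		panic(err)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		fmt.Printf("omitting %d nameserver(s), applying: %s\n",
			len(servers)-maxNameservers, strings.Join(servers[:maxNameservers], " "))
	} else {
		fmt.Println("applying:", strings.Join(servers, " "))
	}
}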
Jan 23 19:21:39.323091 containerd[1564]: time="2026-01-23T19:21:39.322805985Z" level=info msg="StopPodSandbox for \"6c3db959be5a1bdcd7643e32f3abd0917975e12ed443dcda6e7b538525984ce0\"" Jan 23 19:21:39.331641 containerd[1564]: time="2026-01-23T19:21:39.323391396Z" level=info msg="TearDown network for sandbox \"6c3db959be5a1bdcd7643e32f3abd0917975e12ed443dcda6e7b538525984ce0\" successfully" Jan 23 19:21:39.331641 containerd[1564]: time="2026-01-23T19:21:39.330746241Z" level=info msg="StopPodSandbox for \"6c3db959be5a1bdcd7643e32f3abd0917975e12ed443dcda6e7b538525984ce0\" returns successfully" Jan 23 19:21:39.338730 containerd[1564]: time="2026-01-23T19:21:39.337948330Z" level=info msg="RemovePodSandbox for \"6c3db959be5a1bdcd7643e32f3abd0917975e12ed443dcda6e7b538525984ce0\"" Jan 23 19:21:39.344579 containerd[1564]: time="2026-01-23T19:21:39.342376590Z" level=info msg="Forcibly stopping sandbox \"6c3db959be5a1bdcd7643e32f3abd0917975e12ed443dcda6e7b538525984ce0\"" Jan 23 19:21:39.344579 containerd[1564]: time="2026-01-23T19:21:39.344141192Z" level=info msg="TearDown network for sandbox \"6c3db959be5a1bdcd7643e32f3abd0917975e12ed443dcda6e7b538525984ce0\" successfully" Jan 23 19:21:39.349346 containerd[1564]: time="2026-01-23T19:21:39.349184657Z" level=info msg="Ensure that sandbox 6c3db959be5a1bdcd7643e32f3abd0917975e12ed443dcda6e7b538525984ce0 in task-service has been cleanup successfully" Jan 23 19:21:39.362243 containerd[1564]: time="2026-01-23T19:21:39.362088824Z" level=info msg="RemovePodSandbox \"6c3db959be5a1bdcd7643e32f3abd0917975e12ed443dcda6e7b538525984ce0\" returns successfully" Jan 23 19:21:39.449808 containerd[1564]: time="2026-01-23T19:21:39.449743178Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-lfbwp,Uid:89a12768-dddd-402d-ab9d-ee99885b9adc,Namespace:kube-system,Attempt:0,} returns sandbox id \"16a7f8c7dd8fd9dd114a015ad35a92d620300ea4c70bbb5481f7c0ef8574b43d\"" Jan 23 19:21:39.454552 containerd[1564]: time="2026-01-23T19:21:39.452910614Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pqb7q,Uid:0c87d325-d7cb-4d2a-9e04-1b578ebca0a3,Namespace:kube-system,Attempt:0,} returns sandbox id \"370af6544031ca53792879b135216cac0055814588dc640b4ec0f5a2778cbaea\"" Jan 23 19:21:39.456346 kubelet[1920]: E0123 19:21:39.455137 1920 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:21:39.460285 kubelet[1920]: E0123 19:21:39.460157 1920 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:21:39.460780 containerd[1564]: time="2026-01-23T19:21:39.460625575Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 23 19:21:39.467286 containerd[1564]: time="2026-01-23T19:21:39.466860632Z" level=info msg="CreateContainer within sandbox \"370af6544031ca53792879b135216cac0055814588dc640b4ec0f5a2778cbaea\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 23 19:21:39.500702 kubelet[1920]: E0123 19:21:39.497922 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:21:39.501678 containerd[1564]: time="2026-01-23T19:21:39.501627039Z" level=info msg="Container 
da8816609f0b40b41db2d69f0991991705e049341ed27a9cec65869a16afd1f4: CDI devices from CRI Config.CDIDevices: []" Jan 23 19:21:39.542176 containerd[1564]: time="2026-01-23T19:21:39.541568768Z" level=info msg="CreateContainer within sandbox \"370af6544031ca53792879b135216cac0055814588dc640b4ec0f5a2778cbaea\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"da8816609f0b40b41db2d69f0991991705e049341ed27a9cec65869a16afd1f4\"" Jan 23 19:21:39.542614 containerd[1564]: time="2026-01-23T19:21:39.542390598Z" level=info msg="StartContainer for \"da8816609f0b40b41db2d69f0991991705e049341ed27a9cec65869a16afd1f4\"" Jan 23 19:21:39.545248 containerd[1564]: time="2026-01-23T19:21:39.544863681Z" level=info msg="connecting to shim da8816609f0b40b41db2d69f0991991705e049341ed27a9cec65869a16afd1f4" address="unix:///run/containerd/s/2b9b95330d6938519f474b22be5cdaa1653f5477a85c5a86d2368921f13b190f" protocol=ttrpc version=3 Jan 23 19:21:39.617189 systemd[1]: Started cri-containerd-da8816609f0b40b41db2d69f0991991705e049341ed27a9cec65869a16afd1f4.scope - libcontainer container da8816609f0b40b41db2d69f0991991705e049341ed27a9cec65869a16afd1f4. Jan 23 19:21:39.711855 containerd[1564]: time="2026-01-23T19:21:39.711760838Z" level=info msg="StartContainer for \"da8816609f0b40b41db2d69f0991991705e049341ed27a9cec65869a16afd1f4\" returns successfully" Jan 23 19:21:39.781663 systemd[1]: cri-containerd-da8816609f0b40b41db2d69f0991991705e049341ed27a9cec65869a16afd1f4.scope: Deactivated successfully. Jan 23 19:21:39.790261 containerd[1564]: time="2026-01-23T19:21:39.790107163Z" level=info msg="received container exit event container_id:\"da8816609f0b40b41db2d69f0991991705e049341ed27a9cec65869a16afd1f4\" id:\"da8816609f0b40b41db2d69f0991991705e049341ed27a9cec65869a16afd1f4\" pid:3661 exited_at:{seconds:1769196099 nanos:788599968}" Jan 23 19:21:39.874542 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-da8816609f0b40b41db2d69f0991991705e049341ed27a9cec65869a16afd1f4-rootfs.mount: Deactivated successfully. Jan 23 19:21:40.159374 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2052503178.mount: Deactivated successfully. 
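containerd reports each container exit as a seconds/nanos pair (exited_at:{seconds:1769196099 nanos:788599968}) rather than a formatted time. Converting the pair from the mount-cgroup exit above confirms it lines up with the surrounding journal timestamps; a short sketch:

package main

import (
	"fmt"
	"time"
)

func main() {
	// exited_at:{seconds:1769196099 nanos:788599968} from the mount-cgroup exit event above
	exitedAt := time.Unix(1769196099, 788599968).UTC()
	fmt.Println(exitedAt.Format(time.RFC3339Nano)) // 2026-01-23T19:21:39.788599968Z
}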
Jan 23 19:21:40.499304 kubelet[1920]: E0123 19:21:40.499228 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:21:40.537792 kubelet[1920]: E0123 19:21:40.537749 1920 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:21:40.547581 containerd[1564]: time="2026-01-23T19:21:40.545862094Z" level=info msg="CreateContainer within sandbox \"370af6544031ca53792879b135216cac0055814588dc640b4ec0f5a2778cbaea\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 23 19:21:40.588772 containerd[1564]: time="2026-01-23T19:21:40.588642465Z" level=info msg="Container 640503d6991408550db8205d750fc8c307018bd01a4756b1a4abd48b95909d7c: CDI devices from CRI Config.CDIDevices: []" Jan 23 19:21:40.617355 containerd[1564]: time="2026-01-23T19:21:40.616936035Z" level=info msg="CreateContainer within sandbox \"370af6544031ca53792879b135216cac0055814588dc640b4ec0f5a2778cbaea\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"640503d6991408550db8205d750fc8c307018bd01a4756b1a4abd48b95909d7c\"" Jan 23 19:21:40.620993 containerd[1564]: time="2026-01-23T19:21:40.620829270Z" level=info msg="StartContainer for \"640503d6991408550db8205d750fc8c307018bd01a4756b1a4abd48b95909d7c\"" Jan 23 19:21:40.624244 containerd[1564]: time="2026-01-23T19:21:40.623039986Z" level=info msg="connecting to shim 640503d6991408550db8205d750fc8c307018bd01a4756b1a4abd48b95909d7c" address="unix:///run/containerd/s/2b9b95330d6938519f474b22be5cdaa1653f5477a85c5a86d2368921f13b190f" protocol=ttrpc version=3 Jan 23 19:21:40.697193 systemd[1]: Started cri-containerd-640503d6991408550db8205d750fc8c307018bd01a4756b1a4abd48b95909d7c.scope - libcontainer container 640503d6991408550db8205d750fc8c307018bd01a4756b1a4abd48b95909d7c. Jan 23 19:21:40.779772 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount870175781.mount: Deactivated successfully. Jan 23 19:21:40.816958 containerd[1564]: time="2026-01-23T19:21:40.816914795Z" level=info msg="StartContainer for \"640503d6991408550db8205d750fc8c307018bd01a4756b1a4abd48b95909d7c\" returns successfully" Jan 23 19:21:40.890831 systemd[1]: cri-containerd-640503d6991408550db8205d750fc8c307018bd01a4756b1a4abd48b95909d7c.scope: Deactivated successfully. Jan 23 19:21:40.897945 containerd[1564]: time="2026-01-23T19:21:40.897701509Z" level=info msg="received container exit event container_id:\"640503d6991408550db8205d750fc8c307018bd01a4756b1a4abd48b95909d7c\" id:\"640503d6991408550db8205d750fc8c307018bd01a4756b1a4abd48b95909d7c\" pid:3719 exited_at:{seconds:1769196100 nanos:896081854}" Jan 23 19:21:40.986694 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-640503d6991408550db8205d750fc8c307018bd01a4756b1a4abd48b95909d7c-rootfs.mount: Deactivated successfully. 
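The once-per-second "Unable to read config path" errors come from kubelet's file-based config source polling its static-pod directory, which was never created on this node. A loose illustration of that polling loop (not kubelet's actual implementation):

package main

import (
	"fmt"
	"os"
	"time"
)

func main() {
	const staticPodPath = "/etc/kubernetes/manifests" // the path named in the log

	for range time.Tick(time.Second) { // re-check on a periodic tick, as the repeating log entries suggest
		entries, err := os.ReadDir(staticPodPath)
		if os.IsNotExist(err) {
			fmt.Println(`"Unable to read config path" err="path does not exist, ignoring"`)
			continue
		}
		if err != nil {
			fmt.Println("read error:", err)
			continue
		}
		fmt.Printf("found %d static pod manifest(s)\n", len(entries))
	}
}

On a worker that runs no static pods the message is harmless noise; creating an empty /etc/kubernetes/manifests, or pointing staticPodPath elsewhere in the kubelet configuration, makes it go away.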
Jan 23 19:21:41.499964 kubelet[1920]: E0123 19:21:41.499895 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:21:41.558124 kubelet[1920]: E0123 19:21:41.557883 1920 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:21:41.565237 containerd[1564]: time="2026-01-23T19:21:41.564761406Z" level=info msg="CreateContainer within sandbox \"370af6544031ca53792879b135216cac0055814588dc640b4ec0f5a2778cbaea\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 23 19:21:41.608355 containerd[1564]: time="2026-01-23T19:21:41.607929059Z" level=info msg="Container 4e4e8022795c369bb680cbc8d088ea7fe42c90d6f26a2703da8acc38a50cc54b: CDI devices from CRI Config.CDIDevices: []" Jan 23 19:21:41.634935 containerd[1564]: time="2026-01-23T19:21:41.634779907Z" level=info msg="CreateContainer within sandbox \"370af6544031ca53792879b135216cac0055814588dc640b4ec0f5a2778cbaea\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"4e4e8022795c369bb680cbc8d088ea7fe42c90d6f26a2703da8acc38a50cc54b\"" Jan 23 19:21:41.639679 containerd[1564]: time="2026-01-23T19:21:41.639127804Z" level=info msg="StartContainer for \"4e4e8022795c369bb680cbc8d088ea7fe42c90d6f26a2703da8acc38a50cc54b\"" Jan 23 19:21:41.646235 containerd[1564]: time="2026-01-23T19:21:41.645920043Z" level=info msg="connecting to shim 4e4e8022795c369bb680cbc8d088ea7fe42c90d6f26a2703da8acc38a50cc54b" address="unix:///run/containerd/s/2b9b95330d6938519f474b22be5cdaa1653f5477a85c5a86d2368921f13b190f" protocol=ttrpc version=3 Jan 23 19:21:41.724065 systemd[1]: Started cri-containerd-4e4e8022795c369bb680cbc8d088ea7fe42c90d6f26a2703da8acc38a50cc54b.scope - libcontainer container 4e4e8022795c369bb680cbc8d088ea7fe42c90d6f26a2703da8acc38a50cc54b. Jan 23 19:21:41.933916 systemd[1]: cri-containerd-4e4e8022795c369bb680cbc8d088ea7fe42c90d6f26a2703da8acc38a50cc54b.scope: Deactivated successfully. Jan 23 19:21:41.939051 containerd[1564]: time="2026-01-23T19:21:41.935944374Z" level=info msg="StartContainer for \"4e4e8022795c369bb680cbc8d088ea7fe42c90d6f26a2703da8acc38a50cc54b\" returns successfully" Jan 23 19:21:41.942889 containerd[1564]: time="2026-01-23T19:21:41.942798506Z" level=info msg="received container exit event container_id:\"4e4e8022795c369bb680cbc8d088ea7fe42c90d6f26a2703da8acc38a50cc54b\" id:\"4e4e8022795c369bb680cbc8d088ea7fe42c90d6f26a2703da8acc38a50cc54b\" pid:3766 exited_at:{seconds:1769196101 nanos:941085809}" Jan 23 19:21:42.046959 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4e4e8022795c369bb680cbc8d088ea7fe42c90d6f26a2703da8acc38a50cc54b-rootfs.mount: Deactivated successfully. 
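The mount-bpf-fs step that just ran is the Cilium init stage that makes sure a BPF filesystem is mounted (conventionally at /sys/fs/bpf) before the agent starts. A quick way to verify that mount from Go, sketched against /proc/mounts rather than anything Cilium-specific:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/proc/mounts")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	mounted := false
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		// /proc/mounts fields: device mountpoint fstype options dump pass
		fields := strings.Fields(sc.Text())
		if len(fields) >= 3 && fields[1] == "/sys/fs/bpf" && fields[2] == "bpf" {
			mounted = true
			break
		}
	}
	fmt.Println("bpffs mounted at /sys/fs/bpf:", mounted)
}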
Jan 23 19:21:42.074932 kubelet[1920]: E0123 19:21:42.074889 1920 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 23 19:21:42.306103 containerd[1564]: time="2026-01-23T19:21:42.304778012Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:21:42.310172 containerd[1564]: time="2026-01-23T19:21:42.309670038Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jan 23 19:21:42.312382 containerd[1564]: time="2026-01-23T19:21:42.312113542Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:21:42.315289 containerd[1564]: time="2026-01-23T19:21:42.314878090Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.854151962s" Jan 23 19:21:42.315289 containerd[1564]: time="2026-01-23T19:21:42.314991410Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jan 23 19:21:42.320748 containerd[1564]: time="2026-01-23T19:21:42.320383952Z" level=info msg="CreateContainer within sandbox \"16a7f8c7dd8fd9dd114a015ad35a92d620300ea4c70bbb5481f7c0ef8574b43d\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 23 19:21:42.349715 containerd[1564]: time="2026-01-23T19:21:42.349665610Z" level=info msg="Container 43c9716f7749057c1ae9329e9dde6f3487978ad1357837726a35804e25673c3a: CDI devices from CRI Config.CDIDevices: []" Jan 23 19:21:42.373041 containerd[1564]: time="2026-01-23T19:21:42.372903452Z" level=info msg="CreateContainer within sandbox \"16a7f8c7dd8fd9dd114a015ad35a92d620300ea4c70bbb5481f7c0ef8574b43d\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"43c9716f7749057c1ae9329e9dde6f3487978ad1357837726a35804e25673c3a\"" Jan 23 19:21:42.375641 containerd[1564]: time="2026-01-23T19:21:42.375346152Z" level=info msg="StartContainer for \"43c9716f7749057c1ae9329e9dde6f3487978ad1357837726a35804e25673c3a\"" Jan 23 19:21:42.377574 containerd[1564]: time="2026-01-23T19:21:42.377315552Z" level=info msg="connecting to shim 43c9716f7749057c1ae9329e9dde6f3487978ad1357837726a35804e25673c3a" address="unix:///run/containerd/s/903db69f9c6b4f20de4743f6e0bacdc239979af36a22110b8a23bac4f26e87b6" protocol=ttrpc version=3 Jan 23 19:21:42.425041 systemd[1]: Started cri-containerd-43c9716f7749057c1ae9329e9dde6f3487978ad1357837726a35804e25673c3a.scope - libcontainer container 43c9716f7749057c1ae9329e9dde6f3487978ad1357837726a35804e25673c3a. 
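The operator image is pulled by a pinned reference of the form name:tag@sha256:digest; once a digest is present containerd records only the repo digest, which is why the "Pulled image" entry above reports an empty repo tag. A small sketch of splitting such a reference with plain string operations (not containerd's reference parser, and assuming no port in the registry host):

package main

import (
	"fmt"
	"strings"
)

func main() {
	ref := "quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e"

	nameAndTag, digest, _ := strings.Cut(ref, "@") // the digest pins the exact image content
	name, tag, _ := strings.Cut(nameAndTag, ":")   // naive split: assumes the first ':' is the tag separator
	fmt.Println("name:  ", name)
	fmt.Println("tag:   ", tag)
	fmt.Println("digest:", digest)
}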
Jan 23 19:21:42.503806 kubelet[1920]: E0123 19:21:42.503705 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:21:42.675611 containerd[1564]: time="2026-01-23T19:21:42.672907801Z" level=info msg="StartContainer for \"43c9716f7749057c1ae9329e9dde6f3487978ad1357837726a35804e25673c3a\" returns successfully" Jan 23 19:21:42.676215 kubelet[1920]: E0123 19:21:42.674050 1920 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:21:42.700565 containerd[1564]: time="2026-01-23T19:21:42.699921598Z" level=info msg="CreateContainer within sandbox \"370af6544031ca53792879b135216cac0055814588dc640b4ec0f5a2778cbaea\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 23 19:21:42.765716 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount178314274.mount: Deactivated successfully. Jan 23 19:21:42.810563 containerd[1564]: time="2026-01-23T19:21:42.810110065Z" level=info msg="Container cc2505549c2522b6c51fbc92b310811046d62bbe2f2047b14b6297f8ef0efa49: CDI devices from CRI Config.CDIDevices: []" Jan 23 19:21:42.861094 containerd[1564]: time="2026-01-23T19:21:42.860974834Z" level=info msg="CreateContainer within sandbox \"370af6544031ca53792879b135216cac0055814588dc640b4ec0f5a2778cbaea\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"cc2505549c2522b6c51fbc92b310811046d62bbe2f2047b14b6297f8ef0efa49\"" Jan 23 19:21:42.865909 containerd[1564]: time="2026-01-23T19:21:42.864803810Z" level=info msg="StartContainer for \"cc2505549c2522b6c51fbc92b310811046d62bbe2f2047b14b6297f8ef0efa49\"" Jan 23 19:21:42.874034 containerd[1564]: time="2026-01-23T19:21:42.873723020Z" level=info msg="connecting to shim cc2505549c2522b6c51fbc92b310811046d62bbe2f2047b14b6297f8ef0efa49" address="unix:///run/containerd/s/2b9b95330d6938519f474b22be5cdaa1653f5477a85c5a86d2368921f13b190f" protocol=ttrpc version=3 Jan 23 19:21:42.985894 systemd[1]: Started cri-containerd-cc2505549c2522b6c51fbc92b310811046d62bbe2f2047b14b6297f8ef0efa49.scope - libcontainer container cc2505549c2522b6c51fbc92b310811046d62bbe2f2047b14b6297f8ef0efa49. Jan 23 19:21:43.209674 systemd[1]: cri-containerd-cc2505549c2522b6c51fbc92b310811046d62bbe2f2047b14b6297f8ef0efa49.scope: Deactivated successfully. Jan 23 19:21:43.222877 containerd[1564]: time="2026-01-23T19:21:43.220930334Z" level=info msg="received container exit event container_id:\"cc2505549c2522b6c51fbc92b310811046d62bbe2f2047b14b6297f8ef0efa49\" id:\"cc2505549c2522b6c51fbc92b310811046d62bbe2f2047b14b6297f8ef0efa49\" pid:3843 exited_at:{seconds:1769196103 nanos:217801366}" Jan 23 19:21:43.222877 containerd[1564]: time="2026-01-23T19:21:43.221238883Z" level=info msg="StartContainer for \"cc2505549c2522b6c51fbc92b310811046d62bbe2f2047b14b6297f8ef0efa49\" returns successfully" Jan 23 19:21:43.337201 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cc2505549c2522b6c51fbc92b310811046d62bbe2f2047b14b6297f8ef0efa49-rootfs.mount: Deactivated successfully. 
Jan 23 19:21:43.505060 kubelet[1920]: E0123 19:21:43.504668 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:21:43.743086 kubelet[1920]: E0123 19:21:43.742624 1920 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:21:43.753705 kubelet[1920]: E0123 19:21:43.752718 1920 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:21:43.755060 containerd[1564]: time="2026-01-23T19:21:43.754832828Z" level=info msg="CreateContainer within sandbox \"370af6544031ca53792879b135216cac0055814588dc640b4ec0f5a2778cbaea\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 23 19:21:43.850908 containerd[1564]: time="2026-01-23T19:21:43.846151058Z" level=info msg="Container 8a64124d3c9eb43ad74d6a0530b8e3625d7671fd8e6c1062f607c98a9059879f: CDI devices from CRI Config.CDIDevices: []" Jan 23 19:21:43.906030 containerd[1564]: time="2026-01-23T19:21:43.890915418Z" level=info msg="CreateContainer within sandbox \"370af6544031ca53792879b135216cac0055814588dc640b4ec0f5a2778cbaea\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"8a64124d3c9eb43ad74d6a0530b8e3625d7671fd8e6c1062f607c98a9059879f\"" Jan 23 19:21:43.919113 containerd[1564]: time="2026-01-23T19:21:43.911737559Z" level=info msg="StartContainer for \"8a64124d3c9eb43ad74d6a0530b8e3625d7671fd8e6c1062f607c98a9059879f\"" Jan 23 19:21:43.919113 containerd[1564]: time="2026-01-23T19:21:43.913951152Z" level=info msg="connecting to shim 8a64124d3c9eb43ad74d6a0530b8e3625d7671fd8e6c1062f607c98a9059879f" address="unix:///run/containerd/s/2b9b95330d6938519f474b22be5cdaa1653f5477a85c5a86d2368921f13b190f" protocol=ttrpc version=3 Jan 23 19:21:44.016938 systemd[1]: Started cri-containerd-8a64124d3c9eb43ad74d6a0530b8e3625d7671fd8e6c1062f607c98a9059879f.scope - libcontainer container 8a64124d3c9eb43ad74d6a0530b8e3625d7671fd8e6c1062f607c98a9059879f. Jan 23 19:21:44.047164 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount379575729.mount: Deactivated successfully. 
Jan 23 19:21:44.265662 containerd[1564]: time="2026-01-23T19:21:44.262748082Z" level=info msg="StartContainer for \"8a64124d3c9eb43ad74d6a0530b8e3625d7671fd8e6c1062f607c98a9059879f\" returns successfully" Jan 23 19:21:44.506989 kubelet[1920]: E0123 19:21:44.506803 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:21:44.788993 kubelet[1920]: E0123 19:21:44.788778 1920 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:21:45.077353 kubelet[1920]: I0123 19:21:45.077031 1920 setters.go:602] "Node became not ready" node="10.0.0.101" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T19:21:45Z","lastTransitionTime":"2026-01-23T19:21:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jan 23 19:21:45.504317 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx)) Jan 23 19:21:45.508365 kubelet[1920]: E0123 19:21:45.507773 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:21:45.801204 kubelet[1920]: E0123 19:21:45.800979 1920 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:21:45.900709 kubelet[1920]: I0123 19:21:45.897985 1920 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-pqb7q" podStartSLOduration=7.897786204 podStartE2EDuration="7.897786204s" podCreationTimestamp="2026-01-23 19:21:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 19:21:45.895669482 +0000 UTC m=+187.226298038" watchObservedRunningTime="2026-01-23 19:21:45.897786204 +0000 UTC m=+187.228414761" Jan 23 19:21:45.900709 kubelet[1920]: I0123 19:21:45.899798 1920 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-lfbwp" podStartSLOduration=5.041467875 podStartE2EDuration="7.899779865s" podCreationTimestamp="2026-01-23 19:21:38 +0000 UTC" firstStartedPulling="2026-01-23 19:21:39.45928519 +0000 UTC m=+180.789913727" lastFinishedPulling="2026-01-23 19:21:42.31759718 +0000 UTC m=+183.648225717" observedRunningTime="2026-01-23 19:21:44.019255201 +0000 UTC m=+185.349883748" watchObservedRunningTime="2026-01-23 19:21:45.899779865 +0000 UTC m=+187.230408402" Jan 23 19:21:46.441757 update_engine[1544]: I20260123 19:21:46.441341 1544 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 23 19:21:46.444178 update_engine[1544]: I20260123 19:21:46.442921 1544 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 23 19:21:46.446319 update_engine[1544]: I20260123 19:21:46.445189 1544 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
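The two pod_startup_latency_tracker entries above can be reconciled from their own fields: for the operator pod, the E2E duration is observedRunningTime minus podCreationTimestamp, and the SLO duration additionally subtracts the image-pull window (lastFinishedPulling minus firstStartedPulling), which is why it comes out roughly 2.86 s shorter. A sketch of that arithmetic using the timestamps copied verbatim from the entry:

package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST" // matches the timestamp format in the log entry

	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}

	created := parse("2026-01-23 19:21:38 +0000 UTC")
	running := parse("2026-01-23 19:21:45.899779865 +0000 UTC")
	pullStart := parse("2026-01-23 19:21:39.45928519 +0000 UTC")
	pullEnd := parse("2026-01-23 19:21:42.31759718 +0000 UTC")

	e2e := running.Sub(created)
	slo := e2e - pullEnd.Sub(pullStart)

	fmt.Println("podStartE2EDuration:", e2e) // 7.899779865s
	fmt.Println("podStartSLOduration:", slo) // 5.041467875s
}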
Jan 23 19:21:46.464292 update_engine[1544]: E20260123 19:21:46.463879 1544 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 23 19:21:46.464292 update_engine[1544]: I20260123 19:21:46.464202 1544 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jan 23 19:21:46.509855 kubelet[1920]: E0123 19:21:46.509605 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:21:46.803778 kubelet[1920]: E0123 19:21:46.803738 1920 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:21:47.513633 kubelet[1920]: E0123 19:21:47.513379 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:21:47.810291 kubelet[1920]: E0123 19:21:47.809919 1920 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:21:48.515392 kubelet[1920]: E0123 19:21:48.514803 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:21:49.516275 kubelet[1920]: E0123 19:21:49.515999 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:21:50.517183 kubelet[1920]: E0123 19:21:50.517030 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:21:51.518637 kubelet[1920]: E0123 19:21:51.518575 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:21:51.561619 systemd-networkd[1500]: lxc_health: Link UP Jan 23 19:21:51.565361 systemd-networkd[1500]: lxc_health: Gained carrier Jan 23 19:21:52.521369 kubelet[1920]: E0123 19:21:52.520859 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:21:53.015803 kubelet[1920]: E0123 19:21:53.015762 1920 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:21:53.321261 systemd-networkd[1500]: lxc_health: Gained IPv6LL Jan 23 19:21:53.523927 kubelet[1920]: E0123 19:21:53.523835 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:21:53.841501 kubelet[1920]: E0123 19:21:53.841137 1920 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:21:54.525275 kubelet[1920]: E0123 19:21:54.525192 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:21:54.851288 kubelet[1920]: E0123 19:21:54.849694 1920 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 19:21:55.526516 kubelet[1920]: E0123 19:21:55.526131 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:21:56.450981 update_engine[1544]: 
I20260123 19:21:56.447711 1544 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 23 19:21:56.450981 update_engine[1544]: I20260123 19:21:56.448698 1544 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 23 19:21:56.450981 update_engine[1544]: I20260123 19:21:56.450920 1544 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 23 19:21:56.469192 update_engine[1544]: E20260123 19:21:56.468919 1544 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 23 19:21:56.469192 update_engine[1544]: I20260123 19:21:56.469133 1544 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jan 23 19:21:56.526768 kubelet[1920]: E0123 19:21:56.526570 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:21:57.529896 kubelet[1920]: E0123 19:21:57.529843 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:21:58.541638 kubelet[1920]: E0123 19:21:58.541574 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:21:59.242093 kubelet[1920]: E0123 19:21:59.242017 1920 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:21:59.545799 kubelet[1920]: E0123 19:21:59.545636 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:22:00.549555 kubelet[1920]: E0123 19:22:00.548679 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:22:01.550156 kubelet[1920]: E0123 19:22:01.549788 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:22:02.553764 kubelet[1920]: E0123 19:22:02.550565 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:22:03.551170 kubelet[1920]: E0123 19:22:03.551095 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
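The update_engine entries threaded through this section repeat the same ten-second cycle three times: start a transfer to the configured Omaha server, fail name resolution because that server is set to the literal string "disabled" (apparently how automatic update checks were turned off on this image), and schedule the next retry. A rough standard-library sketch of that pattern, not update_engine's actual C++ fetcher:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	const omahaHost = "disabled" // the "server" the log keeps trying to resolve

	for attempt := 1; attempt <= 3; attempt++ {
		fmt.Println("Starting/Resuming transfer")
		if _, err := net.LookupHost(omahaHost); err != nil {
			fmt.Printf("Unable to get http response code: %v\n", err)
			fmt.Printf("No HTTP response, retry %d\n", attempt)
		}
		time.Sleep(10 * time.Second) // the journal shows roughly ten seconds between attempts
	}
}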