Jan 24 00:44:57.837413 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 23 22:35:12 -00 2026 Jan 24 00:44:57.837450 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f12f29e7047038507ed004e06b4d83ecb1c13bb716c56fe3c96b88d906e8eff2 Jan 24 00:44:57.837469 kernel: BIOS-provided physical RAM map: Jan 24 00:44:57.837480 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Jan 24 00:44:57.837489 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Jan 24 00:44:57.837496 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Jan 24 00:44:57.837509 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Jan 24 00:44:57.837519 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Jan 24 00:44:57.837527 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable Jan 24 00:44:57.837536 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS Jan 24 00:44:57.837550 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable Jan 24 00:44:57.837559 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved Jan 24 00:44:57.837568 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20 Jan 24 00:44:57.837577 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved Jan 24 00:44:57.837588 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data Jan 24 00:44:57.837597 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Jan 24 00:44:57.837612 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable Jan 24 00:44:57.837622 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved Jan 24 00:44:57.837631 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Jan 24 00:44:57.837641 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Jan 24 00:44:57.837651 kernel: NX (Execute Disable) protection: active Jan 24 00:44:57.837661 kernel: APIC: Static calls initialized Jan 24 00:44:57.837670 kernel: efi: EFI v2.7 by EDK II Jan 24 00:44:57.837680 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b675198 Jan 24 00:44:57.837690 kernel: SMBIOS 2.8 present. 
Jan 24 00:44:57.837700 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015 Jan 24 00:44:57.837709 kernel: Hypervisor detected: KVM Jan 24 00:44:57.837724 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 24 00:44:57.837734 kernel: kvm-clock: using sched offset of 8721074628 cycles Jan 24 00:44:57.837746 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 24 00:44:57.837755 kernel: tsc: Detected 2445.426 MHz processor Jan 24 00:44:57.837766 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 24 00:44:57.837836 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 24 00:44:57.837846 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000 Jan 24 00:44:57.837856 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Jan 24 00:44:57.837866 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 24 00:44:57.837882 kernel: Using GB pages for direct mapping Jan 24 00:44:57.837893 kernel: Secure boot disabled Jan 24 00:44:57.837903 kernel: ACPI: Early table checksum verification disabled Jan 24 00:44:57.837912 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Jan 24 00:44:57.837928 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Jan 24 00:44:57.837939 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 24 00:44:57.837951 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 24 00:44:57.837965 kernel: ACPI: FACS 0x000000009CBDD000 000040 Jan 24 00:44:57.837977 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 24 00:44:57.837987 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 24 00:44:57.837999 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 24 00:44:57.838009 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 24 00:44:57.838020 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) Jan 24 00:44:57.838030 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] Jan 24 00:44:57.838045 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9] Jan 24 00:44:57.838055 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Jan 24 00:44:57.838065 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] Jan 24 00:44:57.838076 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] Jan 24 00:44:57.838086 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] Jan 24 00:44:57.838096 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] Jan 24 00:44:57.838190 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] Jan 24 00:44:57.838204 kernel: No NUMA configuration found Jan 24 00:44:57.838215 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff] Jan 24 00:44:57.838230 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff] Jan 24 00:44:57.838240 kernel: Zone ranges: Jan 24 00:44:57.838250 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 24 00:44:57.838259 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff] Jan 24 00:44:57.838269 kernel: Normal empty Jan 24 00:44:57.838279 kernel: Movable zone start for each node Jan 24 00:44:57.838288 kernel: Early memory node ranges Jan 24 00:44:57.838298 
kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Jan 24 00:44:57.838308 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Jan 24 00:44:57.838317 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Jan 24 00:44:57.838330 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff] Jan 24 00:44:57.838341 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff] Jan 24 00:44:57.838350 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff] Jan 24 00:44:57.838360 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff] Jan 24 00:44:57.838371 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 24 00:44:57.838383 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Jan 24 00:44:57.838392 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Jan 24 00:44:57.838402 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 24 00:44:57.838411 kernel: On node 0, zone DMA: 240 pages in unavailable ranges Jan 24 00:44:57.838425 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Jan 24 00:44:57.838435 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges Jan 24 00:44:57.838481 kernel: ACPI: PM-Timer IO Port: 0x608 Jan 24 00:44:57.838491 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 24 00:44:57.838501 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 24 00:44:57.838511 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 24 00:44:57.838521 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 24 00:44:57.838531 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 24 00:44:57.838541 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 24 00:44:57.838554 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 24 00:44:57.838564 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 24 00:44:57.838574 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jan 24 00:44:57.838583 kernel: TSC deadline timer available Jan 24 00:44:57.838593 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Jan 24 00:44:57.838602 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 24 00:44:57.838612 kernel: kvm-guest: KVM setup pv remote TLB flush Jan 24 00:44:57.838622 kernel: kvm-guest: setup PV sched yield Jan 24 00:44:57.838633 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Jan 24 00:44:57.838649 kernel: Booting paravirtualized kernel on KVM Jan 24 00:44:57.838660 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 24 00:44:57.838672 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Jan 24 00:44:57.838684 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288 Jan 24 00:44:57.838693 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152 Jan 24 00:44:57.838703 kernel: pcpu-alloc: [0] 0 1 2 3 Jan 24 00:44:57.838712 kernel: kvm-guest: PV spinlocks enabled Jan 24 00:44:57.838722 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 24 00:44:57.838734 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f12f29e7047038507ed004e06b4d83ecb1c13bb716c56fe3c96b88d906e8eff2 Jan 24 
00:44:57.838748 kernel: random: crng init done Jan 24 00:44:57.838758 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 24 00:44:57.838812 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 24 00:44:57.838823 kernel: Fallback order for Node 0: 0 Jan 24 00:44:57.838833 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759 Jan 24 00:44:57.838843 kernel: Policy zone: DMA32 Jan 24 00:44:57.838852 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 24 00:44:57.838863 kernel: Memory: 2400616K/2567000K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42884K init, 2312K bss, 166124K reserved, 0K cma-reserved) Jan 24 00:44:57.838877 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jan 24 00:44:57.838886 kernel: ftrace: allocating 37989 entries in 149 pages Jan 24 00:44:57.838896 kernel: ftrace: allocated 149 pages with 4 groups Jan 24 00:44:57.841592 kernel: Dynamic Preempt: voluntary Jan 24 00:44:57.841604 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 24 00:44:57.841630 kernel: rcu: RCU event tracing is enabled. Jan 24 00:44:57.841644 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jan 24 00:44:57.841655 kernel: Trampoline variant of Tasks RCU enabled. Jan 24 00:44:57.841665 kernel: Rude variant of Tasks RCU enabled. Jan 24 00:44:57.841676 kernel: Tracing variant of Tasks RCU enabled. Jan 24 00:44:57.841686 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 24 00:44:57.841696 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jan 24 00:44:57.841710 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Jan 24 00:44:57.841720 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 24 00:44:57.841730 kernel: Console: colour dummy device 80x25 Jan 24 00:44:57.841740 kernel: printk: console [ttyS0] enabled Jan 24 00:44:57.841750 kernel: ACPI: Core revision 20230628 Jan 24 00:44:57.841764 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jan 24 00:44:57.841822 kernel: APIC: Switch to symmetric I/O mode setup Jan 24 00:44:57.841833 kernel: x2apic enabled Jan 24 00:44:57.841843 kernel: APIC: Switched APIC routing to: physical x2apic Jan 24 00:44:57.841854 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Jan 24 00:44:57.841864 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Jan 24 00:44:57.841875 kernel: kvm-guest: setup PV IPIs Jan 24 00:44:57.841885 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jan 24 00:44:57.841895 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Jan 24 00:44:57.841912 kernel: Calibrating delay loop (skipped) preset value.. 
4890.85 BogoMIPS (lpj=2445426) Jan 24 00:44:57.841922 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jan 24 00:44:57.841933 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Jan 24 00:44:57.841943 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Jan 24 00:44:57.841953 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 24 00:44:57.841964 kernel: Spectre V2 : Mitigation: Retpolines Jan 24 00:44:57.841974 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jan 24 00:44:57.841984 kernel: Speculative Store Bypass: Vulnerable Jan 24 00:44:57.841995 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Jan 24 00:44:57.842010 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Jan 24 00:44:57.842020 kernel: active return thunk: srso_alias_return_thunk Jan 24 00:44:57.842031 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Jan 24 00:44:57.842041 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM Jan 24 00:44:57.842051 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode Jan 24 00:44:57.842061 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 24 00:44:57.842072 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 24 00:44:57.842082 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 24 00:44:57.842095 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 24 00:44:57.842169 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Jan 24 00:44:57.842177 kernel: Freeing SMP alternatives memory: 32K Jan 24 00:44:57.842184 kernel: pid_max: default: 32768 minimum: 301 Jan 24 00:44:57.842191 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 24 00:44:57.842198 kernel: landlock: Up and running. Jan 24 00:44:57.842204 kernel: SELinux: Initializing. Jan 24 00:44:57.842211 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 24 00:44:57.842218 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 24 00:44:57.842228 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1) Jan 24 00:44:57.842235 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 24 00:44:57.842241 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 24 00:44:57.842249 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 24 00:44:57.842256 kernel: Performance Events: PMU not available due to virtualization, using software events only. Jan 24 00:44:57.842262 kernel: signal: max sigframe size: 1776 Jan 24 00:44:57.842269 kernel: rcu: Hierarchical SRCU implementation. Jan 24 00:44:57.842276 kernel: rcu: Max phase no-delay instances is 400. Jan 24 00:44:57.842283 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 24 00:44:57.842292 kernel: smp: Bringing up secondary CPUs ... Jan 24 00:44:57.842298 kernel: smpboot: x86: Booting SMP configuration: Jan 24 00:44:57.842305 kernel: .... 
node #0, CPUs: #1 #2 #3 Jan 24 00:44:57.842311 kernel: smp: Brought up 1 node, 4 CPUs Jan 24 00:44:57.842318 kernel: smpboot: Max logical packages: 1 Jan 24 00:44:57.842325 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS) Jan 24 00:44:57.842331 kernel: devtmpfs: initialized Jan 24 00:44:57.842338 kernel: x86/mm: Memory block size: 128MB Jan 24 00:44:57.842345 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Jan 24 00:44:57.842354 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Jan 24 00:44:57.842360 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes) Jan 24 00:44:57.842371 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Jan 24 00:44:57.842384 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Jan 24 00:44:57.842397 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 24 00:44:57.842407 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jan 24 00:44:57.842418 kernel: pinctrl core: initialized pinctrl subsystem Jan 24 00:44:57.842431 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 24 00:44:57.842441 kernel: audit: initializing netlink subsys (disabled) Jan 24 00:44:57.842457 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 24 00:44:57.842470 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 24 00:44:57.842479 kernel: audit: type=2000 audit(1769215492.659:1): state=initialized audit_enabled=0 res=1 Jan 24 00:44:57.842486 kernel: cpuidle: using governor menu Jan 24 00:44:57.842492 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 24 00:44:57.842499 kernel: dca service started, version 1.12.1 Jan 24 00:44:57.842506 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Jan 24 00:44:57.842513 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Jan 24 00:44:57.842519 kernel: PCI: Using configuration type 1 for base access Jan 24 00:44:57.842529 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 24 00:44:57.842536 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 24 00:44:57.842543 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 24 00:44:57.842549 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 24 00:44:57.842556 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 24 00:44:57.842563 kernel: ACPI: Added _OSI(Module Device) Jan 24 00:44:57.842569 kernel: ACPI: Added _OSI(Processor Device) Jan 24 00:44:57.842576 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 24 00:44:57.842582 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 24 00:44:57.842591 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 24 00:44:57.842598 kernel: ACPI: Interpreter enabled Jan 24 00:44:57.842606 kernel: ACPI: PM: (supports S0 S3 S5) Jan 24 00:44:57.842618 kernel: ACPI: Using IOAPIC for interrupt routing Jan 24 00:44:57.842630 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 24 00:44:57.842642 kernel: PCI: Using E820 reservations for host bridge windows Jan 24 00:44:57.842649 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jan 24 00:44:57.842655 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 24 00:44:57.843600 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 24 00:44:57.843925 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Jan 24 00:44:57.844215 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Jan 24 00:44:57.844238 kernel: PCI host bridge to bus 0000:00 Jan 24 00:44:57.846639 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 24 00:44:57.847388 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 24 00:44:57.847577 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 24 00:44:57.847865 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Jan 24 00:44:57.848048 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jan 24 00:44:57.848331 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window] Jan 24 00:44:57.848515 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 24 00:44:57.848876 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Jan 24 00:44:57.849287 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Jan 24 00:44:57.849492 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] Jan 24 00:44:57.849686 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff] Jan 24 00:44:57.849932 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Jan 24 00:44:57.850212 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb Jan 24 00:44:57.850416 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 24 00:44:57.850757 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Jan 24 00:44:57.851386 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f] Jan 24 00:44:57.851595 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff] Jan 24 00:44:57.851848 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref] Jan 24 00:44:57.852217 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Jan 24 00:44:57.852421 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f] Jan 24 00:44:57.852613 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff] 
Jan 24 00:44:57.852901 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref] Jan 24 00:44:57.853304 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Jan 24 00:44:57.853500 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff] Jan 24 00:44:57.853710 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff] Jan 24 00:44:57.853950 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref] Jan 24 00:44:57.854209 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref] Jan 24 00:44:57.854483 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Jan 24 00:44:57.854657 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jan 24 00:44:57.854953 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Jan 24 00:44:57.855211 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df] Jan 24 00:44:57.855397 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff] Jan 24 00:44:57.855705 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Jan 24 00:44:57.855948 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf] Jan 24 00:44:57.855968 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 24 00:44:57.855980 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 24 00:44:57.855991 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 24 00:44:57.856010 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 24 00:44:57.856021 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jan 24 00:44:57.856032 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jan 24 00:44:57.856044 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jan 24 00:44:57.856055 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jan 24 00:44:57.856067 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jan 24 00:44:57.856079 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Jan 24 00:44:57.856090 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jan 24 00:44:57.856187 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jan 24 00:44:57.856207 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jan 24 00:44:57.856219 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jan 24 00:44:57.856230 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jan 24 00:44:57.856241 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jan 24 00:44:57.856252 kernel: iommu: Default domain type: Translated Jan 24 00:44:57.856264 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 24 00:44:57.856276 kernel: efivars: Registered efivars operations Jan 24 00:44:57.856287 kernel: PCI: Using ACPI for IRQ routing Jan 24 00:44:57.856298 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 24 00:44:57.856314 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Jan 24 00:44:57.856325 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff] Jan 24 00:44:57.856336 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff] Jan 24 00:44:57.856348 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff] Jan 24 00:44:57.856542 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Jan 24 00:44:57.856909 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Jan 24 00:44:57.857303 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 24 00:44:57.857324 kernel: vgaarb: loaded Jan 24 00:44:57.857336 kernel: hpet0: at MMIO 
0xfed00000, IRQs 2, 8, 0 Jan 24 00:44:57.857355 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jan 24 00:44:57.857368 kernel: clocksource: Switched to clocksource kvm-clock Jan 24 00:44:57.857378 kernel: VFS: Disk quotas dquot_6.6.0 Jan 24 00:44:57.857390 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 24 00:44:57.857403 kernel: pnp: PnP ACPI init Jan 24 00:44:57.858680 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Jan 24 00:44:57.858703 kernel: pnp: PnP ACPI: found 6 devices Jan 24 00:44:57.858717 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 24 00:44:57.858735 kernel: NET: Registered PF_INET protocol family Jan 24 00:44:57.858747 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 24 00:44:57.858759 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 24 00:44:57.858817 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 24 00:44:57.858825 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 24 00:44:57.858831 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 24 00:44:57.858838 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 24 00:44:57.858845 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 24 00:44:57.858852 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 24 00:44:57.858863 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 24 00:44:57.858870 kernel: NET: Registered PF_XDP protocol family Jan 24 00:44:57.859009 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window Jan 24 00:44:57.859200 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref] Jan 24 00:44:57.859556 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 24 00:44:57.859681 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 24 00:44:57.859846 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 24 00:44:57.859959 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Jan 24 00:44:57.860363 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Jan 24 00:44:57.860545 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window] Jan 24 00:44:57.860564 kernel: PCI: CLS 0 bytes, default 64 Jan 24 00:44:57.860576 kernel: Initialise system trusted keyrings Jan 24 00:44:57.860589 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 24 00:44:57.860600 kernel: Key type asymmetric registered Jan 24 00:44:57.860613 kernel: Asymmetric key parser 'x509' registered Jan 24 00:44:57.860623 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 24 00:44:57.860642 kernel: io scheduler mq-deadline registered Jan 24 00:44:57.860654 kernel: io scheduler kyber registered Jan 24 00:44:57.860665 kernel: io scheduler bfq registered Jan 24 00:44:57.860676 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 24 00:44:57.860690 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jan 24 00:44:57.860702 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jan 24 00:44:57.860713 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jan 24 00:44:57.860725 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 24 00:44:57.860737 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 
115200) is a 16550A Jan 24 00:44:57.860755 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 24 00:44:57.860765 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 24 00:44:57.860831 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 24 00:44:57.860843 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 24 00:44:57.861551 kernel: rtc_cmos 00:04: RTC can wake from S4 Jan 24 00:44:57.861752 kernel: rtc_cmos 00:04: registered as rtc0 Jan 24 00:44:57.861993 kernel: rtc_cmos 00:04: setting system clock to 2026-01-24T00:44:56 UTC (1769215496) Jan 24 00:44:57.862450 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Jan 24 00:44:57.862477 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Jan 24 00:44:57.862490 kernel: efifb: probing for efifb Jan 24 00:44:57.862503 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k Jan 24 00:44:57.862513 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1 Jan 24 00:44:57.862526 kernel: efifb: scrolling: redraw Jan 24 00:44:57.862538 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0 Jan 24 00:44:57.862550 kernel: Console: switching to colour frame buffer device 100x37 Jan 24 00:44:57.862560 kernel: fb0: EFI VGA frame buffer device Jan 24 00:44:57.862573 kernel: pstore: Using crash dump compression: deflate Jan 24 00:44:57.862589 kernel: pstore: Registered efi_pstore as persistent store backend Jan 24 00:44:57.862602 kernel: NET: Registered PF_INET6 protocol family Jan 24 00:44:57.862612 kernel: Segment Routing with IPv6 Jan 24 00:44:57.862625 kernel: In-situ OAM (IOAM) with IPv6 Jan 24 00:44:57.862636 kernel: NET: Registered PF_PACKET protocol family Jan 24 00:44:57.862648 kernel: Key type dns_resolver registered Jan 24 00:44:57.862659 kernel: IPI shorthand broadcast: enabled Jan 24 00:44:57.862701 kernel: sched_clock: Marking stable (2173041141, 700930443)->(3958305823, -1084334239) Jan 24 00:44:57.862718 kernel: registered taskstats version 1 Jan 24 00:44:57.862734 kernel: Loading compiled-in X.509 certificates Jan 24 00:44:57.862747 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: 6e114855f6cf7a40074d93a4383c22d00e384634' Jan 24 00:44:57.862758 kernel: Key type .fscrypt registered Jan 24 00:44:57.862826 kernel: Key type fscrypt-provisioning registered Jan 24 00:44:57.862841 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jan 24 00:44:57.862853 kernel: ima: Allocated hash algorithm: sha1 Jan 24 00:44:57.862866 kernel: ima: No architecture policies found Jan 24 00:44:57.862876 kernel: clk: Disabling unused clocks Jan 24 00:44:57.862889 kernel: Freeing unused kernel image (initmem) memory: 42884K Jan 24 00:44:57.862906 kernel: Write protecting the kernel read-only data: 36864k Jan 24 00:44:57.862919 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K Jan 24 00:44:57.862929 kernel: Run /init as init process Jan 24 00:44:57.862943 kernel: with arguments: Jan 24 00:44:57.862955 kernel: /init Jan 24 00:44:57.862967 kernel: with environment: Jan 24 00:44:57.862978 kernel: HOME=/ Jan 24 00:44:57.862990 kernel: TERM=linux Jan 24 00:44:57.863006 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 24 00:44:57.863024 systemd[1]: Detected virtualization kvm. Jan 24 00:44:57.863038 systemd[1]: Detected architecture x86-64. Jan 24 00:44:57.863051 systemd[1]: Running in initrd. Jan 24 00:44:57.863063 systemd[1]: No hostname configured, using default hostname. Jan 24 00:44:57.863076 systemd[1]: Hostname set to . Jan 24 00:44:57.863088 systemd[1]: Initializing machine ID from VM UUID. Jan 24 00:44:57.863187 systemd[1]: Queued start job for default target initrd.target. Jan 24 00:44:57.863210 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 24 00:44:57.863224 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 24 00:44:57.863238 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 24 00:44:57.863252 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 24 00:44:57.863264 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 24 00:44:57.863287 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 24 00:44:57.863303 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 24 00:44:57.863315 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 24 00:44:57.863329 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 24 00:44:57.863341 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 24 00:44:57.863355 systemd[1]: Reached target paths.target - Path Units. Jan 24 00:44:57.863366 systemd[1]: Reached target slices.target - Slice Units. Jan 24 00:44:57.863384 systemd[1]: Reached target swap.target - Swaps. Jan 24 00:44:57.863398 systemd[1]: Reached target timers.target - Timer Units. Jan 24 00:44:57.863410 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 24 00:44:57.863424 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 24 00:44:57.863437 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 24 00:44:57.863449 systemd[1]: Listening on systemd-journald.socket - Journal Socket. 
Jan 24 00:44:57.863463 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 24 00:44:57.863476 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 24 00:44:57.863493 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 24 00:44:57.863506 systemd[1]: Reached target sockets.target - Socket Units. Jan 24 00:44:57.863519 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 24 00:44:57.863533 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 24 00:44:57.863544 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 24 00:44:57.863558 systemd[1]: Starting systemd-fsck-usr.service... Jan 24 00:44:57.863571 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 24 00:44:57.863584 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 24 00:44:57.863633 systemd-journald[193]: Collecting audit messages is disabled. Jan 24 00:44:57.863670 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 24 00:44:57.863684 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 24 00:44:57.863697 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 24 00:44:57.863711 systemd-journald[193]: Journal started Jan 24 00:44:57.863741 systemd-journald[193]: Runtime Journal (/run/log/journal/1b7467f2ce2943a6876fd3c99a7c4bb5) is 6.0M, max 48.3M, 42.2M free. Jan 24 00:44:57.884270 systemd[1]: Started systemd-journald.service - Journal Service. Jan 24 00:44:57.884925 systemd-modules-load[194]: Inserted module 'overlay' Jan 24 00:44:57.894221 systemd[1]: Finished systemd-fsck-usr.service. Jan 24 00:44:57.903050 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 00:44:57.928539 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 24 00:44:57.944357 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 24 00:44:57.952650 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 24 00:44:57.983362 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 24 00:44:57.990885 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 24 00:44:57.993365 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 24 00:44:58.037423 dracut-cmdline[219]: dracut-dracut-053 Jan 24 00:44:58.039252 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 24 00:44:58.048619 dracut-cmdline[219]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f12f29e7047038507ed004e06b4d83ecb1c13bb716c56fe3c96b88d906e8eff2 Jan 24 00:44:58.091830 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 24 00:44:58.131947 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
Jan 24 00:44:58.135213 kernel: Bridge firewalling registered Jan 24 00:44:58.137892 systemd-modules-load[194]: Inserted module 'br_netfilter' Jan 24 00:44:58.150295 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 24 00:44:58.166997 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 24 00:44:58.193474 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 24 00:44:58.225845 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 24 00:44:58.259240 kernel: SCSI subsystem initialized Jan 24 00:44:58.261602 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 24 00:44:58.283295 kernel: Loading iSCSI transport class v2.0-870. Jan 24 00:44:58.313204 kernel: iscsi: registered transport (tcp) Jan 24 00:44:58.348669 systemd-resolved[303]: Positive Trust Anchors: Jan 24 00:44:58.348956 systemd-resolved[303]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 24 00:44:58.350050 systemd-resolved[303]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 24 00:44:58.354093 systemd-resolved[303]: Defaulting to hostname 'linux'. Jan 24 00:44:58.357034 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 24 00:44:58.400962 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 24 00:44:58.440718 kernel: iscsi: registered transport (qla4xxx) Jan 24 00:44:58.440765 kernel: QLogic iSCSI HBA Driver Jan 24 00:44:58.616937 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 24 00:44:58.641558 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 24 00:44:58.723430 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 24 00:44:58.723519 kernel: device-mapper: uevent: version 1.0.3 Jan 24 00:44:58.729658 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 24 00:44:58.851099 kernel: raid6: avx2x4 gen() 12717 MB/s Jan 24 00:44:58.871917 kernel: raid6: avx2x2 gen() 15262 MB/s Jan 24 00:44:58.897058 kernel: raid6: avx2x1 gen() 9849 MB/s Jan 24 00:44:58.897219 kernel: raid6: using algorithm avx2x2 gen() 15262 MB/s Jan 24 00:44:58.920422 kernel: raid6: .... xor() 14958 MB/s, rmw enabled Jan 24 00:44:58.920502 kernel: raid6: using avx2x2 recovery algorithm Jan 24 00:44:58.971388 kernel: xor: automatically using best checksumming function avx Jan 24 00:44:59.523017 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 24 00:44:59.552727 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 24 00:44:59.579465 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 24 00:44:59.624344 systemd-udevd[415]: Using default interface naming scheme 'v255'. Jan 24 00:44:59.640440 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Jan 24 00:44:59.684302 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 24 00:44:59.738618 dracut-pre-trigger[420]: rd.md=0: removing MD RAID activation Jan 24 00:44:59.909238 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 24 00:44:59.950364 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 24 00:45:00.208915 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 24 00:45:00.259554 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 24 00:45:00.307568 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 24 00:45:00.328471 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 24 00:45:00.349576 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 24 00:45:00.354334 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 24 00:45:00.420208 kernel: cryptd: max_cpu_qlen set to 1000 Jan 24 00:45:00.446766 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 24 00:45:00.481922 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jan 24 00:45:00.498393 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 24 00:45:00.501068 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jan 24 00:45:00.543841 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 24 00:45:00.543915 kernel: GPT:9289727 != 19775487 Jan 24 00:45:00.543930 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 24 00:45:00.543945 kernel: GPT:9289727 != 19775487 Jan 24 00:45:00.543957 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 24 00:45:00.545084 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 24 00:45:00.591979 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 24 00:45:00.592241 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 24 00:45:00.618534 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 24 00:45:00.639842 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 24 00:45:00.640190 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 00:45:00.653001 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 24 00:45:00.662516 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 24 00:45:00.808873 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 00:45:00.847706 kernel: libata version 3.00 loaded. Jan 24 00:45:00.840067 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 24 00:45:00.909998 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 24 00:45:00.979748 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (458) Jan 24 00:45:00.993886 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 24 00:45:01.011553 kernel: BTRFS: device fsid b9d3569e-180c-420c-96ec-490d7c970b80 devid 1 transid 33 /dev/vda3 scanned by (udev-worker) (467) Jan 24 00:45:01.031053 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. 
Jan 24 00:45:01.082589 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 24 00:45:01.107234 kernel: AVX2 version of gcm_enc/dec engaged. Jan 24 00:45:01.117365 kernel: ahci 0000:00:1f.2: version 3.0 Jan 24 00:45:01.117734 kernel: AES CTR mode by8 optimization enabled Jan 24 00:45:01.117981 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 24 00:45:01.114334 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 24 00:45:01.187217 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Jan 24 00:45:01.187533 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 24 00:45:01.187758 kernel: scsi host0: ahci Jan 24 00:45:01.190768 kernel: scsi host1: ahci Jan 24 00:45:01.191066 kernel: scsi host2: ahci Jan 24 00:45:01.191385 kernel: scsi host3: ahci Jan 24 00:45:01.131204 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 24 00:45:01.211327 kernel: scsi host4: ahci Jan 24 00:45:01.211605 kernel: scsi host5: ahci Jan 24 00:45:01.216926 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 Jan 24 00:45:01.216961 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 Jan 24 00:45:01.217011 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 Jan 24 00:45:01.239772 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 Jan 24 00:45:01.239973 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 Jan 24 00:45:01.244169 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 Jan 24 00:45:01.254781 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 24 00:45:01.304986 disk-uuid[562]: Primary Header is updated. Jan 24 00:45:01.304986 disk-uuid[562]: Secondary Entries is updated. Jan 24 00:45:01.304986 disk-uuid[562]: Secondary Header is updated. Jan 24 00:45:01.333364 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 24 00:45:01.333411 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 24 00:45:01.485965 kernel: hrtimer: interrupt took 4532991 ns Jan 24 00:45:01.588229 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 24 00:45:01.592219 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 24 00:45:01.597195 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 24 00:45:01.643320 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 24 00:45:01.665409 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 24 00:45:01.694490 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jan 24 00:45:01.694588 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jan 24 00:45:01.694605 kernel: ata3.00: applying bridge limits Jan 24 00:45:01.712851 kernel: ata3.00: configured for UDMA/100 Jan 24 00:45:01.726693 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 24 00:45:02.160556 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jan 24 00:45:02.192235 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 24 00:45:02.213253 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jan 24 00:45:02.351333 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 24 00:45:02.356191 disk-uuid[564]: The operation has completed successfully. Jan 24 00:45:03.024776 systemd[1]: disk-uuid.service: Deactivated successfully. 
Jan 24 00:45:03.025210 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 24 00:45:03.112323 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 24 00:45:03.137217 sh[599]: Success Jan 24 00:45:03.230554 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Jan 24 00:45:03.539334 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 24 00:45:03.557043 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 24 00:45:03.641984 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 24 00:45:03.696275 kernel: BTRFS info (device dm-0): first mount of filesystem b9d3569e-180c-420c-96ec-490d7c970b80 Jan 24 00:45:03.696384 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 24 00:45:03.704604 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 24 00:45:03.704651 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 24 00:45:03.708216 kernel: BTRFS info (device dm-0): using free space tree Jan 24 00:45:03.746976 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 24 00:45:03.806521 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 24 00:45:03.843436 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 24 00:45:03.868273 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 24 00:45:03.931368 kernel: BTRFS info (device vda6): first mount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8 Jan 24 00:45:03.944222 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 24 00:45:03.944313 kernel: BTRFS info (device vda6): using free space tree Jan 24 00:45:04.004264 kernel: BTRFS info (device vda6): auto enabling async discard Jan 24 00:45:04.029802 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 24 00:45:04.043805 kernel: BTRFS info (device vda6): last unmount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8 Jan 24 00:45:04.114527 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 24 00:45:04.134022 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 24 00:45:04.415380 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 24 00:45:04.746674 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Jan 24 00:45:04.786421 ignition[713]: Ignition 2.19.0 Jan 24 00:45:04.786472 ignition[713]: Stage: fetch-offline Jan 24 00:45:04.786551 ignition[713]: no configs at "/usr/lib/ignition/base.d" Jan 24 00:45:04.786570 ignition[713]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 24 00:45:04.786741 ignition[713]: parsed url from cmdline: "" Jan 24 00:45:04.786747 ignition[713]: no config URL provided Jan 24 00:45:04.786756 ignition[713]: reading system config file "/usr/lib/ignition/user.ign" Jan 24 00:45:04.830496 systemd-networkd[786]: lo: Link UP Jan 24 00:45:04.786769 ignition[713]: no config at "/usr/lib/ignition/user.ign" Jan 24 00:45:04.830502 systemd-networkd[786]: lo: Gained carrier Jan 24 00:45:04.786817 ignition[713]: op(1): [started] loading QEMU firmware config module Jan 24 00:45:04.836319 systemd-networkd[786]: Enumeration completed Jan 24 00:45:04.786825 ignition[713]: op(1): executing: "modprobe" "qemu_fw_cfg" Jan 24 00:45:04.836467 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 24 00:45:04.817070 ignition[713]: op(1): [finished] loading QEMU firmware config module Jan 24 00:45:04.838078 unknown[713]: fetched base config from "system" Jan 24 00:45:04.818551 ignition[713]: parsing config with SHA512: 6b1f3e6d7852e0def1dae33c992b7d8ed9ef35210606cb262ce6688eb739a8492a282edc9babf4773f8285bbfa9bcc49360c6d5ba8b8703885b96753cdd7890f Jan 24 00:45:04.838089 unknown[713]: fetched user config from "qemu" Jan 24 00:45:04.838692 ignition[713]: fetch-offline: fetch-offline passed Jan 24 00:45:04.840530 systemd-networkd[786]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 24 00:45:04.838908 ignition[713]: Ignition finished successfully Jan 24 00:45:04.840536 systemd-networkd[786]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 24 00:45:04.846892 systemd-networkd[786]: eth0: Link UP Jan 24 00:45:04.846898 systemd-networkd[786]: eth0: Gained carrier Jan 24 00:45:04.846912 systemd-networkd[786]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 24 00:45:04.847316 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 24 00:45:04.882762 systemd[1]: Reached target network.target - Network. Jan 24 00:45:04.902245 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 24 00:45:04.953761 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 24 00:45:05.001268 systemd-networkd[786]: eth0: DHCPv4 address 10.0.0.146/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 24 00:45:05.096364 ignition[790]: Ignition 2.19.0 Jan 24 00:45:05.096417 ignition[790]: Stage: kargs Jan 24 00:45:05.096760 ignition[790]: no configs at "/usr/lib/ignition/base.d" Jan 24 00:45:05.096780 ignition[790]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 24 00:45:05.103440 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 24 00:45:05.098953 ignition[790]: kargs: kargs passed Jan 24 00:45:05.099018 ignition[790]: Ignition finished successfully Jan 24 00:45:05.137311 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Jan 24 00:45:05.208316 ignition[799]: Ignition 2.19.0 Jan 24 00:45:05.209010 ignition[799]: Stage: disks Jan 24 00:45:05.211298 ignition[799]: no configs at "/usr/lib/ignition/base.d" Jan 24 00:45:05.217771 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 24 00:45:05.211318 ignition[799]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 24 00:45:05.232212 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 24 00:45:05.213950 ignition[799]: disks: disks passed Jan 24 00:45:05.248540 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 24 00:45:05.214028 ignition[799]: Ignition finished successfully Jan 24 00:45:05.292434 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 24 00:45:05.300081 systemd[1]: Reached target sysinit.target - System Initialization. Jan 24 00:45:05.305830 systemd[1]: Reached target basic.target - Basic System. Jan 24 00:45:05.341489 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 24 00:45:05.401096 systemd-fsck[810]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 24 00:45:05.414402 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 24 00:45:05.438355 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 24 00:45:05.850737 systemd-resolved[303]: Detected conflict on linux IN A 10.0.0.146 Jan 24 00:45:05.862567 kernel: EXT4-fs (vda9): mounted filesystem a752e1f1-ddf3-43b9-88e7-8cc533707c34 r/w with ordered data mode. Quota mode: none. Jan 24 00:45:05.850794 systemd-resolved[303]: Hostname conflict, changing published hostname from 'linux' to 'linux5'. Jan 24 00:45:05.854075 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 24 00:45:05.873206 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 24 00:45:05.908418 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 24 00:45:05.933054 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 24 00:45:05.996643 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (818) Jan 24 00:45:05.996682 kernel: BTRFS info (device vda6): first mount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8 Jan 24 00:45:05.996699 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 24 00:45:05.996716 kernel: BTRFS info (device vda6): using free space tree Jan 24 00:45:05.949479 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 24 00:45:05.949555 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 24 00:45:05.949604 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 24 00:45:05.976095 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 24 00:45:06.069185 kernel: BTRFS info (device vda6): auto enabling async discard Jan 24 00:45:06.046786 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 24 00:45:06.064476 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 24 00:45:06.080623 systemd-networkd[786]: eth0: Gained IPv6LL
Jan 24 00:45:06.234966 initrd-setup-root[842]: cut: /sysroot/etc/passwd: No such file or directory
Jan 24 00:45:06.261310 initrd-setup-root[849]: cut: /sysroot/etc/group: No such file or directory
Jan 24 00:45:06.275157 initrd-setup-root[856]: cut: /sysroot/etc/shadow: No such file or directory
Jan 24 00:45:06.293814 initrd-setup-root[863]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 24 00:45:07.627064 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 24 00:45:07.660617 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 24 00:45:07.687771 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 24 00:45:07.708293 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 24 00:45:07.724362 kernel: BTRFS info (device vda6): last unmount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8
Jan 24 00:45:08.208056 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 24 00:45:08.241775 ignition[931]: INFO : Ignition 2.19.0
Jan 24 00:45:08.251019 ignition[931]: INFO : Stage: mount
Jan 24 00:45:08.251019 ignition[931]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 24 00:45:08.251019 ignition[931]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 24 00:45:08.286048 ignition[931]: INFO : mount: mount passed
Jan 24 00:45:08.286048 ignition[931]: INFO : Ignition finished successfully
Jan 24 00:45:08.255952 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 24 00:45:08.295314 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 24 00:45:08.313335 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 24 00:45:08.349282 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (944)
Jan 24 00:45:08.385471 kernel: BTRFS info (device vda6): first mount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8
Jan 24 00:45:08.385576 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 24 00:45:08.385595 kernel: BTRFS info (device vda6): using free space tree
Jan 24 00:45:08.406046 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 24 00:45:08.408772 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 24 00:45:08.606389 ignition[961]: INFO : Ignition 2.19.0
Jan 24 00:45:08.615832 ignition[961]: INFO : Stage: files
Jan 24 00:45:08.620231 ignition[961]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 24 00:45:08.620231 ignition[961]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 24 00:45:08.648010 ignition[961]: DEBUG : files: compiled without relabeling support, skipping
Jan 24 00:45:08.674495 ignition[961]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 24 00:45:08.674495 ignition[961]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 24 00:45:08.739217 ignition[961]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 24 00:45:08.776671 ignition[961]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 24 00:45:08.808778 unknown[961]: wrote ssh authorized keys file for user: core
Jan 24 00:45:08.830082 ignition[961]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 24 00:45:08.854250 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh"
Jan 24 00:45:08.874571 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh"
Jan 24 00:45:08.884588 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 24 00:45:08.897801 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 24 00:45:08.897801 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Jan 24 00:45:08.927533 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Jan 24 00:45:08.927533 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Jan 24 00:45:08.955096 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1
Jan 24 00:45:09.426767 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Jan 24 00:45:13.674459 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Jan 24 00:45:13.724511 ignition[961]: INFO : files: op(7): [started] processing unit "coreos-metadata.service"
Jan 24 00:45:13.724511 ignition[961]: INFO : files: op(7): op(8): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 24 00:45:13.724511 ignition[961]: INFO : files: op(7): op(8): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 24 00:45:13.724511 ignition[961]: INFO : files: op(7): [finished] processing unit "coreos-metadata.service"
Jan 24 00:45:13.724511 ignition[961]: INFO : files: op(9): [started] setting preset to disabled for "coreos-metadata.service"
Jan 24 00:45:13.999218 ignition[961]: INFO : files: op(9): op(a): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jan 24 00:45:14.020380 ignition[961]: INFO : files: op(9): op(a): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jan 24 00:45:14.030363 ignition[961]: INFO : files: op(9): [finished] setting preset to disabled for "coreos-metadata.service"
Jan 24 00:45:14.044200 ignition[961]: INFO : files: createResultFile: createFiles: op(b): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 24 00:45:14.044200 ignition[961]: INFO : files: createResultFile: createFiles: op(b): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 24 00:45:14.044200 ignition[961]: INFO : files: files passed
Jan 24 00:45:14.044200 ignition[961]: INFO : Ignition finished successfully
Jan 24 00:45:14.043376 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 24 00:45:14.113283 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 24 00:45:14.136073 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 24 00:45:14.179265 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 24 00:45:14.179518 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 24 00:45:14.192328 initrd-setup-root-after-ignition[988]: grep: /sysroot/oem/oem-release: No such file or directory
Jan 24 00:45:14.201337 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 24 00:45:14.202306 initrd-setup-root-after-ignition[990]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 24 00:45:14.205068 initrd-setup-root-after-ignition[994]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 24 00:45:14.236283 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 24 00:45:14.254025 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 24 00:45:14.283393 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 24 00:45:14.369532 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 24 00:45:14.376889 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 24 00:45:14.400777 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 24 00:45:14.401430 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 24 00:45:14.416200 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 24 00:45:14.444726 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 24 00:45:14.529792 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 24 00:45:14.569899 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 24 00:45:14.617908 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 24 00:45:14.622235 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 24 00:45:14.623200 systemd[1]: Stopped target timers.target - Timer Units.
Jan 24 00:45:14.623819 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 24 00:45:14.625230 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 24 00:45:14.645469 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 24 00:45:14.684891 systemd[1]: Stopped target basic.target - Basic System.
Jan 24 00:45:14.695001 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 24 00:45:14.712875 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 24 00:45:14.724434 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 24 00:45:14.743573 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 24 00:45:14.759453 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 24 00:45:14.767696 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 24 00:45:14.796672 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 24 00:45:14.818512 systemd[1]: Stopped target swap.target - Swaps.
Jan 24 00:45:14.824272 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 24 00:45:14.825021 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 24 00:45:14.845507 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 24 00:45:14.854415 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 24 00:45:14.880790 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 24 00:45:14.893234 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 24 00:45:14.914702 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 24 00:45:14.914905 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 24 00:45:14.952758 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 24 00:45:14.953071 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 24 00:45:14.974615 systemd[1]: Stopped target paths.target - Path Units.
Jan 24 00:45:14.986738 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 24 00:45:14.991383 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 24 00:45:15.117764 systemd[1]: Stopped target slices.target - Slice Units.
Jan 24 00:45:15.156909 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 24 00:45:15.189776 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 24 00:45:15.197526 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 24 00:45:15.216298 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 24 00:45:15.216504 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 24 00:45:15.231447 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 24 00:45:15.231649 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 24 00:45:15.231869 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 24 00:45:15.232747 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 24 00:45:15.301620 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 24 00:45:15.328321 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 24 00:45:15.337678 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 24 00:45:15.341422 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 24 00:45:15.392677 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 24 00:45:15.392883 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 24 00:45:15.430853 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 24 00:45:15.433187 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 24 00:45:15.499790 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 24 00:45:16.113800 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 24 00:45:16.114292 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 24 00:45:16.148758 ignition[1015]: INFO : Ignition 2.19.0
Jan 24 00:45:16.148758 ignition[1015]: INFO : Stage: umount
Jan 24 00:45:16.190687 ignition[1015]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 24 00:45:16.190687 ignition[1015]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 24 00:45:16.190687 ignition[1015]: INFO : umount: umount passed
Jan 24 00:45:16.190687 ignition[1015]: INFO : Ignition finished successfully
Jan 24 00:45:16.201263 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 24 00:45:16.201510 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 24 00:45:16.207391 systemd[1]: Stopped target network.target - Network.
Jan 24 00:45:16.252591 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 24 00:45:16.252725 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 24 00:45:16.289096 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 24 00:45:16.289260 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 24 00:45:16.294010 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 24 00:45:16.294233 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 24 00:45:16.310082 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 24 00:45:16.310273 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 24 00:45:16.326917 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 24 00:45:16.327057 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 24 00:45:16.340342 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 24 00:45:16.346613 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 24 00:45:16.398642 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 24 00:45:16.404931 systemd-networkd[786]: eth0: DHCPv6 lease lost
Jan 24 00:45:16.406314 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 24 00:45:16.439389 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 24 00:45:16.439734 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 24 00:45:16.482648 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 24 00:45:16.482726 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 24 00:45:16.538652 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 24 00:45:16.538800 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 24 00:45:16.538868 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 24 00:45:16.555516 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 24 00:45:16.555623 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 24 00:45:16.577379 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 24 00:45:16.577477 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 24 00:45:16.588738 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 24 00:45:16.588892 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 24 00:45:16.593240 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 24 00:45:16.634410 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 24 00:45:16.634805 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 24 00:45:16.694578 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 24 00:45:16.696300 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 24 00:45:16.701397 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 24 00:45:16.701461 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 24 00:45:16.714764 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 24 00:45:16.714856 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 24 00:45:16.723521 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 24 00:45:16.723596 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 24 00:45:16.736431 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 24 00:45:16.736524 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 24 00:45:16.799473 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 24 00:45:16.813290 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 24 00:45:16.813389 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 24 00:45:16.820045 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 24 00:45:16.820395 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 24 00:45:16.830071 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 24 00:45:16.830266 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 24 00:45:16.841050 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 24 00:45:16.841781 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 24 00:45:16.855544 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 24 00:45:16.855756 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 24 00:45:16.872448 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 24 00:45:16.872652 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 24 00:45:16.890537 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 24 00:45:16.932197 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 24 00:45:16.966473 systemd[1]: Switching root.
Jan 24 00:45:17.055914 systemd-journald[193]: Journal stopped
Jan 24 00:45:19.608469 systemd-journald[193]: Received SIGTERM from PID 1 (systemd).
Jan 24 00:45:19.608578 kernel: SELinux: policy capability network_peer_controls=1
Jan 24 00:45:19.608610 kernel: SELinux: policy capability open_perms=1
Jan 24 00:45:19.608628 kernel: SELinux: policy capability extended_socket_class=1
Jan 24 00:45:19.608645 kernel: SELinux: policy capability always_check_network=0
Jan 24 00:45:19.608739 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 24 00:45:19.608761 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 24 00:45:19.608779 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 24 00:45:19.608796 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 24 00:45:19.608813 kernel: audit: type=1403 audit(1769215517.441:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 24 00:45:19.608838 systemd[1]: Successfully loaded SELinux policy in 110.945ms.
Jan 24 00:45:19.608866 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 24.228ms.
Jan 24 00:45:19.608888 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 24 00:45:19.608907 systemd[1]: Detected virtualization kvm.
Jan 24 00:45:19.608924 systemd[1]: Detected architecture x86-64.
Jan 24 00:45:19.608949 systemd[1]: Detected first boot.
Jan 24 00:45:19.608967 systemd[1]: Initializing machine ID from VM UUID.
Jan 24 00:45:19.609040 zram_generator::config[1074]: No configuration found.
Jan 24 00:45:19.609065 systemd[1]: Populated /etc with preset unit settings.
Jan 24 00:45:19.609085 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 24 00:45:19.609183 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 24 00:45:19.609202 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 24 00:45:19.609214 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 24 00:45:19.609261 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 24 00:45:19.609272 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 24 00:45:19.609389 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 24 00:45:19.609405 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 24 00:45:19.609416 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 24 00:45:19.609427 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 24 00:45:19.609437 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 24 00:45:19.609455 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 24 00:45:19.609471 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 24 00:45:19.609498 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 24 00:45:19.609510 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 24 00:45:19.609521 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 24 00:45:19.609532 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 24 00:45:19.609543 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 24 00:45:19.609554 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 24 00:45:19.609573 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 24 00:45:19.609628 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 24 00:45:19.609651 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 24 00:45:19.609765 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 24 00:45:19.609829 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 24 00:45:19.609850 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 24 00:45:19.609861 systemd[1]: Reached target slices.target - Slice Units.
Jan 24 00:45:19.609872 systemd[1]: Reached target swap.target - Swaps.
Jan 24 00:45:19.609883 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 24 00:45:19.609893 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 24 00:45:19.609904 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 24 00:45:19.609917 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 24 00:45:19.609928 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 24 00:45:19.609939 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 24 00:45:19.609950 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 24 00:45:19.609962 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 24 00:45:19.609973 systemd[1]: Mounting media.mount - External Media Directory...
Jan 24 00:45:19.609983 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 24 00:45:19.610040 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 24 00:45:19.610051 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 24 00:45:19.610098 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 24 00:45:19.610201 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 24 00:45:19.610225 systemd[1]: Reached target machines.target - Containers.
Jan 24 00:45:19.610245 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 24 00:45:19.610326 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 24 00:45:19.610375 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 24 00:45:19.610387 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 24 00:45:19.610397 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 24 00:45:19.610415 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 24 00:45:19.610434 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 24 00:45:19.610453 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 24 00:45:19.610471 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 24 00:45:19.610488 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 24 00:45:19.610508 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 24 00:45:19.610526 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 24 00:45:19.610545 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 24 00:45:19.610569 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 24 00:45:19.610586 kernel: ACPI: bus type drm_connector registered
Jan 24 00:45:19.610603 kernel: fuse: init (API version 7.39)
Jan 24 00:45:19.610614 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 24 00:45:19.610625 kernel: loop: module loaded
Jan 24 00:45:19.610635 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 24 00:45:19.610646 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 24 00:45:19.610683 systemd-journald[1158]: Collecting audit messages is disabled.
Jan 24 00:45:19.610721 systemd-journald[1158]: Journal started
Jan 24 00:45:19.610804 systemd-journald[1158]: Runtime Journal (/run/log/journal/1b7467f2ce2943a6876fd3c99a7c4bb5) is 6.0M, max 48.3M, 42.2M free.
Jan 24 00:45:18.774346 systemd[1]: Queued start job for default target multi-user.target.
Jan 24 00:45:18.814029 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 24 00:45:18.815581 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 24 00:45:18.817546 systemd[1]: systemd-journald.service: Consumed 2.230s CPU time.
Jan 24 00:45:19.623279 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 24 00:45:19.633595 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 24 00:45:19.640842 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 24 00:45:19.640896 systemd[1]: Stopped verity-setup.service.
Jan 24 00:45:19.653297 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 24 00:45:19.664278 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 24 00:45:19.665881 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 24 00:45:19.670853 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 24 00:45:19.675854 systemd[1]: Mounted media.mount - External Media Directory.
Jan 24 00:45:19.680854 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 24 00:45:19.686234 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 24 00:45:19.691501 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 24 00:45:19.696861 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 24 00:45:19.702813 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 24 00:45:19.709806 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 24 00:45:19.710206 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 24 00:45:19.716355 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 24 00:45:19.717862 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 24 00:45:19.725338 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 24 00:45:19.725541 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 24 00:45:19.731295 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 24 00:45:19.731619 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 24 00:45:19.739735 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 24 00:45:19.740191 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 24 00:45:19.746936 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 24 00:45:19.747400 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 24 00:45:19.754484 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 24 00:45:19.762740 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 24 00:45:19.770367 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 24 00:45:19.779285 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 24 00:45:19.808640 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 24 00:45:19.826599 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 24 00:45:19.837229 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 24 00:45:19.849274 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 24 00:45:19.849387 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 24 00:45:19.861600 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 24 00:45:19.889421 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 24 00:45:19.899665 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 24 00:45:19.907585 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 24 00:45:19.913485 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 24 00:45:19.925680 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 24 00:45:19.933476 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 24 00:45:19.935352 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 24 00:45:19.940252 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 24 00:45:19.943568 systemd-journald[1158]: Time spent on flushing to /var/log/journal/1b7467f2ce2943a6876fd3c99a7c4bb5 is 32.200ms for 968 entries.
Jan 24 00:45:19.943568 systemd-journald[1158]: System Journal (/var/log/journal/1b7467f2ce2943a6876fd3c99a7c4bb5) is 8.0M, max 195.6M, 187.6M free.
Jan 24 00:45:20.003530 systemd-journald[1158]: Received client request to flush runtime journal.
Jan 24 00:45:19.944672 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 24 00:45:19.959848 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 24 00:45:19.981075 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 24 00:45:19.995863 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 24 00:45:20.005975 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 24 00:45:20.010595 kernel: loop0: detected capacity change from 0 to 142488
Jan 24 00:45:20.016554 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 24 00:45:20.024447 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 24 00:45:20.032921 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 24 00:45:20.045307 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 24 00:45:20.052777 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 24 00:45:20.072909 udevadm[1192]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jan 24 00:45:20.074242 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 24 00:45:20.087846 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 24 00:45:20.101380 systemd-tmpfiles[1191]: ACLs are not supported, ignoring.
Jan 24 00:45:20.101467 systemd-tmpfiles[1191]: ACLs are not supported, ignoring.
Jan 24 00:45:20.105424 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 24 00:45:20.118893 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 24 00:45:20.140544 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 24 00:45:20.153225 kernel: loop1: detected capacity change from 0 to 140768
Jan 24 00:45:20.179619 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 24 00:45:20.182735 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 24 00:45:20.209497 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 24 00:45:20.230689 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 24 00:45:20.260043 kernel: loop2: detected capacity change from 0 to 219144
Jan 24 00:45:20.266281 systemd-tmpfiles[1211]: ACLs are not supported, ignoring.
Jan 24 00:45:20.266349 systemd-tmpfiles[1211]: ACLs are not supported, ignoring.
Jan 24 00:45:20.275401 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 24 00:45:20.350349 kernel: loop3: detected capacity change from 0 to 142488
Jan 24 00:45:20.413686 kernel: loop4: detected capacity change from 0 to 140768
Jan 24 00:45:20.447205 kernel: loop5: detected capacity change from 0 to 219144
Jan 24 00:45:20.493721 (sd-merge)[1215]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jan 24 00:45:20.496476 (sd-merge)[1215]: Merged extensions into '/usr'.
Jan 24 00:45:20.504364 systemd[1]: Reloading requested from client PID 1189 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 24 00:45:20.505401 systemd[1]: Reloading...
Jan 24 00:45:20.588273 zram_generator::config[1241]: No configuration found.
Jan 24 00:45:20.777643 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 24 00:45:20.836645 systemd[1]: Reloading finished in 330 ms.
Jan 24 00:45:20.841100 ldconfig[1184]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 24 00:45:20.895633 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 24 00:45:20.901046 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 24 00:45:20.907210 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 24 00:45:20.936701 systemd[1]: Starting ensure-sysext.service...
Jan 24 00:45:20.942250 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 24 00:45:20.950822 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 24 00:45:20.967052 systemd[1]: Reloading requested from client PID 1279 ('systemctl') (unit ensure-sysext.service)...
Jan 24 00:45:20.967185 systemd[1]: Reloading...
Jan 24 00:45:20.979325 systemd-tmpfiles[1280]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 24 00:45:20.979898 systemd-tmpfiles[1280]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 24 00:45:20.981950 systemd-tmpfiles[1280]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 24 00:45:20.982602 systemd-tmpfiles[1280]: ACLs are not supported, ignoring.
Jan 24 00:45:20.982776 systemd-tmpfiles[1280]: ACLs are not supported, ignoring.
Jan 24 00:45:20.989613 systemd-tmpfiles[1280]: Detected autofs mount point /boot during canonicalization of boot.
Jan 24 00:45:20.989731 systemd-tmpfiles[1280]: Skipping /boot
Jan 24 00:45:21.003513 systemd-udevd[1281]: Using default interface naming scheme 'v255'.
Jan 24 00:45:21.010938 systemd-tmpfiles[1280]: Detected autofs mount point /boot during canonicalization of boot.
Jan 24 00:45:21.013313 systemd-tmpfiles[1280]: Skipping /boot
Jan 24 00:45:21.056176 zram_generator::config[1308]: No configuration found.
Jan 24 00:45:21.149175 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1345)
Jan 24 00:45:21.226260 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Jan 24 00:45:21.241643 kernel: ACPI: button: Power Button [PWRF]
Jan 24 00:45:21.244373 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 24 00:45:21.332505 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Jan 24 00:45:21.332922 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Jan 24 00:45:21.333320 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Jan 24 00:45:21.333589 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jan 24 00:45:21.360220 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Jan 24 00:45:21.537197 kernel: mousedev: PS/2 mouse device common for all mice
Jan 24 00:45:21.539341 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jan 24 00:45:21.539833 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 24 00:45:21.548213 systemd[1]: Reloading finished in 580 ms.
Jan 24 00:45:21.572700 kernel: kvm_amd: TSC scaling supported
Jan 24 00:45:21.572804 kernel: kvm_amd: Nested Virtualization enabled
Jan 24 00:45:21.573081 kernel: kvm_amd: Nested Paging enabled
Jan 24 00:45:21.579192 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Jan 24 00:45:21.579259 kernel: kvm_amd: PMU virtualization is disabled
Jan 24 00:45:21.696778 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 24 00:45:21.746695 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 24 00:45:21.767264 kernel: EDAC MC: Ver: 3.0.0
Jan 24 00:45:21.804003 systemd[1]: Finished ensure-sysext.service.
Jan 24 00:45:21.809505 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 24 00:45:21.835941 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 24 00:45:21.856700 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 24 00:45:21.867815 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 24 00:45:21.875594 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 24 00:45:21.880298 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 24 00:45:21.895513 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 24 00:45:21.902407 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 24 00:45:21.918549 lvm[1383]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 24 00:45:21.921803 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 24 00:45:21.931793 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 24 00:45:21.939259 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 24 00:45:21.941468 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 24 00:45:21.953377 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 24 00:45:21.966349 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 24 00:45:21.982461 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 24 00:45:21.994866 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 24 00:45:21.999182 augenrules[1404]: No rules
Jan 24 00:45:22.007840 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 24 00:45:22.013619 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 24 00:45:22.014192 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 24 00:45:22.019323 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 24 00:45:22.019844 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 24 00:45:22.021823 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 24 00:45:22.022510 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 24 00:45:22.028495 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 24 00:45:22.028804 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 24 00:45:22.032485 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 24 00:45:22.032820 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 24 00:45:22.048816 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 24 00:45:22.049322 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 24 00:45:22.057778 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 24 00:45:22.074898 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 24 00:45:22.083781 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 24 00:45:22.103227 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 24 00:45:22.114549 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 24 00:45:22.130426 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 24 00:45:22.130628 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 24 00:45:22.130706 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 24 00:45:22.132675 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 24 00:45:22.142307 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 24 00:45:22.147198 lvm[1426]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 24 00:45:22.148725 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 24 00:45:22.167341 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 24 00:45:22.188918 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 24 00:45:22.208787 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 24 00:45:22.226804 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 24 00:45:22.339295 systemd-networkd[1402]: lo: Link UP
Jan 24 00:45:22.339308 systemd-networkd[1402]: lo: Gained carrier
Jan 24 00:45:22.341810 systemd-networkd[1402]: Enumeration completed
Jan 24 00:45:22.341990 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 24 00:45:22.344558 systemd-networkd[1402]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 24 00:45:22.344602 systemd-networkd[1402]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 24 00:45:22.345900 systemd-resolved[1405]: Positive Trust Anchors:
Jan 24 00:45:22.345913 systemd-resolved[1405]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 24 00:45:22.345957 systemd-resolved[1405]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 24 00:45:22.347062 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 24 00:45:22.348556 systemd-networkd[1402]: eth0: Link UP
Jan 24 00:45:22.348562 systemd-networkd[1402]: eth0: Gained carrier
Jan 24 00:45:22.348576 systemd-networkd[1402]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 24 00:45:22.352510 systemd[1]: Reached target time-set.target - System Time Set.
Jan 24 00:45:22.356059 systemd-resolved[1405]: Defaulting to hostname 'linux'.
Jan 24 00:45:22.366586 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 24 00:45:22.374712 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 24 00:45:22.381606 systemd[1]: Reached target network.target - Network.
Jan 24 00:45:22.386867 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 24 00:45:22.386965 systemd-networkd[1402]: eth0: DHCPv4 address 10.0.0.146/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 24 00:45:22.388492 systemd-timesyncd[1408]: Network configuration changed, trying to establish connection.
Jan 24 00:45:22.393084 systemd-timesyncd[1408]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jan 24 00:45:22.393238 systemd-timesyncd[1408]: Initial clock synchronization to Sat 2026-01-24 00:45:22.086283 UTC.
Jan 24 00:45:22.393839 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 24 00:45:22.400084 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 24 00:45:22.405967 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 24 00:45:22.411901 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 24 00:45:22.419289 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 24 00:45:22.425522 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 24 00:45:22.431233 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 24 00:45:22.431278 systemd[1]: Reached target paths.target - Path Units.
Jan 24 00:45:22.435869 systemd[1]: Reached target timers.target - Timer Units.
Jan 24 00:45:22.443826 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 24 00:45:22.451733 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 24 00:45:22.468515 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 24 00:45:22.475683 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 24 00:45:22.482827 systemd[1]: Reached target sockets.target - Socket Units.
Jan 24 00:45:22.489336 systemd[1]: Reached target basic.target - Basic System.
Jan 24 00:45:22.494897 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 24 00:45:22.494967 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 24 00:45:22.509649 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 24 00:45:22.517077 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 24 00:45:22.524870 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 24 00:45:22.533474 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 24 00:45:22.539512 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 24 00:45:22.542381 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 24 00:45:22.563383 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 24 00:45:22.566932 jq[1448]: false
Jan 24 00:45:22.571896 dbus-daemon[1447]: [system] SELinux support is enabled
Jan 24 00:45:22.575363 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 24 00:45:22.584395 extend-filesystems[1449]: Found loop3
Jan 24 00:45:22.589507 extend-filesystems[1449]: Found loop4
Jan 24 00:45:22.589507 extend-filesystems[1449]: Found loop5
Jan 24 00:45:22.589507 extend-filesystems[1449]: Found sr0
Jan 24 00:45:22.589507 extend-filesystems[1449]: Found vda
Jan 24 00:45:22.589507 extend-filesystems[1449]: Found vda1
Jan 24 00:45:22.589507 extend-filesystems[1449]: Found vda2
Jan 24 00:45:22.589507 extend-filesystems[1449]: Found vda3
Jan 24 00:45:22.589507 extend-filesystems[1449]: Found usr
Jan 24 00:45:22.589507 extend-filesystems[1449]: Found vda4
Jan 24 00:45:22.589507 extend-filesystems[1449]: Found vda6
Jan 24 00:45:22.589507 extend-filesystems[1449]: Found vda7
Jan 24 00:45:22.589507 extend-filesystems[1449]: Found vda9
Jan 24 00:45:22.589507 extend-filesystems[1449]: Checking size of /dev/vda9
Jan 24 00:45:22.714245 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Jan 24 00:45:22.714293 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1351)
Jan 24 00:45:22.587230 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 24 00:45:22.714486 extend-filesystems[1449]: Resized partition /dev/vda9
Jan 24 00:45:22.723905 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Jan 24 00:45:22.594857 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 24 00:45:22.744310 extend-filesystems[1465]: resize2fs 1.47.1 (20-May-2024)
Jan 24 00:45:22.595723 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 24 00:45:22.751241 extend-filesystems[1465]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jan 24 00:45:22.751241 extend-filesystems[1465]: old_desc_blocks = 1, new_desc_blocks = 1
Jan 24 00:45:22.751241 extend-filesystems[1465]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Jan 24 00:45:22.626348 systemd[1]: Starting update-engine.service - Update Engine...
Jan 24 00:45:22.767812 update_engine[1462]: I20260124 00:45:22.680394 1462 main.cc:92] Flatcar Update Engine starting
Jan 24 00:45:22.767812 update_engine[1462]: I20260124 00:45:22.690416 1462 update_check_scheduler.cc:74] Next update check in 4m33s
Jan 24 00:45:22.768429 extend-filesystems[1449]: Resized filesystem in /dev/vda9
Jan 24 00:45:22.785977 sshd_keygen[1470]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 24 00:45:22.658573 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 24 00:45:22.786534 jq[1468]: true
Jan 24 00:45:22.668542 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 24 00:45:22.697191 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 24 00:45:22.697661 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 24 00:45:22.714803 systemd[1]: motdgen.service: Deactivated successfully.
Jan 24 00:45:22.715165 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 24 00:45:22.735496 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 24 00:45:22.735873 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 24 00:45:22.762604 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 24 00:45:22.762877 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 24 00:45:22.766605 systemd-logind[1460]: Watching system buttons on /dev/input/event1 (Power Button)
Jan 24 00:45:22.766637 systemd-logind[1460]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jan 24 00:45:22.767484 systemd-logind[1460]: New seat seat0.
Jan 24 00:45:22.774775 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 24 00:45:22.789817 (ntainerd)[1480]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 24 00:45:22.794980 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 24 00:45:22.795427 jq[1473]: true
Jan 24 00:45:22.826074 dbus-daemon[1447]: [system] Successfully activated service 'org.freedesktop.systemd1'
Jan 24 00:45:22.837005 systemd[1]: Started update-engine.service - Update Engine.
Jan 24 00:45:22.857627 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 24 00:45:22.863618 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 24 00:45:22.863835 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 24 00:45:22.873302 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 24 00:45:22.873715 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 24 00:45:22.881597 bash[1506]: Updated "/home/core/.ssh/authorized_keys"
Jan 24 00:45:22.889740 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 24 00:45:22.901635 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 24 00:45:22.911624 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jan 24 00:45:22.912564 systemd[1]: issuegen.service: Deactivated successfully. Jan 24 00:45:22.913328 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 24 00:45:22.933593 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 24 00:45:22.942950 locksmithd[1507]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 24 00:45:22.956431 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 24 00:45:22.977963 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 24 00:45:22.989254 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 24 00:45:22.995302 systemd[1]: Reached target getty.target - Login Prompts. Jan 24 00:45:23.077213 containerd[1480]: time="2026-01-24T00:45:23.076670457Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 24 00:45:23.106100 containerd[1480]: time="2026-01-24T00:45:23.104610966Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 24 00:45:23.109631 containerd[1480]: time="2026-01-24T00:45:23.109527577Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.119-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 24 00:45:23.109631 containerd[1480]: time="2026-01-24T00:45:23.109617710Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 24 00:45:23.109781 containerd[1480]: time="2026-01-24T00:45:23.109642375Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 24 00:45:23.110736 containerd[1480]: time="2026-01-24T00:45:23.110605874Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 24 00:45:23.110771 containerd[1480]: time="2026-01-24T00:45:23.110733418Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 24 00:45:23.110990 containerd[1480]: time="2026-01-24T00:45:23.110918984Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 24 00:45:23.111551 containerd[1480]: time="2026-01-24T00:45:23.110986688Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 24 00:45:23.111939 containerd[1480]: time="2026-01-24T00:45:23.111861081Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 24 00:45:23.111939 containerd[1480]: time="2026-01-24T00:45:23.111928553Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 24 00:45:23.112002 containerd[1480]: time="2026-01-24T00:45:23.111953613Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 24 00:45:23.112002 containerd[1480]: time="2026-01-24T00:45:23.111971669Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 24 00:45:23.112254 containerd[1480]: time="2026-01-24T00:45:23.112192777Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 24 00:45:23.112695 containerd[1480]: time="2026-01-24T00:45:23.112634135Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 24 00:45:23.112962 containerd[1480]: time="2026-01-24T00:45:23.112898475Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 24 00:45:23.112987 containerd[1480]: time="2026-01-24T00:45:23.112963510Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 24 00:45:23.113288 containerd[1480]: time="2026-01-24T00:45:23.113233014Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 24 00:45:23.113407 containerd[1480]: time="2026-01-24T00:45:23.113350962Z" level=info msg="metadata content store policy set" policy=shared Jan 24 00:45:23.124543 containerd[1480]: time="2026-01-24T00:45:23.124319670Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 24 00:45:23.124543 containerd[1480]: time="2026-01-24T00:45:23.124522453Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 24 00:45:23.124543 containerd[1480]: time="2026-01-24T00:45:23.124555249Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 24 00:45:23.124702 containerd[1480]: time="2026-01-24T00:45:23.124580261Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 24 00:45:23.124702 containerd[1480]: time="2026-01-24T00:45:23.124603201Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 24 00:45:23.125190 containerd[1480]: time="2026-01-24T00:45:23.124862501Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 24 00:45:23.125596 containerd[1480]: time="2026-01-24T00:45:23.125392193Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 24 00:45:23.126494 containerd[1480]: time="2026-01-24T00:45:23.126363926Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 24 00:45:23.126494 containerd[1480]: time="2026-01-24T00:45:23.126439617Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 24 00:45:23.126494 containerd[1480]: time="2026-01-24T00:45:23.126462643Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 24 00:45:23.126494 containerd[1480]: time="2026-01-24T00:45:23.126484486Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." 
type=io.containerd.service.v1 Jan 24 00:45:23.126583 containerd[1480]: time="2026-01-24T00:45:23.126503909Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 24 00:45:23.126583 containerd[1480]: time="2026-01-24T00:45:23.126522004Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 24 00:45:23.126583 containerd[1480]: time="2026-01-24T00:45:23.126541841Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 24 00:45:23.126583 containerd[1480]: time="2026-01-24T00:45:23.126565051Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 24 00:45:23.126645 containerd[1480]: time="2026-01-24T00:45:23.126583376Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 24 00:45:23.126694 containerd[1480]: time="2026-01-24T00:45:23.126672508Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 24 00:45:23.126719 containerd[1480]: time="2026-01-24T00:45:23.126700680Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 24 00:45:23.126811 containerd[1480]: time="2026-01-24T00:45:23.126730943Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 24 00:45:23.126811 containerd[1480]: time="2026-01-24T00:45:23.126750367Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 24 00:45:23.126811 containerd[1480]: time="2026-01-24T00:45:23.126768335Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 24 00:45:23.126811 containerd[1480]: time="2026-01-24T00:45:23.126784648Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 24 00:45:23.126811 containerd[1480]: time="2026-01-24T00:45:23.126802644Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 24 00:45:23.126901 containerd[1480]: time="2026-01-24T00:45:23.126819429Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 24 00:45:23.126901 containerd[1480]: time="2026-01-24T00:45:23.126837233Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 24 00:45:23.126901 containerd[1480]: time="2026-01-24T00:45:23.126856330Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 24 00:45:23.126901 containerd[1480]: time="2026-01-24T00:45:23.126873133Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 24 00:45:23.126962 containerd[1480]: time="2026-01-24T00:45:23.126917953Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 24 00:45:23.126962 containerd[1480]: time="2026-01-24T00:45:23.126935807Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 24 00:45:23.126962 containerd[1480]: time="2026-01-24T00:45:23.126953014Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." 
type=io.containerd.grpc.v1 Jan 24 00:45:23.127060 containerd[1480]: time="2026-01-24T00:45:23.127021239Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 24 00:45:23.127060 containerd[1480]: time="2026-01-24T00:45:23.127045392Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 24 00:45:23.127434 containerd[1480]: time="2026-01-24T00:45:23.127178043Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 24 00:45:23.127434 containerd[1480]: time="2026-01-24T00:45:23.127239562Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 24 00:45:23.127434 containerd[1480]: time="2026-01-24T00:45:23.127261326Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 24 00:45:23.127434 containerd[1480]: time="2026-01-24T00:45:23.127372098Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 24 00:45:23.127434 containerd[1480]: time="2026-01-24T00:45:23.127412082Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 24 00:45:23.127434 containerd[1480]: time="2026-01-24T00:45:23.127429164Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 24 00:45:23.127604 containerd[1480]: time="2026-01-24T00:45:23.127447519Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 24 00:45:23.127604 containerd[1480]: time="2026-01-24T00:45:23.127460825Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 24 00:45:23.127604 containerd[1480]: time="2026-01-24T00:45:23.127485884Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 24 00:45:23.127604 containerd[1480]: time="2026-01-24T00:45:23.127509307Z" level=info msg="NRI interface is disabled by configuration." Jan 24 00:45:23.127604 containerd[1480]: time="2026-01-24T00:45:23.127524115Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 24 00:45:23.128212 containerd[1480]: time="2026-01-24T00:45:23.127972257Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 24 00:45:23.128460 containerd[1480]: time="2026-01-24T00:45:23.128229919Z" level=info msg="Connect containerd service" Jan 24 00:45:23.128460 containerd[1480]: time="2026-01-24T00:45:23.128283787Z" level=info msg="using legacy CRI server" Jan 24 00:45:23.128460 containerd[1480]: time="2026-01-24T00:45:23.128296293Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 24 00:45:23.128460 containerd[1480]: time="2026-01-24T00:45:23.128423780Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 24 00:45:23.132909 containerd[1480]: time="2026-01-24T00:45:23.132016406Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 24 00:45:23.132909 
containerd[1480]: time="2026-01-24T00:45:23.132818302Z" level=info msg="Start subscribing containerd event" Jan 24 00:45:23.133061 containerd[1480]: time="2026-01-24T00:45:23.133008204Z" level=info msg="Start recovering state" Jan 24 00:45:23.133270 containerd[1480]: time="2026-01-24T00:45:23.133208492Z" level=info msg="Start event monitor" Jan 24 00:45:23.133270 containerd[1480]: time="2026-01-24T00:45:23.133246240Z" level=info msg="Start snapshots syncer" Jan 24 00:45:23.133270 containerd[1480]: time="2026-01-24T00:45:23.133260933Z" level=info msg="Start cni network conf syncer for default" Jan 24 00:45:23.133270 containerd[1480]: time="2026-01-24T00:45:23.133271791Z" level=info msg="Start streaming server" Jan 24 00:45:23.134391 containerd[1480]: time="2026-01-24T00:45:23.133819244Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 24 00:45:23.134391 containerd[1480]: time="2026-01-24T00:45:23.133948562Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 24 00:45:23.134391 containerd[1480]: time="2026-01-24T00:45:23.134064526Z" level=info msg="containerd successfully booted in 0.058803s" Jan 24 00:45:23.134416 systemd[1]: Started containerd.service - containerd container runtime. Jan 24 00:45:24.190988 systemd-networkd[1402]: eth0: Gained IPv6LL Jan 24 00:45:24.199568 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 24 00:45:24.209030 systemd[1]: Reached target network-online.target - Network is Online. Jan 24 00:45:24.236667 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 24 00:45:24.250523 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:45:24.270020 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 24 00:45:24.324718 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 24 00:45:24.336735 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 24 00:45:24.337278 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 24 00:45:24.345499 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 24 00:45:25.255052 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 24 00:45:25.281595 systemd[1]: Started sshd@0-10.0.0.146:22-10.0.0.1:40114.service - OpenSSH per-connection server daemon (10.0.0.1:40114). Jan 24 00:45:25.369296 sshd[1548]: Accepted publickey for core from 10.0.0.1 port 40114 ssh2: RSA SHA256:Zrjt90rRKdcSRj4kE3qB6mS1njloXkkpydTYwf9ROsM Jan 24 00:45:25.372669 sshd[1548]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:45:25.392540 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 24 00:45:25.411580 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 24 00:45:25.422815 systemd-logind[1460]: New session 1 of user core. Jan 24 00:45:25.445033 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 24 00:45:25.464390 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 24 00:45:25.490712 (systemd)[1552]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 24 00:45:25.690274 systemd[1552]: Queued start job for default target default.target. Jan 24 00:45:25.703295 systemd[1552]: Created slice app.slice - User Application Slice. 
Jan 24 00:45:25.703378 systemd[1552]: Reached target paths.target - Paths. Jan 24 00:45:25.703401 systemd[1552]: Reached target timers.target - Timers. Jan 24 00:45:25.706562 systemd[1552]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 24 00:45:25.740689 systemd[1552]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 24 00:45:25.740970 systemd[1552]: Reached target sockets.target - Sockets. Jan 24 00:45:25.740999 systemd[1552]: Reached target basic.target - Basic System. Jan 24 00:45:25.741606 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 24 00:45:25.747423 systemd[1552]: Reached target default.target - Main User Target. Jan 24 00:45:25.747556 systemd[1552]: Startup finished in 239ms. Jan 24 00:45:25.756702 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 24 00:45:25.861332 systemd[1]: Started sshd@1-10.0.0.146:22-10.0.0.1:40120.service - OpenSSH per-connection server daemon (10.0.0.1:40120). Jan 24 00:45:25.955547 sshd[1563]: Accepted publickey for core from 10.0.0.1 port 40120 ssh2: RSA SHA256:Zrjt90rRKdcSRj4kE3qB6mS1njloXkkpydTYwf9ROsM Jan 24 00:45:25.960613 sshd[1563]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:45:25.974010 systemd-logind[1460]: New session 2 of user core. Jan 24 00:45:25.992559 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 24 00:45:26.190477 sshd[1563]: pam_unix(sshd:session): session closed for user core Jan 24 00:45:26.227264 systemd[1]: sshd@1-10.0.0.146:22-10.0.0.1:40120.service: Deactivated successfully. Jan 24 00:45:26.237630 systemd[1]: session-2.scope: Deactivated successfully. Jan 24 00:45:26.252768 systemd-logind[1460]: Session 2 logged out. Waiting for processes to exit. Jan 24 00:45:26.318733 systemd[1]: Started sshd@2-10.0.0.146:22-10.0.0.1:40134.service - OpenSSH per-connection server daemon (10.0.0.1:40134). Jan 24 00:45:26.484466 systemd-logind[1460]: Removed session 2. Jan 24 00:45:26.587685 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:45:26.610658 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 24 00:45:26.626399 systemd[1]: Startup finished in 2.468s (kernel) + 20.395s (initrd) + 9.290s (userspace) = 32.155s. Jan 24 00:45:26.672679 (kubelet)[1576]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 24 00:45:26.689069 sshd[1574]: Accepted publickey for core from 10.0.0.1 port 40134 ssh2: RSA SHA256:Zrjt90rRKdcSRj4kE3qB6mS1njloXkkpydTYwf9ROsM Jan 24 00:45:26.692841 sshd[1574]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:45:26.701361 systemd-logind[1460]: New session 3 of user core. Jan 24 00:45:26.710588 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 24 00:45:26.861793 sshd[1574]: pam_unix(sshd:session): session closed for user core Jan 24 00:45:26.894601 systemd[1]: sshd@2-10.0.0.146:22-10.0.0.1:40134.service: Deactivated successfully. Jan 24 00:45:26.897933 systemd[1]: session-3.scope: Deactivated successfully. Jan 24 00:45:26.901868 systemd-logind[1460]: Session 3 logged out. Waiting for processes to exit. Jan 24 00:45:26.915961 systemd-logind[1460]: Removed session 3. 
Jan 24 00:45:28.245052 kubelet[1576]: E0124 00:45:28.244516 1576 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 24 00:45:28.254325 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 24 00:45:28.255901 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 24 00:45:28.258755 systemd[1]: kubelet.service: Consumed 2.454s CPU time. Jan 24 00:45:36.727604 systemd[1]: Started sshd@3-10.0.0.146:22-10.0.0.1:40018.service - OpenSSH per-connection server daemon (10.0.0.1:40018). Jan 24 00:45:36.779552 sshd[1594]: Accepted publickey for core from 10.0.0.1 port 40018 ssh2: RSA SHA256:Zrjt90rRKdcSRj4kE3qB6mS1njloXkkpydTYwf9ROsM Jan 24 00:45:36.781940 sshd[1594]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:45:36.790561 systemd-logind[1460]: New session 4 of user core. Jan 24 00:45:36.801536 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 24 00:45:36.868822 sshd[1594]: pam_unix(sshd:session): session closed for user core Jan 24 00:45:36.878386 systemd[1]: sshd@3-10.0.0.146:22-10.0.0.1:40018.service: Deactivated successfully. Jan 24 00:45:36.880786 systemd[1]: session-4.scope: Deactivated successfully. Jan 24 00:45:36.883003 systemd-logind[1460]: Session 4 logged out. Waiting for processes to exit. Jan 24 00:45:36.891730 systemd[1]: Started sshd@4-10.0.0.146:22-10.0.0.1:40032.service - OpenSSH per-connection server daemon (10.0.0.1:40032). Jan 24 00:45:36.893727 systemd-logind[1460]: Removed session 4. Jan 24 00:45:36.938535 sshd[1601]: Accepted publickey for core from 10.0.0.1 port 40032 ssh2: RSA SHA256:Zrjt90rRKdcSRj4kE3qB6mS1njloXkkpydTYwf9ROsM Jan 24 00:45:36.941434 sshd[1601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:45:36.966250 systemd-logind[1460]: New session 5 of user core. Jan 24 00:45:36.981927 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 24 00:45:37.080950 sshd[1601]: pam_unix(sshd:session): session closed for user core Jan 24 00:45:37.103718 systemd[1]: sshd@4-10.0.0.146:22-10.0.0.1:40032.service: Deactivated successfully. Jan 24 00:45:37.105813 systemd[1]: session-5.scope: Deactivated successfully. Jan 24 00:45:37.110438 systemd-logind[1460]: Session 5 logged out. Waiting for processes to exit. Jan 24 00:45:37.129950 systemd[1]: Started sshd@5-10.0.0.146:22-10.0.0.1:40042.service - OpenSSH per-connection server daemon (10.0.0.1:40042). Jan 24 00:45:37.135315 systemd-logind[1460]: Removed session 5. Jan 24 00:45:37.182027 sshd[1608]: Accepted publickey for core from 10.0.0.1 port 40042 ssh2: RSA SHA256:Zrjt90rRKdcSRj4kE3qB6mS1njloXkkpydTYwf9ROsM Jan 24 00:45:37.184740 sshd[1608]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:45:37.210666 systemd-logind[1460]: New session 6 of user core. Jan 24 00:45:37.219640 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 24 00:45:37.310347 sshd[1608]: pam_unix(sshd:session): session closed for user core Jan 24 00:45:37.328613 systemd[1]: sshd@5-10.0.0.146:22-10.0.0.1:40042.service: Deactivated successfully. Jan 24 00:45:37.331829 systemd[1]: session-6.scope: Deactivated successfully. Jan 24 00:45:37.334832 systemd-logind[1460]: Session 6 logged out. 
Waiting for processes to exit. Jan 24 00:45:37.346759 systemd[1]: Started sshd@6-10.0.0.146:22-10.0.0.1:40048.service - OpenSSH per-connection server daemon (10.0.0.1:40048). Jan 24 00:45:37.349331 systemd-logind[1460]: Removed session 6. Jan 24 00:45:37.409087 sshd[1615]: Accepted publickey for core from 10.0.0.1 port 40048 ssh2: RSA SHA256:Zrjt90rRKdcSRj4kE3qB6mS1njloXkkpydTYwf9ROsM Jan 24 00:45:37.412527 sshd[1615]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:45:37.424880 systemd-logind[1460]: New session 7 of user core. Jan 24 00:45:37.439576 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 24 00:45:37.528831 sudo[1618]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 24 00:45:37.529722 sudo[1618]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 24 00:45:37.559864 sudo[1618]: pam_unix(sudo:session): session closed for user root Jan 24 00:45:37.565361 sshd[1615]: pam_unix(sshd:session): session closed for user core Jan 24 00:45:37.580979 systemd[1]: sshd@6-10.0.0.146:22-10.0.0.1:40048.service: Deactivated successfully. Jan 24 00:45:37.584849 systemd[1]: session-7.scope: Deactivated successfully. Jan 24 00:45:37.588066 systemd-logind[1460]: Session 7 logged out. Waiting for processes to exit. Jan 24 00:45:37.605994 systemd[1]: Started sshd@7-10.0.0.146:22-10.0.0.1:40056.service - OpenSSH per-connection server daemon (10.0.0.1:40056). Jan 24 00:45:37.608897 systemd-logind[1460]: Removed session 7. Jan 24 00:45:37.659429 sshd[1623]: Accepted publickey for core from 10.0.0.1 port 40056 ssh2: RSA SHA256:Zrjt90rRKdcSRj4kE3qB6mS1njloXkkpydTYwf9ROsM Jan 24 00:45:37.662550 sshd[1623]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:45:37.674798 systemd-logind[1460]: New session 8 of user core. Jan 24 00:45:37.684780 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 24 00:45:37.765012 sudo[1627]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 24 00:45:37.765630 sudo[1627]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 24 00:45:37.776228 sudo[1627]: pam_unix(sudo:session): session closed for user root Jan 24 00:45:37.790498 sudo[1626]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 24 00:45:37.791225 sudo[1626]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 24 00:45:37.836883 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 24 00:45:37.850362 auditctl[1630]: No rules Jan 24 00:45:37.855710 systemd[1]: audit-rules.service: Deactivated successfully. Jan 24 00:45:37.856089 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 24 00:45:37.867308 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 24 00:45:37.953526 augenrules[1648]: No rules Jan 24 00:45:37.956590 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 24 00:45:37.959380 sudo[1626]: pam_unix(sudo:session): session closed for user root Jan 24 00:45:37.962452 sshd[1623]: pam_unix(sshd:session): session closed for user core Jan 24 00:45:37.975300 systemd[1]: sshd@7-10.0.0.146:22-10.0.0.1:40056.service: Deactivated successfully. Jan 24 00:45:37.977436 systemd[1]: session-8.scope: Deactivated successfully. Jan 24 00:45:37.981208 systemd-logind[1460]: Session 8 logged out. 
Waiting for processes to exit. Jan 24 00:45:38.023684 systemd[1]: Started sshd@8-10.0.0.146:22-10.0.0.1:40060.service - OpenSSH per-connection server daemon (10.0.0.1:40060). Jan 24 00:45:38.035698 systemd-logind[1460]: Removed session 8. Jan 24 00:45:38.081068 sshd[1656]: Accepted publickey for core from 10.0.0.1 port 40060 ssh2: RSA SHA256:Zrjt90rRKdcSRj4kE3qB6mS1njloXkkpydTYwf9ROsM Jan 24 00:45:38.083922 sshd[1656]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:45:38.123322 systemd-logind[1460]: New session 9 of user core. Jan 24 00:45:38.134664 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 24 00:45:38.217935 sudo[1659]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 24 00:45:38.218690 sudo[1659]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 24 00:45:38.258816 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 24 00:45:38.260030 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 24 00:45:38.263819 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:45:38.336292 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 24 00:45:38.336691 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 24 00:45:38.591956 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:45:38.606924 (kubelet)[1687]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 24 00:45:39.228238 kubelet[1687]: E0124 00:45:39.227946 1687 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 24 00:45:39.241707 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 24 00:45:39.242044 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 24 00:45:42.058512 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:45:42.080490 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:45:42.168593 systemd[1]: Reloading requested from client PID 1718 ('systemctl') (unit session-9.scope)... Jan 24 00:45:42.169373 systemd[1]: Reloading... Jan 24 00:45:42.411260 zram_generator::config[1762]: No configuration found. Jan 24 00:45:42.719823 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 24 00:45:42.846285 systemd[1]: Reloading finished in 669 ms. Jan 24 00:45:42.955782 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:45:42.986290 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:45:42.988273 systemd[1]: kubelet.service: Deactivated successfully. Jan 24 00:45:42.988727 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:45:43.000775 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:45:43.329085 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 24 00:45:43.337356 (kubelet)[1807]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 24 00:45:43.529561 kubelet[1807]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 24 00:45:43.529561 kubelet[1807]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 24 00:45:43.531505 kubelet[1807]: I0124 00:45:43.529696 1807 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 24 00:45:44.714199 kubelet[1807]: I0124 00:45:44.712060 1807 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Jan 24 00:45:44.714199 kubelet[1807]: I0124 00:45:44.712730 1807 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 24 00:45:44.715769 kubelet[1807]: I0124 00:45:44.715747 1807 watchdog_linux.go:95] "Systemd watchdog is not enabled" Jan 24 00:45:44.715769 kubelet[1807]: I0124 00:45:44.715766 1807 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 24 00:45:44.716985 kubelet[1807]: I0124 00:45:44.716648 1807 server.go:956] "Client rotation is on, will bootstrap in background" Jan 24 00:45:44.732024 kubelet[1807]: I0124 00:45:44.729844 1807 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 24 00:45:44.745867 kubelet[1807]: E0124 00:45:44.745555 1807 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 24 00:45:44.745867 kubelet[1807]: I0124 00:45:44.745773 1807 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Jan 24 00:45:44.774016 kubelet[1807]: I0124 00:45:44.772942 1807 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Jan 24 00:45:44.777568 kubelet[1807]: I0124 00:45:44.776794 1807 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 24 00:45:44.777568 kubelet[1807]: I0124 00:45:44.776870 1807 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.0.0.146","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 24 00:45:44.777568 kubelet[1807]: I0124 00:45:44.777229 1807 topology_manager.go:138] "Creating topology manager with none policy" Jan 24 00:45:44.777568 kubelet[1807]: I0124 00:45:44.777250 1807 container_manager_linux.go:306] "Creating device plugin manager" Jan 24 00:45:44.777953 kubelet[1807]: I0124 00:45:44.777419 1807 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Jan 24 00:45:44.869569 kubelet[1807]: I0124 00:45:44.869382 1807 state_mem.go:36] "Initialized new in-memory state store" Jan 24 00:45:44.873614 kubelet[1807]: I0124 00:45:44.871514 1807 kubelet.go:475] "Attempting to sync node with API server" Jan 24 00:45:44.873614 kubelet[1807]: I0124 00:45:44.871582 1807 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 24 00:45:44.873614 kubelet[1807]: I0124 00:45:44.871623 1807 kubelet.go:387] "Adding apiserver pod source" Jan 24 00:45:44.873614 kubelet[1807]: I0124 00:45:44.871765 1807 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 24 00:45:44.873614 kubelet[1807]: E0124 00:45:44.871917 1807 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:45:44.873614 kubelet[1807]: E0124 00:45:44.872021 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:45:44.932676 kubelet[1807]: I0124 00:45:44.932211 1807 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 24 00:45:44.934597 kubelet[1807]: E0124 00:45:44.933251 1807 reflector.go:205] "Failed to watch" err="failed to list 
*v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 24 00:45:44.934597 kubelet[1807]: I0124 00:45:44.933302 1807 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 24 00:45:44.934597 kubelet[1807]: I0124 00:45:44.933349 1807 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Jan 24 00:45:44.934597 kubelet[1807]: W0124 00:45:44.933477 1807 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 24 00:45:44.934597 kubelet[1807]: E0124 00:45:44.934433 1807 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes \"10.0.0.146\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 24 00:45:44.949267 kubelet[1807]: I0124 00:45:44.948616 1807 server.go:1262] "Started kubelet" Jan 24 00:45:44.951511 kubelet[1807]: I0124 00:45:44.949742 1807 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 24 00:45:44.976991 kubelet[1807]: I0124 00:45:44.972501 1807 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 24 00:45:44.976991 kubelet[1807]: I0124 00:45:44.972861 1807 server_v1.go:49] "podresources" method="list" useActivePods=true Jan 24 00:45:44.999364 kubelet[1807]: I0124 00:45:44.990559 1807 server.go:310] "Adding debug handlers to kubelet server" Jan 24 00:45:45.007289 kubelet[1807]: I0124 00:45:44.995263 1807 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 24 00:45:45.008667 kubelet[1807]: I0124 00:45:45.000280 1807 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 24 00:45:45.014720 kubelet[1807]: I0124 00:45:45.014606 1807 volume_manager.go:313] "Starting Kubelet Volume Manager" Jan 24 00:45:45.014901 kubelet[1807]: E0124 00:45:45.014792 1807 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"10.0.0.146\" not found" Jan 24 00:45:45.015363 kubelet[1807]: I0124 00:45:45.015067 1807 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 24 00:45:45.015363 kubelet[1807]: I0124 00:45:45.015210 1807 reconciler.go:29] "Reconciler: start to sync state" Jan 24 00:45:45.032222 kubelet[1807]: I0124 00:45:45.030227 1807 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 24 00:45:45.032851 kubelet[1807]: I0124 00:45:45.032641 1807 factory.go:223] Registration of the systemd container factory successfully Jan 24 00:45:45.034475 kubelet[1807]: I0124 00:45:45.033039 1807 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 24 00:45:45.043879 kubelet[1807]: E0124 00:45:45.043609 1807 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 24 00:45:45.047565 kubelet[1807]: E0124 00:45:45.047223 1807 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.146\" not found" node="10.0.0.146" Jan 24 00:45:45.053353 kubelet[1807]: I0124 00:45:45.052798 1807 factory.go:223] Registration of the containerd container factory successfully Jan 24 00:45:45.105632 kubelet[1807]: I0124 00:45:45.102287 1807 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 24 00:45:45.105632 kubelet[1807]: I0124 00:45:45.104626 1807 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 24 00:45:45.105632 kubelet[1807]: I0124 00:45:45.104731 1807 state_mem.go:36] "Initialized new in-memory state store" Jan 24 00:45:45.111613 kubelet[1807]: I0124 00:45:45.111580 1807 policy_none.go:49] "None policy: Start" Jan 24 00:45:45.111753 kubelet[1807]: I0124 00:45:45.111739 1807 memory_manager.go:187] "Starting memorymanager" policy="None" Jan 24 00:45:45.111830 kubelet[1807]: I0124 00:45:45.111815 1807 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Jan 24 00:45:45.116540 kubelet[1807]: E0124 00:45:45.116268 1807 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"10.0.0.146\" not found" Jan 24 00:45:45.117910 kubelet[1807]: I0124 00:45:45.117689 1807 policy_none.go:47] "Start" Jan 24 00:45:45.144810 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 24 00:45:45.199785 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 24 00:45:45.206475 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 24 00:45:45.216665 kubelet[1807]: E0124 00:45:45.216619 1807 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"10.0.0.146\" not found" Jan 24 00:45:45.221368 kubelet[1807]: E0124 00:45:45.220991 1807 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 24 00:45:45.222462 kubelet[1807]: I0124 00:45:45.222363 1807 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 24 00:45:45.222462 kubelet[1807]: I0124 00:45:45.222436 1807 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 24 00:45:45.223259 kubelet[1807]: I0124 00:45:45.222972 1807 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 24 00:45:45.228270 kubelet[1807]: E0124 00:45:45.227559 1807 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 24 00:45:45.228270 kubelet[1807]: E0124 00:45:45.227669 1807 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.146\" not found" Jan 24 00:45:45.328992 kubelet[1807]: I0124 00:45:45.328418 1807 kubelet_node_status.go:75] "Attempting to register node" node="10.0.0.146" Jan 24 00:45:45.369445 kubelet[1807]: I0124 00:45:45.368894 1807 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Jan 24 00:45:45.383950 kubelet[1807]: I0124 00:45:45.378205 1807 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Jan 24 00:45:45.383950 kubelet[1807]: I0124 00:45:45.378282 1807 status_manager.go:244] "Starting to sync pod status with apiserver" Jan 24 00:45:45.383950 kubelet[1807]: I0124 00:45:45.378315 1807 kubelet.go:2427] "Starting kubelet main sync loop" Jan 24 00:45:45.383950 kubelet[1807]: E0124 00:45:45.378475 1807 kubelet.go:2451] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Jan 24 00:45:45.383950 kubelet[1807]: I0124 00:45:45.383630 1807 kubelet_node_status.go:78] "Successfully registered node" node="10.0.0.146" Jan 24 00:45:45.383950 kubelet[1807]: E0124 00:45:45.383658 1807 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"10.0.0.146\": node \"10.0.0.146\" not found" Jan 24 00:45:45.399485 sudo[1659]: pam_unix(sudo:session): session closed for user root Jan 24 00:45:45.403999 sshd[1656]: pam_unix(sshd:session): session closed for user core Jan 24 00:45:45.418262 systemd[1]: sshd@8-10.0.0.146:22-10.0.0.1:40060.service: Deactivated successfully. Jan 24 00:45:45.422476 systemd[1]: session-9.scope: Deactivated successfully. Jan 24 00:45:45.422884 systemd[1]: session-9.scope: Consumed 2.870s CPU time, 81.5M memory peak, 0B memory swap peak. Jan 24 00:45:45.424256 kubelet[1807]: E0124 00:45:45.424072 1807 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"10.0.0.146\" not found" Jan 24 00:45:45.425065 systemd-logind[1460]: Session 9 logged out. Waiting for processes to exit. Jan 24 00:45:45.433344 systemd-logind[1460]: Removed session 9. Jan 24 00:45:45.526652 kubelet[1807]: E0124 00:45:45.526288 1807 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"10.0.0.146\" not found" Jan 24 00:45:45.627689 kubelet[1807]: E0124 00:45:45.627380 1807 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"10.0.0.146\" not found" Jan 24 00:45:45.726682 kubelet[1807]: I0124 00:45:45.723354 1807 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 24 00:45:45.728648 kubelet[1807]: I0124 00:45:45.727907 1807 reflector.go:568] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" err="very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received" Jan 24 00:45:45.729271 kubelet[1807]: E0124 00:45:45.728910 1807 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"10.0.0.146\" not found" Jan 24 00:45:45.733243 kubelet[1807]: I0124 00:45:45.727907 1807 reflector.go:568] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" err="very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received" Jan 24 00:45:45.840572 kubelet[1807]: E0124 00:45:45.830385 1807 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"10.0.0.146\" not found" Jan 24 00:45:45.875562 kubelet[1807]: E0124 00:45:45.874458 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:45:45.936932 kubelet[1807]: E0124 00:45:45.936370 1807 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"10.0.0.146\" not found" Jan 24 00:45:46.037711 kubelet[1807]: E0124 
00:45:46.036854 1807 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"10.0.0.146\" not found" Jan 24 00:45:46.138687 kubelet[1807]: E0124 00:45:46.137572 1807 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"10.0.0.146\" not found" Jan 24 00:45:46.244487 kubelet[1807]: E0124 00:45:46.241948 1807 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"10.0.0.146\" not found" Jan 24 00:45:46.346869 kubelet[1807]: E0124 00:45:46.346413 1807 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"10.0.0.146\" not found" Jan 24 00:45:46.449642 kubelet[1807]: E0124 00:45:46.448004 1807 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"10.0.0.146\" not found" Jan 24 00:45:46.554014 kubelet[1807]: I0124 00:45:46.551527 1807 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Jan 24 00:45:46.555256 containerd[1480]: time="2026-01-24T00:45:46.555046690Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 24 00:45:46.573055 kubelet[1807]: I0124 00:45:46.557679 1807 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Jan 24 00:45:46.882205 kubelet[1807]: E0124 00:45:46.880521 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:45:46.882205 kubelet[1807]: I0124 00:45:46.881581 1807 apiserver.go:52] "Watching apiserver" Jan 24 00:45:46.929056 kubelet[1807]: I0124 00:45:46.929011 1807 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 24 00:45:46.930656 systemd[1]: Created slice kubepods-burstable-pod71ca35ef_dd98_4a7a_96f3_457de11743ce.slice - libcontainer container kubepods-burstable-pod71ca35ef_dd98_4a7a_96f3_457de11743ce.slice. 
Jan 24 00:45:46.948912 kubelet[1807]: I0124 00:45:46.947273 1807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/91ddc6ff-5197-4b82-8c84-91e5c3fa5ae0-kube-proxy\") pod \"kube-proxy-7scf7\" (UID: \"91ddc6ff-5197-4b82-8c84-91e5c3fa5ae0\") " pod="kube-system/kube-proxy-7scf7" Jan 24 00:45:46.948912 kubelet[1807]: I0124 00:45:46.947367 1807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/71ca35ef-dd98-4a7a-96f3-457de11743ce-cilium-run\") pod \"cilium-gpjkz\" (UID: \"71ca35ef-dd98-4a7a-96f3-457de11743ce\") " pod="kube-system/cilium-gpjkz" Jan 24 00:45:46.948912 kubelet[1807]: I0124 00:45:46.947399 1807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/71ca35ef-dd98-4a7a-96f3-457de11743ce-bpf-maps\") pod \"cilium-gpjkz\" (UID: \"71ca35ef-dd98-4a7a-96f3-457de11743ce\") " pod="kube-system/cilium-gpjkz" Jan 24 00:45:46.948912 kubelet[1807]: I0124 00:45:46.947419 1807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/71ca35ef-dd98-4a7a-96f3-457de11743ce-lib-modules\") pod \"cilium-gpjkz\" (UID: \"71ca35ef-dd98-4a7a-96f3-457de11743ce\") " pod="kube-system/cilium-gpjkz" Jan 24 00:45:46.948912 kubelet[1807]: I0124 00:45:46.947441 1807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/71ca35ef-dd98-4a7a-96f3-457de11743ce-hostproc\") pod \"cilium-gpjkz\" (UID: \"71ca35ef-dd98-4a7a-96f3-457de11743ce\") " pod="kube-system/cilium-gpjkz" Jan 24 00:45:46.948912 kubelet[1807]: I0124 00:45:46.947463 1807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/91ddc6ff-5197-4b82-8c84-91e5c3fa5ae0-xtables-lock\") pod \"kube-proxy-7scf7\" (UID: \"91ddc6ff-5197-4b82-8c84-91e5c3fa5ae0\") " pod="kube-system/kube-proxy-7scf7" Jan 24 00:45:46.949370 kubelet[1807]: I0124 00:45:46.947488 1807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/91ddc6ff-5197-4b82-8c84-91e5c3fa5ae0-lib-modules\") pod \"kube-proxy-7scf7\" (UID: \"91ddc6ff-5197-4b82-8c84-91e5c3fa5ae0\") " pod="kube-system/kube-proxy-7scf7" Jan 24 00:45:46.949370 kubelet[1807]: I0124 00:45:46.947506 1807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/71ca35ef-dd98-4a7a-96f3-457de11743ce-cni-path\") pod \"cilium-gpjkz\" (UID: \"71ca35ef-dd98-4a7a-96f3-457de11743ce\") " pod="kube-system/cilium-gpjkz" Jan 24 00:45:46.949370 kubelet[1807]: I0124 00:45:46.947543 1807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/71ca35ef-dd98-4a7a-96f3-457de11743ce-xtables-lock\") pod \"cilium-gpjkz\" (UID: \"71ca35ef-dd98-4a7a-96f3-457de11743ce\") " pod="kube-system/cilium-gpjkz" Jan 24 00:45:46.949370 kubelet[1807]: I0124 00:45:46.947566 1807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/71ca35ef-dd98-4a7a-96f3-457de11743ce-clustermesh-secrets\") pod \"cilium-gpjkz\" (UID: \"71ca35ef-dd98-4a7a-96f3-457de11743ce\") " pod="kube-system/cilium-gpjkz" Jan 24 00:45:46.949370 kubelet[1807]: I0124 00:45:46.947587 1807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/71ca35ef-dd98-4a7a-96f3-457de11743ce-host-proc-sys-net\") pod \"cilium-gpjkz\" (UID: \"71ca35ef-dd98-4a7a-96f3-457de11743ce\") " pod="kube-system/cilium-gpjkz" Jan 24 00:45:46.949370 kubelet[1807]: I0124 00:45:46.947611 1807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/71ca35ef-dd98-4a7a-96f3-457de11743ce-host-proc-sys-kernel\") pod \"cilium-gpjkz\" (UID: \"71ca35ef-dd98-4a7a-96f3-457de11743ce\") " pod="kube-system/cilium-gpjkz" Jan 24 00:45:46.949640 kubelet[1807]: I0124 00:45:46.947634 1807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/71ca35ef-dd98-4a7a-96f3-457de11743ce-hubble-tls\") pod \"cilium-gpjkz\" (UID: \"71ca35ef-dd98-4a7a-96f3-457de11743ce\") " pod="kube-system/cilium-gpjkz" Jan 24 00:45:46.949640 kubelet[1807]: I0124 00:45:46.947656 1807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffprh\" (UniqueName: \"kubernetes.io/projected/71ca35ef-dd98-4a7a-96f3-457de11743ce-kube-api-access-ffprh\") pod \"cilium-gpjkz\" (UID: \"71ca35ef-dd98-4a7a-96f3-457de11743ce\") " pod="kube-system/cilium-gpjkz" Jan 24 00:45:46.949640 kubelet[1807]: I0124 00:45:46.947677 1807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6knm\" (UniqueName: \"kubernetes.io/projected/91ddc6ff-5197-4b82-8c84-91e5c3fa5ae0-kube-api-access-m6knm\") pod \"kube-proxy-7scf7\" (UID: \"91ddc6ff-5197-4b82-8c84-91e5c3fa5ae0\") " pod="kube-system/kube-proxy-7scf7" Jan 24 00:45:46.949640 kubelet[1807]: I0124 00:45:46.947698 1807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/71ca35ef-dd98-4a7a-96f3-457de11743ce-cilium-cgroup\") pod \"cilium-gpjkz\" (UID: \"71ca35ef-dd98-4a7a-96f3-457de11743ce\") " pod="kube-system/cilium-gpjkz" Jan 24 00:45:46.949640 kubelet[1807]: I0124 00:45:46.947720 1807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/71ca35ef-dd98-4a7a-96f3-457de11743ce-etc-cni-netd\") pod \"cilium-gpjkz\" (UID: \"71ca35ef-dd98-4a7a-96f3-457de11743ce\") " pod="kube-system/cilium-gpjkz" Jan 24 00:45:46.949814 kubelet[1807]: I0124 00:45:46.947742 1807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/71ca35ef-dd98-4a7a-96f3-457de11743ce-cilium-config-path\") pod \"cilium-gpjkz\" (UID: \"71ca35ef-dd98-4a7a-96f3-457de11743ce\") " pod="kube-system/cilium-gpjkz" Jan 24 00:45:46.958522 systemd[1]: Created slice kubepods-besteffort-pod91ddc6ff_5197_4b82_8c84_91e5c3fa5ae0.slice - libcontainer container kubepods-besteffort-pod91ddc6ff_5197_4b82_8c84_91e5c3fa5ae0.slice. 
Jan 24 00:45:47.265855 kubelet[1807]: E0124 00:45:47.261785 1807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:45:47.271969 containerd[1480]: time="2026-01-24T00:45:47.269528907Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gpjkz,Uid:71ca35ef-dd98-4a7a-96f3-457de11743ce,Namespace:kube-system,Attempt:0,}" Jan 24 00:45:47.305742 kubelet[1807]: E0124 00:45:47.305620 1807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:45:47.308580 containerd[1480]: time="2026-01-24T00:45:47.307559970Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7scf7,Uid:91ddc6ff-5197-4b82-8c84-91e5c3fa5ae0,Namespace:kube-system,Attempt:0,}" Jan 24 00:45:47.881447 kubelet[1807]: E0124 00:45:47.880979 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:45:48.281810 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount898785120.mount: Deactivated successfully. Jan 24 00:45:48.300030 containerd[1480]: time="2026-01-24T00:45:48.299645084Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 00:45:48.309096 containerd[1480]: time="2026-01-24T00:45:48.306972078Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 00:45:48.309096 containerd[1480]: time="2026-01-24T00:45:48.308477926Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 24 00:45:48.317709 containerd[1480]: time="2026-01-24T00:45:48.316588893Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 24 00:45:48.318768 containerd[1480]: time="2026-01-24T00:45:48.318518426Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 00:45:48.335950 containerd[1480]: time="2026-01-24T00:45:48.335013652Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 00:45:48.339647 containerd[1480]: time="2026-01-24T00:45:48.339574099Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.031805126s" Jan 24 00:45:48.341791 containerd[1480]: time="2026-01-24T00:45:48.340888610Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.069238068s" Jan 24 00:45:48.885323 kubelet[1807]: E0124 00:45:48.882364 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:45:49.194347 containerd[1480]: time="2026-01-24T00:45:49.188962402Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:45:49.194347 containerd[1480]: time="2026-01-24T00:45:49.189046359Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:45:49.194347 containerd[1480]: time="2026-01-24T00:45:49.189066752Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:45:49.196849 containerd[1480]: time="2026-01-24T00:45:49.195382578Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:45:49.214098 containerd[1480]: time="2026-01-24T00:45:49.213553400Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:45:49.214098 containerd[1480]: time="2026-01-24T00:45:49.213728438Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:45:49.214098 containerd[1480]: time="2026-01-24T00:45:49.213755598Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:45:49.214098 containerd[1480]: time="2026-01-24T00:45:49.213881763Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:45:49.554711 systemd[1]: Started cri-containerd-9e27080bd9e9f52c9b91d1e35cfeea3f32673e0cc8dcb1d5eb698fad56e3709d.scope - libcontainer container 9e27080bd9e9f52c9b91d1e35cfeea3f32673e0cc8dcb1d5eb698fad56e3709d. Jan 24 00:45:49.571359 systemd[1]: Started cri-containerd-bcc8feff4a02d2679beeed2895bb9029bb141a2066197c1a9b371c7d097d6509.scope - libcontainer container bcc8feff4a02d2679beeed2895bb9029bb141a2066197c1a9b371c7d097d6509. 
Jan 24 00:45:50.179684 kubelet[1807]: E0124 00:45:50.176441 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:45:50.544951 containerd[1480]: time="2026-01-24T00:45:50.544829959Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gpjkz,Uid:71ca35ef-dd98-4a7a-96f3-457de11743ce,Namespace:kube-system,Attempt:0,} returns sandbox id \"9e27080bd9e9f52c9b91d1e35cfeea3f32673e0cc8dcb1d5eb698fad56e3709d\"" Jan 24 00:45:50.576511 kubelet[1807]: E0124 00:45:50.575686 1807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:45:50.577473 containerd[1480]: time="2026-01-24T00:45:50.575682993Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7scf7,Uid:91ddc6ff-5197-4b82-8c84-91e5c3fa5ae0,Namespace:kube-system,Attempt:0,} returns sandbox id \"bcc8feff4a02d2679beeed2895bb9029bb141a2066197c1a9b371c7d097d6509\"" Jan 24 00:45:50.577566 kubelet[1807]: E0124 00:45:50.577299 1807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:45:50.580392 containerd[1480]: time="2026-01-24T00:45:50.580299103Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 24 00:45:51.189049 kubelet[1807]: E0124 00:45:51.188661 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:45:52.207382 kubelet[1807]: E0124 00:45:52.205668 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:45:53.207326 kubelet[1807]: E0124 00:45:53.206895 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:45:54.214501 kubelet[1807]: E0124 00:45:54.212492 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:45:55.221382 kubelet[1807]: E0124 00:45:55.217235 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:45:56.218875 kubelet[1807]: E0124 00:45:56.218583 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:45:57.220592 kubelet[1807]: E0124 00:45:57.220454 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:45:58.316767 kubelet[1807]: E0124 00:45:58.316559 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:45:59.381727 kubelet[1807]: E0124 00:45:59.381495 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:46:00.391760 kubelet[1807]: E0124 00:46:00.391278 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:46:01.396906 kubelet[1807]: E0124 00:46:01.396583 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:46:02.406604 kubelet[1807]: E0124 00:46:02.405638 
1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:46:03.407293 kubelet[1807]: E0124 00:46:03.406874 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:46:04.409597 kubelet[1807]: E0124 00:46:04.407251 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:46:04.872004 kubelet[1807]: E0124 00:46:04.871836 1807 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:46:05.409685 kubelet[1807]: E0124 00:46:05.409330 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:46:06.409884 kubelet[1807]: E0124 00:46:06.409764 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:46:07.411497 kubelet[1807]: E0124 00:46:07.411363 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:46:07.481368 update_engine[1462]: I20260124 00:46:07.477244 1462 update_attempter.cc:509] Updating boot flags... Jan 24 00:46:07.597770 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1956) Jan 24 00:46:07.850430 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1958) Jan 24 00:46:08.254968 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2334751740.mount: Deactivated successfully. Jan 24 00:46:08.412938 kubelet[1807]: E0124 00:46:08.412548 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:46:09.418677 kubelet[1807]: E0124 00:46:09.417628 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:46:10.421682 kubelet[1807]: E0124 00:46:10.421434 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:46:11.435368 kubelet[1807]: E0124 00:46:11.432375 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:46:12.433081 kubelet[1807]: E0124 00:46:12.432975 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:46:13.434261 kubelet[1807]: E0124 00:46:13.433934 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:46:14.437236 kubelet[1807]: E0124 00:46:14.435297 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:46:14.847379 containerd[1480]: time="2026-01-24T00:46:14.846329988Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:46:14.849821 containerd[1480]: time="2026-01-24T00:46:14.849696168Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jan 24 00:46:14.853738 containerd[1480]: 
time="2026-01-24T00:46:14.853580105Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:46:14.856853 containerd[1480]: time="2026-01-24T00:46:14.856700097Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 24.276320065s" Jan 24 00:46:14.856853 containerd[1480]: time="2026-01-24T00:46:14.856793614Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jan 24 00:46:14.859860 containerd[1480]: time="2026-01-24T00:46:14.859805582Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\"" Jan 24 00:46:14.873000 containerd[1480]: time="2026-01-24T00:46:14.872562558Z" level=info msg="CreateContainer within sandbox \"9e27080bd9e9f52c9b91d1e35cfeea3f32673e0cc8dcb1d5eb698fad56e3709d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 24 00:46:14.941802 containerd[1480]: time="2026-01-24T00:46:14.933600940Z" level=info msg="CreateContainer within sandbox \"9e27080bd9e9f52c9b91d1e35cfeea3f32673e0cc8dcb1d5eb698fad56e3709d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"57131b892b0d77f45392aa9bcce5cebf3f228ccb3ddbea6b3c9321d3d2d81c38\"" Jan 24 00:46:14.942609 containerd[1480]: time="2026-01-24T00:46:14.942569067Z" level=info msg="StartContainer for \"57131b892b0d77f45392aa9bcce5cebf3f228ccb3ddbea6b3c9321d3d2d81c38\"" Jan 24 00:46:15.067883 systemd[1]: Started cri-containerd-57131b892b0d77f45392aa9bcce5cebf3f228ccb3ddbea6b3c9321d3d2d81c38.scope - libcontainer container 57131b892b0d77f45392aa9bcce5cebf3f228ccb3ddbea6b3c9321d3d2d81c38. Jan 24 00:46:15.172009 containerd[1480]: time="2026-01-24T00:46:15.171884230Z" level=info msg="StartContainer for \"57131b892b0d77f45392aa9bcce5cebf3f228ccb3ddbea6b3c9321d3d2d81c38\" returns successfully" Jan 24 00:46:15.191231 systemd[1]: cri-containerd-57131b892b0d77f45392aa9bcce5cebf3f228ccb3ddbea6b3c9321d3d2d81c38.scope: Deactivated successfully. 
Jan 24 00:46:15.222017 kubelet[1807]: E0124 00:46:15.220780 1807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:46:15.435681 kubelet[1807]: E0124 00:46:15.435526 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:46:15.435935 containerd[1480]: time="2026-01-24T00:46:15.435501575Z" level=info msg="shim disconnected" id=57131b892b0d77f45392aa9bcce5cebf3f228ccb3ddbea6b3c9321d3d2d81c38 namespace=k8s.io Jan 24 00:46:15.435935 containerd[1480]: time="2026-01-24T00:46:15.435559946Z" level=warning msg="cleaning up after shim disconnected" id=57131b892b0d77f45392aa9bcce5cebf3f228ccb3ddbea6b3c9321d3d2d81c38 namespace=k8s.io Jan 24 00:46:15.435935 containerd[1480]: time="2026-01-24T00:46:15.435576458Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 24 00:46:15.910498 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-57131b892b0d77f45392aa9bcce5cebf3f228ccb3ddbea6b3c9321d3d2d81c38-rootfs.mount: Deactivated successfully. Jan 24 00:46:16.227389 kubelet[1807]: E0124 00:46:16.226614 1807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:46:16.253545 containerd[1480]: time="2026-01-24T00:46:16.253435665Z" level=info msg="CreateContainer within sandbox \"9e27080bd9e9f52c9b91d1e35cfeea3f32673e0cc8dcb1d5eb698fad56e3709d\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 24 00:46:16.294036 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2826299735.mount: Deactivated successfully. Jan 24 00:46:16.311219 containerd[1480]: time="2026-01-24T00:46:16.310787823Z" level=info msg="CreateContainer within sandbox \"9e27080bd9e9f52c9b91d1e35cfeea3f32673e0cc8dcb1d5eb698fad56e3709d\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"21b3fb7b5d23b2c340808814b425c90ec9f329b0c3c412100758290fb0ac61da\"" Jan 24 00:46:16.314005 containerd[1480]: time="2026-01-24T00:46:16.312648929Z" level=info msg="StartContainer for \"21b3fb7b5d23b2c340808814b425c90ec9f329b0c3c412100758290fb0ac61da\"" Jan 24 00:46:16.402979 systemd[1]: Started cri-containerd-21b3fb7b5d23b2c340808814b425c90ec9f329b0c3c412100758290fb0ac61da.scope - libcontainer container 21b3fb7b5d23b2c340808814b425c90ec9f329b0c3c412100758290fb0ac61da. Jan 24 00:46:16.436279 kubelet[1807]: E0124 00:46:16.436234 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:46:16.509178 containerd[1480]: time="2026-01-24T00:46:16.508904930Z" level=info msg="StartContainer for \"21b3fb7b5d23b2c340808814b425c90ec9f329b0c3c412100758290fb0ac61da\" returns successfully" Jan 24 00:46:16.530858 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 24 00:46:16.531354 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 24 00:46:16.531455 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 24 00:46:16.545967 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 24 00:46:16.551714 systemd[1]: cri-containerd-21b3fb7b5d23b2c340808814b425c90ec9f329b0c3c412100758290fb0ac61da.scope: Deactivated successfully. Jan 24 00:46:16.597898 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jan 24 00:46:16.706925 containerd[1480]: time="2026-01-24T00:46:16.706776084Z" level=info msg="shim disconnected" id=21b3fb7b5d23b2c340808814b425c90ec9f329b0c3c412100758290fb0ac61da namespace=k8s.io Jan 24 00:46:16.706925 containerd[1480]: time="2026-01-24T00:46:16.706925718Z" level=warning msg="cleaning up after shim disconnected" id=21b3fb7b5d23b2c340808814b425c90ec9f329b0c3c412100758290fb0ac61da namespace=k8s.io Jan 24 00:46:16.706925 containerd[1480]: time="2026-01-24T00:46:16.706938793Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 24 00:46:16.740260 containerd[1480]: time="2026-01-24T00:46:16.739940411Z" level=warning msg="cleanup warnings time=\"2026-01-24T00:46:16Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 24 00:46:16.910640 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-21b3fb7b5d23b2c340808814b425c90ec9f329b0c3c412100758290fb0ac61da-rootfs.mount: Deactivated successfully. Jan 24 00:46:17.122538 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1654107752.mount: Deactivated successfully. Jan 24 00:46:17.238500 kubelet[1807]: E0124 00:46:17.235969 1807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:46:17.251896 containerd[1480]: time="2026-01-24T00:46:17.251259034Z" level=info msg="CreateContainer within sandbox \"9e27080bd9e9f52c9b91d1e35cfeea3f32673e0cc8dcb1d5eb698fad56e3709d\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 24 00:46:17.316302 containerd[1480]: time="2026-01-24T00:46:17.315981613Z" level=info msg="CreateContainer within sandbox \"9e27080bd9e9f52c9b91d1e35cfeea3f32673e0cc8dcb1d5eb698fad56e3709d\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"1dc1464f733a9ac6a40d2677a1edf36981fc669c0e87daa3dd108baa4608137e\"" Jan 24 00:46:17.319969 containerd[1480]: time="2026-01-24T00:46:17.319933891Z" level=info msg="StartContainer for \"1dc1464f733a9ac6a40d2677a1edf36981fc669c0e87daa3dd108baa4608137e\"" Jan 24 00:46:17.405856 systemd[1]: Started cri-containerd-1dc1464f733a9ac6a40d2677a1edf36981fc669c0e87daa3dd108baa4608137e.scope - libcontainer container 1dc1464f733a9ac6a40d2677a1edf36981fc669c0e87daa3dd108baa4608137e. Jan 24 00:46:17.438283 kubelet[1807]: E0124 00:46:17.437840 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:46:17.480351 containerd[1480]: time="2026-01-24T00:46:17.479963238Z" level=info msg="StartContainer for \"1dc1464f733a9ac6a40d2677a1edf36981fc669c0e87daa3dd108baa4608137e\" returns successfully" Jan 24 00:46:17.487579 systemd[1]: cri-containerd-1dc1464f733a9ac6a40d2677a1edf36981fc669c0e87daa3dd108baa4608137e.scope: Deactivated successfully. 
Jan 24 00:46:17.641042 containerd[1480]: time="2026-01-24T00:46:17.639735914Z" level=info msg="shim disconnected" id=1dc1464f733a9ac6a40d2677a1edf36981fc669c0e87daa3dd108baa4608137e namespace=k8s.io Jan 24 00:46:17.641042 containerd[1480]: time="2026-01-24T00:46:17.640306325Z" level=warning msg="cleaning up after shim disconnected" id=1dc1464f733a9ac6a40d2677a1edf36981fc669c0e87daa3dd108baa4608137e namespace=k8s.io Jan 24 00:46:17.641042 containerd[1480]: time="2026-01-24T00:46:17.640320392Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 24 00:46:17.912575 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1dc1464f733a9ac6a40d2677a1edf36981fc669c0e87daa3dd108baa4608137e-rootfs.mount: Deactivated successfully. Jan 24 00:46:18.326439 kubelet[1807]: E0124 00:46:18.324362 1807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:46:18.489859 containerd[1480]: time="2026-01-24T00:46:18.425077632Z" level=info msg="CreateContainer within sandbox \"9e27080bd9e9f52c9b91d1e35cfeea3f32673e0cc8dcb1d5eb698fad56e3709d\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 24 00:46:18.523810 kubelet[1807]: E0124 00:46:18.454751 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:46:18.689841 containerd[1480]: time="2026-01-24T00:46:18.688468747Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:46:18.708993 containerd[1480]: time="2026-01-24T00:46:18.690851255Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.3: active requests=0, bytes read=25965293" Jan 24 00:46:18.774978 containerd[1480]: time="2026-01-24T00:46:18.774743400Z" level=info msg="ImageCreate event name:\"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:46:18.800548 containerd[1480]: time="2026-01-24T00:46:18.799801058Z" level=info msg="CreateContainer within sandbox \"9e27080bd9e9f52c9b91d1e35cfeea3f32673e0cc8dcb1d5eb698fad56e3709d\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"1191795c486ab82b8fc2168f4295d4fb9d8ac7dc4c90464ba1b70da635c9022e\"" Jan 24 00:46:18.805322 containerd[1480]: time="2026-01-24T00:46:18.805021601Z" level=info msg="StartContainer for \"1191795c486ab82b8fc2168f4295d4fb9d8ac7dc4c90464ba1b70da635c9022e\"" Jan 24 00:46:18.805863 containerd[1480]: time="2026-01-24T00:46:18.805799280Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:46:18.808293 containerd[1480]: time="2026-01-24T00:46:18.807399657Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.3\" with image id \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\", repo tag \"registry.k8s.io/kube-proxy:v1.34.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\", size \"25964312\" in 3.947556433s" Jan 24 00:46:18.808293 containerd[1480]: time="2026-01-24T00:46:18.807444943Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\" returns image reference 
\"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\"" Jan 24 00:46:18.833609 containerd[1480]: time="2026-01-24T00:46:18.833334319Z" level=info msg="CreateContainer within sandbox \"bcc8feff4a02d2679beeed2895bb9029bb141a2066197c1a9b371c7d097d6509\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 24 00:46:19.032551 containerd[1480]: time="2026-01-24T00:46:19.032399300Z" level=info msg="CreateContainer within sandbox \"bcc8feff4a02d2679beeed2895bb9029bb141a2066197c1a9b371c7d097d6509\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"8e209e5258d18a24df06a6e3ae616fdd01a7af10e5718120b174b0e182dd7e71\"" Jan 24 00:46:19.044026 containerd[1480]: time="2026-01-24T00:46:19.043617838Z" level=info msg="StartContainer for \"8e209e5258d18a24df06a6e3ae616fdd01a7af10e5718120b174b0e182dd7e71\"" Jan 24 00:46:19.117827 systemd[1]: Started cri-containerd-1191795c486ab82b8fc2168f4295d4fb9d8ac7dc4c90464ba1b70da635c9022e.scope - libcontainer container 1191795c486ab82b8fc2168f4295d4fb9d8ac7dc4c90464ba1b70da635c9022e. Jan 24 00:46:19.295799 systemd[1]: Started cri-containerd-8e209e5258d18a24df06a6e3ae616fdd01a7af10e5718120b174b0e182dd7e71.scope - libcontainer container 8e209e5258d18a24df06a6e3ae616fdd01a7af10e5718120b174b0e182dd7e71. Jan 24 00:46:19.376768 systemd[1]: cri-containerd-1191795c486ab82b8fc2168f4295d4fb9d8ac7dc4c90464ba1b70da635c9022e.scope: Deactivated successfully. Jan 24 00:46:19.384288 containerd[1480]: time="2026-01-24T00:46:19.382719621Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71ca35ef_dd98_4a7a_96f3_457de11743ce.slice/cri-containerd-1191795c486ab82b8fc2168f4295d4fb9d8ac7dc4c90464ba1b70da635c9022e.scope/memory.events\": no such file or directory" Jan 24 00:46:19.421838 containerd[1480]: time="2026-01-24T00:46:19.421609072Z" level=info msg="StartContainer for \"1191795c486ab82b8fc2168f4295d4fb9d8ac7dc4c90464ba1b70da635c9022e\" returns successfully" Jan 24 00:46:19.513747 containerd[1480]: time="2026-01-24T00:46:19.512842225Z" level=info msg="StartContainer for \"8e209e5258d18a24df06a6e3ae616fdd01a7af10e5718120b174b0e182dd7e71\" returns successfully" Jan 24 00:46:19.524393 kubelet[1807]: E0124 00:46:19.507960 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:46:19.636719 kubelet[1807]: E0124 00:46:19.636307 1807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:46:19.883355 containerd[1480]: time="2026-01-24T00:46:19.882902266Z" level=info msg="shim disconnected" id=1191795c486ab82b8fc2168f4295d4fb9d8ac7dc4c90464ba1b70da635c9022e namespace=k8s.io Jan 24 00:46:19.883355 containerd[1480]: time="2026-01-24T00:46:19.883076696Z" level=warning msg="cleaning up after shim disconnected" id=1191795c486ab82b8fc2168f4295d4fb9d8ac7dc4c90464ba1b70da635c9022e namespace=k8s.io Jan 24 00:46:19.883355 containerd[1480]: time="2026-01-24T00:46:19.883092605Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 24 00:46:19.981649 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1191795c486ab82b8fc2168f4295d4fb9d8ac7dc4c90464ba1b70da635c9022e-rootfs.mount: Deactivated successfully. 
Jan 24 00:46:20.526609 kubelet[1807]: E0124 00:46:20.522559 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:46:21.410455 kubelet[1807]: E0124 00:46:21.403513 1807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:46:21.527699 kubelet[1807]: E0124 00:46:21.524369 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:46:21.846886 kubelet[1807]: E0124 00:46:21.846589 1807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:46:22.225737 containerd[1480]: time="2026-01-24T00:46:22.223057290Z" level=info msg="CreateContainer within sandbox \"9e27080bd9e9f52c9b91d1e35cfeea3f32673e0cc8dcb1d5eb698fad56e3709d\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 24 00:46:22.648268 kubelet[1807]: E0124 00:46:22.644523 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:46:22.707047 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1509424943.mount: Deactivated successfully. Jan 24 00:46:22.737308 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2757375644.mount: Deactivated successfully. Jan 24 00:46:22.866291 kubelet[1807]: E0124 00:46:22.865977 1807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:46:22.895044 containerd[1480]: time="2026-01-24T00:46:22.894656465Z" level=info msg="CreateContainer within sandbox \"9e27080bd9e9f52c9b91d1e35cfeea3f32673e0cc8dcb1d5eb698fad56e3709d\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"73aea0c2b166fc8f55e87be3f04d7a302dd56615d9d7c51d586837193580f8f2\"" Jan 24 00:46:22.911679 containerd[1480]: time="2026-01-24T00:46:22.911238019Z" level=info msg="StartContainer for \"73aea0c2b166fc8f55e87be3f04d7a302dd56615d9d7c51d586837193580f8f2\"" Jan 24 00:46:23.711722 kubelet[1807]: E0124 00:46:23.709626 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:46:23.798544 kubelet[1807]: I0124 00:46:23.711971 1807 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-7scf7" podStartSLOduration=10.481250653 podStartE2EDuration="38.711952527s" podCreationTimestamp="2026-01-24 00:45:45 +0000 UTC" firstStartedPulling="2026-01-24 00:45:50.578797124 +0000 UTC m=+7.217529197" lastFinishedPulling="2026-01-24 00:46:18.809499008 +0000 UTC m=+35.448231071" observedRunningTime="2026-01-24 00:46:23.399778857 +0000 UTC m=+40.038510940" watchObservedRunningTime="2026-01-24 00:46:23.711952527 +0000 UTC m=+40.350684610" Jan 24 00:46:24.743258 kubelet[1807]: E0124 00:46:24.732351 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:46:25.343719 kubelet[1807]: E0124 00:46:24.887488 1807 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:46:25.344096 systemd[1]: Started 
cri-containerd-73aea0c2b166fc8f55e87be3f04d7a302dd56615d9d7c51d586837193580f8f2.scope - libcontainer container 73aea0c2b166fc8f55e87be3f04d7a302dd56615d9d7c51d586837193580f8f2. Jan 24 00:46:25.629843 containerd[1480]: time="2026-01-24T00:46:25.629050414Z" level=info msg="StartContainer for \"73aea0c2b166fc8f55e87be3f04d7a302dd56615d9d7c51d586837193580f8f2\" returns successfully" Jan 24 00:46:25.743740 kubelet[1807]: E0124 00:46:25.743578 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:46:25.987592 kubelet[1807]: I0124 00:46:25.987295 1807 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Jan 24 00:46:26.710353 kernel: Initializing XFRM netlink socket Jan 24 00:46:26.718229 kubelet[1807]: E0124 00:46:26.716854 1807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:46:26.746320 kubelet[1807]: E0124 00:46:26.745999 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:46:26.767232 kubelet[1807]: I0124 00:46:26.766900 1807 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-gpjkz" podStartSLOduration=17.486983354 podStartE2EDuration="41.766822111s" podCreationTimestamp="2026-01-24 00:45:45 +0000 UTC" firstStartedPulling="2026-01-24 00:45:50.578575829 +0000 UTC m=+7.217307902" lastFinishedPulling="2026-01-24 00:46:14.858414597 +0000 UTC m=+31.497146659" observedRunningTime="2026-01-24 00:46:26.763543303 +0000 UTC m=+43.402275397" watchObservedRunningTime="2026-01-24 00:46:26.766822111 +0000 UTC m=+43.405554175" Jan 24 00:46:27.725916 kubelet[1807]: E0124 00:46:27.725628 1807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:46:27.749197 kubelet[1807]: E0124 00:46:27.748864 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:46:28.767391 kubelet[1807]: E0124 00:46:28.763787 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:46:28.772604 systemd-networkd[1402]: cilium_host: Link UP Jan 24 00:46:28.772880 systemd-networkd[1402]: cilium_net: Link UP Jan 24 00:46:28.772886 systemd-networkd[1402]: cilium_net: Gained carrier Jan 24 00:46:28.773290 systemd-networkd[1402]: cilium_host: Gained carrier Jan 24 00:46:28.896801 systemd-networkd[1402]: cilium_host: Gained IPv6LL Jan 24 00:46:29.278923 systemd-networkd[1402]: cilium_net: Gained IPv6LL Jan 24 00:46:29.375090 systemd-networkd[1402]: cilium_vxlan: Link UP Jan 24 00:46:29.375191 systemd-networkd[1402]: cilium_vxlan: Gained carrier Jan 24 00:46:29.766966 kubelet[1807]: E0124 00:46:29.766079 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:46:30.743413 kernel: NET: Registered PF_ALG protocol family Jan 24 00:46:30.764247 systemd-networkd[1402]: cilium_vxlan: Gained IPv6LL Jan 24 00:46:30.766837 kubelet[1807]: E0124 00:46:30.766784 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:46:30.983893 systemd[1]: Created slice 
kubepods-besteffort-pod003eefc4_92ed_4393_8469_0c1e849f0b9c.slice - libcontainer container kubepods-besteffort-pod003eefc4_92ed_4393_8469_0c1e849f0b9c.slice. Jan 24 00:46:31.087322 kubelet[1807]: I0124 00:46:31.080701 1807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbgqw\" (UniqueName: \"kubernetes.io/projected/003eefc4-92ed-4393-8469-0c1e849f0b9c-kube-api-access-sbgqw\") pod \"nginx-deployment-bb8f74bfb-ds94c\" (UID: \"003eefc4-92ed-4393-8469-0c1e849f0b9c\") " pod="default/nginx-deployment-bb8f74bfb-ds94c" Jan 24 00:46:31.617209 containerd[1480]: time="2026-01-24T00:46:31.616842493Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-bb8f74bfb-ds94c,Uid:003eefc4-92ed-4393-8469-0c1e849f0b9c,Namespace:default,Attempt:0,}" Jan 24 00:46:31.884230 kubelet[1807]: E0124 00:46:31.879572 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:46:32.885341 kubelet[1807]: E0124 00:46:32.885038 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:46:33.672773 systemd-networkd[1402]: lxc_health: Link UP Jan 24 00:46:33.681767 systemd-networkd[1402]: lxc_health: Gained carrier Jan 24 00:46:33.886328 kubelet[1807]: E0124 00:46:33.885981 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:46:34.318540 systemd-networkd[1402]: lxc105e94316abc: Link UP Jan 24 00:46:34.332360 kernel: eth0: renamed from tmp20246 Jan 24 00:46:34.345361 systemd-networkd[1402]: lxc105e94316abc: Gained carrier Jan 24 00:46:34.887468 kubelet[1807]: E0124 00:46:34.887407 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:46:35.166784 systemd-networkd[1402]: lxc_health: Gained IPv6LL Jan 24 00:46:35.257460 kubelet[1807]: E0124 00:46:35.257351 1807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:46:35.889918 kubelet[1807]: E0124 00:46:35.889700 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:46:35.922802 kubelet[1807]: E0124 00:46:35.922479 1807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:46:35.934543 systemd-networkd[1402]: lxc105e94316abc: Gained IPv6LL Jan 24 00:46:36.890914 kubelet[1807]: E0124 00:46:36.890490 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:46:37.891439 kubelet[1807]: E0124 00:46:37.891339 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:46:38.927508 kubelet[1807]: E0124 00:46:38.922592 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:46:40.127273 kubelet[1807]: E0124 00:46:40.126627 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:46:41.199545 kubelet[1807]: E0124 00:46:41.198366 1807 file_linux.go:61] "Unable to read config path" err="path 
does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:46:42.202515 kubelet[1807]: E0124 00:46:42.202223 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:46:42.441577 containerd[1480]: time="2026-01-24T00:46:42.440098638Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:46:42.441577 containerd[1480]: time="2026-01-24T00:46:42.441391730Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:46:42.441577 containerd[1480]: time="2026-01-24T00:46:42.441413131Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:46:42.441577 containerd[1480]: time="2026-01-24T00:46:42.441569325Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:46:42.545642 systemd[1]: Started cri-containerd-20246624fae2bd33ba7564a023d4e0e556e9be6104c00d07b327d37ad81f18ad.scope - libcontainer container 20246624fae2bd33ba7564a023d4e0e556e9be6104c00d07b327d37ad81f18ad. Jan 24 00:46:42.611867 systemd-resolved[1405]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 24 00:46:42.680347 containerd[1480]: time="2026-01-24T00:46:42.680018733Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-bb8f74bfb-ds94c,Uid:003eefc4-92ed-4393-8469-0c1e849f0b9c,Namespace:default,Attempt:0,} returns sandbox id \"20246624fae2bd33ba7564a023d4e0e556e9be6104c00d07b327d37ad81f18ad\"" Jan 24 00:46:42.683977 containerd[1480]: time="2026-01-24T00:46:42.683523075Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 24 00:46:43.203942 kubelet[1807]: E0124 00:46:43.203579 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:46:44.240903 kubelet[1807]: E0124 00:46:44.238784 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:46:44.919798 kubelet[1807]: E0124 00:46:44.919668 1807 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:46:45.242319 kubelet[1807]: E0124 00:46:45.242281 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:46:46.248916 kubelet[1807]: E0124 00:46:46.246384 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:46:47.254391 kubelet[1807]: E0124 00:46:47.254267 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:46:47.297647 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3820628883.mount: Deactivated successfully. 
Jan 24 00:46:48.255630 kubelet[1807]: E0124 00:46:48.255481 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:46:48.723252 containerd[1480]: time="2026-01-24T00:46:48.722871531Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:46:48.724473 containerd[1480]: time="2026-01-24T00:46:48.724375240Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=63836480" Jan 24 00:46:48.725575 containerd[1480]: time="2026-01-24T00:46:48.725497769Z" level=info msg="ImageCreate event name:\"sha256:94a30ed18e45363486a55379bd1fbb8c479f36524ae816779f3553afe6b787ed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:46:48.730532 containerd[1480]: time="2026-01-24T00:46:48.730453838Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:32c5137cb8c7cf61e75836f150e983b9be21fecc642ada89fd936c8cd6c0faa0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:46:48.731954 containerd[1480]: time="2026-01-24T00:46:48.731800990Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:94a30ed18e45363486a55379bd1fbb8c479f36524ae816779f3553afe6b787ed\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:32c5137cb8c7cf61e75836f150e983b9be21fecc642ada89fd936c8cd6c0faa0\", size \"63836358\" in 6.04819042s" Jan 24 00:46:48.731954 containerd[1480]: time="2026-01-24T00:46:48.731864860Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:94a30ed18e45363486a55379bd1fbb8c479f36524ae816779f3553afe6b787ed\"" Jan 24 00:46:48.740421 containerd[1480]: time="2026-01-24T00:46:48.740298123Z" level=info msg="CreateContainer within sandbox \"20246624fae2bd33ba7564a023d4e0e556e9be6104c00d07b327d37ad81f18ad\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Jan 24 00:46:48.761943 containerd[1480]: time="2026-01-24T00:46:48.761869586Z" level=info msg="CreateContainer within sandbox \"20246624fae2bd33ba7564a023d4e0e556e9be6104c00d07b327d37ad81f18ad\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"b9079f1da25ca9f2c98f18f002b3388ef91b251d437faf4d295fa995dd1e1216\"" Jan 24 00:46:48.763082 containerd[1480]: time="2026-01-24T00:46:48.762979506Z" level=info msg="StartContainer for \"b9079f1da25ca9f2c98f18f002b3388ef91b251d437faf4d295fa995dd1e1216\"" Jan 24 00:46:48.818480 systemd[1]: Started cri-containerd-b9079f1da25ca9f2c98f18f002b3388ef91b251d437faf4d295fa995dd1e1216.scope - libcontainer container b9079f1da25ca9f2c98f18f002b3388ef91b251d437faf4d295fa995dd1e1216. 
Jan 24 00:46:48.877705 containerd[1480]: time="2026-01-24T00:46:48.877537815Z" level=info msg="StartContainer for \"b9079f1da25ca9f2c98f18f002b3388ef91b251d437faf4d295fa995dd1e1216\" returns successfully" Jan 24 00:46:49.256419 kubelet[1807]: E0124 00:46:49.256217 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:46:49.413085 kubelet[1807]: I0124 00:46:49.412727 1807 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-bb8f74bfb-ds94c" podStartSLOduration=13.362075931 podStartE2EDuration="19.412662747s" podCreationTimestamp="2026-01-24 00:46:30 +0000 UTC" firstStartedPulling="2026-01-24 00:46:42.682767955 +0000 UTC m=+59.321500038" lastFinishedPulling="2026-01-24 00:46:48.73335479 +0000 UTC m=+65.372086854" observedRunningTime="2026-01-24 00:46:49.412476249 +0000 UTC m=+66.051208332" watchObservedRunningTime="2026-01-24 00:46:49.412662747 +0000 UTC m=+66.051394830" Jan 24 00:46:50.257251 kubelet[1807]: E0124 00:46:50.256929 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:46:51.257429 kubelet[1807]: E0124 00:46:51.257293 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:46:52.257990 kubelet[1807]: E0124 00:46:52.257849 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:46:53.259349 kubelet[1807]: E0124 00:46:53.259077 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:46:53.695081 systemd[1]: Created slice kubepods-besteffort-pod1af2bf66_33d9_4d1e_b9ba_30fe9fac07c3.slice - libcontainer container kubepods-besteffort-pod1af2bf66_33d9_4d1e_b9ba_30fe9fac07c3.slice. Jan 24 00:46:53.800043 kubelet[1807]: I0124 00:46:53.799852 1807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/1af2bf66-33d9-4d1e-b9ba-30fe9fac07c3-data\") pod \"nfs-server-provisioner-0\" (UID: \"1af2bf66-33d9-4d1e-b9ba-30fe9fac07c3\") " pod="default/nfs-server-provisioner-0" Jan 24 00:46:53.800043 kubelet[1807]: I0124 00:46:53.799932 1807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqlqk\" (UniqueName: \"kubernetes.io/projected/1af2bf66-33d9-4d1e-b9ba-30fe9fac07c3-kube-api-access-cqlqk\") pod \"nfs-server-provisioner-0\" (UID: \"1af2bf66-33d9-4d1e-b9ba-30fe9fac07c3\") " pod="default/nfs-server-provisioner-0" Jan 24 00:46:54.005078 containerd[1480]: time="2026-01-24T00:46:54.004986224Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:1af2bf66-33d9-4d1e-b9ba-30fe9fac07c3,Namespace:default,Attempt:0,}" Jan 24 00:46:54.060445 systemd-networkd[1402]: lxca241b6b59a87: Link UP Jan 24 00:46:54.072333 kernel: eth0: renamed from tmp887ab Jan 24 00:46:54.081008 systemd-networkd[1402]: lxca241b6b59a87: Gained carrier Jan 24 00:46:54.260419 kubelet[1807]: E0124 00:46:54.260063 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:46:54.365293 containerd[1480]: time="2026-01-24T00:46:54.364913414Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:46:54.365544 containerd[1480]: time="2026-01-24T00:46:54.365307595Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:46:54.365544 containerd[1480]: time="2026-01-24T00:46:54.365339024Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:46:54.366657 containerd[1480]: time="2026-01-24T00:46:54.366467535Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:46:54.409580 systemd[1]: Started cri-containerd-887ab42a528bf4a4d797f422efbb7ba2afb182cc791339919db74b11aaf6a88f.scope - libcontainer container 887ab42a528bf4a4d797f422efbb7ba2afb182cc791339919db74b11aaf6a88f. Jan 24 00:46:54.428331 systemd-resolved[1405]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 24 00:46:54.469341 containerd[1480]: time="2026-01-24T00:46:54.469270386Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:1af2bf66-33d9-4d1e-b9ba-30fe9fac07c3,Namespace:default,Attempt:0,} returns sandbox id \"887ab42a528bf4a4d797f422efbb7ba2afb182cc791339919db74b11aaf6a88f\"" Jan 24 00:46:54.472666 containerd[1480]: time="2026-01-24T00:46:54.472489799Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Jan 24 00:46:55.262565 kubelet[1807]: E0124 00:46:55.262491 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:46:55.264432 systemd-networkd[1402]: lxca241b6b59a87: Gained IPv6LL Jan 24 00:46:56.263043 kubelet[1807]: E0124 00:46:56.262877 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:46:57.265351 kubelet[1807]: E0124 00:46:57.265032 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:46:58.846216 kubelet[1807]: E0124 00:46:58.840058 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:46:59.848461 kubelet[1807]: E0124 00:46:59.846365 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:47:00.894960 kubelet[1807]: E0124 00:47:00.894544 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:47:01.921806 kubelet[1807]: E0124 00:47:01.919660 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:47:02.929686 kubelet[1807]: E0124 00:47:02.928975 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:47:03.977244 kubelet[1807]: E0124 00:47:03.953843 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:47:04.993633 kubelet[1807]: E0124 00:47:04.978367 1807 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:47:05.042061 kubelet[1807]: E0124 00:47:04.977982 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, 
ignoring" path="/etc/kubernetes/manifests" Jan 24 00:47:05.290417 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2457135827.mount: Deactivated successfully. Jan 24 00:47:06.112600 kubelet[1807]: E0124 00:47:06.110728 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:47:07.116401 kubelet[1807]: E0124 00:47:07.115546 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:47:08.121281 kubelet[1807]: E0124 00:47:08.118992 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:47:09.123663 kubelet[1807]: E0124 00:47:09.123435 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:47:10.126515 kubelet[1807]: E0124 00:47:10.125967 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:47:11.129558 kubelet[1807]: E0124 00:47:11.129324 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:47:12.130250 kubelet[1807]: E0124 00:47:12.130043 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:47:12.486859 containerd[1480]: time="2026-01-24T00:47:12.484729265Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:47:12.490054 containerd[1480]: time="2026-01-24T00:47:12.489870341Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039406" Jan 24 00:47:12.492284 containerd[1480]: time="2026-01-24T00:47:12.492205845Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:47:12.501396 containerd[1480]: time="2026-01-24T00:47:12.501283950Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:47:12.503562 containerd[1480]: time="2026-01-24T00:47:12.503464926Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 18.030900929s" Jan 24 00:47:12.503562 containerd[1480]: time="2026-01-24T00:47:12.503546827Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Jan 24 00:47:12.530275 containerd[1480]: time="2026-01-24T00:47:12.529889284Z" level=info msg="CreateContainer within sandbox \"887ab42a528bf4a4d797f422efbb7ba2afb182cc791339919db74b11aaf6a88f\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Jan 24 00:47:12.570514 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3622669267.mount: 
Deactivated successfully. Jan 24 00:47:12.583440 containerd[1480]: time="2026-01-24T00:47:12.583339390Z" level=info msg="CreateContainer within sandbox \"887ab42a528bf4a4d797f422efbb7ba2afb182cc791339919db74b11aaf6a88f\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"87837c75547e02c80c6cf359e2e3b2824d641e772027fa20ccc9b91736a65671\"" Jan 24 00:47:12.585233 containerd[1480]: time="2026-01-24T00:47:12.584918072Z" level=info msg="StartContainer for \"87837c75547e02c80c6cf359e2e3b2824d641e772027fa20ccc9b91736a65671\"" Jan 24 00:47:12.865448 systemd[1]: Started cri-containerd-87837c75547e02c80c6cf359e2e3b2824d641e772027fa20ccc9b91736a65671.scope - libcontainer container 87837c75547e02c80c6cf359e2e3b2824d641e772027fa20ccc9b91736a65671. Jan 24 00:47:12.935097 containerd[1480]: time="2026-01-24T00:47:12.934935389Z" level=info msg="StartContainer for \"87837c75547e02c80c6cf359e2e3b2824d641e772027fa20ccc9b91736a65671\" returns successfully" Jan 24 00:47:13.134407 kubelet[1807]: E0124 00:47:13.133823 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:47:13.912218 kubelet[1807]: I0124 00:47:13.911683 1807 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=2.872612475 podStartE2EDuration="20.911587579s" podCreationTimestamp="2026-01-24 00:46:53 +0000 UTC" firstStartedPulling="2026-01-24 00:46:54.471921682 +0000 UTC m=+71.110653745" lastFinishedPulling="2026-01-24 00:47:12.510896786 +0000 UTC m=+89.149628849" observedRunningTime="2026-01-24 00:47:13.910798999 +0000 UTC m=+90.549531102" watchObservedRunningTime="2026-01-24 00:47:13.911587579 +0000 UTC m=+90.550319642" Jan 24 00:47:14.135891 kubelet[1807]: E0124 00:47:14.135497 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:47:15.137300 kubelet[1807]: E0124 00:47:15.136949 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:47:16.138326 kubelet[1807]: E0124 00:47:16.137880 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:47:17.139770 kubelet[1807]: E0124 00:47:17.139337 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:47:18.140998 kubelet[1807]: E0124 00:47:18.140235 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:47:18.705908 systemd[1]: Created slice kubepods-besteffort-pod7a86b0af_7dbb_4f24_a7fd_f43c2f3c32e5.slice - libcontainer container kubepods-besteffort-pod7a86b0af_7dbb_4f24_a7fd_f43c2f3c32e5.slice. 
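The pod_startup_latency_tracker entry for nfs-server-provisioner-0 above reports podStartE2EDuration="20.911587579s" against podStartSLOduration=2.872612475. The logged timestamps are consistent with the SLO figure being the end-to-end time minus the image-pull window (firstStartedPulling through lastFinishedPulling). Reproducing the arithmetic with the exact values from that entry:

```go
package main

import (
	"fmt"
	"time"
)

// Timestamps copied verbatim from the nfs-server-provisioner-0 tracker entry above.
func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, _ := time.Parse(layout, "2026-01-24 00:46:53 +0000 UTC")
	firstPull, _ := time.Parse(layout, "2026-01-24 00:46:54.471921682 +0000 UTC")
	lastPull, _ := time.Parse(layout, "2026-01-24 00:47:12.510896786 +0000 UTC")
	observed, _ := time.Parse(layout, "2026-01-24 00:47:13.911587579 +0000 UTC")

	e2e := observed.Sub(created)       // podStartE2EDuration
	pulling := lastPull.Sub(firstPull) // time spent pulling the image
	slo := e2e - pulling               // consistent with podStartSLOduration

	fmt.Println(e2e, pulling, slo)
	// -> 20.911587579s 18.038975104s 2.872612475s, matching the logged values
}
```

The earlier trackers for kube-proxy-7scf7, cilium-gpjkz and nginx-deployment-bb8f74bfb-ds94c line up the same way, to within clock-source rounding.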
Jan 24 00:47:18.855778 kubelet[1807]: I0124 00:47:18.849969 1807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c557d7e7-ce0d-428a-ad6b-875898071cf6\" (UniqueName: \"kubernetes.io/nfs/7a86b0af-7dbb-4f24-a7fd-f43c2f3c32e5-pvc-c557d7e7-ce0d-428a-ad6b-875898071cf6\") pod \"test-pod-1\" (UID: \"7a86b0af-7dbb-4f24-a7fd-f43c2f3c32e5\") " pod="default/test-pod-1" Jan 24 00:47:18.855778 kubelet[1807]: I0124 00:47:18.850772 1807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klfc8\" (UniqueName: \"kubernetes.io/projected/7a86b0af-7dbb-4f24-a7fd-f43c2f3c32e5-kube-api-access-klfc8\") pod \"test-pod-1\" (UID: \"7a86b0af-7dbb-4f24-a7fd-f43c2f3c32e5\") " pod="default/test-pod-1" Jan 24 00:47:19.141246 kubelet[1807]: E0124 00:47:19.141067 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:47:19.228858 kernel: FS-Cache: Loaded Jan 24 00:47:19.456355 kernel: RPC: Registered named UNIX socket transport module. Jan 24 00:47:19.456502 kernel: RPC: Registered udp transport module. Jan 24 00:47:19.456541 kernel: RPC: Registered tcp transport module. Jan 24 00:47:19.473039 kernel: RPC: Registered tcp-with-tls transport module. Jan 24 00:47:19.473302 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Jan 24 00:47:20.055033 kernel: NFS: Registering the id_resolver key type Jan 24 00:47:20.055521 kernel: Key type id_resolver registered Jan 24 00:47:20.055571 kernel: Key type id_legacy registered Jan 24 00:47:20.143219 kubelet[1807]: E0124 00:47:20.142895 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:47:20.201568 nfsidmap[3238]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Jan 24 00:47:20.223020 nfsidmap[3241]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Jan 24 00:47:20.574335 containerd[1480]: time="2026-01-24T00:47:20.573500093Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:7a86b0af-7dbb-4f24-a7fd-f43c2f3c32e5,Namespace:default,Attempt:0,}" Jan 24 00:47:20.707065 systemd-networkd[1402]: lxc17dffc1fb0a7: Link UP Jan 24 00:47:20.721678 kernel: eth0: renamed from tmp0fac2 Jan 24 00:47:20.731792 systemd-networkd[1402]: lxc17dffc1fb0a7: Gained carrier Jan 24 00:47:21.145192 kubelet[1807]: E0124 00:47:21.144774 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:47:21.322322 containerd[1480]: time="2026-01-24T00:47:21.321505947Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:47:21.324559 containerd[1480]: time="2026-01-24T00:47:21.322740862Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:47:21.328416 containerd[1480]: time="2026-01-24T00:47:21.327415155Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:47:21.328416 containerd[1480]: time="2026-01-24T00:47:21.327727821Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:47:21.406401 systemd[1]: run-containerd-runc-k8s.io-0fac26ca276f369e2b2c69bc9ea2e4e08ae62c098f6e5ec7e19c9e6d2c0eb1fd-runc.cmpaoi.mount: Deactivated successfully. Jan 24 00:47:21.424674 systemd[1]: Started cri-containerd-0fac26ca276f369e2b2c69bc9ea2e4e08ae62c098f6e5ec7e19c9e6d2c0eb1fd.scope - libcontainer container 0fac26ca276f369e2b2c69bc9ea2e4e08ae62c098f6e5ec7e19c9e6d2c0eb1fd. Jan 24 00:47:21.448088 systemd-resolved[1405]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 24 00:47:21.538006 containerd[1480]: time="2026-01-24T00:47:21.537834264Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:7a86b0af-7dbb-4f24-a7fd-f43c2f3c32e5,Namespace:default,Attempt:0,} returns sandbox id \"0fac26ca276f369e2b2c69bc9ea2e4e08ae62c098f6e5ec7e19c9e6d2c0eb1fd\"" Jan 24 00:47:21.541038 containerd[1480]: time="2026-01-24T00:47:21.540912619Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 24 00:47:21.700317 containerd[1480]: time="2026-01-24T00:47:21.699969163Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:47:21.707995 containerd[1480]: time="2026-01-24T00:47:21.707760589Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Jan 24 00:47:21.715554 containerd[1480]: time="2026-01-24T00:47:21.715062973Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:94a30ed18e45363486a55379bd1fbb8c479f36524ae816779f3553afe6b787ed\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:32c5137cb8c7cf61e75836f150e983b9be21fecc642ada89fd936c8cd6c0faa0\", size \"63836358\" in 174.076107ms" Jan 24 00:47:21.715554 containerd[1480]: time="2026-01-24T00:47:21.715239217Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:94a30ed18e45363486a55379bd1fbb8c479f36524ae816779f3553afe6b787ed\"" Jan 24 00:47:21.732309 containerd[1480]: time="2026-01-24T00:47:21.732213503Z" level=info msg="CreateContainer within sandbox \"0fac26ca276f369e2b2c69bc9ea2e4e08ae62c098f6e5ec7e19c9e6d2c0eb1fd\" for container &ContainerMetadata{Name:test,Attempt:0,}" Jan 24 00:47:21.774984 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3164719124.mount: Deactivated successfully. Jan 24 00:47:21.781514 containerd[1480]: time="2026-01-24T00:47:21.781363483Z" level=info msg="CreateContainer within sandbox \"0fac26ca276f369e2b2c69bc9ea2e4e08ae62c098f6e5ec7e19c9e6d2c0eb1fd\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"bdf1d0e18094f27d6d84b33dc9d0e6ee3c0168f1b93f615cd981a35b458d94f9\"" Jan 24 00:47:21.783203 containerd[1480]: time="2026-01-24T00:47:21.783050810Z" level=info msg="StartContainer for \"bdf1d0e18094f27d6d84b33dc9d0e6ee3c0168f1b93f615cd981a35b458d94f9\"" Jan 24 00:47:21.822846 systemd-networkd[1402]: lxc17dffc1fb0a7: Gained IPv6LL Jan 24 00:47:21.910440 systemd[1]: Started cri-containerd-bdf1d0e18094f27d6d84b33dc9d0e6ee3c0168f1b93f615cd981a35b458d94f9.scope - libcontainer container bdf1d0e18094f27d6d84b33dc9d0e6ee3c0168f1b93f615cd981a35b458d94f9. 
Jan 24 00:47:21.987567 containerd[1480]: time="2026-01-24T00:47:21.987412974Z" level=info msg="StartContainer for \"bdf1d0e18094f27d6d84b33dc9d0e6ee3c0168f1b93f615cd981a35b458d94f9\" returns successfully" Jan 24 00:47:22.146089 kubelet[1807]: E0124 00:47:22.145554 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:47:23.146831 kubelet[1807]: E0124 00:47:23.146519 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:47:24.147998 kubelet[1807]: E0124 00:47:24.147829 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:47:24.836050 kubelet[1807]: I0124 00:47:24.833685 1807 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=30.6532698 podStartE2EDuration="30.833658788s" podCreationTimestamp="2026-01-24 00:46:54 +0000 UTC" firstStartedPulling="2026-01-24 00:47:21.539643453 +0000 UTC m=+98.178375516" lastFinishedPulling="2026-01-24 00:47:21.72003244 +0000 UTC m=+98.358764504" observedRunningTime="2026-01-24 00:47:23.02406636 +0000 UTC m=+99.662798423" watchObservedRunningTime="2026-01-24 00:47:24.833658788 +0000 UTC m=+101.472390851" Jan 24 00:47:24.873039 kubelet[1807]: E0124 00:47:24.872992 1807 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:47:24.933101 containerd[1480]: time="2026-01-24T00:47:24.932908571Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 24 00:47:24.961259 containerd[1480]: time="2026-01-24T00:47:24.960613326Z" level=info msg="StopContainer for \"73aea0c2b166fc8f55e87be3f04d7a302dd56615d9d7c51d586837193580f8f2\" with timeout 2 (s)" Jan 24 00:47:24.961259 containerd[1480]: time="2026-01-24T00:47:24.961213893Z" level=info msg="Stop container \"73aea0c2b166fc8f55e87be3f04d7a302dd56615d9d7c51d586837193580f8f2\" with signal terminated" Jan 24 00:47:24.976395 systemd-networkd[1402]: lxc_health: Link DOWN Jan 24 00:47:24.976407 systemd-networkd[1402]: lxc_health: Lost carrier Jan 24 00:47:25.000976 systemd[1]: cri-containerd-73aea0c2b166fc8f55e87be3f04d7a302dd56615d9d7c51d586837193580f8f2.scope: Deactivated successfully. Jan 24 00:47:25.003260 systemd[1]: cri-containerd-73aea0c2b166fc8f55e87be3f04d7a302dd56615d9d7c51d586837193580f8f2.scope: Consumed 20.346s CPU time. Jan 24 00:47:25.056443 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-73aea0c2b166fc8f55e87be3f04d7a302dd56615d9d7c51d586837193580f8f2-rootfs.mount: Deactivated successfully. 
Jan 24 00:47:25.082088 containerd[1480]: time="2026-01-24T00:47:25.081881311Z" level=info msg="shim disconnected" id=73aea0c2b166fc8f55e87be3f04d7a302dd56615d9d7c51d586837193580f8f2 namespace=k8s.io Jan 24 00:47:25.082088 containerd[1480]: time="2026-01-24T00:47:25.081948094Z" level=warning msg="cleaning up after shim disconnected" id=73aea0c2b166fc8f55e87be3f04d7a302dd56615d9d7c51d586837193580f8f2 namespace=k8s.io Jan 24 00:47:25.082088 containerd[1480]: time="2026-01-24T00:47:25.081960817Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 24 00:47:25.117329 containerd[1480]: time="2026-01-24T00:47:25.116902515Z" level=info msg="StopContainer for \"73aea0c2b166fc8f55e87be3f04d7a302dd56615d9d7c51d586837193580f8f2\" returns successfully" Jan 24 00:47:25.118635 containerd[1480]: time="2026-01-24T00:47:25.118320581Z" level=info msg="StopPodSandbox for \"9e27080bd9e9f52c9b91d1e35cfeea3f32673e0cc8dcb1d5eb698fad56e3709d\"" Jan 24 00:47:25.118635 containerd[1480]: time="2026-01-24T00:47:25.118416528Z" level=info msg="Container to stop \"1dc1464f733a9ac6a40d2677a1edf36981fc669c0e87daa3dd108baa4608137e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 24 00:47:25.118635 containerd[1480]: time="2026-01-24T00:47:25.118439590Z" level=info msg="Container to stop \"73aea0c2b166fc8f55e87be3f04d7a302dd56615d9d7c51d586837193580f8f2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 24 00:47:25.118635 containerd[1480]: time="2026-01-24T00:47:25.118457704Z" level=info msg="Container to stop \"57131b892b0d77f45392aa9bcce5cebf3f228ccb3ddbea6b3c9321d3d2d81c38\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 24 00:47:25.118635 containerd[1480]: time="2026-01-24T00:47:25.118475748Z" level=info msg="Container to stop \"21b3fb7b5d23b2c340808814b425c90ec9f329b0c3c412100758290fb0ac61da\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 24 00:47:25.118635 containerd[1480]: time="2026-01-24T00:47:25.118489803Z" level=info msg="Container to stop \"1191795c486ab82b8fc2168f4295d4fb9d8ac7dc4c90464ba1b70da635c9022e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 24 00:47:25.122955 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9e27080bd9e9f52c9b91d1e35cfeea3f32673e0cc8dcb1d5eb698fad56e3709d-shm.mount: Deactivated successfully. Jan 24 00:47:25.138352 systemd[1]: cri-containerd-9e27080bd9e9f52c9b91d1e35cfeea3f32673e0cc8dcb1d5eb698fad56e3709d.scope: Deactivated successfully. Jan 24 00:47:25.148343 kubelet[1807]: E0124 00:47:25.148264 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:47:25.195626 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9e27080bd9e9f52c9b91d1e35cfeea3f32673e0cc8dcb1d5eb698fad56e3709d-rootfs.mount: Deactivated successfully. 
Jan 24 00:47:25.208093 containerd[1480]: time="2026-01-24T00:47:25.207839963Z" level=info msg="shim disconnected" id=9e27080bd9e9f52c9b91d1e35cfeea3f32673e0cc8dcb1d5eb698fad56e3709d namespace=k8s.io Jan 24 00:47:25.208093 containerd[1480]: time="2026-01-24T00:47:25.207890165Z" level=warning msg="cleaning up after shim disconnected" id=9e27080bd9e9f52c9b91d1e35cfeea3f32673e0cc8dcb1d5eb698fad56e3709d namespace=k8s.io Jan 24 00:47:25.208093 containerd[1480]: time="2026-01-24T00:47:25.207900023Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 24 00:47:25.232299 containerd[1480]: time="2026-01-24T00:47:25.232049055Z" level=warning msg="cleanup warnings time=\"2026-01-24T00:47:25Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 24 00:47:25.234396 containerd[1480]: time="2026-01-24T00:47:25.234274242Z" level=info msg="TearDown network for sandbox \"9e27080bd9e9f52c9b91d1e35cfeea3f32673e0cc8dcb1d5eb698fad56e3709d\" successfully" Jan 24 00:47:25.234396 containerd[1480]: time="2026-01-24T00:47:25.234367584Z" level=info msg="StopPodSandbox for \"9e27080bd9e9f52c9b91d1e35cfeea3f32673e0cc8dcb1d5eb698fad56e3709d\" returns successfully" Jan 24 00:47:25.390264 kubelet[1807]: I0124 00:47:25.389955 1807 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/71ca35ef-dd98-4a7a-96f3-457de11743ce-cni-path\") pod \"71ca35ef-dd98-4a7a-96f3-457de11743ce\" (UID: \"71ca35ef-dd98-4a7a-96f3-457de11743ce\") " Jan 24 00:47:25.390264 kubelet[1807]: I0124 00:47:25.389952 1807 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/71ca35ef-dd98-4a7a-96f3-457de11743ce-cni-path" (OuterVolumeSpecName: "cni-path") pod "71ca35ef-dd98-4a7a-96f3-457de11743ce" (UID: "71ca35ef-dd98-4a7a-96f3-457de11743ce"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 24 00:47:25.390264 kubelet[1807]: I0124 00:47:25.390091 1807 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/71ca35ef-dd98-4a7a-96f3-457de11743ce-xtables-lock\") pod \"71ca35ef-dd98-4a7a-96f3-457de11743ce\" (UID: \"71ca35ef-dd98-4a7a-96f3-457de11743ce\") " Jan 24 00:47:25.390264 kubelet[1807]: I0124 00:47:25.390220 1807 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/71ca35ef-dd98-4a7a-96f3-457de11743ce-clustermesh-secrets\") pod \"71ca35ef-dd98-4a7a-96f3-457de11743ce\" (UID: \"71ca35ef-dd98-4a7a-96f3-457de11743ce\") " Jan 24 00:47:25.390264 kubelet[1807]: I0124 00:47:25.390243 1807 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/71ca35ef-dd98-4a7a-96f3-457de11743ce-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "71ca35ef-dd98-4a7a-96f3-457de11743ce" (UID: "71ca35ef-dd98-4a7a-96f3-457de11743ce"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 24 00:47:25.390621 kubelet[1807]: I0124 00:47:25.390261 1807 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/71ca35ef-dd98-4a7a-96f3-457de11743ce-cilium-cgroup\") pod \"71ca35ef-dd98-4a7a-96f3-457de11743ce\" (UID: \"71ca35ef-dd98-4a7a-96f3-457de11743ce\") " Jan 24 00:47:25.390621 kubelet[1807]: I0124 00:47:25.390298 1807 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/71ca35ef-dd98-4a7a-96f3-457de11743ce-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "71ca35ef-dd98-4a7a-96f3-457de11743ce" (UID: "71ca35ef-dd98-4a7a-96f3-457de11743ce"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 24 00:47:25.390621 kubelet[1807]: I0124 00:47:25.390342 1807 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/71ca35ef-dd98-4a7a-96f3-457de11743ce-cilium-run\") pod \"71ca35ef-dd98-4a7a-96f3-457de11743ce\" (UID: \"71ca35ef-dd98-4a7a-96f3-457de11743ce\") " Jan 24 00:47:25.390621 kubelet[1807]: I0124 00:47:25.390421 1807 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/71ca35ef-dd98-4a7a-96f3-457de11743ce-hostproc\") pod \"71ca35ef-dd98-4a7a-96f3-457de11743ce\" (UID: \"71ca35ef-dd98-4a7a-96f3-457de11743ce\") " Jan 24 00:47:25.390621 kubelet[1807]: I0124 00:47:25.390455 1807 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/71ca35ef-dd98-4a7a-96f3-457de11743ce-host-proc-sys-net\") pod \"71ca35ef-dd98-4a7a-96f3-457de11743ce\" (UID: \"71ca35ef-dd98-4a7a-96f3-457de11743ce\") " Jan 24 00:47:25.390621 kubelet[1807]: I0124 00:47:25.390481 1807 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/71ca35ef-dd98-4a7a-96f3-457de11743ce-etc-cni-netd\") pod \"71ca35ef-dd98-4a7a-96f3-457de11743ce\" (UID: \"71ca35ef-dd98-4a7a-96f3-457de11743ce\") " Jan 24 00:47:25.390779 kubelet[1807]: I0124 00:47:25.390511 1807 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/71ca35ef-dd98-4a7a-96f3-457de11743ce-cilium-config-path\") pod \"71ca35ef-dd98-4a7a-96f3-457de11743ce\" (UID: \"71ca35ef-dd98-4a7a-96f3-457de11743ce\") " Jan 24 00:47:25.390779 kubelet[1807]: I0124 00:47:25.390605 1807 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/71ca35ef-dd98-4a7a-96f3-457de11743ce-hubble-tls\") pod \"71ca35ef-dd98-4a7a-96f3-457de11743ce\" (UID: \"71ca35ef-dd98-4a7a-96f3-457de11743ce\") " Jan 24 00:47:25.390779 kubelet[1807]: I0124 00:47:25.390638 1807 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/71ca35ef-dd98-4a7a-96f3-457de11743ce-bpf-maps\") pod \"71ca35ef-dd98-4a7a-96f3-457de11743ce\" (UID: \"71ca35ef-dd98-4a7a-96f3-457de11743ce\") " Jan 24 00:47:25.390779 kubelet[1807]: I0124 00:47:25.390660 1807 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/71ca35ef-dd98-4a7a-96f3-457de11743ce-lib-modules\") pod 
\"71ca35ef-dd98-4a7a-96f3-457de11743ce\" (UID: \"71ca35ef-dd98-4a7a-96f3-457de11743ce\") " Jan 24 00:47:25.390779 kubelet[1807]: I0124 00:47:25.390685 1807 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/71ca35ef-dd98-4a7a-96f3-457de11743ce-host-proc-sys-kernel\") pod \"71ca35ef-dd98-4a7a-96f3-457de11743ce\" (UID: \"71ca35ef-dd98-4a7a-96f3-457de11743ce\") " Jan 24 00:47:25.390779 kubelet[1807]: I0124 00:47:25.390716 1807 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ffprh\" (UniqueName: \"kubernetes.io/projected/71ca35ef-dd98-4a7a-96f3-457de11743ce-kube-api-access-ffprh\") pod \"71ca35ef-dd98-4a7a-96f3-457de11743ce\" (UID: \"71ca35ef-dd98-4a7a-96f3-457de11743ce\") " Jan 24 00:47:25.390904 kubelet[1807]: I0124 00:47:25.390778 1807 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/71ca35ef-dd98-4a7a-96f3-457de11743ce-cni-path\") on node \"10.0.0.146\" DevicePath \"\"" Jan 24 00:47:25.390904 kubelet[1807]: I0124 00:47:25.390795 1807 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/71ca35ef-dd98-4a7a-96f3-457de11743ce-xtables-lock\") on node \"10.0.0.146\" DevicePath \"\"" Jan 24 00:47:25.390904 kubelet[1807]: I0124 00:47:25.390813 1807 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/71ca35ef-dd98-4a7a-96f3-457de11743ce-cilium-cgroup\") on node \"10.0.0.146\" DevicePath \"\"" Jan 24 00:47:25.391983 kubelet[1807]: I0124 00:47:25.391737 1807 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/71ca35ef-dd98-4a7a-96f3-457de11743ce-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "71ca35ef-dd98-4a7a-96f3-457de11743ce" (UID: "71ca35ef-dd98-4a7a-96f3-457de11743ce"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 24 00:47:25.391983 kubelet[1807]: I0124 00:47:25.391779 1807 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/71ca35ef-dd98-4a7a-96f3-457de11743ce-hostproc" (OuterVolumeSpecName: "hostproc") pod "71ca35ef-dd98-4a7a-96f3-457de11743ce" (UID: "71ca35ef-dd98-4a7a-96f3-457de11743ce"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 24 00:47:25.391983 kubelet[1807]: I0124 00:47:25.391804 1807 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/71ca35ef-dd98-4a7a-96f3-457de11743ce-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "71ca35ef-dd98-4a7a-96f3-457de11743ce" (UID: "71ca35ef-dd98-4a7a-96f3-457de11743ce"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 24 00:47:25.392084 kubelet[1807]: I0124 00:47:25.391982 1807 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/71ca35ef-dd98-4a7a-96f3-457de11743ce-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "71ca35ef-dd98-4a7a-96f3-457de11743ce" (UID: "71ca35ef-dd98-4a7a-96f3-457de11743ce"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 24 00:47:25.395475 kubelet[1807]: I0124 00:47:25.395356 1807 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/71ca35ef-dd98-4a7a-96f3-457de11743ce-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "71ca35ef-dd98-4a7a-96f3-457de11743ce" (UID: "71ca35ef-dd98-4a7a-96f3-457de11743ce"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 24 00:47:25.395475 kubelet[1807]: I0124 00:47:25.395439 1807 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/71ca35ef-dd98-4a7a-96f3-457de11743ce-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "71ca35ef-dd98-4a7a-96f3-457de11743ce" (UID: "71ca35ef-dd98-4a7a-96f3-457de11743ce"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 24 00:47:25.395475 kubelet[1807]: I0124 00:47:25.395470 1807 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/71ca35ef-dd98-4a7a-96f3-457de11743ce-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "71ca35ef-dd98-4a7a-96f3-457de11743ce" (UID: "71ca35ef-dd98-4a7a-96f3-457de11743ce"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 24 00:47:25.397799 kubelet[1807]: I0124 00:47:25.397600 1807 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71ca35ef-dd98-4a7a-96f3-457de11743ce-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "71ca35ef-dd98-4a7a-96f3-457de11743ce" (UID: "71ca35ef-dd98-4a7a-96f3-457de11743ce"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 24 00:47:25.398819 systemd[1]: var-lib-kubelet-pods-71ca35ef\x2ddd98\x2d4a7a\x2d96f3\x2d457de11743ce-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 24 00:47:25.399388 kubelet[1807]: I0124 00:47:25.398956 1807 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71ca35ef-dd98-4a7a-96f3-457de11743ce-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "71ca35ef-dd98-4a7a-96f3-457de11743ce" (UID: "71ca35ef-dd98-4a7a-96f3-457de11743ce"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 24 00:47:25.400678 kubelet[1807]: I0124 00:47:25.400522 1807 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71ca35ef-dd98-4a7a-96f3-457de11743ce-kube-api-access-ffprh" (OuterVolumeSpecName: "kube-api-access-ffprh") pod "71ca35ef-dd98-4a7a-96f3-457de11743ce" (UID: "71ca35ef-dd98-4a7a-96f3-457de11743ce"). InnerVolumeSpecName "kube-api-access-ffprh". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 24 00:47:25.400678 kubelet[1807]: I0124 00:47:25.400624 1807 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71ca35ef-dd98-4a7a-96f3-457de11743ce-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "71ca35ef-dd98-4a7a-96f3-457de11743ce" (UID: "71ca35ef-dd98-4a7a-96f3-457de11743ce"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 24 00:47:25.491298 kubelet[1807]: I0124 00:47:25.491086 1807 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/71ca35ef-dd98-4a7a-96f3-457de11743ce-bpf-maps\") on node \"10.0.0.146\" DevicePath \"\"" Jan 24 00:47:25.491298 kubelet[1807]: I0124 00:47:25.491220 1807 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/71ca35ef-dd98-4a7a-96f3-457de11743ce-lib-modules\") on node \"10.0.0.146\" DevicePath \"\"" Jan 24 00:47:25.491298 kubelet[1807]: I0124 00:47:25.491232 1807 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/71ca35ef-dd98-4a7a-96f3-457de11743ce-host-proc-sys-kernel\") on node \"10.0.0.146\" DevicePath \"\"" Jan 24 00:47:25.491298 kubelet[1807]: I0124 00:47:25.491242 1807 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ffprh\" (UniqueName: \"kubernetes.io/projected/71ca35ef-dd98-4a7a-96f3-457de11743ce-kube-api-access-ffprh\") on node \"10.0.0.146\" DevicePath \"\"" Jan 24 00:47:25.491298 kubelet[1807]: I0124 00:47:25.491251 1807 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/71ca35ef-dd98-4a7a-96f3-457de11743ce-clustermesh-secrets\") on node \"10.0.0.146\" DevicePath \"\"" Jan 24 00:47:25.491298 kubelet[1807]: I0124 00:47:25.491258 1807 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/71ca35ef-dd98-4a7a-96f3-457de11743ce-cilium-run\") on node \"10.0.0.146\" DevicePath \"\"" Jan 24 00:47:25.491298 kubelet[1807]: I0124 00:47:25.491265 1807 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/71ca35ef-dd98-4a7a-96f3-457de11743ce-hostproc\") on node \"10.0.0.146\" DevicePath \"\"" Jan 24 00:47:25.491298 kubelet[1807]: I0124 00:47:25.491272 1807 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/71ca35ef-dd98-4a7a-96f3-457de11743ce-host-proc-sys-net\") on node \"10.0.0.146\" DevicePath \"\"" Jan 24 00:47:25.491782 kubelet[1807]: I0124 00:47:25.491279 1807 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/71ca35ef-dd98-4a7a-96f3-457de11743ce-etc-cni-netd\") on node \"10.0.0.146\" DevicePath \"\"" Jan 24 00:47:25.491782 kubelet[1807]: I0124 00:47:25.491287 1807 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/71ca35ef-dd98-4a7a-96f3-457de11743ce-cilium-config-path\") on node \"10.0.0.146\" DevicePath \"\"" Jan 24 00:47:25.491782 kubelet[1807]: I0124 00:47:25.491294 1807 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/71ca35ef-dd98-4a7a-96f3-457de11743ce-hubble-tls\") on node \"10.0.0.146\" DevicePath \"\"" Jan 24 00:47:25.646337 kubelet[1807]: E0124 00:47:25.646016 1807 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 24 00:47:25.898698 systemd[1]: var-lib-kubelet-pods-71ca35ef\x2ddd98\x2d4a7a\x2d96f3\x2d457de11743ce-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dffprh.mount: Deactivated successfully. 
Jan 24 00:47:25.898847 systemd[1]: var-lib-kubelet-pods-71ca35ef\x2ddd98\x2d4a7a\x2d96f3\x2d457de11743ce-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 24 00:47:25.962746 kubelet[1807]: I0124 00:47:25.962666 1807 scope.go:117] "RemoveContainer" containerID="73aea0c2b166fc8f55e87be3f04d7a302dd56615d9d7c51d586837193580f8f2" Jan 24 00:47:25.965642 containerd[1480]: time="2026-01-24T00:47:25.964699587Z" level=info msg="RemoveContainer for \"73aea0c2b166fc8f55e87be3f04d7a302dd56615d9d7c51d586837193580f8f2\"" Jan 24 00:47:25.972036 systemd[1]: Removed slice kubepods-burstable-pod71ca35ef_dd98_4a7a_96f3_457de11743ce.slice - libcontainer container kubepods-burstable-pod71ca35ef_dd98_4a7a_96f3_457de11743ce.slice. Jan 24 00:47:25.972275 systemd[1]: kubepods-burstable-pod71ca35ef_dd98_4a7a_96f3_457de11743ce.slice: Consumed 21.069s CPU time. Jan 24 00:47:25.972830 containerd[1480]: time="2026-01-24T00:47:25.972575431Z" level=info msg="RemoveContainer for \"73aea0c2b166fc8f55e87be3f04d7a302dd56615d9d7c51d586837193580f8f2\" returns successfully" Jan 24 00:47:25.973302 kubelet[1807]: I0124 00:47:25.973269 1807 scope.go:117] "RemoveContainer" containerID="1191795c486ab82b8fc2168f4295d4fb9d8ac7dc4c90464ba1b70da635c9022e" Jan 24 00:47:25.975817 containerd[1480]: time="2026-01-24T00:47:25.975747596Z" level=info msg="RemoveContainer for \"1191795c486ab82b8fc2168f4295d4fb9d8ac7dc4c90464ba1b70da635c9022e\"" Jan 24 00:47:25.981984 containerd[1480]: time="2026-01-24T00:47:25.981885191Z" level=info msg="RemoveContainer for \"1191795c486ab82b8fc2168f4295d4fb9d8ac7dc4c90464ba1b70da635c9022e\" returns successfully" Jan 24 00:47:25.982644 kubelet[1807]: I0124 00:47:25.982429 1807 scope.go:117] "RemoveContainer" containerID="1dc1464f733a9ac6a40d2677a1edf36981fc669c0e87daa3dd108baa4608137e" Jan 24 00:47:25.984239 containerd[1480]: time="2026-01-24T00:47:25.984088111Z" level=info msg="RemoveContainer for \"1dc1464f733a9ac6a40d2677a1edf36981fc669c0e87daa3dd108baa4608137e\"" Jan 24 00:47:25.990011 containerd[1480]: time="2026-01-24T00:47:25.989830927Z" level=info msg="RemoveContainer for \"1dc1464f733a9ac6a40d2677a1edf36981fc669c0e87daa3dd108baa4608137e\" returns successfully" Jan 24 00:47:25.990710 kubelet[1807]: I0124 00:47:25.990598 1807 scope.go:117] "RemoveContainer" containerID="21b3fb7b5d23b2c340808814b425c90ec9f329b0c3c412100758290fb0ac61da" Jan 24 00:47:25.993894 containerd[1480]: time="2026-01-24T00:47:25.993803831Z" level=info msg="RemoveContainer for \"21b3fb7b5d23b2c340808814b425c90ec9f329b0c3c412100758290fb0ac61da\"" Jan 24 00:47:26.001843 containerd[1480]: time="2026-01-24T00:47:26.001636792Z" level=info msg="RemoveContainer for \"21b3fb7b5d23b2c340808814b425c90ec9f329b0c3c412100758290fb0ac61da\" returns successfully" Jan 24 00:47:26.002265 kubelet[1807]: I0124 00:47:26.002067 1807 scope.go:117] "RemoveContainer" containerID="57131b892b0d77f45392aa9bcce5cebf3f228ccb3ddbea6b3c9321d3d2d81c38" Jan 24 00:47:26.004637 containerd[1480]: time="2026-01-24T00:47:26.004428982Z" level=info msg="RemoveContainer for \"57131b892b0d77f45392aa9bcce5cebf3f228ccb3ddbea6b3c9321d3d2d81c38\"" Jan 24 00:47:26.012215 containerd[1480]: time="2026-01-24T00:47:26.011991589Z" level=info msg="RemoveContainer for \"57131b892b0d77f45392aa9bcce5cebf3f228ccb3ddbea6b3c9321d3d2d81c38\" returns successfully" Jan 24 00:47:26.012427 kubelet[1807]: I0124 00:47:26.012388 1807 scope.go:117] "RemoveContainer" containerID="73aea0c2b166fc8f55e87be3f04d7a302dd56615d9d7c51d586837193580f8f2" Jan 24 00:47:26.013007 
containerd[1480]: time="2026-01-24T00:47:26.012729421Z" level=error msg="ContainerStatus for \"73aea0c2b166fc8f55e87be3f04d7a302dd56615d9d7c51d586837193580f8f2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"73aea0c2b166fc8f55e87be3f04d7a302dd56615d9d7c51d586837193580f8f2\": not found" Jan 24 00:47:26.013423 kubelet[1807]: E0124 00:47:26.013290 1807 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"73aea0c2b166fc8f55e87be3f04d7a302dd56615d9d7c51d586837193580f8f2\": not found" containerID="73aea0c2b166fc8f55e87be3f04d7a302dd56615d9d7c51d586837193580f8f2" Jan 24 00:47:26.013485 kubelet[1807]: I0124 00:47:26.013380 1807 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"73aea0c2b166fc8f55e87be3f04d7a302dd56615d9d7c51d586837193580f8f2"} err="failed to get container status \"73aea0c2b166fc8f55e87be3f04d7a302dd56615d9d7c51d586837193580f8f2\": rpc error: code = NotFound desc = an error occurred when try to find container \"73aea0c2b166fc8f55e87be3f04d7a302dd56615d9d7c51d586837193580f8f2\": not found" Jan 24 00:47:26.013596 kubelet[1807]: I0124 00:47:26.013485 1807 scope.go:117] "RemoveContainer" containerID="1191795c486ab82b8fc2168f4295d4fb9d8ac7dc4c90464ba1b70da635c9022e" Jan 24 00:47:26.013929 containerd[1480]: time="2026-01-24T00:47:26.013838229Z" level=error msg="ContainerStatus for \"1191795c486ab82b8fc2168f4295d4fb9d8ac7dc4c90464ba1b70da635c9022e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1191795c486ab82b8fc2168f4295d4fb9d8ac7dc4c90464ba1b70da635c9022e\": not found" Jan 24 00:47:26.014336 kubelet[1807]: E0124 00:47:26.014227 1807 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1191795c486ab82b8fc2168f4295d4fb9d8ac7dc4c90464ba1b70da635c9022e\": not found" containerID="1191795c486ab82b8fc2168f4295d4fb9d8ac7dc4c90464ba1b70da635c9022e" Jan 24 00:47:26.014336 kubelet[1807]: I0124 00:47:26.014299 1807 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1191795c486ab82b8fc2168f4295d4fb9d8ac7dc4c90464ba1b70da635c9022e"} err="failed to get container status \"1191795c486ab82b8fc2168f4295d4fb9d8ac7dc4c90464ba1b70da635c9022e\": rpc error: code = NotFound desc = an error occurred when try to find container \"1191795c486ab82b8fc2168f4295d4fb9d8ac7dc4c90464ba1b70da635c9022e\": not found" Jan 24 00:47:26.014336 kubelet[1807]: I0124 00:47:26.014326 1807 scope.go:117] "RemoveContainer" containerID="1dc1464f733a9ac6a40d2677a1edf36981fc669c0e87daa3dd108baa4608137e" Jan 24 00:47:26.014758 containerd[1480]: time="2026-01-24T00:47:26.014641532Z" level=error msg="ContainerStatus for \"1dc1464f733a9ac6a40d2677a1edf36981fc669c0e87daa3dd108baa4608137e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1dc1464f733a9ac6a40d2677a1edf36981fc669c0e87daa3dd108baa4608137e\": not found" Jan 24 00:47:26.014809 kubelet[1807]: E0124 00:47:26.014792 1807 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1dc1464f733a9ac6a40d2677a1edf36981fc669c0e87daa3dd108baa4608137e\": not found" containerID="1dc1464f733a9ac6a40d2677a1edf36981fc669c0e87daa3dd108baa4608137e" Jan 24 00:47:26.014855 kubelet[1807]: I0124 
00:47:26.014818 1807 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1dc1464f733a9ac6a40d2677a1edf36981fc669c0e87daa3dd108baa4608137e"} err="failed to get container status \"1dc1464f733a9ac6a40d2677a1edf36981fc669c0e87daa3dd108baa4608137e\": rpc error: code = NotFound desc = an error occurred when try to find container \"1dc1464f733a9ac6a40d2677a1edf36981fc669c0e87daa3dd108baa4608137e\": not found" Jan 24 00:47:26.014855 kubelet[1807]: I0124 00:47:26.014839 1807 scope.go:117] "RemoveContainer" containerID="21b3fb7b5d23b2c340808814b425c90ec9f329b0c3c412100758290fb0ac61da" Jan 24 00:47:26.015202 containerd[1480]: time="2026-01-24T00:47:26.015063217Z" level=error msg="ContainerStatus for \"21b3fb7b5d23b2c340808814b425c90ec9f329b0c3c412100758290fb0ac61da\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"21b3fb7b5d23b2c340808814b425c90ec9f329b0c3c412100758290fb0ac61da\": not found" Jan 24 00:47:26.015481 kubelet[1807]: E0124 00:47:26.015443 1807 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"21b3fb7b5d23b2c340808814b425c90ec9f329b0c3c412100758290fb0ac61da\": not found" containerID="21b3fb7b5d23b2c340808814b425c90ec9f329b0c3c412100758290fb0ac61da" Jan 24 00:47:26.015481 kubelet[1807]: I0124 00:47:26.015464 1807 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"21b3fb7b5d23b2c340808814b425c90ec9f329b0c3c412100758290fb0ac61da"} err="failed to get container status \"21b3fb7b5d23b2c340808814b425c90ec9f329b0c3c412100758290fb0ac61da\": rpc error: code = NotFound desc = an error occurred when try to find container \"21b3fb7b5d23b2c340808814b425c90ec9f329b0c3c412100758290fb0ac61da\": not found" Jan 24 00:47:26.015481 kubelet[1807]: I0124 00:47:26.015477 1807 scope.go:117] "RemoveContainer" containerID="57131b892b0d77f45392aa9bcce5cebf3f228ccb3ddbea6b3c9321d3d2d81c38" Jan 24 00:47:26.015905 containerd[1480]: time="2026-01-24T00:47:26.015683772Z" level=error msg="ContainerStatus for \"57131b892b0d77f45392aa9bcce5cebf3f228ccb3ddbea6b3c9321d3d2d81c38\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"57131b892b0d77f45392aa9bcce5cebf3f228ccb3ddbea6b3c9321d3d2d81c38\": not found" Jan 24 00:47:26.015971 kubelet[1807]: E0124 00:47:26.015883 1807 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"57131b892b0d77f45392aa9bcce5cebf3f228ccb3ddbea6b3c9321d3d2d81c38\": not found" containerID="57131b892b0d77f45392aa9bcce5cebf3f228ccb3ddbea6b3c9321d3d2d81c38" Jan 24 00:47:26.015971 kubelet[1807]: I0124 00:47:26.015908 1807 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"57131b892b0d77f45392aa9bcce5cebf3f228ccb3ddbea6b3c9321d3d2d81c38"} err="failed to get container status \"57131b892b0d77f45392aa9bcce5cebf3f228ccb3ddbea6b3c9321d3d2d81c38\": rpc error: code = NotFound desc = an error occurred when try to find container \"57131b892b0d77f45392aa9bcce5cebf3f228ccb3ddbea6b3c9321d3d2d81c38\": not found" Jan 24 00:47:26.149707 kubelet[1807]: E0124 00:47:26.149234 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:47:27.159336 kubelet[1807]: E0124 00:47:27.150092 1807 file_linux.go:61] "Unable to read config path" 
err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:47:27.397965 kubelet[1807]: I0124 00:47:27.396734 1807 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71ca35ef-dd98-4a7a-96f3-457de11743ce" path="/var/lib/kubelet/pods/71ca35ef-dd98-4a7a-96f3-457de11743ce/volumes" Jan 24 00:47:27.946474 systemd[1]: Created slice kubepods-besteffort-pod98b57444_486b_474a_b0f5_4bebdeed70ca.slice - libcontainer container kubepods-besteffort-pod98b57444_486b_474a_b0f5_4bebdeed70ca.slice. Jan 24 00:47:27.964425 systemd[1]: Created slice kubepods-burstable-podcaf44c85_b421_4ed3_818a_b417e5de1cdc.slice - libcontainer container kubepods-burstable-podcaf44c85_b421_4ed3_818a_b417e5de1cdc.slice. Jan 24 00:47:28.117057 kubelet[1807]: I0124 00:47:28.116801 1807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/caf44c85-b421-4ed3-818a-b417e5de1cdc-xtables-lock\") pod \"cilium-tgkh9\" (UID: \"caf44c85-b421-4ed3-818a-b417e5de1cdc\") " pod="kube-system/cilium-tgkh9" Jan 24 00:47:28.117057 kubelet[1807]: I0124 00:47:28.116911 1807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/caf44c85-b421-4ed3-818a-b417e5de1cdc-clustermesh-secrets\") pod \"cilium-tgkh9\" (UID: \"caf44c85-b421-4ed3-818a-b417e5de1cdc\") " pod="kube-system/cilium-tgkh9" Jan 24 00:47:28.117057 kubelet[1807]: I0124 00:47:28.116949 1807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/caf44c85-b421-4ed3-818a-b417e5de1cdc-host-proc-sys-kernel\") pod \"cilium-tgkh9\" (UID: \"caf44c85-b421-4ed3-818a-b417e5de1cdc\") " pod="kube-system/cilium-tgkh9" Jan 24 00:47:28.117057 kubelet[1807]: I0124 00:47:28.116969 1807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/caf44c85-b421-4ed3-818a-b417e5de1cdc-hubble-tls\") pod \"cilium-tgkh9\" (UID: \"caf44c85-b421-4ed3-818a-b417e5de1cdc\") " pod="kube-system/cilium-tgkh9" Jan 24 00:47:28.117057 kubelet[1807]: I0124 00:47:28.116993 1807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/caf44c85-b421-4ed3-818a-b417e5de1cdc-cni-path\") pod \"cilium-tgkh9\" (UID: \"caf44c85-b421-4ed3-818a-b417e5de1cdc\") " pod="kube-system/cilium-tgkh9" Jan 24 00:47:28.117588 kubelet[1807]: I0124 00:47:28.117019 1807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/98b57444-486b-474a-b0f5-4bebdeed70ca-cilium-config-path\") pod \"cilium-operator-6f9c7c5859-drj4w\" (UID: \"98b57444-486b-474a-b0f5-4bebdeed70ca\") " pod="kube-system/cilium-operator-6f9c7c5859-drj4w" Jan 24 00:47:28.117588 kubelet[1807]: I0124 00:47:28.117042 1807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/caf44c85-b421-4ed3-818a-b417e5de1cdc-bpf-maps\") pod \"cilium-tgkh9\" (UID: \"caf44c85-b421-4ed3-818a-b417e5de1cdc\") " pod="kube-system/cilium-tgkh9" Jan 24 00:47:28.117588 kubelet[1807]: I0124 00:47:28.117060 1807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"hostproc\" (UniqueName: \"kubernetes.io/host-path/caf44c85-b421-4ed3-818a-b417e5de1cdc-hostproc\") pod \"cilium-tgkh9\" (UID: \"caf44c85-b421-4ed3-818a-b417e5de1cdc\") " pod="kube-system/cilium-tgkh9" Jan 24 00:47:28.117588 kubelet[1807]: I0124 00:47:28.117082 1807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/caf44c85-b421-4ed3-818a-b417e5de1cdc-etc-cni-netd\") pod \"cilium-tgkh9\" (UID: \"caf44c85-b421-4ed3-818a-b417e5de1cdc\") " pod="kube-system/cilium-tgkh9" Jan 24 00:47:28.117588 kubelet[1807]: I0124 00:47:28.117200 1807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/caf44c85-b421-4ed3-818a-b417e5de1cdc-cilium-config-path\") pod \"cilium-tgkh9\" (UID: \"caf44c85-b421-4ed3-818a-b417e5de1cdc\") " pod="kube-system/cilium-tgkh9" Jan 24 00:47:28.117777 kubelet[1807]: I0124 00:47:28.117233 1807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/caf44c85-b421-4ed3-818a-b417e5de1cdc-cilium-run\") pod \"cilium-tgkh9\" (UID: \"caf44c85-b421-4ed3-818a-b417e5de1cdc\") " pod="kube-system/cilium-tgkh9" Jan 24 00:47:28.117777 kubelet[1807]: I0124 00:47:28.117275 1807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/caf44c85-b421-4ed3-818a-b417e5de1cdc-lib-modules\") pod \"cilium-tgkh9\" (UID: \"caf44c85-b421-4ed3-818a-b417e5de1cdc\") " pod="kube-system/cilium-tgkh9" Jan 24 00:47:28.117777 kubelet[1807]: I0124 00:47:28.117308 1807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/caf44c85-b421-4ed3-818a-b417e5de1cdc-cilium-ipsec-secrets\") pod \"cilium-tgkh9\" (UID: \"caf44c85-b421-4ed3-818a-b417e5de1cdc\") " pod="kube-system/cilium-tgkh9" Jan 24 00:47:28.117777 kubelet[1807]: I0124 00:47:28.117341 1807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/caf44c85-b421-4ed3-818a-b417e5de1cdc-cilium-cgroup\") pod \"cilium-tgkh9\" (UID: \"caf44c85-b421-4ed3-818a-b417e5de1cdc\") " pod="kube-system/cilium-tgkh9" Jan 24 00:47:28.117777 kubelet[1807]: I0124 00:47:28.117363 1807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncc7s\" (UniqueName: \"kubernetes.io/projected/98b57444-486b-474a-b0f5-4bebdeed70ca-kube-api-access-ncc7s\") pod \"cilium-operator-6f9c7c5859-drj4w\" (UID: \"98b57444-486b-474a-b0f5-4bebdeed70ca\") " pod="kube-system/cilium-operator-6f9c7c5859-drj4w" Jan 24 00:47:28.117958 kubelet[1807]: I0124 00:47:28.117438 1807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/caf44c85-b421-4ed3-818a-b417e5de1cdc-host-proc-sys-net\") pod \"cilium-tgkh9\" (UID: \"caf44c85-b421-4ed3-818a-b417e5de1cdc\") " pod="kube-system/cilium-tgkh9" Jan 24 00:47:28.117958 kubelet[1807]: I0124 00:47:28.117461 1807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srfsr\" (UniqueName: \"kubernetes.io/projected/caf44c85-b421-4ed3-818a-b417e5de1cdc-kube-api-access-srfsr\") pod 
\"cilium-tgkh9\" (UID: \"caf44c85-b421-4ed3-818a-b417e5de1cdc\") " pod="kube-system/cilium-tgkh9" Jan 24 00:47:28.159678 kubelet[1807]: E0124 00:47:28.159424 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:47:28.255927 kubelet[1807]: E0124 00:47:28.255761 1807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:47:28.257243 containerd[1480]: time="2026-01-24T00:47:28.257067380Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-drj4w,Uid:98b57444-486b-474a-b0f5-4bebdeed70ca,Namespace:kube-system,Attempt:0,}" Jan 24 00:47:28.287395 kubelet[1807]: E0124 00:47:28.287311 1807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:47:28.288334 containerd[1480]: time="2026-01-24T00:47:28.288076243Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tgkh9,Uid:caf44c85-b421-4ed3-818a-b417e5de1cdc,Namespace:kube-system,Attempt:0,}" Jan 24 00:47:28.302052 containerd[1480]: time="2026-01-24T00:47:28.301596593Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:47:28.302052 containerd[1480]: time="2026-01-24T00:47:28.301655702Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:47:28.302052 containerd[1480]: time="2026-01-24T00:47:28.301669507Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:47:28.302052 containerd[1480]: time="2026-01-24T00:47:28.301866581Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:47:28.344078 systemd[1]: Started cri-containerd-33b04c6420703a93d9bc7bbc3a915eea7c6513ee24158a5a951344f013450a88.scope - libcontainer container 33b04c6420703a93d9bc7bbc3a915eea7c6513ee24158a5a951344f013450a88. Jan 24 00:47:28.348631 containerd[1480]: time="2026-01-24T00:47:28.347907540Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:47:28.348631 containerd[1480]: time="2026-01-24T00:47:28.348269538Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:47:28.348631 containerd[1480]: time="2026-01-24T00:47:28.348289335Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:47:28.348631 containerd[1480]: time="2026-01-24T00:47:28.348391163Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:47:28.399851 systemd[1]: Started cri-containerd-a199bbb76352bcf68db46bfa0a54067b89be5331dd90807d04c56d29dece4ffe.scope - libcontainer container a199bbb76352bcf68db46bfa0a54067b89be5331dd90807d04c56d29dece4ffe. 
Jan 24 00:47:28.427760 containerd[1480]: time="2026-01-24T00:47:28.427658507Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-drj4w,Uid:98b57444-486b-474a-b0f5-4bebdeed70ca,Namespace:kube-system,Attempt:0,} returns sandbox id \"33b04c6420703a93d9bc7bbc3a915eea7c6513ee24158a5a951344f013450a88\"" Jan 24 00:47:28.431444 kubelet[1807]: E0124 00:47:28.430980 1807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:47:28.434962 containerd[1480]: time="2026-01-24T00:47:28.434724105Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 24 00:47:28.484651 containerd[1480]: time="2026-01-24T00:47:28.484429129Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tgkh9,Uid:caf44c85-b421-4ed3-818a-b417e5de1cdc,Namespace:kube-system,Attempt:0,} returns sandbox id \"a199bbb76352bcf68db46bfa0a54067b89be5331dd90807d04c56d29dece4ffe\"" Jan 24 00:47:28.486426 kubelet[1807]: E0124 00:47:28.486336 1807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:47:28.496361 containerd[1480]: time="2026-01-24T00:47:28.496242711Z" level=info msg="CreateContainer within sandbox \"a199bbb76352bcf68db46bfa0a54067b89be5331dd90807d04c56d29dece4ffe\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 24 00:47:28.534685 containerd[1480]: time="2026-01-24T00:47:28.534410141Z" level=info msg="CreateContainer within sandbox \"a199bbb76352bcf68db46bfa0a54067b89be5331dd90807d04c56d29dece4ffe\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"3bebe7df79a10af9e3121cdb210903900d5a31c66750a6f4527a0a3c3c250379\"" Jan 24 00:47:28.537847 containerd[1480]: time="2026-01-24T00:47:28.537048866Z" level=info msg="StartContainer for \"3bebe7df79a10af9e3121cdb210903900d5a31c66750a6f4527a0a3c3c250379\"" Jan 24 00:47:28.623024 systemd[1]: Started cri-containerd-3bebe7df79a10af9e3121cdb210903900d5a31c66750a6f4527a0a3c3c250379.scope - libcontainer container 3bebe7df79a10af9e3121cdb210903900d5a31c66750a6f4527a0a3c3c250379. Jan 24 00:47:28.707028 containerd[1480]: time="2026-01-24T00:47:28.706870855Z" level=info msg="StartContainer for \"3bebe7df79a10af9e3121cdb210903900d5a31c66750a6f4527a0a3c3c250379\" returns successfully" Jan 24 00:47:28.725010 systemd[1]: cri-containerd-3bebe7df79a10af9e3121cdb210903900d5a31c66750a6f4527a0a3c3c250379.scope: Deactivated successfully. 
Jan 24 00:47:28.841952 containerd[1480]: time="2026-01-24T00:47:28.841383646Z" level=info msg="shim disconnected" id=3bebe7df79a10af9e3121cdb210903900d5a31c66750a6f4527a0a3c3c250379 namespace=k8s.io Jan 24 00:47:28.841952 containerd[1480]: time="2026-01-24T00:47:28.841540115Z" level=warning msg="cleaning up after shim disconnected" id=3bebe7df79a10af9e3121cdb210903900d5a31c66750a6f4527a0a3c3c250379 namespace=k8s.io Jan 24 00:47:28.841952 containerd[1480]: time="2026-01-24T00:47:28.841562526Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 24 00:47:28.992027 kubelet[1807]: E0124 00:47:28.991809 1807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:47:29.008681 containerd[1480]: time="2026-01-24T00:47:29.008545306Z" level=info msg="CreateContainer within sandbox \"a199bbb76352bcf68db46bfa0a54067b89be5331dd90807d04c56d29dece4ffe\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 24 00:47:29.031988 containerd[1480]: time="2026-01-24T00:47:29.031821210Z" level=info msg="CreateContainer within sandbox \"a199bbb76352bcf68db46bfa0a54067b89be5331dd90807d04c56d29dece4ffe\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"55a090910548bd2ddf44a5f80ea0f13dae6fe6973945b3a1f4a51b777bd051ec\"" Jan 24 00:47:29.032919 containerd[1480]: time="2026-01-24T00:47:29.032810157Z" level=info msg="StartContainer for \"55a090910548bd2ddf44a5f80ea0f13dae6fe6973945b3a1f4a51b777bd051ec\"" Jan 24 00:47:29.084474 systemd[1]: Started cri-containerd-55a090910548bd2ddf44a5f80ea0f13dae6fe6973945b3a1f4a51b777bd051ec.scope - libcontainer container 55a090910548bd2ddf44a5f80ea0f13dae6fe6973945b3a1f4a51b777bd051ec. Jan 24 00:47:29.146265 containerd[1480]: time="2026-01-24T00:47:29.143844239Z" level=info msg="StartContainer for \"55a090910548bd2ddf44a5f80ea0f13dae6fe6973945b3a1f4a51b777bd051ec\" returns successfully" Jan 24 00:47:29.157340 systemd[1]: cri-containerd-55a090910548bd2ddf44a5f80ea0f13dae6fe6973945b3a1f4a51b777bd051ec.scope: Deactivated successfully. Jan 24 00:47:29.160598 kubelet[1807]: E0124 00:47:29.160281 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:47:29.201644 containerd[1480]: time="2026-01-24T00:47:29.201099932Z" level=info msg="shim disconnected" id=55a090910548bd2ddf44a5f80ea0f13dae6fe6973945b3a1f4a51b777bd051ec namespace=k8s.io Jan 24 00:47:29.201644 containerd[1480]: time="2026-01-24T00:47:29.201247785Z" level=warning msg="cleaning up after shim disconnected" id=55a090910548bd2ddf44a5f80ea0f13dae6fe6973945b3a1f4a51b777bd051ec namespace=k8s.io Jan 24 00:47:29.201644 containerd[1480]: time="2026-01-24T00:47:29.201259427Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 24 00:47:29.477070 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1318930851.mount: Deactivated successfully. 
Jan 24 00:47:29.737985 kubelet[1807]: I0124 00:47:29.736962 1807 setters.go:543] "Node became not ready" node="10.0.0.146" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:47:29Z","lastTransitionTime":"2026-01-24T00:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jan 24 00:47:30.004017 kubelet[1807]: E0124 00:47:30.003832 1807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:47:30.013730 containerd[1480]: time="2026-01-24T00:47:30.013583587Z" level=info msg="CreateContainer within sandbox \"a199bbb76352bcf68db46bfa0a54067b89be5331dd90807d04c56d29dece4ffe\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 24 00:47:30.050854 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1792682271.mount: Deactivated successfully. Jan 24 00:47:30.071009 containerd[1480]: time="2026-01-24T00:47:30.070600306Z" level=info msg="CreateContainer within sandbox \"a199bbb76352bcf68db46bfa0a54067b89be5331dd90807d04c56d29dece4ffe\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ce86581ed51bc01a98f7505d41df5e743c876b14e5a7c3d2a2b4e122e240debc\"" Jan 24 00:47:30.074789 containerd[1480]: time="2026-01-24T00:47:30.072384383Z" level=info msg="StartContainer for \"ce86581ed51bc01a98f7505d41df5e743c876b14e5a7c3d2a2b4e122e240debc\"" Jan 24 00:47:30.144609 systemd[1]: Started cri-containerd-ce86581ed51bc01a98f7505d41df5e743c876b14e5a7c3d2a2b4e122e240debc.scope - libcontainer container ce86581ed51bc01a98f7505d41df5e743c876b14e5a7c3d2a2b4e122e240debc. Jan 24 00:47:30.162201 kubelet[1807]: E0124 00:47:30.161441 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:47:30.203876 containerd[1480]: time="2026-01-24T00:47:30.203657735Z" level=info msg="StartContainer for \"ce86581ed51bc01a98f7505d41df5e743c876b14e5a7c3d2a2b4e122e240debc\" returns successfully" Jan 24 00:47:30.204557 systemd[1]: cri-containerd-ce86581ed51bc01a98f7505d41df5e743c876b14e5a7c3d2a2b4e122e240debc.scope: Deactivated successfully. Jan 24 00:47:30.248883 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ce86581ed51bc01a98f7505d41df5e743c876b14e5a7c3d2a2b4e122e240debc-rootfs.mount: Deactivated successfully. 
Jan 24 00:47:30.277030 containerd[1480]: time="2026-01-24T00:47:30.276738012Z" level=info msg="shim disconnected" id=ce86581ed51bc01a98f7505d41df5e743c876b14e5a7c3d2a2b4e122e240debc namespace=k8s.io
Jan 24 00:47:30.277030 containerd[1480]: time="2026-01-24T00:47:30.276861699Z" level=warning msg="cleaning up after shim disconnected" id=ce86581ed51bc01a98f7505d41df5e743c876b14e5a7c3d2a2b4e122e240debc namespace=k8s.io
Jan 24 00:47:30.277030 containerd[1480]: time="2026-01-24T00:47:30.276968378Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 24 00:47:30.648345 kubelet[1807]: E0124 00:47:30.647983 1807 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 24 00:47:31.021341 kubelet[1807]: E0124 00:47:31.020880 1807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 24 00:47:31.034368 containerd[1480]: time="2026-01-24T00:47:31.034064767Z" level=info msg="CreateContainer within sandbox \"a199bbb76352bcf68db46bfa0a54067b89be5331dd90807d04c56d29dece4ffe\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 24 00:47:31.089075 containerd[1480]: time="2026-01-24T00:47:31.088915200Z" level=info msg="CreateContainer within sandbox \"a199bbb76352bcf68db46bfa0a54067b89be5331dd90807d04c56d29dece4ffe\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"4c078a14b52ddd0b4afa3e29741cc58531d16692a534d8c9ea1b2f386ef99c2a\""
Jan 24 00:47:31.090688 containerd[1480]: time="2026-01-24T00:47:31.090544772Z" level=info msg="StartContainer for \"4c078a14b52ddd0b4afa3e29741cc58531d16692a534d8c9ea1b2f386ef99c2a\""
Jan 24 00:47:31.167574 kubelet[1807]: E0124 00:47:31.167044 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 24 00:47:31.192457 systemd[1]: Started cri-containerd-4c078a14b52ddd0b4afa3e29741cc58531d16692a534d8c9ea1b2f386ef99c2a.scope - libcontainer container 4c078a14b52ddd0b4afa3e29741cc58531d16692a534d8c9ea1b2f386ef99c2a.
Jan 24 00:47:31.284924 systemd[1]: cri-containerd-4c078a14b52ddd0b4afa3e29741cc58531d16692a534d8c9ea1b2f386ef99c2a.scope: Deactivated successfully.
Jan 24 00:47:31.288060 containerd[1480]: time="2026-01-24T00:47:31.287931573Z" level=info msg="StartContainer for \"4c078a14b52ddd0b4afa3e29741cc58531d16692a534d8c9ea1b2f386ef99c2a\" returns successfully"
Jan 24 00:47:31.331366 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4c078a14b52ddd0b4afa3e29741cc58531d16692a534d8c9ea1b2f386ef99c2a-rootfs.mount: Deactivated successfully.
Jan 24 00:47:31.375053 containerd[1480]: time="2026-01-24T00:47:31.374930798Z" level=info msg="shim disconnected" id=4c078a14b52ddd0b4afa3e29741cc58531d16692a534d8c9ea1b2f386ef99c2a namespace=k8s.io
Jan 24 00:47:31.375053 containerd[1480]: time="2026-01-24T00:47:31.374996720Z" level=warning msg="cleaning up after shim disconnected" id=4c078a14b52ddd0b4afa3e29741cc58531d16692a534d8c9ea1b2f386ef99c2a namespace=k8s.io
Jan 24 00:47:31.375053 containerd[1480]: time="2026-01-24T00:47:31.375011177Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 24 00:47:31.956242 containerd[1480]: time="2026-01-24T00:47:31.955625116Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:47:31.958588 containerd[1480]: time="2026-01-24T00:47:31.958535637Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
Jan 24 00:47:31.960093 containerd[1480]: time="2026-01-24T00:47:31.960045448Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:47:31.964870 containerd[1480]: time="2026-01-24T00:47:31.964581101Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 3.529709373s"
Jan 24 00:47:31.964870 containerd[1480]: time="2026-01-24T00:47:31.964654766Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Jan 24 00:47:31.998005 containerd[1480]: time="2026-01-24T00:47:31.997446504Z" level=info msg="CreateContainer within sandbox \"33b04c6420703a93d9bc7bbc3a915eea7c6513ee24158a5a951344f013450a88\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Jan 24 00:47:32.046033 kubelet[1807]: E0124 00:47:32.045445 1807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 24 00:47:32.048384 containerd[1480]: time="2026-01-24T00:47:32.048280955Z" level=info msg="CreateContainer within sandbox \"33b04c6420703a93d9bc7bbc3a915eea7c6513ee24158a5a951344f013450a88\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"598ade29bc18624807b2469c3957a8eeba668c4cc544567999e48ad9c6fbba6e\""
Jan 24 00:47:32.049254 containerd[1480]: time="2026-01-24T00:47:32.049027430Z" level=info msg="StartContainer for \"598ade29bc18624807b2469c3957a8eeba668c4cc544567999e48ad9c6fbba6e\""
Jan 24 00:47:32.055772 containerd[1480]: time="2026-01-24T00:47:32.055344814Z" level=info msg="CreateContainer within sandbox \"a199bbb76352bcf68db46bfa0a54067b89be5331dd90807d04c56d29dece4ffe\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 24 00:47:32.106844 containerd[1480]: time="2026-01-24T00:47:32.106722577Z" level=info msg="CreateContainer within sandbox \"a199bbb76352bcf68db46bfa0a54067b89be5331dd90807d04c56d29dece4ffe\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"dca1f33c74db65d5adc58aa8c0c39400010e1b219c795205cf4cb488af61ca97\""
Jan 24 00:47:32.107821 containerd[1480]: time="2026-01-24T00:47:32.107750228Z" level=info msg="StartContainer for \"dca1f33c74db65d5adc58aa8c0c39400010e1b219c795205cf4cb488af61ca97\""
Jan 24 00:47:32.125625 systemd[1]: Started cri-containerd-598ade29bc18624807b2469c3957a8eeba668c4cc544567999e48ad9c6fbba6e.scope - libcontainer container 598ade29bc18624807b2469c3957a8eeba668c4cc544567999e48ad9c6fbba6e.
Jan 24 00:47:32.167862 kubelet[1807]: E0124 00:47:32.167396 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 24 00:47:32.183509 systemd[1]: Started cri-containerd-dca1f33c74db65d5adc58aa8c0c39400010e1b219c795205cf4cb488af61ca97.scope - libcontainer container dca1f33c74db65d5adc58aa8c0c39400010e1b219c795205cf4cb488af61ca97.
Jan 24 00:47:32.240672 containerd[1480]: time="2026-01-24T00:47:32.240004471Z" level=info msg="StartContainer for \"598ade29bc18624807b2469c3957a8eeba668c4cc544567999e48ad9c6fbba6e\" returns successfully"
Jan 24 00:47:32.296400 containerd[1480]: time="2026-01-24T00:47:32.294359356Z" level=info msg="StartContainer for \"dca1f33c74db65d5adc58aa8c0c39400010e1b219c795205cf4cb488af61ca97\" returns successfully"
Jan 24 00:47:33.074053 kubelet[1807]: E0124 00:47:33.073667 1807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 24 00:47:33.080051 kubelet[1807]: E0124 00:47:33.079767 1807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 24 00:47:33.142763 kubelet[1807]: I0124 00:47:33.142624 1807 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-tgkh9" podStartSLOduration=6.142601583 podStartE2EDuration="6.142601583s" podCreationTimestamp="2026-01-24 00:47:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:47:33.139592861 +0000 UTC m=+109.778324954" watchObservedRunningTime="2026-01-24 00:47:33.142601583 +0000 UTC m=+109.781333676"
Jan 24 00:47:33.172924 kubelet[1807]: E0124 00:47:33.172444 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 24 00:47:33.180225 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Jan 24 00:47:33.198573 kubelet[1807]: I0124 00:47:33.198252 1807 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6f9c7c5859-drj4w" podStartSLOduration=2.663258195 podStartE2EDuration="6.198226s" podCreationTimestamp="2026-01-24 00:47:27 +0000 UTC" firstStartedPulling="2026-01-24 00:47:28.433649886 +0000 UTC m=+105.072381949" lastFinishedPulling="2026-01-24 00:47:31.968617691 +0000 UTC m=+108.607349754" observedRunningTime="2026-01-24 00:47:33.197661936 +0000 UTC m=+109.836394009" watchObservedRunningTime="2026-01-24 00:47:33.198226 +0000 UTC m=+109.836958073"
Jan 24 00:47:34.087358 kubelet[1807]: E0124 00:47:34.086603 1807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 24 00:47:34.173816 kubelet[1807]: E0124 00:47:34.173331 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 24 00:47:34.284507 kubelet[1807]: E0124 00:47:34.284311 1807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 24 00:47:35.174608 kubelet[1807]: E0124 00:47:35.174490 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 24 00:47:36.175777 kubelet[1807]: E0124 00:47:36.175401 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 24 00:47:36.917223 systemd[1]: run-containerd-runc-k8s.io-dca1f33c74db65d5adc58aa8c0c39400010e1b219c795205cf4cb488af61ca97-runc.vNgAzc.mount: Deactivated successfully.
Jan 24 00:47:37.177595 kubelet[1807]: E0124 00:47:37.176195 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 24 00:47:37.917055 systemd-networkd[1402]: lxc_health: Link UP
Jan 24 00:47:37.925966 systemd-networkd[1402]: lxc_health: Gained carrier
Jan 24 00:47:38.177781 kubelet[1807]: E0124 00:47:38.177342 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 24 00:47:38.286575 kubelet[1807]: E0124 00:47:38.285964 1807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 24 00:47:39.106469 kubelet[1807]: E0124 00:47:39.106239 1807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 24 00:47:39.178646 kubelet[1807]: E0124 00:47:39.178546 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 24 00:47:39.215594 systemd[1]: run-containerd-runc-k8s.io-dca1f33c74db65d5adc58aa8c0c39400010e1b219c795205cf4cb488af61ca97-runc.00GL7V.mount: Deactivated successfully.
Jan 24 00:47:39.745334 systemd-networkd[1402]: lxc_health: Gained IPv6LL
Jan 24 00:47:40.112815 kubelet[1807]: E0124 00:47:40.109978 1807 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 24 00:47:40.179878 kubelet[1807]: E0124 00:47:40.179596 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 24 00:47:41.182909 kubelet[1807]: E0124 00:47:41.182758 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 24 00:47:42.184222 kubelet[1807]: E0124 00:47:42.184051 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 24 00:47:43.185306 kubelet[1807]: E0124 00:47:43.185241 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 24 00:47:44.187277 kubelet[1807]: E0124 00:47:44.186898 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 24 00:47:44.872920 kubelet[1807]: E0124 00:47:44.872787 1807 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 24 00:47:45.033444 containerd[1480]: time="2026-01-24T00:47:45.033278450Z" level=info msg="StopPodSandbox for \"9e27080bd9e9f52c9b91d1e35cfeea3f32673e0cc8dcb1d5eb698fad56e3709d\""
Jan 24 00:47:45.033927 containerd[1480]: time="2026-01-24T00:47:45.033470516Z" level=info msg="TearDown network for sandbox \"9e27080bd9e9f52c9b91d1e35cfeea3f32673e0cc8dcb1d5eb698fad56e3709d\" successfully"
Jan 24 00:47:45.033927 containerd[1480]: time="2026-01-24T00:47:45.033490183Z" level=info msg="StopPodSandbox for \"9e27080bd9e9f52c9b91d1e35cfeea3f32673e0cc8dcb1d5eb698fad56e3709d\" returns successfully"
Jan 24 00:47:45.033927 containerd[1480]: time="2026-01-24T00:47:45.033913389Z" level=info msg="RemovePodSandbox for \"9e27080bd9e9f52c9b91d1e35cfeea3f32673e0cc8dcb1d5eb698fad56e3709d\""
Jan 24 00:47:45.034031 containerd[1480]: time="2026-01-24T00:47:45.033943635Z" level=info msg="Forcibly stopping sandbox \"9e27080bd9e9f52c9b91d1e35cfeea3f32673e0cc8dcb1d5eb698fad56e3709d\""
Jan 24 00:47:45.034031 containerd[1480]: time="2026-01-24T00:47:45.034012232Z" level=info msg="TearDown network for sandbox \"9e27080bd9e9f52c9b91d1e35cfeea3f32673e0cc8dcb1d5eb698fad56e3709d\" successfully"
Jan 24 00:47:45.042224 containerd[1480]: time="2026-01-24T00:47:45.041977829Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9e27080bd9e9f52c9b91d1e35cfeea3f32673e0cc8dcb1d5eb698fad56e3709d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 24 00:47:45.042224 containerd[1480]: time="2026-01-24T00:47:45.042200493Z" level=info msg="RemovePodSandbox \"9e27080bd9e9f52c9b91d1e35cfeea3f32673e0cc8dcb1d5eb698fad56e3709d\" returns successfully"