Jan 20 00:44:02.218134 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 19 22:42:14 -00 2026 Jan 20 00:44:02.218164 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8c5dc1cd4dcc734d9dabe08efcaa33dd0d0e79b2d8f11a958a4b004e775e3441 Jan 20 00:44:02.218181 kernel: BIOS-provided physical RAM map: Jan 20 00:44:02.218190 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jan 20 00:44:02.218199 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jan 20 00:44:02.218207 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jan 20 00:44:02.218218 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable Jan 20 00:44:02.218227 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved Jan 20 00:44:02.218236 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Jan 20 00:44:02.218248 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Jan 20 00:44:02.218327 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jan 20 00:44:02.218338 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jan 20 00:44:02.218348 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Jan 20 00:44:02.218358 kernel: NX (Execute Disable) protection: active Jan 20 00:44:02.218369 kernel: APIC: Static calls initialized Jan 20 00:44:02.218383 kernel: SMBIOS 2.8 present. 
Jan 20 00:44:02.218393 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Jan 20 00:44:02.218445 kernel: Hypervisor detected: KVM Jan 20 00:44:02.218456 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 20 00:44:02.218466 kernel: kvm-clock: using sched offset of 6087681069 cycles Jan 20 00:44:02.218476 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 20 00:44:02.218486 kernel: tsc: Detected 2445.424 MHz processor Jan 20 00:44:02.218496 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 20 00:44:02.218507 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 20 00:44:02.218517 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Jan 20 00:44:02.218531 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jan 20 00:44:02.218541 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 20 00:44:02.218551 kernel: Using GB pages for direct mapping Jan 20 00:44:02.218561 kernel: ACPI: Early table checksum verification disabled Jan 20 00:44:02.218571 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) Jan 20 00:44:02.218581 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 20 00:44:02.218591 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 20 00:44:02.218601 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 20 00:44:02.218614 kernel: ACPI: FACS 0x000000009CFE0000 000040 Jan 20 00:44:02.218624 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 20 00:44:02.218634 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 20 00:44:02.218644 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 20 00:44:02.218654 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS 
BXPC 00000001 BXPC 00000001) Jan 20 00:44:02.218664 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed] Jan 20 00:44:02.218674 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9] Jan 20 00:44:02.218689 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Jan 20 00:44:02.218703 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d] Jan 20 00:44:02.218714 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5] Jan 20 00:44:02.218724 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1] Jan 20 00:44:02.218735 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419] Jan 20 00:44:02.218745 kernel: No NUMA configuration found Jan 20 00:44:02.218756 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] Jan 20 00:44:02.218767 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff] Jan 20 00:44:02.218780 kernel: Zone ranges: Jan 20 00:44:02.218791 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 20 00:44:02.218801 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] Jan 20 00:44:02.218811 kernel: Normal empty Jan 20 00:44:02.218822 kernel: Movable zone start for each node Jan 20 00:44:02.218832 kernel: Early memory node ranges Jan 20 00:44:02.218843 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jan 20 00:44:02.218853 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] Jan 20 00:44:02.218863 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff] Jan 20 00:44:02.218877 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 20 00:44:02.218888 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jan 20 00:44:02.218898 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Jan 20 00:44:02.218909 kernel: ACPI: PM-Timer IO Port: 0x608 Jan 20 00:44:02.218919 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 20 00:44:02.218930 kernel: 
IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 20 00:44:02.218941 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 20 00:44:02.218951 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 20 00:44:02.218962 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 20 00:44:02.218975 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 20 00:44:02.218986 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 20 00:44:02.218996 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 20 00:44:02.219007 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jan 20 00:44:02.219017 kernel: TSC deadline timer available Jan 20 00:44:02.219028 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Jan 20 00:44:02.219038 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 20 00:44:02.219049 kernel: kvm-guest: KVM setup pv remote TLB flush Jan 20 00:44:02.219059 kernel: kvm-guest: setup PV sched yield Jan 20 00:44:02.219070 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Jan 20 00:44:02.219084 kernel: Booting paravirtualized kernel on KVM Jan 20 00:44:02.219094 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 20 00:44:02.219105 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Jan 20 00:44:02.219116 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288 Jan 20 00:44:02.219126 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152 Jan 20 00:44:02.219136 kernel: pcpu-alloc: [0] 0 1 2 3 Jan 20 00:44:02.219147 kernel: kvm-guest: PV spinlocks enabled Jan 20 00:44:02.219157 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 20 00:44:02.219169 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr 
verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8c5dc1cd4dcc734d9dabe08efcaa33dd0d0e79b2d8f11a958a4b004e775e3441 Jan 20 00:44:02.219183 kernel: random: crng init done Jan 20 00:44:02.219193 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 20 00:44:02.219204 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 20 00:44:02.219214 kernel: Fallback order for Node 0: 0 Jan 20 00:44:02.219225 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732 Jan 20 00:44:02.219235 kernel: Policy zone: DMA32 Jan 20 00:44:02.219246 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 20 00:44:02.219314 kernel: Memory: 2434604K/2571752K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42880K init, 2316K bss, 136888K reserved, 0K cma-reserved) Jan 20 00:44:02.219331 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jan 20 00:44:02.219342 kernel: ftrace: allocating 37989 entries in 149 pages Jan 20 00:44:02.219352 kernel: ftrace: allocated 149 pages with 4 groups Jan 20 00:44:02.219363 kernel: Dynamic Preempt: voluntary Jan 20 00:44:02.219373 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 20 00:44:02.219384 kernel: rcu: RCU event tracing is enabled. Jan 20 00:44:02.219395 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jan 20 00:44:02.219442 kernel: Trampoline variant of Tasks RCU enabled. Jan 20 00:44:02.219454 kernel: Rude variant of Tasks RCU enabled. Jan 20 00:44:02.219468 kernel: Tracing variant of Tasks RCU enabled. Jan 20 00:44:02.219479 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Jan 20 00:44:02.219489 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jan 20 00:44:02.219500 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Jan 20 00:44:02.219511 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 20 00:44:02.219521 kernel: Console: colour VGA+ 80x25 Jan 20 00:44:02.219532 kernel: printk: console [ttyS0] enabled Jan 20 00:44:02.219542 kernel: ACPI: Core revision 20230628 Jan 20 00:44:02.219553 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jan 20 00:44:02.219566 kernel: APIC: Switch to symmetric I/O mode setup Jan 20 00:44:02.219577 kernel: x2apic enabled Jan 20 00:44:02.219587 kernel: APIC: Switched APIC routing to: physical x2apic Jan 20 00:44:02.219598 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Jan 20 00:44:02.219609 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Jan 20 00:44:02.219620 kernel: kvm-guest: setup PV IPIs Jan 20 00:44:02.219631 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jan 20 00:44:02.219654 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Jan 20 00:44:02.219666 kernel: Calibrating delay loop (skipped) preset value.. 
4890.84 BogoMIPS (lpj=2445424) Jan 20 00:44:02.219677 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jan 20 00:44:02.219688 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Jan 20 00:44:02.219699 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Jan 20 00:44:02.219713 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 20 00:44:02.219724 kernel: Spectre V2 : Mitigation: Retpolines Jan 20 00:44:02.219735 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jan 20 00:44:02.219747 kernel: Speculative Store Bypass: Vulnerable Jan 20 00:44:02.219758 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Jan 20 00:44:02.219773 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Jan 20 00:44:02.219784 kernel: active return thunk: srso_alias_return_thunk Jan 20 00:44:02.219795 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Jan 20 00:44:02.219806 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM Jan 20 00:44:02.219817 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode Jan 20 00:44:02.219829 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 20 00:44:02.219840 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 20 00:44:02.219851 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 20 00:44:02.219866 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 20 00:44:02.219877 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. 
Jan 20 00:44:02.219888 kernel: Freeing SMP alternatives memory: 32K Jan 20 00:44:02.219899 kernel: pid_max: default: 32768 minimum: 301 Jan 20 00:44:02.219910 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 20 00:44:02.219921 kernel: landlock: Up and running. Jan 20 00:44:02.219932 kernel: SELinux: Initializing. Jan 20 00:44:02.219943 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 20 00:44:02.219955 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 20 00:44:02.219969 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1) Jan 20 00:44:02.219981 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 20 00:44:02.219992 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 20 00:44:02.220003 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 20 00:44:02.220015 kernel: Performance Events: PMU not available due to virtualization, using software events only. Jan 20 00:44:02.220026 kernel: signal: max sigframe size: 1776 Jan 20 00:44:02.220037 kernel: rcu: Hierarchical SRCU implementation. Jan 20 00:44:02.220048 kernel: rcu: Max phase no-delay instances is 400. Jan 20 00:44:02.220059 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 20 00:44:02.220074 kernel: smp: Bringing up secondary CPUs ... Jan 20 00:44:02.220085 kernel: smpboot: x86: Booting SMP configuration: Jan 20 00:44:02.220096 kernel: .... 
node #0, CPUs: #1 #2 #3 Jan 20 00:44:02.220107 kernel: smp: Brought up 1 node, 4 CPUs Jan 20 00:44:02.220118 kernel: smpboot: Max logical packages: 1 Jan 20 00:44:02.220129 kernel: smpboot: Total of 4 processors activated (19563.39 BogoMIPS) Jan 20 00:44:02.220140 kernel: devtmpfs: initialized Jan 20 00:44:02.220151 kernel: x86/mm: Memory block size: 128MB Jan 20 00:44:02.220162 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 20 00:44:02.220177 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jan 20 00:44:02.220188 kernel: pinctrl core: initialized pinctrl subsystem Jan 20 00:44:02.220199 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 20 00:44:02.220210 kernel: audit: initializing netlink subsys (disabled) Jan 20 00:44:02.220222 kernel: audit: type=2000 audit(1768869839.809:1): state=initialized audit_enabled=0 res=1 Jan 20 00:44:02.220233 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 20 00:44:02.220244 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 20 00:44:02.220358 kernel: cpuidle: using governor menu Jan 20 00:44:02.220372 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 20 00:44:02.220388 kernel: dca service started, version 1.12.1 Jan 20 00:44:02.220436 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Jan 20 00:44:02.220448 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Jan 20 00:44:02.220460 kernel: PCI: Using configuration type 1 for base access Jan 20 00:44:02.220471 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 20 00:44:02.220482 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 20 00:44:02.220493 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 20 00:44:02.220505 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 20 00:44:02.220516 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 20 00:44:02.220531 kernel: ACPI: Added _OSI(Module Device) Jan 20 00:44:02.220542 kernel: ACPI: Added _OSI(Processor Device) Jan 20 00:44:02.220553 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 20 00:44:02.220564 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 20 00:44:02.220575 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 20 00:44:02.220586 kernel: ACPI: Interpreter enabled Jan 20 00:44:02.220597 kernel: ACPI: PM: (supports S0 S3 S5) Jan 20 00:44:02.220608 kernel: ACPI: Using IOAPIC for interrupt routing Jan 20 00:44:02.220619 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 20 00:44:02.220634 kernel: PCI: Using E820 reservations for host bridge windows Jan 20 00:44:02.220645 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jan 20 00:44:02.220656 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 20 00:44:02.220881 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 20 00:44:02.221057 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Jan 20 00:44:02.221220 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Jan 20 00:44:02.221235 kernel: PCI host bridge to bus 0000:00 Jan 20 00:44:02.221521 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 20 00:44:02.221676 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 20 00:44:02.221815 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 20 00:44:02.221956 kernel: pci_bus 0000:00: 
root bus resource [mem 0x9d000000-0xafffffff window] Jan 20 00:44:02.222095 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jan 20 00:44:02.222234 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Jan 20 00:44:02.222496 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 20 00:44:02.222679 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Jan 20 00:44:02.222844 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Jan 20 00:44:02.223001 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] Jan 20 00:44:02.223153 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] Jan 20 00:44:02.223384 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] Jan 20 00:44:02.223594 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 20 00:44:02.223768 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Jan 20 00:44:02.223971 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df] Jan 20 00:44:02.224130 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] Jan 20 00:44:02.224468 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] Jan 20 00:44:02.224653 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Jan 20 00:44:02.224816 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f] Jan 20 00:44:02.224977 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] Jan 20 00:44:02.225139 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref] Jan 20 00:44:02.225384 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Jan 20 00:44:02.225592 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff] Jan 20 00:44:02.225747 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff] Jan 20 00:44:02.225899 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref] Jan 20 00:44:02.226049 kernel: pci 0000:00:04.0: reg 0x30: [mem 
0xfeb80000-0xfebbffff pref] Jan 20 00:44:02.226215 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Jan 20 00:44:02.226502 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jan 20 00:44:02.226668 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Jan 20 00:44:02.226820 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f] Jan 20 00:44:02.226970 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff] Jan 20 00:44:02.227130 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Jan 20 00:44:02.227351 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Jan 20 00:44:02.227367 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 20 00:44:02.227383 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 20 00:44:02.227393 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 20 00:44:02.227447 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 20 00:44:02.227458 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jan 20 00:44:02.227469 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jan 20 00:44:02.227480 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jan 20 00:44:02.227490 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jan 20 00:44:02.227500 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jan 20 00:44:02.227514 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Jan 20 00:44:02.227524 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jan 20 00:44:02.227534 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jan 20 00:44:02.227544 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jan 20 00:44:02.227554 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jan 20 00:44:02.227565 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jan 20 00:44:02.227575 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jan 20 00:44:02.227585 
kernel: iommu: Default domain type: Translated Jan 20 00:44:02.227595 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 20 00:44:02.227605 kernel: PCI: Using ACPI for IRQ routing Jan 20 00:44:02.227619 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 20 00:44:02.227629 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jan 20 00:44:02.227639 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] Jan 20 00:44:02.227800 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Jan 20 00:44:02.227953 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Jan 20 00:44:02.228103 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 20 00:44:02.228117 kernel: vgaarb: loaded Jan 20 00:44:02.228127 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jan 20 00:44:02.228142 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jan 20 00:44:02.228152 kernel: clocksource: Switched to clocksource kvm-clock Jan 20 00:44:02.228163 kernel: VFS: Disk quotas dquot_6.6.0 Jan 20 00:44:02.228173 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 20 00:44:02.228183 kernel: pnp: PnP ACPI init Jan 20 00:44:02.228520 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Jan 20 00:44:02.228538 kernel: pnp: PnP ACPI: found 6 devices Jan 20 00:44:02.228551 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 20 00:44:02.228594 kernel: NET: Registered PF_INET protocol family Jan 20 00:44:02.228630 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 20 00:44:02.228641 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 20 00:44:02.228675 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 20 00:44:02.228687 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 20 00:44:02.228720 kernel: TCP 
bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 20 00:44:02.228733 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 20 00:44:02.228766 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 20 00:44:02.228778 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 20 00:44:02.228816 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 20 00:44:02.228827 kernel: NET: Registered PF_XDP protocol family Jan 20 00:44:02.229064 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 20 00:44:02.229222 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 20 00:44:02.229501 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 20 00:44:02.229648 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Jan 20 00:44:02.229789 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Jan 20 00:44:02.229930 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Jan 20 00:44:02.229949 kernel: PCI: CLS 0 bytes, default 64 Jan 20 00:44:02.229960 kernel: Initialise system trusted keyrings Jan 20 00:44:02.229971 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 20 00:44:02.229982 kernel: Key type asymmetric registered Jan 20 00:44:02.229993 kernel: Asymmetric key parser 'x509' registered Jan 20 00:44:02.230003 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 20 00:44:02.230014 kernel: io scheduler mq-deadline registered Jan 20 00:44:02.230025 kernel: io scheduler kyber registered Jan 20 00:44:02.230035 kernel: io scheduler bfq registered Jan 20 00:44:02.230049 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 20 00:44:02.230060 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jan 20 00:44:02.230071 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jan 20 00:44:02.230082 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jan 20 00:44:02.230093 kernel: Serial: 
8250/16550 driver, 4 ports, IRQ sharing enabled Jan 20 00:44:02.230103 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 20 00:44:02.230114 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 20 00:44:02.230125 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 20 00:44:02.230135 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 20 00:44:02.230381 kernel: rtc_cmos 00:04: RTC can wake from S4 Jan 20 00:44:02.230443 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 20 00:44:02.230629 kernel: rtc_cmos 00:04: registered as rtc0 Jan 20 00:44:02.230779 kernel: rtc_cmos 00:04: setting system clock to 2026-01-20T00:44:01 UTC (1768869841) Jan 20 00:44:02.230922 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Jan 20 00:44:02.230935 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Jan 20 00:44:02.230945 kernel: NET: Registered PF_INET6 protocol family Jan 20 00:44:02.230956 kernel: Segment Routing with IPv6 Jan 20 00:44:02.230970 kernel: In-situ OAM (IOAM) with IPv6 Jan 20 00:44:02.230981 kernel: NET: Registered PF_PACKET protocol family Jan 20 00:44:02.230991 kernel: Key type dns_resolver registered Jan 20 00:44:02.231001 kernel: IPI shorthand broadcast: enabled Jan 20 00:44:02.231011 kernel: sched_clock: Marking stable (1660034953, 508264360)->(2429636649, -261337336) Jan 20 00:44:02.231021 kernel: registered taskstats version 1 Jan 20 00:44:02.231032 kernel: Loading compiled-in X.509 certificates Jan 20 00:44:02.231042 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: ea2d429b6f340e470c7de035feb011ab349763d1' Jan 20 00:44:02.231052 kernel: Key type .fscrypt registered Jan 20 00:44:02.231065 kernel: Key type fscrypt-provisioning registered Jan 20 00:44:02.231075 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jan 20 00:44:02.231085 kernel: ima: Allocated hash algorithm: sha1 Jan 20 00:44:02.231096 kernel: ima: No architecture policies found Jan 20 00:44:02.231106 kernel: clk: Disabling unused clocks Jan 20 00:44:02.231116 kernel: Freeing unused kernel image (initmem) memory: 42880K Jan 20 00:44:02.231126 kernel: Write protecting the kernel read-only data: 36864k Jan 20 00:44:02.231137 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K Jan 20 00:44:02.231147 kernel: Run /init as init process Jan 20 00:44:02.231161 kernel: with arguments: Jan 20 00:44:02.231171 kernel: /init Jan 20 00:44:02.231181 kernel: with environment: Jan 20 00:44:02.231191 kernel: HOME=/ Jan 20 00:44:02.231200 kernel: TERM=linux Jan 20 00:44:02.231213 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 20 00:44:02.231225 systemd[1]: Detected virtualization kvm. Jan 20 00:44:02.231237 systemd[1]: Detected architecture x86-64. Jan 20 00:44:02.231250 systemd[1]: Running in initrd. Jan 20 00:44:02.231329 systemd[1]: No hostname configured, using default hostname. Jan 20 00:44:02.231341 systemd[1]: Hostname set to . Jan 20 00:44:02.231352 systemd[1]: Initializing machine ID from VM UUID. Jan 20 00:44:02.231363 systemd[1]: Queued start job for default target initrd.target. Jan 20 00:44:02.231374 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 20 00:44:02.231385 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 20 00:44:02.231397 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
Jan 20 00:44:02.231449 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 20 00:44:02.231461 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 20 00:44:02.231472 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 20 00:44:02.231516 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 20 00:44:02.231527 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 20 00:44:02.231538 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 20 00:44:02.231553 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 20 00:44:02.231564 systemd[1]: Reached target paths.target - Path Units. Jan 20 00:44:02.231575 systemd[1]: Reached target slices.target - Slice Units. Jan 20 00:44:02.231586 systemd[1]: Reached target swap.target - Swaps. Jan 20 00:44:02.231612 systemd[1]: Reached target timers.target - Timer Units. Jan 20 00:44:02.231626 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 20 00:44:02.231638 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 20 00:44:02.231652 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 20 00:44:02.231664 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 20 00:44:02.231675 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 20 00:44:02.231687 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 20 00:44:02.231698 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 20 00:44:02.231709 systemd[1]: Reached target sockets.target - Socket Units. 
Jan 20 00:44:02.231721 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 20 00:44:02.231735 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 20 00:44:02.231749 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 20 00:44:02.231761 systemd[1]: Starting systemd-fsck-usr.service...
Jan 20 00:44:02.231772 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 20 00:44:02.231783 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 20 00:44:02.231794 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 20 00:44:02.231831 systemd-journald[194]: Collecting audit messages is disabled.
Jan 20 00:44:02.231860 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 20 00:44:02.231871 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 20 00:44:02.231883 systemd-journald[194]: Journal started
Jan 20 00:44:02.231908 systemd-journald[194]: Runtime Journal (/run/log/journal/09f3f8e150be4585a5d3e4975d9470b5) is 6.0M, max 48.4M, 42.3M free.
Jan 20 00:44:02.245352 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 20 00:44:02.249687 systemd-modules-load[195]: Inserted module 'overlay'
Jan 20 00:44:02.250067 systemd[1]: Finished systemd-fsck-usr.service.
Jan 20 00:44:02.267554 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 20 00:44:02.271503 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 20 00:44:02.278583 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 20 00:44:02.280889 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 20 00:44:02.318581 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 20 00:44:02.319075 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 20 00:44:02.504579 kernel: Bridge firewalling registered
Jan 20 00:44:02.320734 systemd-modules-load[195]: Inserted module 'br_netfilter'
Jan 20 00:44:02.504971 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 20 00:44:02.505394 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 20 00:44:02.544879 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 20 00:44:02.545667 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 20 00:44:02.558573 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 20 00:44:02.589957 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 20 00:44:02.602685 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 20 00:44:02.604504 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 20 00:44:02.613457 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 20 00:44:02.640102 dracut-cmdline[234]: dracut-dracut-053
Jan 20 00:44:02.643968 dracut-cmdline[234]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8c5dc1cd4dcc734d9dabe08efcaa33dd0d0e79b2d8f11a958a4b004e775e3441
Jan 20 00:44:02.657751 systemd-resolved[227]: Positive Trust Anchors:
Jan 20 00:44:02.657763 systemd-resolved[227]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 20 00:44:02.657789 systemd-resolved[227]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 20 00:44:02.660355 systemd-resolved[227]: Defaulting to hostname 'linux'.
Jan 20 00:44:02.661773 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 20 00:44:02.665698 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 20 00:44:02.749379 kernel: SCSI subsystem initialized
Jan 20 00:44:02.764378 kernel: Loading iSCSI transport class v2.0-870.
Jan 20 00:44:02.784697 kernel: iscsi: registered transport (tcp)
Jan 20 00:44:02.811802 kernel: iscsi: registered transport (qla4xxx)
Jan 20 00:44:02.811899 kernel: QLogic iSCSI HBA Driver
Jan 20 00:44:02.886925 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 20 00:44:02.900652 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 20 00:44:02.951790 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
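The bridge message above warns that arp/ip/ip6tables filtering for bridged traffic now requires the br_netfilter module; here dracut's module loader inserts it moments later. On a system where nothing loads it automatically, a persistent setup might look like the following sketch (standard systemd modules-load.d and sysctl.d drop-in paths; the file names are illustrative):

```
# /etc/modules-load.d/br_netfilter.conf
# systemd-modules-load.service inserts the listed modules at boot
br_netfilter

# /etc/sysctl.d/90-bridge-nf.conf
# with the module loaded, route bridged frames through iptables/ip6tables
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
```

The sysctl keys only exist once br_netfilter is loaded, which is why the module drop-in has to come first.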
Jan 20 00:44:02.951877 kernel: device-mapper: uevent: version 1.0.3
Jan 20 00:44:02.956432 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 20 00:44:03.016650 kernel: raid6: avx2x4 gen() 21366 MB/s
Jan 20 00:44:03.035457 kernel: raid6: avx2x2 gen() 20605 MB/s
Jan 20 00:44:03.057389 kernel: raid6: avx2x1 gen() 12694 MB/s
Jan 20 00:44:03.057511 kernel: raid6: using algorithm avx2x4 gen() 21366 MB/s
Jan 20 00:44:03.080495 kernel: raid6: .... xor() 5139 MB/s, rmw enabled
Jan 20 00:44:03.080545 kernel: raid6: using avx2x2 recovery algorithm
Jan 20 00:44:03.114611 kernel: xor: automatically using best checksumming function avx
Jan 20 00:44:03.359687 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 20 00:44:03.379578 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 20 00:44:03.420132 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 20 00:44:03.439872 systemd-udevd[416]: Using default interface naming scheme 'v255'.
Jan 20 00:44:03.445590 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 20 00:44:03.466953 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 20 00:44:03.507357 dracut-pre-trigger[425]: rd.md=0: removing MD RAID activation
Jan 20 00:44:03.554857 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 20 00:44:03.565513 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 20 00:44:03.720955 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 20 00:44:03.740616 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 20 00:44:03.768913 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 20 00:44:03.781009 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 20 00:44:03.809694 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 20 00:44:03.815442 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 20 00:44:03.834181 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Jan 20 00:44:03.834572 kernel: cryptd: max_cpu_qlen set to 1000
Jan 20 00:44:03.837522 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 20 00:44:03.854969 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 20 00:44:03.855232 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 20 00:44:03.868668 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jan 20 00:44:03.875664 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 20 00:44:03.913462 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 20 00:44:03.913490 kernel: GPT:9289727 != 19775487
Jan 20 00:44:03.913512 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 20 00:44:03.913546 kernel: GPT:9289727 != 19775487
Jan 20 00:44:03.913578 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 20 00:44:03.913607 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 20 00:44:03.913649 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 20 00:44:03.919474 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 20 00:44:03.932043 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 20 00:44:03.952472 kernel: libata version 3.00 loaded.
Jan 20 00:44:03.958585 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 20 00:44:03.958649 kernel: AES CTR mode by8 optimization enabled
Jan 20 00:44:03.965549 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 20 00:44:03.977364 kernel: ahci 0000:00:1f.2: version 3.0
Jan 20 00:44:03.977694 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jan 20 00:44:03.977722 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Jan 20 00:44:03.996218 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jan 20 00:44:03.999602 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 20 00:44:04.059109 kernel: BTRFS: device fsid ea39c6ab-04c2-4917-8268-943d4ecb2b5c devid 1 transid 33 /dev/vda3 scanned by (udev-worker) (469)
Jan 20 00:44:04.059133 kernel: scsi host0: ahci
Jan 20 00:44:04.059477 kernel: scsi host1: ahci
Jan 20 00:44:04.059640 kernel: scsi host2: ahci
Jan 20 00:44:04.059787 kernel: scsi host3: ahci
Jan 20 00:44:04.059965 kernel: scsi host4: ahci
Jan 20 00:44:04.060177 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (473)
Jan 20 00:44:04.060190 kernel: scsi host5: ahci
Jan 20 00:44:04.060481 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Jan 20 00:44:04.060492 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Jan 20 00:44:04.060502 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Jan 20 00:44:04.060511 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Jan 20 00:44:04.060520 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Jan 20 00:44:04.060528 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Jan 20 00:44:04.077695 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 20 00:44:04.284990 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 20 00:44:04.305563 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 20 00:44:04.306043 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 20 00:44:04.314948 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 20 00:44:04.348040 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 20 00:44:04.362830 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jan 20 00:44:04.362862 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jan 20 00:44:04.362874 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jan 20 00:44:04.354323 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 20 00:44:04.403151 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jan 20 00:44:04.403175 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jan 20 00:44:04.403186 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 20 00:44:04.403196 kernel: ata3.00: applying bridge limits
Jan 20 00:44:04.403206 kernel: ata3.00: configured for UDMA/100
Jan 20 00:44:04.403216 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jan 20 00:44:04.370518 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 20 00:44:04.418712 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jan 20 00:44:04.433933 disk-uuid[562]: Primary Header is updated.
Jan 20 00:44:04.433933 disk-uuid[562]: Secondary Entries is updated.
Jan 20 00:44:04.433933 disk-uuid[562]: Secondary Header is updated.
Jan 20 00:44:04.441047 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 20 00:44:04.441383 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 20 00:44:04.474361 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 20 00:44:04.536723 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 20 00:44:04.536988 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 20 00:44:04.553448 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Jan 20 00:44:05.472457 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 20 00:44:05.473404 disk-uuid[567]: The operation has completed successfully.
Jan 20 00:44:05.531127 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 20 00:44:05.531461 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 20 00:44:05.568665 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 20 00:44:05.591070 sh[597]: Success
Jan 20 00:44:05.616354 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Jan 20 00:44:05.676181 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 20 00:44:05.702720 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 20 00:44:05.711104 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 20 00:44:05.733631 kernel: BTRFS info (device dm-0): first mount of filesystem ea39c6ab-04c2-4917-8268-943d4ecb2b5c
Jan 20 00:44:05.733829 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 20 00:44:05.733848 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 20 00:44:05.736795 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 20 00:44:05.739311 kernel: BTRFS info (device dm-0): using free space tree
Jan 20 00:44:05.755186 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 20 00:44:05.756818 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
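verity-setup maps /dev/mapper/usr using the verity.usrhash= value carried on the kernel command line shown at the top of the boot. As an aside, extracting such parameters from /proc/cmdline-style text is a simple exercise; a minimal sketch follows (hypothetical helper, space-separated key=value tokens only, no support for quoted values):

```python
def parse_kernel_cmdline(cmdline: str) -> dict:
    """Split a kernel command line into key/value pairs.

    Bare flags map to ''. When a key repeats (e.g. rootflags above appears
    twice), the later occurrence wins, matching last-wins semantics.
    """
    params = {}
    for token in cmdline.split():
        key, _, value = token.partition("=")
        params[key] = value
    return params


cmdline = (
    "BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr "
    "verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 "
    "rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT "
    "console=ttyS0,115200 flatcar.first_boot=detected "
    "verity.usrhash=8c5dc1cd4dcc734d9dabe08efcaa33dd0d0e79b2d8f11a958a4b004e775e3441"
)
params = parse_kernel_cmdline(cmdline)
print(params["verity.usrhash"])  # the dm-verity root hash verity-setup consumes
```

Note that `verity.usr=PARTUUID=…` splits at the first `=`, so its value keeps the `PARTUUID=` prefix intact, which is how such composite values are normally handled.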
Jan 20 00:44:05.773687 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 20 00:44:05.786511 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 20 00:44:05.806719 kernel: BTRFS info (device vda6): first mount of filesystem 4d38e730-f67b-44a8-80fa-82ea4ebb2132
Jan 20 00:44:05.806813 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 20 00:44:05.806830 kernel: BTRFS info (device vda6): using free space tree
Jan 20 00:44:05.813397 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 20 00:44:05.828903 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 20 00:44:05.838108 kernel: BTRFS info (device vda6): last unmount of filesystem 4d38e730-f67b-44a8-80fa-82ea4ebb2132
Jan 20 00:44:05.847538 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 20 00:44:05.861741 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 20 00:44:05.936937 ignition[671]: Ignition 2.19.0
Jan 20 00:44:05.936949 ignition[671]: Stage: fetch-offline
Jan 20 00:44:05.936990 ignition[671]: no configs at "/usr/lib/ignition/base.d"
Jan 20 00:44:05.937000 ignition[671]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 20 00:44:05.937117 ignition[671]: parsed url from cmdline: ""
Jan 20 00:44:05.937121 ignition[671]: no config URL provided
Jan 20 00:44:05.937128 ignition[671]: reading system config file "/usr/lib/ignition/user.ign"
Jan 20 00:44:05.937137 ignition[671]: no config at "/usr/lib/ignition/user.ign"
Jan 20 00:44:05.955086 unknown[671]: fetched base config from "system"
Jan 20 00:44:05.937168 ignition[671]: op(1): [started] loading QEMU firmware config module
Jan 20 00:44:05.955094 unknown[671]: fetched user config from "qemu"
Jan 20 00:44:05.937174 ignition[671]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jan 20 00:44:05.962708 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 20 00:44:05.948842 ignition[671]: op(1): [finished] loading QEMU firmware config module
Jan 20 00:44:05.950556 ignition[671]: parsing config with SHA512: 917d75b120cfd056bad2aaa795f5a8ef0460c063048860ccb96c24329c59e61ba92d2cd7c7f16702ed0c5e50b9db97ca0fdc5db63aa3348585b926108f7f8809
Jan 20 00:44:05.956459 ignition[671]: fetch-offline: fetch-offline passed
Jan 20 00:44:05.958521 ignition[671]: Ignition finished successfully
Jan 20 00:44:06.049194 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 20 00:44:06.074835 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 20 00:44:06.121023 systemd-networkd[786]: lo: Link UP
Jan 20 00:44:06.121055 systemd-networkd[786]: lo: Gained carrier
Jan 20 00:44:06.123502 systemd-networkd[786]: Enumeration completed
Jan 20 00:44:06.123794 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 20 00:44:06.124905 systemd-networkd[786]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 20 00:44:06.124910 systemd-networkd[786]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 20 00:44:06.127113 systemd-networkd[786]: eth0: Link UP
Jan 20 00:44:06.127119 systemd-networkd[786]: eth0: Gained carrier
Jan 20 00:44:06.127128 systemd-networkd[786]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 20 00:44:06.131804 systemd[1]: Reached target network.target - Network.
Jan 20 00:44:06.188403 ignition[788]: Ignition 2.19.0
Jan 20 00:44:06.137206 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jan 20 00:44:06.188449 ignition[788]: Stage: kargs
Jan 20 00:44:06.153904 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
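The fetch-offline stage above found no user config on disk or on the command line, then pulled one over the qemu_fw_cfg channel ("fetched user config from 'qemu'") and verified its SHA512 before handing it to later stages. For orientation only, a user config of the kind Ignition consumes might look like the fragment below; the spec version and key material here are hypothetical and must match whatever the Ignition build on the image actually accepts:

```
{
  "ignition": { "version": "3.3.0" },
  "passwd": {
    "users": [
      {
        "name": "core",
        "sshAuthorizedKeys": ["ssh-ed25519 AAAA... user@host"]
      }
    ]
  }
}
```

A config shaped like this would explain the later files-stage entries that create the "core" user and write its authorized keys.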
Jan 20 00:44:06.189002 ignition[788]: no configs at "/usr/lib/ignition/base.d"
Jan 20 00:44:06.165761 systemd-networkd[786]: eth0: DHCPv4 address 10.0.0.96/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 20 00:44:06.189035 ignition[788]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 20 00:44:06.196781 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 20 00:44:06.190077 ignition[788]: kargs: kargs passed
Jan 20 00:44:06.228964 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 20 00:44:06.190134 ignition[788]: Ignition finished successfully
Jan 20 00:44:06.249384 ignition[797]: Ignition 2.19.0
Jan 20 00:44:06.249472 ignition[797]: Stage: disks
Jan 20 00:44:06.252574 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 20 00:44:06.249720 ignition[797]: no configs at "/usr/lib/ignition/base.d"
Jan 20 00:44:06.258975 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 20 00:44:06.249732 ignition[797]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 20 00:44:06.260997 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 20 00:44:06.250252 ignition[797]: disks: disks passed
Jan 20 00:44:06.267383 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 20 00:44:06.250364 ignition[797]: Ignition finished successfully
Jan 20 00:44:06.268994 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 20 00:44:06.273928 systemd[1]: Reached target basic.target - Basic System.
Jan 20 00:44:06.356758 systemd-fsck[808]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 20 00:44:06.303665 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 20 00:44:06.335027 systemd-resolved[227]: Detected conflict on linux IN A 10.0.0.96
Jan 20 00:44:06.335036 systemd-resolved[227]: Hostname conflict, changing published hostname from 'linux' to 'linux8'.
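In the entries above, eth0 is matched by the catch-all /usr/lib/systemd/network/zz-default.network and obtains 10.0.0.96/16 via DHCPv4. A drop-in with a similar effect would look roughly like this (illustrative file name; Flatcar's real zz-default.network differs in its match conditions and options):

```
# /etc/systemd/network/50-dhcp.network
# hypothetical unit: DHCP on any interface named eth*
[Match]
Name=eth*

[Network]
DHCP=yes
```

Units in /etc/systemd/network/ sort before /usr/lib/systemd/network/, so a drop-in like this would take precedence over the shipped default for the interfaces it matches.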
Jan 20 00:44:06.346712 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 20 00:44:06.376943 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 20 00:44:06.527456 kernel: EXT4-fs (vda9): mounted filesystem 3f4cac35-b37d-4410-a45a-1329edafa0f9 r/w with ordered data mode. Quota mode: none.
Jan 20 00:44:06.529506 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 20 00:44:06.534515 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 20 00:44:06.558802 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 20 00:44:06.575558 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (816)
Jan 20 00:44:06.565496 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 20 00:44:06.607642 kernel: BTRFS info (device vda6): first mount of filesystem 4d38e730-f67b-44a8-80fa-82ea4ebb2132
Jan 20 00:44:06.607667 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 20 00:44:06.607687 kernel: BTRFS info (device vda6): using free space tree
Jan 20 00:44:06.607697 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 20 00:44:06.575974 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 20 00:44:06.576033 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 20 00:44:06.576069 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 20 00:44:06.595793 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 20 00:44:06.597674 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 20 00:44:06.610113 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 20 00:44:06.678047 initrd-setup-root[840]: cut: /sysroot/etc/passwd: No such file or directory
Jan 20 00:44:06.700455 initrd-setup-root[847]: cut: /sysroot/etc/group: No such file or directory
Jan 20 00:44:06.708599 initrd-setup-root[854]: cut: /sysroot/etc/shadow: No such file or directory
Jan 20 00:44:06.716186 initrd-setup-root[861]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 20 00:44:07.003034 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 20 00:44:07.024834 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 20 00:44:07.030074 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 20 00:44:07.041112 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 20 00:44:07.049528 kernel: BTRFS info (device vda6): last unmount of filesystem 4d38e730-f67b-44a8-80fa-82ea4ebb2132
Jan 20 00:44:07.070349 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 20 00:44:07.093597 ignition[930]: INFO : Ignition 2.19.0
Jan 20 00:44:07.093597 ignition[930]: INFO : Stage: mount
Jan 20 00:44:07.098408 ignition[930]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 20 00:44:07.098408 ignition[930]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 20 00:44:07.098408 ignition[930]: INFO : mount: mount passed
Jan 20 00:44:07.098408 ignition[930]: INFO : Ignition finished successfully
Jan 20 00:44:07.113993 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 20 00:44:07.138523 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 20 00:44:07.154229 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 20 00:44:07.172364 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (942)
Jan 20 00:44:07.179588 kernel: BTRFS info (device vda6): first mount of filesystem 4d38e730-f67b-44a8-80fa-82ea4ebb2132
Jan 20 00:44:07.179631 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 20 00:44:07.179643 kernel: BTRFS info (device vda6): using free space tree
Jan 20 00:44:07.199370 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 20 00:44:07.201826 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 20 00:44:07.240694 ignition[959]: INFO : Ignition 2.19.0
Jan 20 00:44:07.240694 ignition[959]: INFO : Stage: files
Jan 20 00:44:07.245695 ignition[959]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 20 00:44:07.245695 ignition[959]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 20 00:44:07.253372 ignition[959]: DEBUG : files: compiled without relabeling support, skipping
Jan 20 00:44:07.257737 ignition[959]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 20 00:44:07.257737 ignition[959]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 20 00:44:07.268034 ignition[959]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 20 00:44:07.273228 ignition[959]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 20 00:44:07.279369 unknown[959]: wrote ssh authorized keys file for user: core
Jan 20 00:44:07.286671 ignition[959]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 20 00:44:07.286671 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh"
Jan 20 00:44:07.286671 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh"
Jan 20 00:44:07.286671 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 20 00:44:07.286671 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 20 00:44:07.286671 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Jan 20 00:44:07.286671 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Jan 20 00:44:07.286671 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Jan 20 00:44:07.286671 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1
Jan 20 00:44:07.607034 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Jan 20 00:44:07.832683 systemd-networkd[786]: eth0: Gained IPv6LL
Jan 20 00:44:08.101617 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Jan 20 00:44:08.101617 ignition[959]: INFO : files: op(7): [started] processing unit "coreos-metadata.service"
Jan 20 00:44:08.113184 ignition[959]: INFO : files: op(7): op(8): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 20 00:44:08.120126 ignition[959]: INFO : files: op(7): op(8): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 20 00:44:08.129654 ignition[959]: INFO : files: op(7): [finished] processing unit "coreos-metadata.service"
Jan 20 00:44:08.129654 ignition[959]: INFO : files: op(9): [started] setting preset to disabled for "coreos-metadata.service"
Jan 20 00:44:08.181498 ignition[959]: INFO : files: op(9): op(a): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jan 20 00:44:08.196040 ignition[959]: INFO : files: op(9): op(a): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jan 20 00:44:08.203573 ignition[959]: INFO : files: op(9): [finished] setting preset to disabled for "coreos-metadata.service"
Jan 20 00:44:08.211403 ignition[959]: INFO : files: createResultFile: createFiles: op(b): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 20 00:44:08.220591 ignition[959]: INFO : files: createResultFile: createFiles: op(b): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 20 00:44:08.229229 ignition[959]: INFO : files: files passed
Jan 20 00:44:08.232152 ignition[959]: INFO : Ignition finished successfully
Jan 20 00:44:08.237035 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 20 00:44:08.250857 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 20 00:44:08.259715 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 20 00:44:08.270093 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 20 00:44:08.273977 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 20 00:44:08.283991 initrd-setup-root-after-ignition[986]: grep: /sysroot/oem/oem-release: No such file or directory
Jan 20 00:44:08.289357 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 20 00:44:08.289357 initrd-setup-root-after-ignition[988]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 20 00:44:08.299363 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 20 00:44:08.308065 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 20 00:44:08.316172 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 20 00:44:08.333667 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 20 00:44:08.383671 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 20 00:44:08.383851 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 20 00:44:08.392525 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 20 00:44:08.399886 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 20 00:44:08.406900 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 20 00:44:08.416600 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 20 00:44:08.446885 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 20 00:44:08.471806 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 20 00:44:08.494178 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 20 00:44:08.499937 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 20 00:44:08.508873 systemd[1]: Stopped target timers.target - Timer Units.
Jan 20 00:44:08.517165 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 20 00:44:08.517498 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 20 00:44:08.526704 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 20 00:44:08.534337 systemd[1]: Stopped target basic.target - Basic System.
Jan 20 00:44:08.545010 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 20 00:44:08.555134 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 20 00:44:08.564804 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 20 00:44:08.574678 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 20 00:44:08.584043 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 20 00:44:08.593547 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 20 00:44:08.601561 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 20 00:44:08.610067 systemd[1]: Stopped target swap.target - Swaps.
Jan 20 00:44:08.617318 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 20 00:44:08.617556 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 20 00:44:08.622875 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 20 00:44:08.630472 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 20 00:44:08.636505 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 20 00:44:08.636990 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 20 00:44:08.645688 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 20 00:44:08.645850 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 20 00:44:08.656346 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 20 00:44:08.656652 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 20 00:44:08.664065 systemd[1]: Stopped target paths.target - Path Units.
Jan 20 00:44:08.672630 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 20 00:44:08.672956 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 20 00:44:08.682035 systemd[1]: Stopped target slices.target - Slice Units.
Jan 20 00:44:08.692096 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 20 00:44:08.702221 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 20 00:44:08.702505 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 20 00:44:08.710702 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 20 00:44:08.710842 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 20 00:44:08.719797 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 20 00:44:08.719997 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 20 00:44:08.818885 ignition[1013]: INFO : Ignition 2.19.0
Jan 20 00:44:08.818885 ignition[1013]: INFO : Stage: umount
Jan 20 00:44:08.818885 ignition[1013]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 20 00:44:08.818885 ignition[1013]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 20 00:44:08.818885 ignition[1013]: INFO : umount: umount passed
Jan 20 00:44:08.818885 ignition[1013]: INFO : Ignition finished successfully
Jan 20 00:44:08.729506 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 20 00:44:08.729723 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 20 00:44:08.758726 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 20 00:44:08.768863 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 20 00:44:08.773403 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 20 00:44:08.773634 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 20 00:44:08.785110 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 20 00:44:08.785513 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 20 00:44:08.799392 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 20 00:44:08.799622 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 20 00:44:08.809213 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 20 00:44:08.810119 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 20 00:44:08.810374 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 20 00:44:08.821967 systemd[1]: Stopped target network.target - Network.
Jan 20 00:44:08.826954 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 20 00:44:08.827038 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 20 00:44:08.835569 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 20 00:44:08.835639 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 20 00:44:08.844174 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 20 00:44:08.844241 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 20 00:44:08.852591 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 20 00:44:08.852666 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 20 00:44:08.860833 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 20 00:44:08.868500 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 20 00:44:08.877112 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 20 00:44:08.877395 systemd-networkd[786]: eth0: DHCPv6 lease lost
Jan 20 00:44:08.877483 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 20 00:44:08.887216 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 20 00:44:08.887548 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 20 00:44:08.895689 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 20 00:44:08.895896 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 20 00:44:08.908737 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 20 00:44:08.908803 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 20 00:44:08.918012 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 20 00:44:08.918094 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 20 00:44:08.954931 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 20 00:44:08.964644 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 20 00:44:08.964815 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 20 00:44:08.970838 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 20 00:44:08.971360 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 20 00:44:08.978894 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 20 00:44:08.979048 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 20 00:44:08.985094 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 20 00:44:08.985223 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 20 00:44:08.990142 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 20 00:44:09.011635 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 20 00:44:09.011814 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 20 00:44:09.022057 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 20 00:44:09.022405 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 20 00:44:09.033745 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 20 00:44:09.033861 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 20 00:44:09.283031 systemd-journald[194]: Received SIGTERM from PID 1 (systemd).
Jan 20 00:44:09.042987 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 20 00:44:09.043052 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 20 00:44:09.053059 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 20 00:44:09.053182 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 20 00:44:09.063740 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 20 00:44:09.063866 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 20 00:44:09.073872 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 20 00:44:09.073957 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 20 00:44:09.111858 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 20 00:44:09.120751 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 20 00:44:09.120866 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 20 00:44:09.130409 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 20 00:44:09.130563 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 20 00:44:09.141528 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 20 00:44:09.141608 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 20 00:44:09.152386 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 20 00:44:09.152533 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 20 00:44:09.159335 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 20 00:44:09.159565 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 20 00:44:09.170367 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 20 00:44:09.203697 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 20 00:44:09.222551 systemd[1]: Switching root.
Jan 20 00:44:09.375246 systemd-journald[194]: Journal stopped
Jan 20 00:44:11.291895 kernel: SELinux: policy capability network_peer_controls=1
Jan 20 00:44:11.291994 kernel: SELinux: policy capability open_perms=1
Jan 20 00:44:11.292013 kernel: SELinux: policy capability extended_socket_class=1
Jan 20 00:44:11.292030 kernel: SELinux: policy capability always_check_network=0
Jan 20 00:44:11.292053 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 20 00:44:11.292069 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 20 00:44:11.292086 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 20 00:44:11.292102 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 20 00:44:11.292126 kernel: audit: type=1403 audit(1768869849.494:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 20 00:44:11.292151 systemd[1]: Successfully loaded SELinux policy in 69.115ms.
Jan 20 00:44:11.292177 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 14.370ms.
Jan 20 00:44:11.292195 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 20 00:44:11.292214 systemd[1]: Detected virtualization kvm.
Jan 20 00:44:11.292230 systemd[1]: Detected architecture x86-64.
Jan 20 00:44:11.292332 systemd[1]: Detected first boot.
Jan 20 00:44:11.292353 systemd[1]: Initializing machine ID from VM UUID.
Jan 20 00:44:11.292370 zram_generator::config[1059]: No configuration found.
Jan 20 00:44:11.292388 systemd[1]: Populated /etc with preset unit settings.
Jan 20 00:44:11.292405 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 20 00:44:11.292477 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 20 00:44:11.292502 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 20 00:44:11.292522 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 20 00:44:11.292550 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 20 00:44:11.292569 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 20 00:44:11.292586 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 20 00:44:11.292604 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 20 00:44:11.292623 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 20 00:44:11.292641 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 20 00:44:11.292660 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 20 00:44:11.292677 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 20 00:44:11.292701 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 20 00:44:11.292720 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 20 00:44:11.292739 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 20 00:44:11.292758 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 20 00:44:11.292779 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 20 00:44:11.292796 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 20 00:44:11.292817 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 20 00:44:11.292835 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 20 00:44:11.292851 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 20 00:44:11.292872 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 20 00:44:11.292891 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 20 00:44:11.292908 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 20 00:44:11.292927 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 20 00:44:11.292945 systemd[1]: Reached target slices.target - Slice Units.
Jan 20 00:44:11.292963 systemd[1]: Reached target swap.target - Swaps.
Jan 20 00:44:11.292982 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 20 00:44:11.293001 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 20 00:44:11.293025 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 20 00:44:11.293044 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 20 00:44:11.293061 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 20 00:44:11.293079 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 20 00:44:11.293098 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 20 00:44:11.293117 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 20 00:44:11.293135 systemd[1]: Mounting media.mount - External Media Directory...
Jan 20 00:44:11.293155 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 20 00:44:11.293173 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 20 00:44:11.293196 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 20 00:44:11.293215 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 20 00:44:11.293235 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 20 00:44:11.293253 systemd[1]: Reached target machines.target - Containers.
Jan 20 00:44:11.293356 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 20 00:44:11.293377 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 20 00:44:11.293396 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 20 00:44:11.293414 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 20 00:44:11.293503 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 20 00:44:11.293523 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 20 00:44:11.293542 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 20 00:44:11.293560 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 20 00:44:11.293587 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 20 00:44:11.293606 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 20 00:44:11.293626 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 20 00:44:11.293646 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 20 00:44:11.293664 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 20 00:44:11.293687 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 20 00:44:11.293704 kernel: fuse: init (API version 7.39)
Jan 20 00:44:11.293722 kernel: loop: module loaded
Jan 20 00:44:11.293739 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 20 00:44:11.293756 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 20 00:44:11.293774 kernel: ACPI: bus type drm_connector registered
Jan 20 00:44:11.293789 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 20 00:44:11.293837 systemd-journald[1140]: Collecting audit messages is disabled.
Jan 20 00:44:11.293875 systemd-journald[1140]: Journal started
Jan 20 00:44:11.293903 systemd-journald[1140]: Runtime Journal (/run/log/journal/09f3f8e150be4585a5d3e4975d9470b5) is 6.0M, max 48.4M, 42.3M free.
Jan 20 00:44:10.337751 systemd[1]: Queued start job for default target multi-user.target.
Jan 20 00:44:10.381161 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 20 00:44:10.387504 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 20 00:44:10.389459 systemd[1]: systemd-journald.service: Consumed 2.087s CPU time.
Jan 20 00:44:11.303464 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 20 00:44:11.323182 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 20 00:44:11.324225 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 20 00:44:11.326219 systemd[1]: Stopped verity-setup.service.
Jan 20 00:44:11.335227 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 20 00:44:11.346523 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 20 00:44:11.348050 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 20 00:44:11.352210 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 20 00:44:11.357645 systemd[1]: Mounted media.mount - External Media Directory.
Jan 20 00:44:11.361814 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 20 00:44:11.366166 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 20 00:44:11.370864 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 20 00:44:11.375007 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 20 00:44:11.379860 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 20 00:44:11.388702 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 20 00:44:11.389016 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 20 00:44:11.394244 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 20 00:44:11.394610 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 20 00:44:11.399233 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 20 00:44:11.399663 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 20 00:44:11.404757 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 20 00:44:11.405030 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 20 00:44:11.410762 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 20 00:44:11.411051 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 20 00:44:11.416163 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 20 00:44:11.417792 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 20 00:44:11.423378 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 20 00:44:11.429680 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 20 00:44:11.438949 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 20 00:44:11.462955 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 20 00:44:11.480493 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 20 00:44:11.490096 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 20 00:44:11.495662 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 20 00:44:11.495744 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 20 00:44:11.502746 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 20 00:44:11.510220 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 20 00:44:11.518007 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 20 00:44:11.523128 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 20 00:44:11.525694 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 20 00:44:11.535164 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 20 00:44:11.543759 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 20 00:44:11.545848 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 20 00:44:11.550637 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 20 00:44:11.554699 systemd-journald[1140]: Time spent on flushing to /var/log/journal/09f3f8e150be4585a5d3e4975d9470b5 is 27.750ms for 926 entries.
Jan 20 00:44:11.554699 systemd-journald[1140]: System Journal (/var/log/journal/09f3f8e150be4585a5d3e4975d9470b5) is 8.0M, max 195.6M, 187.6M free.
Jan 20 00:44:11.595241 systemd-journald[1140]: Received client request to flush runtime journal.
Jan 20 00:44:11.554709 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 20 00:44:11.585188 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 20 00:44:11.595106 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 20 00:44:11.607638 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 20 00:44:11.613528 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 20 00:44:11.622247 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 20 00:44:11.637992 kernel: loop0: detected capacity change from 0 to 140768
Jan 20 00:44:11.637194 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 20 00:44:11.651156 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 20 00:44:11.669491 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 20 00:44:11.677375 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 20 00:44:11.683196 systemd-tmpfiles[1174]: ACLs are not supported, ignoring.
Jan 20 00:44:11.683217 systemd-tmpfiles[1174]: ACLs are not supported, ignoring.
Jan 20 00:44:11.692464 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 20 00:44:11.711059 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 20 00:44:11.715047 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 20 00:44:11.731023 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 20 00:44:11.739497 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 20 00:44:11.753780 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 20 00:44:11.763007 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 20 00:44:11.764915 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 20 00:44:11.785353 kernel: loop1: detected capacity change from 0 to 219144
Jan 20 00:44:11.789745 udevadm[1189]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jan 20 00:44:11.823224 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 20 00:44:11.843635 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 20 00:44:11.853339 kernel: loop2: detected capacity change from 0 to 142488
Jan 20 00:44:11.879012 systemd-tmpfiles[1196]: ACLs are not supported, ignoring.
Jan 20 00:44:11.879072 systemd-tmpfiles[1196]: ACLs are not supported, ignoring.
Jan 20 00:44:11.888225 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 20 00:44:11.923538 kernel: loop3: detected capacity change from 0 to 140768
Jan 20 00:44:11.956505 kernel: loop4: detected capacity change from 0 to 219144
Jan 20 00:44:11.992080 kernel: loop5: detected capacity change from 0 to 142488
Jan 20 00:44:12.033576 (sd-merge)[1200]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jan 20 00:44:12.034236 (sd-merge)[1200]: Merged extensions into '/usr'.
Jan 20 00:44:12.041103 systemd[1]: Reloading requested from client PID 1173 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 20 00:44:12.041158 systemd[1]: Reloading...
Jan 20 00:44:12.122340 zram_generator::config[1223]: No configuration found.
Jan 20 00:44:12.259356 ldconfig[1168]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 20 00:44:12.296971 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 20 00:44:12.357178 systemd[1]: Reloading finished in 315 ms.
Jan 20 00:44:12.400964 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 20 00:44:12.406159 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 20 00:44:12.411884 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 20 00:44:12.438769 systemd[1]: Starting ensure-sysext.service...
Jan 20 00:44:12.444897 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 20 00:44:12.464668 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 20 00:44:12.475763 systemd[1]: Reloading requested from client PID 1264 ('systemctl') (unit ensure-sysext.service)...
Jan 20 00:44:12.475816 systemd[1]: Reloading...
Jan 20 00:44:12.480554 systemd-tmpfiles[1265]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 20 00:44:12.481112 systemd-tmpfiles[1265]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 20 00:44:12.482766 systemd-tmpfiles[1265]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 20 00:44:12.483148 systemd-tmpfiles[1265]: ACLs are not supported, ignoring.
Jan 20 00:44:12.483348 systemd-tmpfiles[1265]: ACLs are not supported, ignoring.
Jan 20 00:44:12.487912 systemd-tmpfiles[1265]: Detected autofs mount point /boot during canonicalization of boot.
Jan 20 00:44:12.487929 systemd-tmpfiles[1265]: Skipping /boot
Jan 20 00:44:12.504770 systemd-tmpfiles[1265]: Detected autofs mount point /boot during canonicalization of boot.
Jan 20 00:44:12.504783 systemd-tmpfiles[1265]: Skipping /boot
Jan 20 00:44:12.508010 systemd-udevd[1266]: Using default interface naming scheme 'v255'.
Jan 20 00:44:12.551712 zram_generator::config[1296]: No configuration found.
Jan 20 00:44:12.633383 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1309)
Jan 20 00:44:12.735588 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 20 00:44:12.770357 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Jan 20 00:44:12.782415 kernel: ACPI: button: Power Button [PWRF]
Jan 20 00:44:12.806099 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Jan 20 00:44:12.806579 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Jan 20 00:44:12.811564 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jan 20 00:44:12.822407 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Jan 20 00:44:12.905830 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jan 20 00:44:12.911785 systemd[1]: Reloading finished in 435 ms.
Jan 20 00:44:13.120516 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 20 00:44:13.128380 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 20 00:44:13.227774 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 20 00:44:13.360015 systemd[1]: Finished ensure-sysext.service.
Jan 20 00:44:13.371891 kernel: mousedev: PS/2 mouse device common for all mice
Jan 20 00:44:13.381918 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 20 00:44:13.406797 kernel: kvm_amd: TSC scaling supported
Jan 20 00:44:13.406883 kernel: kvm_amd: Nested Virtualization enabled
Jan 20 00:44:13.406933 kernel: kvm_amd: Nested Paging enabled
Jan 20 00:44:13.406957 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Jan 20 00:44:13.414902 kernel: kvm_amd: PMU virtualization is disabled
Jan 20 00:44:13.469889 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 20 00:44:13.501544 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 20 00:44:13.504339 kernel: EDAC MC: Ver: 3.0.0
Jan 20 00:44:13.510576 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 20 00:44:13.513522 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 20 00:44:13.522658 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 20 00:44:13.534962 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 20 00:44:13.552583 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 20 00:44:13.558805 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 20 00:44:13.563611 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 20 00:44:13.596830 augenrules[1384]: No rules
Jan 20 00:44:13.596790 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 20 00:44:13.608669 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 20 00:44:13.616861 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 20 00:44:13.621389 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 20 00:44:13.624543 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 20 00:44:13.628939 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 20 00:44:13.631790 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 20 00:44:13.633373 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 20 00:44:13.634211 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 20 00:44:13.638664 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 20 00:44:13.638874 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 20 00:44:13.656686 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 20 00:44:13.657029 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 20 00:44:13.662977 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 20 00:44:13.663841 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 20 00:44:13.669155 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 20 00:44:13.669619 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 20 00:44:13.674876 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 20 00:44:13.703669 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 20 00:44:13.703883 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 20 00:44:13.703985 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 20 00:44:13.707342 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 20 00:44:13.708587 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 20 00:44:13.710563 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 20 00:44:13.724143 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 20 00:44:13.913977 lvm[1405]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 20 00:44:13.946399 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 20 00:44:13.965833 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 20 00:44:14.006652 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 20 00:44:14.008988 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 20 00:44:14.011162 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 20 00:44:14.014389 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 20 00:44:14.047659 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 20 00:44:14.054806 lvm[1421]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 20 00:44:14.143911 kernel: hrtimer: interrupt took 3175751 ns Jan 20 00:44:14.247091 systemd-networkd[1391]: lo: Link UP Jan 20 00:44:14.247770 systemd-networkd[1391]: lo: Gained carrier Jan 20 00:44:14.248406 systemd-resolved[1392]: Positive Trust Anchors: Jan 20 00:44:14.248479 systemd-resolved[1392]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 20 00:44:14.248526 systemd-resolved[1392]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 20 00:44:14.250836 systemd-networkd[1391]: Enumeration completed Jan 20 00:44:14.252509 systemd-networkd[1391]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 20 00:44:14.252551 systemd-networkd[1391]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 20 00:44:14.254646 systemd-networkd[1391]: eth0: Link UP Jan 20 00:44:14.254691 systemd-networkd[1391]: eth0: Gained carrier Jan 20 00:44:14.254710 systemd-networkd[1391]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 20 00:44:14.255387 systemd-resolved[1392]: Defaulting to hostname 'linux'. Jan 20 00:44:14.276611 systemd-networkd[1391]: eth0: DHCPv4 address 10.0.0.96/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 20 00:44:14.277898 systemd-timesyncd[1393]: Network configuration changed, trying to establish connection. Jan 20 00:44:14.830882 systemd-resolved[1392]: Clock change detected. Flushing caches. Jan 20 00:44:14.830936 systemd-timesyncd[1393]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 20 00:44:14.831046 systemd-timesyncd[1393]: Initial clock synchronization to Tue 2026-01-20 00:44:14.830702 UTC. 
Jan 20 00:44:14.937190 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 20 00:44:14.963174 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 20 00:44:14.969196 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 20 00:44:14.977589 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 00:44:14.983399 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 20 00:44:14.993436 systemd[1]: Reached target network.target - Network. Jan 20 00:44:14.997172 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 20 00:44:15.004723 systemd[1]: Reached target sysinit.target - System Initialization. Jan 20 00:44:15.009539 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 20 00:44:15.015128 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 20 00:44:15.022383 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 20 00:44:15.031048 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 20 00:44:15.031161 systemd[1]: Reached target paths.target - Path Units. Jan 20 00:44:15.043461 systemd[1]: Reached target time-set.target - System Time Set. Jan 20 00:44:15.048018 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 20 00:44:15.052612 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 20 00:44:15.057694 systemd[1]: Reached target timers.target - Timer Units. Jan 20 00:44:15.066020 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 20 00:44:15.074313 systemd[1]: Starting docker.socket - Docker Socket for the API... 
Jan 20 00:44:15.089581 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 20 00:44:15.095708 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 20 00:44:15.102328 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 20 00:44:15.107223 systemd[1]: Reached target sockets.target - Socket Units. Jan 20 00:44:15.112065 systemd[1]: Reached target basic.target - Basic System. Jan 20 00:44:15.116248 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 20 00:44:15.116342 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 20 00:44:15.118336 systemd[1]: Starting containerd.service - containerd container runtime... Jan 20 00:44:15.125024 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 20 00:44:15.132569 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 20 00:44:15.142464 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 20 00:44:15.147571 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 20 00:44:15.149674 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 20 00:44:15.154348 jq[1432]: false Jan 20 00:44:15.156077 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 20 00:44:15.168043 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 20 00:44:15.179111 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 20 00:44:15.183741 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
Jan 20 00:44:15.184627 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 20 00:44:15.187085 extend-filesystems[1433]: Found loop3 Jan 20 00:44:15.190406 extend-filesystems[1433]: Found loop4 Jan 20 00:44:15.190406 extend-filesystems[1433]: Found loop5 Jan 20 00:44:15.190406 extend-filesystems[1433]: Found sr0 Jan 20 00:44:15.190406 extend-filesystems[1433]: Found vda Jan 20 00:44:15.190406 extend-filesystems[1433]: Found vda1 Jan 20 00:44:15.190406 extend-filesystems[1433]: Found vda2 Jan 20 00:44:15.190406 extend-filesystems[1433]: Found vda3 Jan 20 00:44:15.190406 extend-filesystems[1433]: Found usr Jan 20 00:44:15.190406 extend-filesystems[1433]: Found vda4 Jan 20 00:44:15.190406 extend-filesystems[1433]: Found vda6 Jan 20 00:44:15.190406 extend-filesystems[1433]: Found vda7 Jan 20 00:44:15.190406 extend-filesystems[1433]: Found vda9 Jan 20 00:44:15.190406 extend-filesystems[1433]: Checking size of /dev/vda9 Jan 20 00:44:15.260578 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1330) Jan 20 00:44:15.211378 dbus-daemon[1431]: [system] SELinux support is enabled Jan 20 00:44:15.193514 systemd[1]: Starting update-engine.service - Update Engine... Jan 20 00:44:15.261213 extend-filesystems[1433]: Resized partition /dev/vda9 Jan 20 00:44:15.265015 update_engine[1444]: I20260120 00:44:15.243528 1444 main.cc:92] Flatcar Update Engine starting Jan 20 00:44:15.265015 update_engine[1444]: I20260120 00:44:15.245779 1444 update_check_scheduler.cc:74] Next update check in 9m46s Jan 20 00:44:15.219473 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... 
Jan 20 00:44:15.269407 extend-filesystems[1453]: resize2fs 1.47.1 (20-May-2024) Jan 20 00:44:15.284751 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 20 00:44:15.284900 jq[1451]: true Jan 20 00:44:15.269221 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 20 00:44:15.297614 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 20 00:44:15.298077 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 20 00:44:15.298599 systemd[1]: motdgen.service: Deactivated successfully. Jan 20 00:44:15.300107 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 20 00:44:15.311314 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 20 00:44:15.311634 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 20 00:44:15.355930 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 20 00:44:15.355551 systemd-logind[1439]: Watching system buttons on /dev/input/event1 (Power Button) Jan 20 00:44:15.405683 jq[1456]: true Jan 20 00:44:15.355577 systemd-logind[1439]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 20 00:44:15.356254 (ntainerd)[1457]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 20 00:44:15.359292 systemd-logind[1439]: New seat seat0. Jan 20 00:44:15.368629 systemd[1]: Started systemd-logind.service - User Login Management. Jan 20 00:44:15.400187 systemd[1]: Started update-engine.service - Update Engine. Jan 20 00:44:15.406254 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
Jan 20 00:44:15.406475 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 20 00:44:15.412556 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 20 00:44:15.412733 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 20 00:44:15.413285 extend-filesystems[1453]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 20 00:44:15.413285 extend-filesystems[1453]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 20 00:44:15.413285 extend-filesystems[1453]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jan 20 00:44:15.435236 sshd_keygen[1452]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 20 00:44:15.435449 extend-filesystems[1433]: Resized filesystem in /dev/vda9 Jan 20 00:44:15.441452 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 20 00:44:15.451697 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 20 00:44:15.452125 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 20 00:44:15.458463 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 20 00:44:15.460388 bash[1483]: Updated "/home/core/.ssh/authorized_keys" Jan 20 00:44:15.463310 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 20 00:44:15.482503 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 20 00:44:15.487380 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 20 00:44:15.495314 systemd[1]: issuegen.service: Deactivated successfully. Jan 20 00:44:15.495573 systemd[1]: Finished issuegen.service - Generate /run/issue. 
Jan 20 00:44:15.497782 locksmithd[1485]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 20 00:44:15.510413 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 20 00:44:15.534662 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 20 00:44:15.549388 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 20 00:44:15.557102 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 20 00:44:15.562237 systemd[1]: Reached target getty.target - Login Prompts. Jan 20 00:44:16.768255 systemd-networkd[1391]: eth0: Gained IPv6LL Jan 20 00:44:16.779689 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 20 00:44:18.180356 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 20 00:44:18.186225 systemd[1]: Reached target network-online.target - Network is Online. Jan 20 00:44:18.204594 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 20 00:44:18.212034 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 00:44:18.226498 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 20 00:44:18.227319 containerd[1457]: time="2026-01-20T00:44:18.227121178Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 20 00:44:18.257604 systemd[1]: Started sshd@0-10.0.0.96:22-10.0.0.1:47948.service - OpenSSH per-connection server daemon (10.0.0.1:47948). Jan 20 00:44:18.296045 containerd[1457]: time="2026-01-20T00:44:18.295651276Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 20 00:44:18.307646 containerd[1457]: time="2026-01-20T00:44:18.306923856Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.119-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 20 00:44:18.307646 containerd[1457]: time="2026-01-20T00:44:18.307022781Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 20 00:44:18.307646 containerd[1457]: time="2026-01-20T00:44:18.307089015Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 20 00:44:18.307646 containerd[1457]: time="2026-01-20T00:44:18.307390698Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 20 00:44:18.307646 containerd[1457]: time="2026-01-20T00:44:18.307419932Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 20 00:44:18.307646 containerd[1457]: time="2026-01-20T00:44:18.307549183Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 20 00:44:18.307646 containerd[1457]: time="2026-01-20T00:44:18.307574381Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 20 00:44:18.308255 containerd[1457]: time="2026-01-20T00:44:18.308127764Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 20 00:44:18.308255 containerd[1457]: time="2026-01-20T00:44:18.308155616Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Jan 20 00:44:18.308255 containerd[1457]: time="2026-01-20T00:44:18.308179791Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 20 00:44:18.308255 containerd[1457]: time="2026-01-20T00:44:18.308197785Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 20 00:44:18.308421 containerd[1457]: time="2026-01-20T00:44:18.308361540Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 20 00:44:18.309514 containerd[1457]: time="2026-01-20T00:44:18.308911808Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 20 00:44:18.309514 containerd[1457]: time="2026-01-20T00:44:18.309161204Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 20 00:44:18.309514 containerd[1457]: time="2026-01-20T00:44:18.309184969Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 20 00:44:18.309514 containerd[1457]: time="2026-01-20T00:44:18.309337133Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 20 00:44:18.309514 containerd[1457]: time="2026-01-20T00:44:18.309463438Z" level=info msg="metadata content store policy set" policy=shared Jan 20 00:44:18.312029 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 20 00:44:18.319644 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 20 00:44:18.320126 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. 
Jan 20 00:44:18.327362 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 20 00:44:18.347923 containerd[1457]: time="2026-01-20T00:44:18.346052126Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 20 00:44:18.347923 containerd[1457]: time="2026-01-20T00:44:18.346167841Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 20 00:44:18.347923 containerd[1457]: time="2026-01-20T00:44:18.346192788Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 20 00:44:18.347923 containerd[1457]: time="2026-01-20T00:44:18.346211853Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 20 00:44:18.347923 containerd[1457]: time="2026-01-20T00:44:18.346247350Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 20 00:44:18.347923 containerd[1457]: time="2026-01-20T00:44:18.346467712Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 20 00:44:18.347923 containerd[1457]: time="2026-01-20T00:44:18.346765397Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 20 00:44:18.347923 containerd[1457]: time="2026-01-20T00:44:18.347037986Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 20 00:44:18.347923 containerd[1457]: time="2026-01-20T00:44:18.347061651Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 20 00:44:18.347923 containerd[1457]: time="2026-01-20T00:44:18.347084173Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." 
type=io.containerd.sandbox.controller.v1 Jan 20 00:44:18.347923 containerd[1457]: time="2026-01-20T00:44:18.347104010Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 20 00:44:18.347923 containerd[1457]: time="2026-01-20T00:44:18.347121863Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 20 00:44:18.347923 containerd[1457]: time="2026-01-20T00:44:18.347136660Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 20 00:44:18.347923 containerd[1457]: time="2026-01-20T00:44:18.347153843Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 20 00:44:18.348571 containerd[1457]: time="2026-01-20T00:44:18.347171125Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 20 00:44:18.348571 containerd[1457]: time="2026-01-20T00:44:18.347187235Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 20 00:44:18.348571 containerd[1457]: time="2026-01-20T00:44:18.347216039Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 20 00:44:18.348571 containerd[1457]: time="2026-01-20T00:44:18.347234864Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 20 00:44:18.348571 containerd[1457]: time="2026-01-20T00:44:18.347259971Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 20 00:44:18.348571 containerd[1457]: time="2026-01-20T00:44:18.347276542Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." 
type=io.containerd.grpc.v1 Jan 20 00:44:18.348571 containerd[1457]: time="2026-01-20T00:44:18.347291730Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 20 00:44:18.348571 containerd[1457]: time="2026-01-20T00:44:18.347306457Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 20 00:44:18.348571 containerd[1457]: time="2026-01-20T00:44:18.347324952Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 20 00:44:18.348571 containerd[1457]: time="2026-01-20T00:44:18.347342044Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 20 00:44:18.348571 containerd[1457]: time="2026-01-20T00:44:18.347356511Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 20 00:44:18.348571 containerd[1457]: time="2026-01-20T00:44:18.347371619Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 20 00:44:18.348571 containerd[1457]: time="2026-01-20T00:44:18.347391376Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 20 00:44:18.348571 containerd[1457]: time="2026-01-20T00:44:18.347412045Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 20 00:44:18.349195 containerd[1457]: time="2026-01-20T00:44:18.347428726Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 20 00:44:18.349195 containerd[1457]: time="2026-01-20T00:44:18.347445076Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 20 00:44:18.349195 containerd[1457]: time="2026-01-20T00:44:18.347538000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Jan 20 00:44:18.349195 containerd[1457]: time="2026-01-20T00:44:18.347563318Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 20 00:44:18.349195 containerd[1457]: time="2026-01-20T00:44:18.347586451Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 20 00:44:18.349195 containerd[1457]: time="2026-01-20T00:44:18.347597521Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 20 00:44:18.349195 containerd[1457]: time="2026-01-20T00:44:18.347607289Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 20 00:44:18.349195 containerd[1457]: time="2026-01-20T00:44:18.347687469Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 20 00:44:18.349195 containerd[1457]: time="2026-01-20T00:44:18.347706034Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 20 00:44:18.349195 containerd[1457]: time="2026-01-20T00:44:18.347716062Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 20 00:44:18.349195 containerd[1457]: time="2026-01-20T00:44:18.347726663Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 20 00:44:18.349195 containerd[1457]: time="2026-01-20T00:44:18.347735198Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 20 00:44:18.349195 containerd[1457]: time="2026-01-20T00:44:18.347746289Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." 
type=io.containerd.nri.v1 Jan 20 00:44:18.349195 containerd[1457]: time="2026-01-20T00:44:18.347759193Z" level=info msg="NRI interface is disabled by configuration." Jan 20 00:44:18.349588 containerd[1457]: time="2026-01-20T00:44:18.347768110Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 20 00:44:18.349619 containerd[1457]: time="2026-01-20T00:44:18.348145194Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true 
SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 20 00:44:18.349619 containerd[1457]: time="2026-01-20T00:44:18.348197792Z" level=info msg="Connect containerd service" Jan 20 00:44:18.349619 containerd[1457]: time="2026-01-20T00:44:18.348237657Z" level=info msg="using legacy CRI server" Jan 20 00:44:18.349619 containerd[1457]: time="2026-01-20T00:44:18.348244790Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 20 00:44:18.349619 containerd[1457]: time="2026-01-20T00:44:18.348461204Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 20 00:44:18.351128 containerd[1457]: time="2026-01-20T00:44:18.350884299Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 20 00:44:18.352398 containerd[1457]: time="2026-01-20T00:44:18.352290415Z" level=info msg="Start subscribing containerd event" Jan 20 
00:44:18.352398 containerd[1457]: time="2026-01-20T00:44:18.352372518Z" level=info msg="Start recovering state" Jan 20 00:44:18.353476 containerd[1457]: time="2026-01-20T00:44:18.352472624Z" level=info msg="Start event monitor" Jan 20 00:44:18.353476 containerd[1457]: time="2026-01-20T00:44:18.352649705Z" level=info msg="Start snapshots syncer" Jan 20 00:44:18.353476 containerd[1457]: time="2026-01-20T00:44:18.352664353Z" level=info msg="Start cni network conf syncer for default" Jan 20 00:44:18.353476 containerd[1457]: time="2026-01-20T00:44:18.352680853Z" level=info msg="Start streaming server" Jan 20 00:44:18.353476 containerd[1457]: time="2026-01-20T00:44:18.353267892Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 20 00:44:18.353476 containerd[1457]: time="2026-01-20T00:44:18.353391272Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 20 00:44:18.353699 containerd[1457]: time="2026-01-20T00:44:18.353533527Z" level=info msg="containerd successfully booted in 0.128219s" Jan 20 00:44:18.354288 systemd[1]: Started containerd.service - containerd container runtime. Jan 20 00:44:18.458695 sshd[1516]: Accepted publickey for core from 10.0.0.1 port 47948 ssh2: RSA SHA256:QDu6mA/7nBrCWQCf0iTeSucNTVZb4RccwmaEJSjwzPc Jan 20 00:44:18.465425 sshd[1516]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:44:18.490533 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 20 00:44:18.513782 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 20 00:44:18.548097 systemd-logind[1439]: New session 1 of user core. Jan 20 00:44:18.609433 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 20 00:44:18.676413 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Jan 20 00:44:18.780679 (systemd)[1535]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 20 00:44:19.081120 systemd[1535]: Queued start job for default target default.target. Jan 20 00:44:19.171623 systemd[1535]: Created slice app.slice - User Application Slice. Jan 20 00:44:19.171702 systemd[1535]: Reached target paths.target - Paths. Jan 20 00:44:19.171722 systemd[1535]: Reached target timers.target - Timers. Jan 20 00:44:19.714770 systemd[1535]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 20 00:44:19.754557 systemd[1535]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 20 00:44:19.754926 systemd[1535]: Reached target sockets.target - Sockets. Jan 20 00:44:19.754954 systemd[1535]: Reached target basic.target - Basic System. Jan 20 00:44:19.755109 systemd[1535]: Reached target default.target - Main User Target. Jan 20 00:44:19.755170 systemd[1535]: Startup finished in 929ms. Jan 20 00:44:19.755879 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 20 00:44:19.774088 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 20 00:44:19.883365 systemd[1]: Started sshd@1-10.0.0.96:22-10.0.0.1:47958.service - OpenSSH per-connection server daemon (10.0.0.1:47958). Jan 20 00:44:20.145309 sshd[1546]: Accepted publickey for core from 10.0.0.1 port 47958 ssh2: RSA SHA256:QDu6mA/7nBrCWQCf0iTeSucNTVZb4RccwmaEJSjwzPc Jan 20 00:44:20.152342 sshd[1546]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:44:20.164956 systemd-logind[1439]: New session 2 of user core. Jan 20 00:44:20.173195 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 20 00:44:20.289714 sshd[1546]: pam_unix(sshd:session): session closed for user core Jan 20 00:44:20.306895 systemd[1]: sshd@1-10.0.0.96:22-10.0.0.1:47958.service: Deactivated successfully. Jan 20 00:44:20.309265 systemd[1]: session-2.scope: Deactivated successfully. 
Jan 20 00:44:20.311962 systemd-logind[1439]: Session 2 logged out. Waiting for processes to exit.
Jan 20 00:44:20.324466 systemd[1]: Started sshd@2-10.0.0.96:22-10.0.0.1:47968.service - OpenSSH per-connection server daemon (10.0.0.1:47968).
Jan 20 00:44:20.352148 systemd-logind[1439]: Removed session 2.
Jan 20 00:44:20.395049 sshd[1553]: Accepted publickey for core from 10.0.0.1 port 47968 ssh2: RSA SHA256:QDu6mA/7nBrCWQCf0iTeSucNTVZb4RccwmaEJSjwzPc
Jan 20 00:44:20.399695 sshd[1553]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 00:44:20.407943 systemd-logind[1439]: New session 3 of user core.
Jan 20 00:44:20.433760 systemd[1]: Started session-3.scope - Session 3 of User core.
Jan 20 00:44:20.520156 sshd[1553]: pam_unix(sshd:session): session closed for user core
Jan 20 00:44:20.535359 systemd[1]: sshd@2-10.0.0.96:22-10.0.0.1:47968.service: Deactivated successfully.
Jan 20 00:44:20.545556 systemd[1]: session-3.scope: Deactivated successfully.
Jan 20 00:44:20.551665 systemd-logind[1439]: Session 3 logged out. Waiting for processes to exit.
Jan 20 00:44:20.554225 systemd-logind[1439]: Removed session 3.
Jan 20 00:44:22.546735 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 20 00:44:22.550351 systemd[1]: Reached target multi-user.target - Multi-User System.
Jan 20 00:44:22.550722 systemd[1]: Startup finished in 1.862s (kernel) + 7.665s (initrd) + 12.574s (userspace) = 22.102s.
Jan 20 00:44:22.555766 (kubelet)[1568]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 20 00:44:25.894175 kubelet[1568]: E0120 00:44:25.893573 1568 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 20 00:44:25.900398 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 20 00:44:25.900690 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 20 00:44:25.901275 systemd[1]: kubelet.service: Consumed 6.765s CPU time.
Jan 20 00:44:30.539592 systemd[1]: Started sshd@3-10.0.0.96:22-10.0.0.1:45784.service - OpenSSH per-connection server daemon (10.0.0.1:45784).
Jan 20 00:44:30.587271 sshd[1578]: Accepted publickey for core from 10.0.0.1 port 45784 ssh2: RSA SHA256:QDu6mA/7nBrCWQCf0iTeSucNTVZb4RccwmaEJSjwzPc
Jan 20 00:44:30.589295 sshd[1578]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 00:44:30.594856 systemd-logind[1439]: New session 4 of user core.
Jan 20 00:44:30.605029 systemd[1]: Started session-4.scope - Session 4 of User core.
Jan 20 00:44:30.664238 sshd[1578]: pam_unix(sshd:session): session closed for user core
Jan 20 00:44:30.678532 systemd[1]: sshd@3-10.0.0.96:22-10.0.0.1:45784.service: Deactivated successfully.
Jan 20 00:44:30.680241 systemd[1]: session-4.scope: Deactivated successfully.
Jan 20 00:44:30.681595 systemd-logind[1439]: Session 4 logged out. Waiting for processes to exit.
Jan 20 00:44:30.693162 systemd[1]: Started sshd@4-10.0.0.96:22-10.0.0.1:45790.service - OpenSSH per-connection server daemon (10.0.0.1:45790).
Jan 20 00:44:30.694373 systemd-logind[1439]: Removed session 4.
Jan 20 00:44:30.728387 sshd[1585]: Accepted publickey for core from 10.0.0.1 port 45790 ssh2: RSA SHA256:QDu6mA/7nBrCWQCf0iTeSucNTVZb4RccwmaEJSjwzPc
Jan 20 00:44:30.730378 sshd[1585]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 00:44:30.745123 systemd-logind[1439]: New session 5 of user core.
Jan 20 00:44:30.757064 systemd[1]: Started session-5.scope - Session 5 of User core.
Jan 20 00:44:30.811537 sshd[1585]: pam_unix(sshd:session): session closed for user core
Jan 20 00:44:30.834140 systemd[1]: sshd@4-10.0.0.96:22-10.0.0.1:45790.service: Deactivated successfully.
Jan 20 00:44:30.839430 systemd[1]: session-5.scope: Deactivated successfully.
Jan 20 00:44:30.841107 systemd-logind[1439]: Session 5 logged out. Waiting for processes to exit.
Jan 20 00:44:30.855049 systemd[1]: Started sshd@5-10.0.0.96:22-10.0.0.1:45792.service - OpenSSH per-connection server daemon (10.0.0.1:45792).
Jan 20 00:44:30.856399 systemd-logind[1439]: Removed session 5.
Jan 20 00:44:30.896195 sshd[1592]: Accepted publickey for core from 10.0.0.1 port 45792 ssh2: RSA SHA256:QDu6mA/7nBrCWQCf0iTeSucNTVZb4RccwmaEJSjwzPc
Jan 20 00:44:30.899297 sshd[1592]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 00:44:30.906552 systemd-logind[1439]: New session 6 of user core.
Jan 20 00:44:30.920096 systemd[1]: Started session-6.scope - Session 6 of User core.
Jan 20 00:44:30.988407 sshd[1592]: pam_unix(sshd:session): session closed for user core
Jan 20 00:44:30.997197 systemd[1]: sshd@5-10.0.0.96:22-10.0.0.1:45792.service: Deactivated successfully.
Jan 20 00:44:31.001388 systemd[1]: session-6.scope: Deactivated successfully.
Jan 20 00:44:31.003552 systemd-logind[1439]: Session 6 logged out. Waiting for processes to exit.
Jan 20 00:44:31.015297 systemd[1]: Started sshd@6-10.0.0.96:22-10.0.0.1:45806.service - OpenSSH per-connection server daemon (10.0.0.1:45806).
Jan 20 00:44:31.016758 systemd-logind[1439]: Removed session 6.
Jan 20 00:44:31.058418 sshd[1599]: Accepted publickey for core from 10.0.0.1 port 45806 ssh2: RSA SHA256:QDu6mA/7nBrCWQCf0iTeSucNTVZb4RccwmaEJSjwzPc
Jan 20 00:44:31.060639 sshd[1599]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 00:44:31.067026 systemd-logind[1439]: New session 7 of user core.
Jan 20 00:44:31.082140 systemd[1]: Started session-7.scope - Session 7 of User core.
Jan 20 00:44:31.168535 sudo[1603]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jan 20 00:44:31.169262 sudo[1603]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 20 00:44:31.190372 sudo[1603]: pam_unix(sudo:session): session closed for user root
Jan 20 00:44:31.194314 sshd[1599]: pam_unix(sshd:session): session closed for user core
Jan 20 00:44:31.203083 systemd[1]: sshd@6-10.0.0.96:22-10.0.0.1:45806.service: Deactivated successfully.
Jan 20 00:44:31.205585 systemd[1]: session-7.scope: Deactivated successfully.
Jan 20 00:44:31.207785 systemd-logind[1439]: Session 7 logged out. Waiting for processes to exit.
Jan 20 00:44:31.220625 systemd[1]: Started sshd@7-10.0.0.96:22-10.0.0.1:45820.service - OpenSSH per-connection server daemon (10.0.0.1:45820).
Jan 20 00:44:31.222038 systemd-logind[1439]: Removed session 7.
Jan 20 00:44:31.269613 sshd[1608]: Accepted publickey for core from 10.0.0.1 port 45820 ssh2: RSA SHA256:QDu6mA/7nBrCWQCf0iTeSucNTVZb4RccwmaEJSjwzPc
Jan 20 00:44:31.271900 sshd[1608]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 00:44:31.277628 systemd-logind[1439]: New session 8 of user core.
Jan 20 00:44:31.293031 systemd[1]: Started session-8.scope - Session 8 of User core.
Jan 20 00:44:31.359598 sudo[1612]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jan 20 00:44:31.360077 sudo[1612]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 20 00:44:31.365115 sudo[1612]: pam_unix(sudo:session): session closed for user root
Jan 20 00:44:31.372646 sudo[1611]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Jan 20 00:44:31.373123 sudo[1611]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 20 00:44:31.399167 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Jan 20 00:44:31.402750 auditctl[1615]: No rules
Jan 20 00:44:31.404084 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 20 00:44:31.404365 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Jan 20 00:44:31.406600 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 20 00:44:31.491580 augenrules[1633]: No rules
Jan 20 00:44:31.493534 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 20 00:44:31.494960 sudo[1611]: pam_unix(sudo:session): session closed for user root
Jan 20 00:44:31.497467 sshd[1608]: pam_unix(sshd:session): session closed for user core
Jan 20 00:44:31.513683 systemd[1]: sshd@7-10.0.0.96:22-10.0.0.1:45820.service: Deactivated successfully.
Jan 20 00:44:31.515449 systemd[1]: session-8.scope: Deactivated successfully.
Jan 20 00:44:31.517116 systemd-logind[1439]: Session 8 logged out. Waiting for processes to exit.
Jan 20 00:44:31.535259 systemd[1]: Started sshd@8-10.0.0.96:22-10.0.0.1:45828.service - OpenSSH per-connection server daemon (10.0.0.1:45828).
Jan 20 00:44:31.537077 systemd-logind[1439]: Removed session 8.
Jan 20 00:44:31.575334 sshd[1641]: Accepted publickey for core from 10.0.0.1 port 45828 ssh2: RSA SHA256:QDu6mA/7nBrCWQCf0iTeSucNTVZb4RccwmaEJSjwzPc
Jan 20 00:44:31.576714 sshd[1641]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 00:44:31.582434 systemd-logind[1439]: New session 9 of user core.
Jan 20 00:44:31.592168 systemd[1]: Started session-9.scope - Session 9 of User core.
Jan 20 00:44:31.650433 sudo[1644]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jan 20 00:44:31.650878 sudo[1644]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 20 00:44:31.678332 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Jan 20 00:44:31.705465 systemd[1]: coreos-metadata.service: Deactivated successfully.
Jan 20 00:44:31.705744 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Jan 20 00:44:33.678438 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 20 00:44:33.678676 systemd[1]: kubelet.service: Consumed 6.765s CPU time.
Jan 20 00:44:33.689310 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 20 00:44:33.906446 systemd[1]: Reloading requested from client PID 1686 ('systemctl') (unit session-9.scope)...
Jan 20 00:44:33.906493 systemd[1]: Reloading...
Jan 20 00:44:34.014035 zram_generator::config[1723]: No configuration found.
Jan 20 00:44:34.288865 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 20 00:44:34.430444 systemd[1]: Reloading finished in 523 ms.
Jan 20 00:44:34.504265 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 20 00:44:34.510074 (kubelet)[1763]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 20 00:44:34.511196 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 20 00:44:34.511925 systemd[1]: kubelet.service: Deactivated successfully.
Jan 20 00:44:34.512404 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 20 00:44:34.517068 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 20 00:44:34.709437 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 20 00:44:34.729439 (kubelet)[1775]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 20 00:44:34.846701 kubelet[1775]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jan 20 00:44:34.846701 kubelet[1775]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 20 00:44:34.847177 kubelet[1775]: I0120 00:44:34.846948 1775 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 20 00:44:35.360056 kubelet[1775]: I0120 00:44:35.359725 1775 server.go:529] "Kubelet version" kubeletVersion="v1.34.1"
Jan 20 00:44:35.360056 kubelet[1775]: I0120 00:44:35.359910 1775 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 20 00:44:35.360407 kubelet[1775]: I0120 00:44:35.360255 1775 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Jan 20 00:44:35.360407 kubelet[1775]: I0120 00:44:35.360281 1775 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jan 20 00:44:35.360769 kubelet[1775]: I0120 00:44:35.360692 1775 server.go:956] "Client rotation is on, will bootstrap in background"
Jan 20 00:44:35.367558 kubelet[1775]: I0120 00:44:35.367399 1775 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 20 00:44:35.376986 kubelet[1775]: E0120 00:44:35.376868 1775 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jan 20 00:44:35.377091 kubelet[1775]: I0120 00:44:35.377076 1775 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config."
Jan 20 00:44:35.386081 kubelet[1775]: I0120 00:44:35.385877 1775 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Jan 20 00:44:35.388670 kubelet[1775]: I0120 00:44:35.388524 1775 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 20 00:44:35.389469 kubelet[1775]: I0120 00:44:35.388643 1775 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.0.0.96","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 20 00:44:35.389675 kubelet[1775]: I0120 00:44:35.389527 1775 topology_manager.go:138] "Creating topology manager with none policy"
Jan 20 00:44:35.389675 kubelet[1775]: I0120 00:44:35.389542 1775 container_manager_linux.go:306] "Creating device plugin manager"
Jan 20 00:44:35.389864 kubelet[1775]: I0120 00:44:35.389770 1775 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Jan 20 00:44:35.553350 kubelet[1775]: I0120 00:44:35.553169 1775 state_mem.go:36] "Initialized new in-memory state store"
Jan 20 00:44:35.556220 kubelet[1775]: I0120 00:44:35.556114 1775 kubelet.go:475] "Attempting to sync node with API server"
Jan 20 00:44:35.556469 kubelet[1775]: I0120 00:44:35.556267 1775 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 20 00:44:35.556586 kubelet[1775]: I0120 00:44:35.556541 1775 kubelet.go:387] "Adding apiserver pod source"
Jan 20 00:44:35.556871 kubelet[1775]: I0120 00:44:35.556737 1775 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 20 00:44:35.557268 kubelet[1775]: E0120 00:44:35.556902 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 20 00:44:35.557268 kubelet[1775]: E0120 00:44:35.557101 1775 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 20 00:44:35.562679 kubelet[1775]: I0120 00:44:35.562580 1775 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Jan 20 00:44:35.564902 kubelet[1775]: I0120 00:44:35.564714 1775 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Jan 20 00:44:35.564902 kubelet[1775]: I0120 00:44:35.564893 1775 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Jan 20 00:44:35.567962 kubelet[1775]: W0120 00:44:35.567466 1775 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 20 00:44:35.575849 kubelet[1775]: I0120 00:44:35.574586 1775 server.go:1262] "Started kubelet"
Jan 20 00:44:35.575849 kubelet[1775]: I0120 00:44:35.575343 1775 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Jan 20 00:44:35.580982 kubelet[1775]: I0120 00:44:35.580786 1775 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 20 00:44:35.581233 kubelet[1775]: I0120 00:44:35.581119 1775 server_v1.go:49] "podresources" method="list" useActivePods=true
Jan 20 00:44:35.581736 kubelet[1775]: I0120 00:44:35.581654 1775 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 20 00:44:35.582746 kubelet[1775]: I0120 00:44:35.582627 1775 server.go:310] "Adding debug handlers to kubelet server"
Jan 20 00:44:35.585762 kubelet[1775]: I0120 00:44:35.585701 1775 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 20 00:44:35.585891 kubelet[1775]: I0120 00:44:35.585762 1775 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jan 20 00:44:35.586670 kubelet[1775]: I0120 00:44:35.586585 1775 volume_manager.go:313] "Starting Kubelet Volume Manager"
Jan 20 00:44:35.588080 kubelet[1775]: I0120 00:44:35.588046 1775 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Jan 20 00:44:35.588338 kubelet[1775]: E0120 00:44:35.588254 1775 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"10.0.0.96\" not found"
Jan 20 00:44:35.588472 kubelet[1775]: I0120 00:44:35.588424 1775 reconciler.go:29] "Reconciler: start to sync state"
Jan 20 00:44:35.592549 kubelet[1775]: E0120 00:44:35.591905 1775 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 20 00:44:35.595090 kubelet[1775]: I0120 00:44:35.595044 1775 factory.go:223] Registration of the containerd container factory successfully
Jan 20 00:44:35.595090 kubelet[1775]: I0120 00:44:35.595088 1775 factory.go:223] Registration of the systemd container factory successfully
Jan 20 00:44:35.595715 kubelet[1775]: I0120 00:44:35.595280 1775 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 20 00:44:35.595715 kubelet[1775]: E0120 00:44:35.595381 1775 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.96\" not found" node="10.0.0.96"
Jan 20 00:44:35.616342 kubelet[1775]: I0120 00:44:35.615034 1775 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jan 20 00:44:35.616342 kubelet[1775]: I0120 00:44:35.615068 1775 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jan 20 00:44:35.616342 kubelet[1775]: I0120 00:44:35.615105 1775 state_mem.go:36] "Initialized new in-memory state store"
Jan 20 00:44:35.620784 kubelet[1775]: I0120 00:44:35.620303 1775 policy_none.go:49] "None policy: Start"
Jan 20 00:44:35.620784 kubelet[1775]: I0120 00:44:35.620376 1775 memory_manager.go:187] "Starting memorymanager" policy="None"
Jan 20 00:44:35.620784 kubelet[1775]: I0120 00:44:35.620399 1775 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Jan 20 00:44:35.623211 kubelet[1775]: I0120 00:44:35.623188 1775 policy_none.go:47] "Start"
Jan 20 00:44:35.645475 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jan 20 00:44:35.653098 kubelet[1775]: I0120 00:44:35.653031 1775 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Jan 20 00:44:35.660491 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jan 20 00:44:35.665228 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jan 20 00:44:35.677702 kubelet[1775]: E0120 00:44:35.677625 1775 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Jan 20 00:44:35.678311 kubelet[1775]: I0120 00:44:35.678188 1775 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 20 00:44:35.678311 kubelet[1775]: I0120 00:44:35.678237 1775 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 20 00:44:35.679295 kubelet[1775]: I0120 00:44:35.678921 1775 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 20 00:44:35.680934 kubelet[1775]: E0120 00:44:35.680719 1775 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jan 20 00:44:35.680980 kubelet[1775]: E0120 00:44:35.680934 1775 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.96\" not found"
Jan 20 00:44:35.713882 kubelet[1775]: I0120 00:44:35.713756 1775 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Jan 20 00:44:35.714052 kubelet[1775]: I0120 00:44:35.713953 1775 status_manager.go:244] "Starting to sync pod status with apiserver"
Jan 20 00:44:35.714200 kubelet[1775]: I0120 00:44:35.714147 1775 kubelet.go:2427] "Starting kubelet main sync loop"
Jan 20 00:44:35.714592 kubelet[1775]: E0120 00:44:35.714327 1775 kubelet.go:2451] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Jan 20 00:44:35.739691 sudo[1644]: pam_unix(sudo:session): session closed for user root
Jan 20 00:44:35.743076 sshd[1641]: pam_unix(sshd:session): session closed for user core
Jan 20 00:44:35.748377 systemd[1]: sshd@8-10.0.0.96:22-10.0.0.1:45828.service: Deactivated successfully.
Jan 20 00:44:35.750699 systemd[1]: session-9.scope: Deactivated successfully.
Jan 20 00:44:35.751091 systemd[1]: session-9.scope: Consumed 2.485s CPU time, 77.8M memory peak, 0B memory swap peak.
Jan 20 00:44:35.751881 systemd-logind[1439]: Session 9 logged out. Waiting for processes to exit.
Jan 20 00:44:35.755388 systemd-logind[1439]: Removed session 9.
Jan 20 00:44:35.780504 kubelet[1775]: I0120 00:44:35.780336 1775 kubelet_node_status.go:75] "Attempting to register node" node="10.0.0.96"
Jan 20 00:44:35.785541 kubelet[1775]: I0120 00:44:35.785331 1775 kubelet_node_status.go:78] "Successfully registered node" node="10.0.0.96"
Jan 20 00:44:35.803974 kubelet[1775]: I0120 00:44:35.803898 1775 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24"
Jan 20 00:44:35.805589 containerd[1457]: time="2026-01-20T00:44:35.804676719Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jan 20 00:44:35.806331 kubelet[1775]: I0120 00:44:35.805172 1775 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24"
Jan 20 00:44:36.364572 kubelet[1775]: I0120 00:44:36.364478 1775 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Jan 20 00:44:36.365541 kubelet[1775]: I0120 00:44:36.364758 1775 reflector.go:568] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" err="very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received"
Jan 20 00:44:36.365541 kubelet[1775]: I0120 00:44:36.364788 1775 reflector.go:568] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" err="very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received"
Jan 20 00:44:36.365541 kubelet[1775]: I0120 00:44:36.364924 1775 reflector.go:568] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" err="very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received"
Jan 20 00:44:36.558050 kubelet[1775]: E0120 00:44:36.557899 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 20 00:44:36.558050 kubelet[1775]: I0120 00:44:36.558039 1775 apiserver.go:52] "Watching apiserver"
Jan 20 00:44:36.581221 systemd[1]: Created slice kubepods-burstable-podf592f3a2_5321_4edb_b858_4d24a7f109b3.slice - libcontainer container kubepods-burstable-podf592f3a2_5321_4edb_b858_4d24a7f109b3.slice.
Jan 20 00:44:36.590031 kubelet[1775]: I0120 00:44:36.589892 1775 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Jan 20 00:44:36.606518 systemd[1]: Created slice kubepods-besteffort-pode7d66e25_d45b_4f49_890a_427e9b45ea12.slice - libcontainer container kubepods-besteffort-pode7d66e25_d45b_4f49_890a_427e9b45ea12.slice.
Jan 20 00:44:36.610878 kubelet[1775]: I0120 00:44:36.610702 1775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f592f3a2-5321-4edb-b858-4d24a7f109b3-cilium-run\") pod \"cilium-lr7hz\" (UID: \"f592f3a2-5321-4edb-b858-4d24a7f109b3\") " pod="kube-system/cilium-lr7hz"
Jan 20 00:44:36.610878 kubelet[1775]: I0120 00:44:36.610754 1775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f592f3a2-5321-4edb-b858-4d24a7f109b3-hostproc\") pod \"cilium-lr7hz\" (UID: \"f592f3a2-5321-4edb-b858-4d24a7f109b3\") " pod="kube-system/cilium-lr7hz"
Jan 20 00:44:36.610878 kubelet[1775]: I0120 00:44:36.610775 1775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f592f3a2-5321-4edb-b858-4d24a7f109b3-cilium-config-path\") pod \"cilium-lr7hz\" (UID: \"f592f3a2-5321-4edb-b858-4d24a7f109b3\") " pod="kube-system/cilium-lr7hz"
Jan 20 00:44:36.610878 kubelet[1775]: I0120 00:44:36.610846 1775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e7d66e25-d45b-4f49-890a-427e9b45ea12-xtables-lock\") pod \"kube-proxy-qnqrs\" (UID: \"e7d66e25-d45b-4f49-890a-427e9b45ea12\") " pod="kube-system/kube-proxy-qnqrs"
Jan 20 00:44:36.610878 kubelet[1775]: I0120 00:44:36.610866 1775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmhkz\" (UniqueName: \"kubernetes.io/projected/e7d66e25-d45b-4f49-890a-427e9b45ea12-kube-api-access-jmhkz\") pod \"kube-proxy-qnqrs\" (UID: \"e7d66e25-d45b-4f49-890a-427e9b45ea12\") " pod="kube-system/kube-proxy-qnqrs"
Jan 20 00:44:36.610878 kubelet[1775]: I0120 00:44:36.610880 1775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f592f3a2-5321-4edb-b858-4d24a7f109b3-bpf-maps\") pod \"cilium-lr7hz\" (UID: \"f592f3a2-5321-4edb-b858-4d24a7f109b3\") " pod="kube-system/cilium-lr7hz"
Jan 20 00:44:36.611215 kubelet[1775]: I0120 00:44:36.610892 1775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f592f3a2-5321-4edb-b858-4d24a7f109b3-etc-cni-netd\") pod \"cilium-lr7hz\" (UID: \"f592f3a2-5321-4edb-b858-4d24a7f109b3\") " pod="kube-system/cilium-lr7hz"
Jan 20 00:44:36.611215 kubelet[1775]: I0120 00:44:36.610904 1775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f592f3a2-5321-4edb-b858-4d24a7f109b3-lib-modules\") pod \"cilium-lr7hz\" (UID: \"f592f3a2-5321-4edb-b858-4d24a7f109b3\") " pod="kube-system/cilium-lr7hz"
Jan 20 00:44:36.611215 kubelet[1775]: I0120 00:44:36.611078 1775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e7d66e25-d45b-4f49-890a-427e9b45ea12-kube-proxy\") pod \"kube-proxy-qnqrs\" (UID: \"e7d66e25-d45b-4f49-890a-427e9b45ea12\") " pod="kube-system/kube-proxy-qnqrs"
Jan 20 00:44:36.611215 kubelet[1775]: I0120 00:44:36.611142 1775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f592f3a2-5321-4edb-b858-4d24a7f109b3-cilium-cgroup\") pod \"cilium-lr7hz\" (UID: \"f592f3a2-5321-4edb-b858-4d24a7f109b3\") " pod="kube-system/cilium-lr7hz"
Jan 20 00:44:36.611215 kubelet[1775]: I0120 00:44:36.611179 1775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f592f3a2-5321-4edb-b858-4d24a7f109b3-cni-path\") pod \"cilium-lr7hz\" (UID: \"f592f3a2-5321-4edb-b858-4d24a7f109b3\") " pod="kube-system/cilium-lr7hz"
Jan 20 00:44:36.611493 kubelet[1775]: I0120 00:44:36.611362 1775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f592f3a2-5321-4edb-b858-4d24a7f109b3-clustermesh-secrets\") pod \"cilium-lr7hz\" (UID: \"f592f3a2-5321-4edb-b858-4d24a7f109b3\") " pod="kube-system/cilium-lr7hz"
Jan 20 00:44:36.611493 kubelet[1775]: I0120 00:44:36.611487 1775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f592f3a2-5321-4edb-b858-4d24a7f109b3-hubble-tls\") pod \"cilium-lr7hz\" (UID: \"f592f3a2-5321-4edb-b858-4d24a7f109b3\") " pod="kube-system/cilium-lr7hz"
Jan 20 00:44:36.611579 kubelet[1775]: I0120 00:44:36.611517 1775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e7d66e25-d45b-4f49-890a-427e9b45ea12-lib-modules\") pod \"kube-proxy-qnqrs\" (UID: \"e7d66e25-d45b-4f49-890a-427e9b45ea12\") " pod="kube-system/kube-proxy-qnqrs"
Jan 20 00:44:36.611579 kubelet[1775]: I0120 00:44:36.611538 1775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f592f3a2-5321-4edb-b858-4d24a7f109b3-xtables-lock\") pod \"cilium-lr7hz\" (UID: \"f592f3a2-5321-4edb-b858-4d24a7f109b3\") " pod="kube-system/cilium-lr7hz"
Jan 20 00:44:36.611579 kubelet[1775]: I0120 00:44:36.611562 1775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f592f3a2-5321-4edb-b858-4d24a7f109b3-host-proc-sys-net\") pod \"cilium-lr7hz\" (UID: \"f592f3a2-5321-4edb-b858-4d24a7f109b3\") " pod="kube-system/cilium-lr7hz"
Jan 20 00:44:36.611690 kubelet[1775]: I0120 00:44:36.611582 1775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f592f3a2-5321-4edb-b858-4d24a7f109b3-host-proc-sys-kernel\") pod \"cilium-lr7hz\" (UID: \"f592f3a2-5321-4edb-b858-4d24a7f109b3\") " pod="kube-system/cilium-lr7hz"
Jan 20 00:44:36.611690 kubelet[1775]: I0120 00:44:36.611642 1775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbdkg\" (UniqueName: \"kubernetes.io/projected/f592f3a2-5321-4edb-b858-4d24a7f109b3-kube-api-access-hbdkg\") pod \"cilium-lr7hz\" (UID: \"f592f3a2-5321-4edb-b858-4d24a7f109b3\") " pod="kube-system/cilium-lr7hz"
Jan 20 00:44:36.907507 kubelet[1775]: E0120 00:44:36.907411 1775 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 00:44:36.909384 containerd[1457]: time="2026-01-20T00:44:36.909259766Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lr7hz,Uid:f592f3a2-5321-4edb-b858-4d24a7f109b3,Namespace:kube-system,Attempt:0,}"
Jan 20 00:44:36.920547 kubelet[1775]: E0120 00:44:36.920468 1775 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 00:44:36.921443 containerd[1457]: time="2026-01-20T00:44:36.921154419Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qnqrs,Uid:e7d66e25-d45b-4f49-890a-427e9b45ea12,Namespace:kube-system,Attempt:0,}"
Jan 20 00:44:37.439784 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount307046129.mount: Deactivated successfully.
Jan 20 00:44:37.447208 containerd[1457]: time="2026-01-20T00:44:37.447052724Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 20 00:44:37.449263 containerd[1457]: time="2026-01-20T00:44:37.449095215Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Jan 20 00:44:37.450244 containerd[1457]: time="2026-01-20T00:44:37.450177178Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 20 00:44:37.451350 containerd[1457]: time="2026-01-20T00:44:37.451315120Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 20 00:44:37.452322 containerd[1457]: time="2026-01-20T00:44:37.452245997Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 20 00:44:37.454851 containerd[1457]: time="2026-01-20T00:44:37.454731309Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 20 00:44:37.455587 containerd[1457]: time="2026-01-20T00:44:37.455521796Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image 
id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 546.094408ms" Jan 20 00:44:37.457940 containerd[1457]: time="2026-01-20T00:44:37.457898592Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 536.036803ms" Jan 20 00:44:37.542411 containerd[1457]: time="2026-01-20T00:44:37.541974992Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 00:44:37.542411 containerd[1457]: time="2026-01-20T00:44:37.542176789Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 00:44:37.542411 containerd[1457]: time="2026-01-20T00:44:37.542191837Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:44:37.543071 containerd[1457]: time="2026-01-20T00:44:37.542764246Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:44:37.543285 containerd[1457]: time="2026-01-20T00:44:37.543104424Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 00:44:37.543285 containerd[1457]: time="2026-01-20T00:44:37.543217134Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 00:44:37.543285 containerd[1457]: time="2026-01-20T00:44:37.543246159Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:44:37.543426 containerd[1457]: time="2026-01-20T00:44:37.543361013Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:44:37.558214 kubelet[1775]: E0120 00:44:37.558158 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:44:37.605050 systemd[1]: Started cri-containerd-19c68dc05b991880034f8d4cb80e7468ffef0509da4c92928a68d9f2e1039c52.scope - libcontainer container 19c68dc05b991880034f8d4cb80e7468ffef0509da4c92928a68d9f2e1039c52. Jan 20 00:44:37.607919 systemd[1]: Started cri-containerd-d8d454cfd8cd92267e857937a31160ab7e6c5bf9fdeb6ee4afeb9682a622ca2b.scope - libcontainer container d8d454cfd8cd92267e857937a31160ab7e6c5bf9fdeb6ee4afeb9682a622ca2b. 
Jan 20 00:44:37.646267 containerd[1457]: time="2026-01-20T00:44:37.646211849Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lr7hz,Uid:f592f3a2-5321-4edb-b858-4d24a7f109b3,Namespace:kube-system,Attempt:0,} returns sandbox id \"19c68dc05b991880034f8d4cb80e7468ffef0509da4c92928a68d9f2e1039c52\"" Jan 20 00:44:37.646553 containerd[1457]: time="2026-01-20T00:44:37.646402362Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qnqrs,Uid:e7d66e25-d45b-4f49-890a-427e9b45ea12,Namespace:kube-system,Attempt:0,} returns sandbox id \"d8d454cfd8cd92267e857937a31160ab7e6c5bf9fdeb6ee4afeb9682a622ca2b\"" Jan 20 00:44:37.648323 kubelet[1775]: E0120 00:44:37.648204 1775 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:44:37.649366 kubelet[1775]: E0120 00:44:37.649298 1775 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:44:37.650300 containerd[1457]: time="2026-01-20T00:44:37.650118636Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\"" Jan 20 00:44:38.559368 kubelet[1775]: E0120 00:44:38.559278 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:44:38.593529 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3746289394.mount: Deactivated successfully. 
Jan 20 00:44:38.855228 containerd[1457]: time="2026-01-20T00:44:38.854984363Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:44:38.856090 containerd[1457]: time="2026-01-20T00:44:38.855988461Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.3: active requests=0, bytes read=25965293" Jan 20 00:44:38.857145 containerd[1457]: time="2026-01-20T00:44:38.857086258Z" level=info msg="ImageCreate event name:\"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:44:38.859598 containerd[1457]: time="2026-01-20T00:44:38.859535534Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:44:38.860431 containerd[1457]: time="2026-01-20T00:44:38.860362216Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.3\" with image id \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\", repo tag \"registry.k8s.io/kube-proxy:v1.34.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\", size \"25964312\" in 1.210174862s" Jan 20 00:44:38.860474 containerd[1457]: time="2026-01-20T00:44:38.860436685Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\" returns image reference \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\"" Jan 20 00:44:38.862666 containerd[1457]: time="2026-01-20T00:44:38.862518703Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 20 00:44:38.867877 containerd[1457]: time="2026-01-20T00:44:38.867738314Z" level=info msg="CreateContainer within sandbox 
\"d8d454cfd8cd92267e857937a31160ab7e6c5bf9fdeb6ee4afeb9682a622ca2b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 20 00:44:38.886624 containerd[1457]: time="2026-01-20T00:44:38.886554914Z" level=info msg="CreateContainer within sandbox \"d8d454cfd8cd92267e857937a31160ab7e6c5bf9fdeb6ee4afeb9682a622ca2b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0102bcaf40333a84493823c4f6437132c4a0fee3a6ae99bcbbefe4eedbfc3e78\"" Jan 20 00:44:38.887517 containerd[1457]: time="2026-01-20T00:44:38.887469281Z" level=info msg="StartContainer for \"0102bcaf40333a84493823c4f6437132c4a0fee3a6ae99bcbbefe4eedbfc3e78\"" Jan 20 00:44:38.918151 systemd[1]: run-containerd-runc-k8s.io-0102bcaf40333a84493823c4f6437132c4a0fee3a6ae99bcbbefe4eedbfc3e78-runc.mb1ykq.mount: Deactivated successfully. Jan 20 00:44:38.924072 systemd[1]: Started cri-containerd-0102bcaf40333a84493823c4f6437132c4a0fee3a6ae99bcbbefe4eedbfc3e78.scope - libcontainer container 0102bcaf40333a84493823c4f6437132c4a0fee3a6ae99bcbbefe4eedbfc3e78. 
Jan 20 00:44:38.960371 containerd[1457]: time="2026-01-20T00:44:38.960146608Z" level=info msg="StartContainer for \"0102bcaf40333a84493823c4f6437132c4a0fee3a6ae99bcbbefe4eedbfc3e78\" returns successfully" Jan 20 00:44:39.561205 kubelet[1775]: E0120 00:44:39.560911 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:44:39.745924 kubelet[1775]: E0120 00:44:39.745637 1775 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:44:39.765664 kubelet[1775]: I0120 00:44:39.765421 1775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-qnqrs" podStartSLOduration=3.55257022 podStartE2EDuration="4.765362865s" podCreationTimestamp="2026-01-20 00:44:35 +0000 UTC" firstStartedPulling="2026-01-20 00:44:37.649402509 +0000 UTC m=+2.893286548" lastFinishedPulling="2026-01-20 00:44:38.862195144 +0000 UTC m=+4.106079193" observedRunningTime="2026-01-20 00:44:39.764873214 +0000 UTC m=+5.008757283" watchObservedRunningTime="2026-01-20 00:44:39.765362865 +0000 UTC m=+5.009246914" Jan 20 00:44:40.561589 kubelet[1775]: E0120 00:44:40.561463 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:44:40.749906 kubelet[1775]: E0120 00:44:40.749577 1775 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:44:41.562320 kubelet[1775]: E0120 00:44:41.562266 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:44:42.563901 kubelet[1775]: E0120 00:44:42.563650 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Jan 20 00:44:43.565431 kubelet[1775]: E0120 00:44:43.565256 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:44:44.648342 kubelet[1775]: E0120 00:44:44.647756 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:44:44.903339 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3698052735.mount: Deactivated successfully. Jan 20 00:44:45.648885 kubelet[1775]: E0120 00:44:45.648750 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:44:46.649637 kubelet[1775]: E0120 00:44:46.649474 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:44:46.843339 containerd[1457]: time="2026-01-20T00:44:46.843177805Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:44:46.844882 containerd[1457]: time="2026-01-20T00:44:46.844667326Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jan 20 00:44:46.846178 containerd[1457]: time="2026-01-20T00:44:46.846010043Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:44:46.848445 containerd[1457]: time="2026-01-20T00:44:46.848332781Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest 
\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 7.985735792s" Jan 20 00:44:46.848445 containerd[1457]: time="2026-01-20T00:44:46.848400718Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jan 20 00:44:46.854933 containerd[1457]: time="2026-01-20T00:44:46.854848421Z" level=info msg="CreateContainer within sandbox \"19c68dc05b991880034f8d4cb80e7468ffef0509da4c92928a68d9f2e1039c52\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 20 00:44:46.874559 containerd[1457]: time="2026-01-20T00:44:46.874460751Z" level=info msg="CreateContainer within sandbox \"19c68dc05b991880034f8d4cb80e7468ffef0509da4c92928a68d9f2e1039c52\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ec30e075cf7c9217e38420b2940da2b15b42fa6fe9fafbb83551965d2a946d3d\"" Jan 20 00:44:46.875667 containerd[1457]: time="2026-01-20T00:44:46.875488307Z" level=info msg="StartContainer for \"ec30e075cf7c9217e38420b2940da2b15b42fa6fe9fafbb83551965d2a946d3d\"" Jan 20 00:44:46.925134 systemd[1]: Started cri-containerd-ec30e075cf7c9217e38420b2940da2b15b42fa6fe9fafbb83551965d2a946d3d.scope - libcontainer container ec30e075cf7c9217e38420b2940da2b15b42fa6fe9fafbb83551965d2a946d3d. Jan 20 00:44:46.959505 containerd[1457]: time="2026-01-20T00:44:46.959339369Z" level=info msg="StartContainer for \"ec30e075cf7c9217e38420b2940da2b15b42fa6fe9fafbb83551965d2a946d3d\" returns successfully" Jan 20 00:44:46.974967 systemd[1]: cri-containerd-ec30e075cf7c9217e38420b2940da2b15b42fa6fe9fafbb83551965d2a946d3d.scope: Deactivated successfully. 
Jan 20 00:44:47.210722 containerd[1457]: time="2026-01-20T00:44:47.210423053Z" level=info msg="shim disconnected" id=ec30e075cf7c9217e38420b2940da2b15b42fa6fe9fafbb83551965d2a946d3d namespace=k8s.io Jan 20 00:44:47.210722 containerd[1457]: time="2026-01-20T00:44:47.210483093Z" level=warning msg="cleaning up after shim disconnected" id=ec30e075cf7c9217e38420b2940da2b15b42fa6fe9fafbb83551965d2a946d3d namespace=k8s.io Jan 20 00:44:47.210722 containerd[1457]: time="2026-01-20T00:44:47.210530150Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 20 00:44:47.650534 kubelet[1775]: E0120 00:44:47.650401 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:44:47.866363 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ec30e075cf7c9217e38420b2940da2b15b42fa6fe9fafbb83551965d2a946d3d-rootfs.mount: Deactivated successfully. Jan 20 00:44:47.889976 kubelet[1775]: E0120 00:44:47.889911 1775 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:44:47.897039 containerd[1457]: time="2026-01-20T00:44:47.896775907Z" level=info msg="CreateContainer within sandbox \"19c68dc05b991880034f8d4cb80e7468ffef0509da4c92928a68d9f2e1039c52\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 20 00:44:47.917439 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount391316476.mount: Deactivated successfully. 
Jan 20 00:44:47.921909 containerd[1457]: time="2026-01-20T00:44:47.921743601Z" level=info msg="CreateContainer within sandbox \"19c68dc05b991880034f8d4cb80e7468ffef0509da4c92928a68d9f2e1039c52\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"b4f806c89ad88eec04f8bada8a30bee37564aa0d4a4552f219a31664de103cb9\"" Jan 20 00:44:47.925045 containerd[1457]: time="2026-01-20T00:44:47.922599243Z" level=info msg="StartContainer for \"b4f806c89ad88eec04f8bada8a30bee37564aa0d4a4552f219a31664de103cb9\"" Jan 20 00:44:47.964032 systemd[1]: Started cri-containerd-b4f806c89ad88eec04f8bada8a30bee37564aa0d4a4552f219a31664de103cb9.scope - libcontainer container b4f806c89ad88eec04f8bada8a30bee37564aa0d4a4552f219a31664de103cb9. Jan 20 00:44:47.995930 containerd[1457]: time="2026-01-20T00:44:47.995149418Z" level=info msg="StartContainer for \"b4f806c89ad88eec04f8bada8a30bee37564aa0d4a4552f219a31664de103cb9\" returns successfully" Jan 20 00:44:48.010445 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 20 00:44:48.010903 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 20 00:44:48.011048 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 20 00:44:48.018256 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 20 00:44:48.018605 systemd[1]: cri-containerd-b4f806c89ad88eec04f8bada8a30bee37564aa0d4a4552f219a31664de103cb9.scope: Deactivated successfully. 
Jan 20 00:44:48.047334 containerd[1457]: time="2026-01-20T00:44:48.047148634Z" level=info msg="shim disconnected" id=b4f806c89ad88eec04f8bada8a30bee37564aa0d4a4552f219a31664de103cb9 namespace=k8s.io Jan 20 00:44:48.047334 containerd[1457]: time="2026-01-20T00:44:48.047221768Z" level=warning msg="cleaning up after shim disconnected" id=b4f806c89ad88eec04f8bada8a30bee37564aa0d4a4552f219a31664de103cb9 namespace=k8s.io Jan 20 00:44:48.047334 containerd[1457]: time="2026-01-20T00:44:48.047231396Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 20 00:44:48.047675 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 20 00:44:48.651630 kubelet[1775]: E0120 00:44:48.651530 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:44:48.866356 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b4f806c89ad88eec04f8bada8a30bee37564aa0d4a4552f219a31664de103cb9-rootfs.mount: Deactivated successfully. 
Jan 20 00:44:48.893881 kubelet[1775]: E0120 00:44:48.893856 1775 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:44:48.900118 containerd[1457]: time="2026-01-20T00:44:48.899894645Z" level=info msg="CreateContainer within sandbox \"19c68dc05b991880034f8d4cb80e7468ffef0509da4c92928a68d9f2e1039c52\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 20 00:44:48.922233 containerd[1457]: time="2026-01-20T00:44:48.921769316Z" level=info msg="CreateContainer within sandbox \"19c68dc05b991880034f8d4cb80e7468ffef0509da4c92928a68d9f2e1039c52\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"cacb61809b2ec8ab0b18bd6696065834d2407243d24efbe115235cab2312bb38\"" Jan 20 00:44:48.923167 containerd[1457]: time="2026-01-20T00:44:48.923058793Z" level=info msg="StartContainer for \"cacb61809b2ec8ab0b18bd6696065834d2407243d24efbe115235cab2312bb38\"" Jan 20 00:44:48.970115 systemd[1]: Started cri-containerd-cacb61809b2ec8ab0b18bd6696065834d2407243d24efbe115235cab2312bb38.scope - libcontainer container cacb61809b2ec8ab0b18bd6696065834d2407243d24efbe115235cab2312bb38. Jan 20 00:44:49.008101 containerd[1457]: time="2026-01-20T00:44:49.006902867Z" level=info msg="StartContainer for \"cacb61809b2ec8ab0b18bd6696065834d2407243d24efbe115235cab2312bb38\" returns successfully" Jan 20 00:44:49.012572 systemd[1]: cri-containerd-cacb61809b2ec8ab0b18bd6696065834d2407243d24efbe115235cab2312bb38.scope: Deactivated successfully. 
Jan 20 00:44:49.045929 containerd[1457]: time="2026-01-20T00:44:49.045778828Z" level=info msg="shim disconnected" id=cacb61809b2ec8ab0b18bd6696065834d2407243d24efbe115235cab2312bb38 namespace=k8s.io Jan 20 00:44:49.045929 containerd[1457]: time="2026-01-20T00:44:49.045899139Z" level=warning msg="cleaning up after shim disconnected" id=cacb61809b2ec8ab0b18bd6696065834d2407243d24efbe115235cab2312bb38 namespace=k8s.io Jan 20 00:44:49.045929 containerd[1457]: time="2026-01-20T00:44:49.045910050Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 20 00:44:49.066568 containerd[1457]: time="2026-01-20T00:44:49.066396689Z" level=warning msg="cleanup warnings time=\"2026-01-20T00:44:49Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 20 00:44:49.652139 kubelet[1775]: E0120 00:44:49.651906 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:44:49.867106 systemd[1]: run-containerd-runc-k8s.io-cacb61809b2ec8ab0b18bd6696065834d2407243d24efbe115235cab2312bb38-runc.HwVXVa.mount: Deactivated successfully. Jan 20 00:44:49.867298 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cacb61809b2ec8ab0b18bd6696065834d2407243d24efbe115235cab2312bb38-rootfs.mount: Deactivated successfully. 
Jan 20 00:44:49.899323 kubelet[1775]: E0120 00:44:49.899159 1775 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:44:49.906317 containerd[1457]: time="2026-01-20T00:44:49.906138817Z" level=info msg="CreateContainer within sandbox \"19c68dc05b991880034f8d4cb80e7468ffef0509da4c92928a68d9f2e1039c52\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 20 00:44:49.927378 containerd[1457]: time="2026-01-20T00:44:49.927277104Z" level=info msg="CreateContainer within sandbox \"19c68dc05b991880034f8d4cb80e7468ffef0509da4c92928a68d9f2e1039c52\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"d0a3f0991f23615afe16832246773bf14e42cc1b03327c3690938fa89034dd7c\"" Jan 20 00:44:49.928415 containerd[1457]: time="2026-01-20T00:44:49.928290614Z" level=info msg="StartContainer for \"d0a3f0991f23615afe16832246773bf14e42cc1b03327c3690938fa89034dd7c\"" Jan 20 00:44:49.973152 systemd[1]: Started cri-containerd-d0a3f0991f23615afe16832246773bf14e42cc1b03327c3690938fa89034dd7c.scope - libcontainer container d0a3f0991f23615afe16832246773bf14e42cc1b03327c3690938fa89034dd7c. Jan 20 00:44:50.004782 systemd[1]: cri-containerd-d0a3f0991f23615afe16832246773bf14e42cc1b03327c3690938fa89034dd7c.scope: Deactivated successfully. 
Jan 20 00:44:50.009671 containerd[1457]: time="2026-01-20T00:44:50.009565458Z" level=info msg="StartContainer for \"d0a3f0991f23615afe16832246773bf14e42cc1b03327c3690938fa89034dd7c\" returns successfully" Jan 20 00:44:50.044080 containerd[1457]: time="2026-01-20T00:44:50.044009099Z" level=info msg="shim disconnected" id=d0a3f0991f23615afe16832246773bf14e42cc1b03327c3690938fa89034dd7c namespace=k8s.io Jan 20 00:44:50.044420 containerd[1457]: time="2026-01-20T00:44:50.044351370Z" level=warning msg="cleaning up after shim disconnected" id=d0a3f0991f23615afe16832246773bf14e42cc1b03327c3690938fa89034dd7c namespace=k8s.io Jan 20 00:44:50.044420 containerd[1457]: time="2026-01-20T00:44:50.044413515Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 20 00:44:50.652725 kubelet[1775]: E0120 00:44:50.652528 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:44:50.867281 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d0a3f0991f23615afe16832246773bf14e42cc1b03327c3690938fa89034dd7c-rootfs.mount: Deactivated successfully. 
Jan 20 00:44:50.904489 kubelet[1775]: E0120 00:44:50.904360 1775 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:44:50.910360 containerd[1457]: time="2026-01-20T00:44:50.910278156Z" level=info msg="CreateContainer within sandbox \"19c68dc05b991880034f8d4cb80e7468ffef0509da4c92928a68d9f2e1039c52\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 20 00:44:50.932296 containerd[1457]: time="2026-01-20T00:44:50.932124004Z" level=info msg="CreateContainer within sandbox \"19c68dc05b991880034f8d4cb80e7468ffef0509da4c92928a68d9f2e1039c52\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"54af49b9f9d70de20d7b422c4d8db65a5c0ca269259cc1b9313dce13b47d7d20\"" Jan 20 00:44:50.933234 containerd[1457]: time="2026-01-20T00:44:50.933135119Z" level=info msg="StartContainer for \"54af49b9f9d70de20d7b422c4d8db65a5c0ca269259cc1b9313dce13b47d7d20\"" Jan 20 00:44:50.983086 systemd[1]: Started cri-containerd-54af49b9f9d70de20d7b422c4d8db65a5c0ca269259cc1b9313dce13b47d7d20.scope - libcontainer container 54af49b9f9d70de20d7b422c4d8db65a5c0ca269259cc1b9313dce13b47d7d20. 
Jan 20 00:44:51.024613 containerd[1457]: time="2026-01-20T00:44:51.024490960Z" level=info msg="StartContainer for \"54af49b9f9d70de20d7b422c4d8db65a5c0ca269259cc1b9313dce13b47d7d20\" returns successfully" Jan 20 00:44:51.126368 kubelet[1775]: I0120 00:44:51.126285 1775 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Jan 20 00:44:51.606893 kernel: Initializing XFRM netlink socket Jan 20 00:44:51.653600 kubelet[1775]: E0120 00:44:51.653488 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:44:51.867770 systemd[1]: run-containerd-runc-k8s.io-54af49b9f9d70de20d7b422c4d8db65a5c0ca269259cc1b9313dce13b47d7d20-runc.LxgUG8.mount: Deactivated successfully. Jan 20 00:44:51.911871 kubelet[1775]: E0120 00:44:51.911749 1775 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:44:51.928402 kubelet[1775]: I0120 00:44:51.928259 1775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-lr7hz" podStartSLOduration=7.729185669 podStartE2EDuration="16.92824102s" podCreationTimestamp="2026-01-20 00:44:35 +0000 UTC" firstStartedPulling="2026-01-20 00:44:37.650610903 +0000 UTC m=+2.894494942" lastFinishedPulling="2026-01-20 00:44:46.849666254 +0000 UTC m=+12.093550293" observedRunningTime="2026-01-20 00:44:51.927320691 +0000 UTC m=+17.171204740" watchObservedRunningTime="2026-01-20 00:44:51.92824102 +0000 UTC m=+17.172125059" Jan 20 00:44:52.159550 systemd[1]: Created slice kubepods-besteffort-pod30499015_a029_47fc_8252_e865496ef3f0.slice - libcontainer container kubepods-besteffort-pod30499015_a029_47fc_8252_e865496ef3f0.slice. 
Jan 20 00:44:52.211045 kubelet[1775]: I0120 00:44:52.210785 1775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dngbv\" (UniqueName: \"kubernetes.io/projected/30499015-a029-47fc-8252-e865496ef3f0-kube-api-access-dngbv\") pod \"nginx-deployment-bb8f74bfb-qxhmp\" (UID: \"30499015-a029-47fc-8252-e865496ef3f0\") " pod="default/nginx-deployment-bb8f74bfb-qxhmp"
Jan 20 00:44:52.468160 containerd[1457]: time="2026-01-20T00:44:52.467969382Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-bb8f74bfb-qxhmp,Uid:30499015-a029-47fc-8252-e865496ef3f0,Namespace:default,Attempt:0,}"
Jan 20 00:44:52.654023 kubelet[1775]: E0120 00:44:52.653936 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 20 00:44:52.916432 kubelet[1775]: E0120 00:44:52.916323 1775 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 00:44:53.345701 systemd-networkd[1391]: cilium_host: Link UP
Jan 20 00:44:53.346027 systemd-networkd[1391]: cilium_net: Link UP
Jan 20 00:44:53.347538 systemd-networkd[1391]: cilium_net: Gained carrier
Jan 20 00:44:53.348772 systemd-networkd[1391]: cilium_host: Gained carrier
Jan 20 00:44:53.349262 systemd-networkd[1391]: cilium_net: Gained IPv6LL
Jan 20 00:44:53.349449 systemd-networkd[1391]: cilium_host: Gained IPv6LL
Jan 20 00:44:53.493458 systemd-networkd[1391]: cilium_vxlan: Link UP
Jan 20 00:44:53.493473 systemd-networkd[1391]: cilium_vxlan: Gained carrier
Jan 20 00:44:53.655129 kubelet[1775]: E0120 00:44:53.654952 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 20 00:44:53.763921 kernel: NET: Registered PF_ALG protocol family
Jan 20 00:44:53.918954 kubelet[1775]: E0120 00:44:53.918619 1775 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 00:44:54.588710 systemd-networkd[1391]: lxc_health: Link UP
Jan 20 00:44:54.608149 systemd-networkd[1391]: lxc_health: Gained carrier
Jan 20 00:44:54.656026 kubelet[1775]: E0120 00:44:54.655942 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 20 00:44:54.985374 kubelet[1775]: E0120 00:44:54.984774 1775 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 00:44:55.024000 systemd-networkd[1391]: lxcd74789cb953a: Link UP
Jan 20 00:44:55.038906 kernel: eth0: renamed from tmpe50ac
Jan 20 00:44:55.042228 systemd-networkd[1391]: lxcd74789cb953a: Gained carrier
Jan 20 00:44:55.103218 systemd-networkd[1391]: cilium_vxlan: Gained IPv6LL
Jan 20 00:44:55.558895 kubelet[1775]: E0120 00:44:55.557201 1775 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 20 00:44:55.657271 kubelet[1775]: E0120 00:44:55.657180 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 20 00:44:55.931467 kubelet[1775]: E0120 00:44:55.931243 1775 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 00:44:56.319230 systemd-networkd[1391]: lxc_health: Gained IPv6LL
Jan 20 00:44:56.447316 systemd-networkd[1391]: lxcd74789cb953a: Gained IPv6LL
Jan 20 00:44:56.657758 kubelet[1775]: E0120 00:44:56.657638 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 20 00:44:56.925686 kubelet[1775]: E0120 00:44:56.925447 1775 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 00:44:57.658484 kubelet[1775]: E0120 00:44:57.658418 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 20 00:44:58.659107 kubelet[1775]: E0120 00:44:58.658992 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 20 00:44:59.060402 containerd[1457]: time="2026-01-20T00:44:59.058345237Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 20 00:44:59.062870 containerd[1457]: time="2026-01-20T00:44:59.062681576Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 20 00:44:59.062870 containerd[1457]: time="2026-01-20T00:44:59.062760462Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 20 00:44:59.063165 containerd[1457]: time="2026-01-20T00:44:59.062981733Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 20 00:44:59.105214 systemd[1]: Started cri-containerd-e50acc7fcda1dd1b159c60f17ca38b89f9058bdd56860424a8d24743f3ca3d7a.scope - libcontainer container e50acc7fcda1dd1b159c60f17ca38b89f9058bdd56860424a8d24743f3ca3d7a.
Jan 20 00:44:59.121573 systemd-resolved[1392]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jan 20 00:44:59.203228 containerd[1457]: time="2026-01-20T00:44:59.202686442Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-bb8f74bfb-qxhmp,Uid:30499015-a029-47fc-8252-e865496ef3f0,Namespace:default,Attempt:0,} returns sandbox id \"e50acc7fcda1dd1b159c60f17ca38b89f9058bdd56860424a8d24743f3ca3d7a\""
Jan 20 00:44:59.207680 containerd[1457]: time="2026-01-20T00:44:59.207589549Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Jan 20 00:44:59.662877 kubelet[1775]: E0120 00:44:59.662082 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 20 00:45:00.451587 update_engine[1444]: I20260120 00:45:00.449645 1444 update_attempter.cc:509] Updating boot flags...
Jan 20 00:45:00.608517 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2912)
Jan 20 00:45:00.670662 kubelet[1775]: E0120 00:45:00.670016 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 20 00:45:00.884006 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2915)
Jan 20 00:45:01.007899 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2915)
Jan 20 00:45:01.671985 kubelet[1775]: E0120 00:45:01.671485 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 20 00:45:02.673642 kubelet[1775]: E0120 00:45:02.673255 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 20 00:45:03.676290 kubelet[1775]: E0120 00:45:03.675945 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 20 00:45:04.678550 kubelet[1775]: E0120 00:45:04.678262 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 20 00:45:04.694488 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount704652902.mount: Deactivated successfully.
Jan 20 00:45:05.894983 kubelet[1775]: E0120 00:45:05.894839 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 20 00:45:06.895981 kubelet[1775]: E0120 00:45:06.895682 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 20 00:45:07.093498 containerd[1457]: time="2026-01-20T00:45:07.092669752Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 00:45:07.096997 containerd[1457]: time="2026-01-20T00:45:07.096863479Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=63836480"
Jan 20 00:45:07.099178 containerd[1457]: time="2026-01-20T00:45:07.099105001Z" level=info msg="ImageCreate event name:\"sha256:94a30ed18e45363486a55379bd1fbb8c479f36524ae816779f3553afe6b787ed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 00:45:07.107106 containerd[1457]: time="2026-01-20T00:45:07.106956234Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:32c5137cb8c7cf61e75836f150e983b9be21fecc642ada89fd936c8cd6c0faa0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 00:45:07.114491 containerd[1457]: time="2026-01-20T00:45:07.114356028Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:94a30ed18e45363486a55379bd1fbb8c479f36524ae816779f3553afe6b787ed\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:32c5137cb8c7cf61e75836f150e983b9be21fecc642ada89fd936c8cd6c0faa0\", size \"63836358\" in 7.906698284s"
Jan 20 00:45:07.115184 containerd[1457]: time="2026-01-20T00:45:07.114549037Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:94a30ed18e45363486a55379bd1fbb8c479f36524ae816779f3553afe6b787ed\""
Jan 20 00:45:07.124684 containerd[1457]: time="2026-01-20T00:45:07.124489836Z" level=info msg="CreateContainer within sandbox \"e50acc7fcda1dd1b159c60f17ca38b89f9058bdd56860424a8d24743f3ca3d7a\" for container &ContainerMetadata{Name:nginx,Attempt:0,}"
Jan 20 00:45:07.157625 containerd[1457]: time="2026-01-20T00:45:07.157470479Z" level=info msg="CreateContainer within sandbox \"e50acc7fcda1dd1b159c60f17ca38b89f9058bdd56860424a8d24743f3ca3d7a\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"c0cecd06ceee82d00eebb73b2a6a8a990d8efabb3517c54f93bfdb5d52372be6\""
Jan 20 00:45:07.158886 containerd[1457]: time="2026-01-20T00:45:07.158860403Z" level=info msg="StartContainer for \"c0cecd06ceee82d00eebb73b2a6a8a990d8efabb3517c54f93bfdb5d52372be6\""
Jan 20 00:45:07.321971 systemd[1]: Started cri-containerd-c0cecd06ceee82d00eebb73b2a6a8a990d8efabb3517c54f93bfdb5d52372be6.scope - libcontainer container c0cecd06ceee82d00eebb73b2a6a8a990d8efabb3517c54f93bfdb5d52372be6.
Jan 20 00:45:07.511697 containerd[1457]: time="2026-01-20T00:45:07.511548470Z" level=info msg="StartContainer for \"c0cecd06ceee82d00eebb73b2a6a8a990d8efabb3517c54f93bfdb5d52372be6\" returns successfully"
Jan 20 00:45:07.897554 kubelet[1775]: E0120 00:45:07.897365 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 20 00:45:08.172960 kubelet[1775]: I0120 00:45:08.172579 1775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-bb8f74bfb-qxhmp" podStartSLOduration=8.261056023 podStartE2EDuration="16.172553241s" podCreationTimestamp="2026-01-20 00:44:52 +0000 UTC" firstStartedPulling="2026-01-20 00:44:59.206739893 +0000 UTC m=+24.450623932" lastFinishedPulling="2026-01-20 00:45:07.118237091 +0000 UTC m=+32.362121150" observedRunningTime="2026-01-20 00:45:08.171686647 +0000 UTC m=+33.415570846" watchObservedRunningTime="2026-01-20 00:45:08.172553241 +0000 UTC m=+33.416437279"
Jan 20 00:45:08.899189 kubelet[1775]: E0120 00:45:08.898782 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 20 00:45:09.899985 kubelet[1775]: E0120 00:45:09.899758 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 20 00:45:10.900648 kubelet[1775]: E0120 00:45:10.900559 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 20 00:45:11.901933 kubelet[1775]: E0120 00:45:11.901655 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 20 00:45:12.903029 kubelet[1775]: E0120 00:45:12.902878 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 20 00:45:13.904233 kubelet[1775]: E0120 00:45:13.904107 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 20 00:45:14.363412 systemd[1]: Created slice kubepods-besteffort-podcda1662c_5b65_4bde_992b_169801ce1531.slice - libcontainer container kubepods-besteffort-podcda1662c_5b65_4bde_992b_169801ce1531.slice.
Jan 20 00:45:14.449363 kubelet[1775]: I0120 00:45:14.449195 1775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/cda1662c-5b65-4bde-992b-169801ce1531-data\") pod \"nfs-server-provisioner-0\" (UID: \"cda1662c-5b65-4bde-992b-169801ce1531\") " pod="default/nfs-server-provisioner-0"
Jan 20 00:45:14.449363 kubelet[1775]: I0120 00:45:14.449265 1775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-776js\" (UniqueName: \"kubernetes.io/projected/cda1662c-5b65-4bde-992b-169801ce1531-kube-api-access-776js\") pod \"nfs-server-provisioner-0\" (UID: \"cda1662c-5b65-4bde-992b-169801ce1531\") " pod="default/nfs-server-provisioner-0"
Jan 20 00:45:14.675448 containerd[1457]: time="2026-01-20T00:45:14.675352031Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:cda1662c-5b65-4bde-992b-169801ce1531,Namespace:default,Attempt:0,}"
Jan 20 00:45:14.746681 systemd-networkd[1391]: lxc7dc7fab2870e: Link UP
Jan 20 00:45:14.755870 kernel: eth0: renamed from tmpa95ff
Jan 20 00:45:14.762852 systemd-networkd[1391]: lxc7dc7fab2870e: Gained carrier
Jan 20 00:45:14.905426 kubelet[1775]: E0120 00:45:14.905234 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 20 00:45:15.055197 containerd[1457]: time="2026-01-20T00:45:15.054506926Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 20 00:45:15.055511 containerd[1457]: time="2026-01-20T00:45:15.054895179Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 20 00:45:15.057538 containerd[1457]: time="2026-01-20T00:45:15.057372445Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 20 00:45:15.057750 containerd[1457]: time="2026-01-20T00:45:15.057602634Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 20 00:45:15.088038 systemd[1]: Started cri-containerd-a95fffda069562229f6e3470599446cb4fe014e3127c40bb4435fcd8f17e6bbc.scope - libcontainer container a95fffda069562229f6e3470599446cb4fe014e3127c40bb4435fcd8f17e6bbc.
Jan 20 00:45:15.101294 systemd-resolved[1392]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jan 20 00:45:15.139507 containerd[1457]: time="2026-01-20T00:45:15.139444565Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:cda1662c-5b65-4bde-992b-169801ce1531,Namespace:default,Attempt:0,} returns sandbox id \"a95fffda069562229f6e3470599446cb4fe014e3127c40bb4435fcd8f17e6bbc\""
Jan 20 00:45:15.142043 containerd[1457]: time="2026-01-20T00:45:15.141980343Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\""
Jan 20 00:45:15.557365 kubelet[1775]: E0120 00:45:15.557232 1775 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 20 00:45:15.906000 kubelet[1775]: E0120 00:45:15.905952 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 20 00:45:16.361339 systemd-networkd[1391]: lxc7dc7fab2870e: Gained IPv6LL
Jan 20 00:45:16.907271 kubelet[1775]: E0120 00:45:16.906995 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 20 00:45:17.908193 kubelet[1775]: E0120 00:45:17.908100 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 20 00:45:18.018641 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount606287203.mount: Deactivated successfully.
Jan 20 00:45:18.910191 kubelet[1775]: E0120 00:45:18.909870 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 20 00:45:19.911318 kubelet[1775]: E0120 00:45:19.911076 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 20 00:45:20.864160 containerd[1457]: time="2026-01-20T00:45:20.864078864Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 00:45:20.865228 containerd[1457]: time="2026-01-20T00:45:20.865172159Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039406"
Jan 20 00:45:20.866680 containerd[1457]: time="2026-01-20T00:45:20.866628287Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 00:45:20.870130 containerd[1457]: time="2026-01-20T00:45:20.870074701Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 00:45:20.871379 containerd[1457]: time="2026-01-20T00:45:20.871326665Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 5.729282112s"
Jan 20 00:45:20.871437 containerd[1457]: time="2026-01-20T00:45:20.871386015Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\""
Jan 20 00:45:20.877774 containerd[1457]: time="2026-01-20T00:45:20.877731563Z" level=info msg="CreateContainer within sandbox \"a95fffda069562229f6e3470599446cb4fe014e3127c40bb4435fcd8f17e6bbc\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}"
Jan 20 00:45:20.894604 containerd[1457]: time="2026-01-20T00:45:20.894535044Z" level=info msg="CreateContainer within sandbox \"a95fffda069562229f6e3470599446cb4fe014e3127c40bb4435fcd8f17e6bbc\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"67be13b1d0b9011a19cfb17b9a811b97b4a8ede9dafae6608ae89a54e53d929d\""
Jan 20 00:45:20.895355 containerd[1457]: time="2026-01-20T00:45:20.895288667Z" level=info msg="StartContainer for \"67be13b1d0b9011a19cfb17b9a811b97b4a8ede9dafae6608ae89a54e53d929d\""
Jan 20 00:45:20.912048 kubelet[1775]: E0120 00:45:20.911895 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 20 00:45:20.978050 systemd[1]: Started cri-containerd-67be13b1d0b9011a19cfb17b9a811b97b4a8ede9dafae6608ae89a54e53d929d.scope - libcontainer container 67be13b1d0b9011a19cfb17b9a811b97b4a8ede9dafae6608ae89a54e53d929d.
Jan 20 00:45:21.043348 containerd[1457]: time="2026-01-20T00:45:21.043242816Z" level=info msg="StartContainer for \"67be13b1d0b9011a19cfb17b9a811b97b4a8ede9dafae6608ae89a54e53d929d\" returns successfully"
Jan 20 00:45:21.721591 kubelet[1775]: I0120 00:45:21.721515 1775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.990775454 podStartE2EDuration="7.721500813s" podCreationTimestamp="2026-01-20 00:45:14 +0000 UTC" firstStartedPulling="2026-01-20 00:45:15.14157879 +0000 UTC m=+40.385462829" lastFinishedPulling="2026-01-20 00:45:20.872304149 +0000 UTC m=+46.116188188" observedRunningTime="2026-01-20 00:45:21.721068456 +0000 UTC m=+46.964952506" watchObservedRunningTime="2026-01-20 00:45:21.721500813 +0000 UTC m=+46.965384862"
Jan 20 00:45:21.913348 kubelet[1775]: E0120 00:45:21.913192 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 20 00:45:22.914515 kubelet[1775]: E0120 00:45:22.914196 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 20 00:45:23.915294 kubelet[1775]: E0120 00:45:23.915017 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 20 00:45:24.917331 kubelet[1775]: E0120 00:45:24.916907 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 20 00:45:25.918511 kubelet[1775]: E0120 00:45:25.918252 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 20 00:45:26.347666 systemd[1]: Created slice kubepods-besteffort-pod08439198_b184_4653_9605_7e474b063c35.slice - libcontainer container kubepods-besteffort-pod08439198_b184_4653_9605_7e474b063c35.slice.
Jan 20 00:45:26.528167 kubelet[1775]: I0120 00:45:26.528030 1775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-59d62d7a-8df5-4d35-8721-25133b9a09b6\" (UniqueName: \"kubernetes.io/nfs/08439198-b184-4653-9605-7e474b063c35-pvc-59d62d7a-8df5-4d35-8721-25133b9a09b6\") pod \"test-pod-1\" (UID: \"08439198-b184-4653-9605-7e474b063c35\") " pod="default/test-pod-1"
Jan 20 00:45:26.528167 kubelet[1775]: I0120 00:45:26.528103 1775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lw2n\" (UniqueName: \"kubernetes.io/projected/08439198-b184-4653-9605-7e474b063c35-kube-api-access-2lw2n\") pod \"test-pod-1\" (UID: \"08439198-b184-4653-9605-7e474b063c35\") " pod="default/test-pod-1"
Jan 20 00:45:26.686899 kernel: FS-Cache: Loaded
Jan 20 00:45:26.780330 kernel: RPC: Registered named UNIX socket transport module.
Jan 20 00:45:26.780596 kernel: RPC: Registered udp transport module.
Jan 20 00:45:26.780639 kernel: RPC: Registered tcp transport module.
Jan 20 00:45:26.781990 kernel: RPC: Registered tcp-with-tls transport module.
Jan 20 00:45:26.783934 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Jan 20 00:45:26.919655 kubelet[1775]: E0120 00:45:26.919508 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 20 00:45:27.076418 kernel: NFS: Registering the id_resolver key type
Jan 20 00:45:27.076967 kernel: Key type id_resolver registered
Jan 20 00:45:27.076997 kernel: Key type id_legacy registered
Jan 20 00:45:27.121936 nfsidmap[3202]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain'
Jan 20 00:45:27.131506 nfsidmap[3205]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain'
Jan 20 00:45:27.256489 containerd[1457]: time="2026-01-20T00:45:27.256305193Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:08439198-b184-4653-9605-7e474b063c35,Namespace:default,Attempt:0,}"
Jan 20 00:45:27.305846 systemd-networkd[1391]: lxc17141fa8be4a: Link UP
Jan 20 00:45:27.319855 kernel: eth0: renamed from tmp77964
Jan 20 00:45:27.331884 systemd-networkd[1391]: lxc17141fa8be4a: Gained carrier
Jan 20 00:45:27.550952 containerd[1457]: time="2026-01-20T00:45:27.550510491Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 20 00:45:27.550952 containerd[1457]: time="2026-01-20T00:45:27.550559984Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 20 00:45:27.550952 containerd[1457]: time="2026-01-20T00:45:27.550570103Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 20 00:45:27.550952 containerd[1457]: time="2026-01-20T00:45:27.550662605Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 20 00:45:27.577997 systemd[1]: Started cri-containerd-77964f062f245dfe6b1c0cbd2eeebed7a7f3cbf3541e61035de566dea52bb9c7.scope - libcontainer container 77964f062f245dfe6b1c0cbd2eeebed7a7f3cbf3541e61035de566dea52bb9c7.
Jan 20 00:45:27.590365 systemd-resolved[1392]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jan 20 00:45:27.622765 containerd[1457]: time="2026-01-20T00:45:27.622606732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:08439198-b184-4653-9605-7e474b063c35,Namespace:default,Attempt:0,} returns sandbox id \"77964f062f245dfe6b1c0cbd2eeebed7a7f3cbf3541e61035de566dea52bb9c7\""
Jan 20 00:45:27.624742 containerd[1457]: time="2026-01-20T00:45:27.624674951Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Jan 20 00:45:27.721316 containerd[1457]: time="2026-01-20T00:45:27.721126857Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 00:45:27.722656 containerd[1457]: time="2026-01-20T00:45:27.722509243Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61"
Jan 20 00:45:27.730016 containerd[1457]: time="2026-01-20T00:45:27.729570835Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:94a30ed18e45363486a55379bd1fbb8c479f36524ae816779f3553afe6b787ed\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:32c5137cb8c7cf61e75836f150e983b9be21fecc642ada89fd936c8cd6c0faa0\", size \"63836358\" in 104.773535ms"
Jan 20 00:45:27.730016 containerd[1457]: time="2026-01-20T00:45:27.729745601Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:94a30ed18e45363486a55379bd1fbb8c479f36524ae816779f3553afe6b787ed\""
Jan 20 00:45:27.741198 containerd[1457]: time="2026-01-20T00:45:27.741154792Z" level=info msg="CreateContainer within sandbox \"77964f062f245dfe6b1c0cbd2eeebed7a7f3cbf3541e61035de566dea52bb9c7\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Jan 20 00:45:27.761981 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount275934876.mount: Deactivated successfully.
Jan 20 00:45:27.766206 containerd[1457]: time="2026-01-20T00:45:27.766106776Z" level=info msg="CreateContainer within sandbox \"77964f062f245dfe6b1c0cbd2eeebed7a7f3cbf3541e61035de566dea52bb9c7\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"67fe53ca366e54b274803c2e48459000cebd1fabb6a9d6eb33c50fe8576379a3\""
Jan 20 00:45:27.768272 containerd[1457]: time="2026-01-20T00:45:27.768247747Z" level=info msg="StartContainer for \"67fe53ca366e54b274803c2e48459000cebd1fabb6a9d6eb33c50fe8576379a3\""
Jan 20 00:45:27.822115 systemd[1]: Started cri-containerd-67fe53ca366e54b274803c2e48459000cebd1fabb6a9d6eb33c50fe8576379a3.scope - libcontainer container 67fe53ca366e54b274803c2e48459000cebd1fabb6a9d6eb33c50fe8576379a3.
Jan 20 00:45:27.884508 containerd[1457]: time="2026-01-20T00:45:27.884334140Z" level=info msg="StartContainer for \"67fe53ca366e54b274803c2e48459000cebd1fabb6a9d6eb33c50fe8576379a3\" returns successfully"
Jan 20 00:45:27.921710 kubelet[1775]: E0120 00:45:27.921330 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 20 00:45:28.766282 kubelet[1775]: I0120 00:45:28.766108 1775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=14.655807474 podStartE2EDuration="14.766088257s" podCreationTimestamp="2026-01-20 00:45:14 +0000 UTC" firstStartedPulling="2026-01-20 00:45:27.623951159 +0000 UTC m=+52.867835208" lastFinishedPulling="2026-01-20 00:45:27.734231932 +0000 UTC m=+52.978115991" observedRunningTime="2026-01-20 00:45:28.766016829 +0000 UTC m=+54.009900899" watchObservedRunningTime="2026-01-20 00:45:28.766088257 +0000 UTC m=+54.009972296"
Jan 20 00:45:28.923130 kubelet[1775]: E0120 00:45:28.923034 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 20 00:45:29.345155 systemd-networkd[1391]: lxc17141fa8be4a: Gained IPv6LL
Jan 20 00:45:29.923850 kubelet[1775]: E0120 00:45:29.923760 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 20 00:45:30.925161 kubelet[1775]: E0120 00:45:30.925038 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 20 00:45:31.926168 kubelet[1775]: E0120 00:45:31.925943 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 20 00:45:32.121939 containerd[1457]: time="2026-01-20T00:45:32.120162185Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 20 00:45:32.144256 containerd[1457]: time="2026-01-20T00:45:32.144160486Z" level=info msg="StopContainer for \"54af49b9f9d70de20d7b422c4d8db65a5c0ca269259cc1b9313dce13b47d7d20\" with timeout 2 (s)"
Jan 20 00:45:32.144561 containerd[1457]: time="2026-01-20T00:45:32.144515513Z" level=info msg="Stop container \"54af49b9f9d70de20d7b422c4d8db65a5c0ca269259cc1b9313dce13b47d7d20\" with signal terminated"
Jan 20 00:45:32.163170 systemd-networkd[1391]: lxc_health: Link DOWN
Jan 20 00:45:32.163182 systemd-networkd[1391]: lxc_health: Lost carrier
Jan 20 00:45:32.195783 systemd[1]: cri-containerd-54af49b9f9d70de20d7b422c4d8db65a5c0ca269259cc1b9313dce13b47d7d20.scope: Deactivated successfully.
Jan 20 00:45:32.201504 systemd[1]: cri-containerd-54af49b9f9d70de20d7b422c4d8db65a5c0ca269259cc1b9313dce13b47d7d20.scope: Consumed 9.465s CPU time.
Jan 20 00:45:32.267760 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-54af49b9f9d70de20d7b422c4d8db65a5c0ca269259cc1b9313dce13b47d7d20-rootfs.mount: Deactivated successfully.
Jan 20 00:45:32.283441 containerd[1457]: time="2026-01-20T00:45:32.283351982Z" level=info msg="shim disconnected" id=54af49b9f9d70de20d7b422c4d8db65a5c0ca269259cc1b9313dce13b47d7d20 namespace=k8s.io
Jan 20 00:45:32.283959 containerd[1457]: time="2026-01-20T00:45:32.283875148Z" level=warning msg="cleaning up after shim disconnected" id=54af49b9f9d70de20d7b422c4d8db65a5c0ca269259cc1b9313dce13b47d7d20 namespace=k8s.io
Jan 20 00:45:32.283959 containerd[1457]: time="2026-01-20T00:45:32.283919740Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 20 00:45:32.312129 containerd[1457]: time="2026-01-20T00:45:32.311995518Z" level=info msg="StopContainer for \"54af49b9f9d70de20d7b422c4d8db65a5c0ca269259cc1b9313dce13b47d7d20\" returns successfully"
Jan 20 00:45:32.313894 containerd[1457]: time="2026-01-20T00:45:32.313760130Z" level=info msg="StopPodSandbox for \"19c68dc05b991880034f8d4cb80e7468ffef0509da4c92928a68d9f2e1039c52\""
Jan 20 00:45:32.314145 containerd[1457]: time="2026-01-20T00:45:32.313931691Z" level=info msg="Container to stop \"b4f806c89ad88eec04f8bada8a30bee37564aa0d4a4552f219a31664de103cb9\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 20 00:45:32.314145 containerd[1457]: time="2026-01-20T00:45:32.313956648Z" level=info msg="Container to stop \"cacb61809b2ec8ab0b18bd6696065834d2407243d24efbe115235cab2312bb38\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 20 00:45:32.314145 containerd[1457]: time="2026-01-20T00:45:32.313972898Z" level=info msg="Container to stop \"54af49b9f9d70de20d7b422c4d8db65a5c0ca269259cc1b9313dce13b47d7d20\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 20 00:45:32.314145 containerd[1457]: time="2026-01-20T00:45:32.313990290Z" level=info msg="Container to stop \"ec30e075cf7c9217e38420b2940da2b15b42fa6fe9fafbb83551965d2a946d3d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 20 00:45:32.314145 containerd[1457]: time="2026-01-20T00:45:32.314005769Z" level=info msg="Container to stop \"d0a3f0991f23615afe16832246773bf14e42cc1b03327c3690938fa89034dd7c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 20 00:45:32.316505 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-19c68dc05b991880034f8d4cb80e7468ffef0509da4c92928a68d9f2e1039c52-shm.mount: Deactivated successfully.
Jan 20 00:45:32.325906 systemd[1]: cri-containerd-19c68dc05b991880034f8d4cb80e7468ffef0509da4c92928a68d9f2e1039c52.scope: Deactivated successfully.
Jan 20 00:45:32.364274 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-19c68dc05b991880034f8d4cb80e7468ffef0509da4c92928a68d9f2e1039c52-rootfs.mount: Deactivated successfully.
Jan 20 00:45:32.372192 containerd[1457]: time="2026-01-20T00:45:32.372108677Z" level=info msg="shim disconnected" id=19c68dc05b991880034f8d4cb80e7468ffef0509da4c92928a68d9f2e1039c52 namespace=k8s.io
Jan 20 00:45:32.372521 containerd[1457]: time="2026-01-20T00:45:32.372177747Z" level=warning msg="cleaning up after shim disconnected" id=19c68dc05b991880034f8d4cb80e7468ffef0509da4c92928a68d9f2e1039c52 namespace=k8s.io
Jan 20 00:45:32.372521 containerd[1457]: time="2026-01-20T00:45:32.372372078Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 20 00:45:32.392909 containerd[1457]: time="2026-01-20T00:45:32.392778025Z" level=info msg="TearDown network for sandbox \"19c68dc05b991880034f8d4cb80e7468ffef0509da4c92928a68d9f2e1039c52\" successfully"
Jan 20 00:45:32.392909 containerd[1457]: time="2026-01-20T00:45:32.392874805Z" level=info msg="StopPodSandbox for \"19c68dc05b991880034f8d4cb80e7468ffef0509da4c92928a68d9f2e1039c52\" returns successfully"
Jan 20 00:45:32.608943 kubelet[1775]: I0120 00:45:32.608617 1775 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f592f3a2-5321-4edb-b858-4d24a7f109b3-cilium-config-path\") pod \"f592f3a2-5321-4edb-b858-4d24a7f109b3\" (UID: \"f592f3a2-5321-4edb-b858-4d24a7f109b3\") "
Jan 20 00:45:32.608943 kubelet[1775]: I0120 00:45:32.608748 1775 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f592f3a2-5321-4edb-b858-4d24a7f109b3-xtables-lock\") pod \"f592f3a2-5321-4edb-b858-4d24a7f109b3\" (UID: \"f592f3a2-5321-4edb-b858-4d24a7f109b3\") "
Jan 20 00:45:32.608943 kubelet[1775]: I0120 00:45:32.608782 1775 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f592f3a2-5321-4edb-b858-4d24a7f109b3-lib-modules\") pod \"f592f3a2-5321-4edb-b858-4d24a7f109b3\" (UID: \"f592f3a2-5321-4edb-b858-4d24a7f109b3\") "
Jan 20 00:45:32.608943 kubelet[1775]: I0120 00:45:32.608874 1775 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f592f3a2-5321-4edb-b858-4d24a7f109b3-clustermesh-secrets\") pod \"f592f3a2-5321-4edb-b858-4d24a7f109b3\" (UID: \"f592f3a2-5321-4edb-b858-4d24a7f109b3\") "
Jan 20 00:45:32.608943 kubelet[1775]: I0120 00:45:32.608900 1775 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f592f3a2-5321-4edb-b858-4d24a7f109b3-hubble-tls\") pod \"f592f3a2-5321-4edb-b858-4d24a7f109b3\" (UID: \"f592f3a2-5321-4edb-b858-4d24a7f109b3\") "
Jan 20 00:45:32.612851 kubelet[1775]: I0120 00:45:32.608925 1775 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f592f3a2-5321-4edb-b858-4d24a7f109b3-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "f592f3a2-5321-4edb-b858-4d24a7f109b3" (UID: "f592f3a2-5321-4edb-b858-4d24a7f109b3"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 20 00:45:32.612851 kubelet[1775]: I0120 00:45:32.608967 1775 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f592f3a2-5321-4edb-b858-4d24a7f109b3-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "f592f3a2-5321-4edb-b858-4d24a7f109b3" (UID: "f592f3a2-5321-4edb-b858-4d24a7f109b3"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 20 00:45:32.612851 kubelet[1775]: I0120 00:45:32.610609 1775 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f592f3a2-5321-4edb-b858-4d24a7f109b3-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "f592f3a2-5321-4edb-b858-4d24a7f109b3" (UID: "f592f3a2-5321-4edb-b858-4d24a7f109b3"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 20 00:45:32.612851 kubelet[1775]: I0120 00:45:32.610485 1775 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f592f3a2-5321-4edb-b858-4d24a7f109b3-host-proc-sys-net\") pod \"f592f3a2-5321-4edb-b858-4d24a7f109b3\" (UID: \"f592f3a2-5321-4edb-b858-4d24a7f109b3\") "
Jan 20 00:45:32.612851 kubelet[1775]: I0120 00:45:32.610776 1775 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f592f3a2-5321-4edb-b858-4d24a7f109b3-bpf-maps\") pod \"f592f3a2-5321-4edb-b858-4d24a7f109b3\" (UID: \"f592f3a2-5321-4edb-b858-4d24a7f109b3\") "
Jan 20 00:45:32.613083 kubelet[1775]: I0120 00:45:32.610853 1775 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f592f3a2-5321-4edb-b858-4d24a7f109b3-cilium-run\") pod \"f592f3a2-5321-4edb-b858-4d24a7f109b3\" (UID: \"f592f3a2-5321-4edb-b858-4d24a7f109b3\") "
Jan 20 00:45:32.613083 kubelet[1775]: I0120 00:45:32.610878 1775 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f592f3a2-5321-4edb-b858-4d24a7f109b3-cni-path\") pod \"f592f3a2-5321-4edb-b858-4d24a7f109b3\" (UID: \"f592f3a2-5321-4edb-b858-4d24a7f109b3\") "
Jan 20 00:45:32.613083 kubelet[1775]: I0120 00:45:32.610901 1775 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f592f3a2-5321-4edb-b858-4d24a7f109b3-etc-cni-netd\") pod \"f592f3a2-5321-4edb-b858-4d24a7f109b3\" (UID: \"f592f3a2-5321-4edb-b858-4d24a7f109b3\") "
Jan 20 00:45:32.613083 kubelet[1775]: I0120 00:45:32.610921 1775 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f592f3a2-5321-4edb-b858-4d24a7f109b3-cilium-cgroup\") pod \"f592f3a2-5321-4edb-b858-4d24a7f109b3\" (UID: \"f592f3a2-5321-4edb-b858-4d24a7f109b3\") "
Jan 20 00:45:32.613083 kubelet[1775]: I0120 00:45:32.610974 1775 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f592f3a2-5321-4edb-b858-4d24a7f109b3-host-proc-sys-kernel\") pod \"f592f3a2-5321-4edb-b858-4d24a7f109b3\" (UID: \"f592f3a2-5321-4edb-b858-4d24a7f109b3\") "
Jan 20 00:45:32.613083 kubelet[1775]: I0120 00:45:32.611003 1775 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hbdkg\" (UniqueName: \"kubernetes.io/projected/f592f3a2-5321-4edb-b858-4d24a7f109b3-kube-api-access-hbdkg\") pod \"f592f3a2-5321-4edb-b858-4d24a7f109b3\" (UID: \"f592f3a2-5321-4edb-b858-4d24a7f109b3\") "
Jan 20 00:45:32.613286 kubelet[1775]: I0120 00:45:32.611027 1775 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f592f3a2-5321-4edb-b858-4d24a7f109b3-hostproc\") pod
\"f592f3a2-5321-4edb-b858-4d24a7f109b3\" (UID: \"f592f3a2-5321-4edb-b858-4d24a7f109b3\") " Jan 20 00:45:32.613286 kubelet[1775]: I0120 00:45:32.611072 1775 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f592f3a2-5321-4edb-b858-4d24a7f109b3-lib-modules\") on node \"10.0.0.96\" DevicePath \"\"" Jan 20 00:45:32.613286 kubelet[1775]: I0120 00:45:32.611086 1775 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f592f3a2-5321-4edb-b858-4d24a7f109b3-host-proc-sys-net\") on node \"10.0.0.96\" DevicePath \"\"" Jan 20 00:45:32.613286 kubelet[1775]: I0120 00:45:32.611099 1775 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f592f3a2-5321-4edb-b858-4d24a7f109b3-xtables-lock\") on node \"10.0.0.96\" DevicePath \"\"" Jan 20 00:45:32.613286 kubelet[1775]: I0120 00:45:32.611130 1775 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f592f3a2-5321-4edb-b858-4d24a7f109b3-hostproc" (OuterVolumeSpecName: "hostproc") pod "f592f3a2-5321-4edb-b858-4d24a7f109b3" (UID: "f592f3a2-5321-4edb-b858-4d24a7f109b3"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 20 00:45:32.613286 kubelet[1775]: I0120 00:45:32.611155 1775 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f592f3a2-5321-4edb-b858-4d24a7f109b3-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "f592f3a2-5321-4edb-b858-4d24a7f109b3" (UID: "f592f3a2-5321-4edb-b858-4d24a7f109b3"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 20 00:45:32.613470 kubelet[1775]: I0120 00:45:32.611174 1775 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f592f3a2-5321-4edb-b858-4d24a7f109b3-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "f592f3a2-5321-4edb-b858-4d24a7f109b3" (UID: "f592f3a2-5321-4edb-b858-4d24a7f109b3"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 20 00:45:32.613470 kubelet[1775]: I0120 00:45:32.611191 1775 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f592f3a2-5321-4edb-b858-4d24a7f109b3-cni-path" (OuterVolumeSpecName: "cni-path") pod "f592f3a2-5321-4edb-b858-4d24a7f109b3" (UID: "f592f3a2-5321-4edb-b858-4d24a7f109b3"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 20 00:45:32.613470 kubelet[1775]: I0120 00:45:32.611213 1775 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f592f3a2-5321-4edb-b858-4d24a7f109b3-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "f592f3a2-5321-4edb-b858-4d24a7f109b3" (UID: "f592f3a2-5321-4edb-b858-4d24a7f109b3"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 20 00:45:32.613470 kubelet[1775]: I0120 00:45:32.611236 1775 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f592f3a2-5321-4edb-b858-4d24a7f109b3-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "f592f3a2-5321-4edb-b858-4d24a7f109b3" (UID: "f592f3a2-5321-4edb-b858-4d24a7f109b3"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 20 00:45:32.613470 kubelet[1775]: I0120 00:45:32.611258 1775 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f592f3a2-5321-4edb-b858-4d24a7f109b3-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "f592f3a2-5321-4edb-b858-4d24a7f109b3" (UID: "f592f3a2-5321-4edb-b858-4d24a7f109b3"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 20 00:45:32.615169 kubelet[1775]: I0120 00:45:32.615057 1775 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f592f3a2-5321-4edb-b858-4d24a7f109b3-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f592f3a2-5321-4edb-b858-4d24a7f109b3" (UID: "f592f3a2-5321-4edb-b858-4d24a7f109b3"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 00:45:32.616279 kubelet[1775]: I0120 00:45:32.616143 1775 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f592f3a2-5321-4edb-b858-4d24a7f109b3-kube-api-access-hbdkg" (OuterVolumeSpecName: "kube-api-access-hbdkg") pod "f592f3a2-5321-4edb-b858-4d24a7f109b3" (UID: "f592f3a2-5321-4edb-b858-4d24a7f109b3"). InnerVolumeSpecName "kube-api-access-hbdkg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 00:45:32.616614 kubelet[1775]: I0120 00:45:32.616594 1775 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f592f3a2-5321-4edb-b858-4d24a7f109b3-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "f592f3a2-5321-4edb-b858-4d24a7f109b3" (UID: "f592f3a2-5321-4edb-b858-4d24a7f109b3"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 00:45:32.616774 kubelet[1775]: I0120 00:45:32.616593 1775 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f592f3a2-5321-4edb-b858-4d24a7f109b3-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "f592f3a2-5321-4edb-b858-4d24a7f109b3" (UID: "f592f3a2-5321-4edb-b858-4d24a7f109b3"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 00:45:32.616998 systemd[1]: var-lib-kubelet-pods-f592f3a2\x2d5321\x2d4edb\x2db858\x2d4d24a7f109b3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhbdkg.mount: Deactivated successfully. Jan 20 00:45:32.711844 kubelet[1775]: I0120 00:45:32.711617 1775 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f592f3a2-5321-4edb-b858-4d24a7f109b3-clustermesh-secrets\") on node \"10.0.0.96\" DevicePath \"\"" Jan 20 00:45:32.711844 kubelet[1775]: I0120 00:45:32.711675 1775 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f592f3a2-5321-4edb-b858-4d24a7f109b3-hubble-tls\") on node \"10.0.0.96\" DevicePath \"\"" Jan 20 00:45:32.711844 kubelet[1775]: I0120 00:45:32.711720 1775 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f592f3a2-5321-4edb-b858-4d24a7f109b3-bpf-maps\") on node \"10.0.0.96\" DevicePath \"\"" Jan 20 00:45:32.711844 kubelet[1775]: I0120 00:45:32.711728 1775 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f592f3a2-5321-4edb-b858-4d24a7f109b3-cilium-run\") on node \"10.0.0.96\" DevicePath \"\"" Jan 20 00:45:32.711844 kubelet[1775]: I0120 00:45:32.711737 1775 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f592f3a2-5321-4edb-b858-4d24a7f109b3-cni-path\") on node \"10.0.0.96\" DevicePath 
\"\"" Jan 20 00:45:32.711844 kubelet[1775]: I0120 00:45:32.711744 1775 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f592f3a2-5321-4edb-b858-4d24a7f109b3-etc-cni-netd\") on node \"10.0.0.96\" DevicePath \"\"" Jan 20 00:45:32.711844 kubelet[1775]: I0120 00:45:32.711750 1775 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f592f3a2-5321-4edb-b858-4d24a7f109b3-cilium-cgroup\") on node \"10.0.0.96\" DevicePath \"\"" Jan 20 00:45:32.711844 kubelet[1775]: I0120 00:45:32.711759 1775 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f592f3a2-5321-4edb-b858-4d24a7f109b3-host-proc-sys-kernel\") on node \"10.0.0.96\" DevicePath \"\"" Jan 20 00:45:32.712209 kubelet[1775]: I0120 00:45:32.711766 1775 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hbdkg\" (UniqueName: \"kubernetes.io/projected/f592f3a2-5321-4edb-b858-4d24a7f109b3-kube-api-access-hbdkg\") on node \"10.0.0.96\" DevicePath \"\"" Jan 20 00:45:32.712209 kubelet[1775]: I0120 00:45:32.711773 1775 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f592f3a2-5321-4edb-b858-4d24a7f109b3-hostproc\") on node \"10.0.0.96\" DevicePath \"\"" Jan 20 00:45:32.712209 kubelet[1775]: I0120 00:45:32.711780 1775 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f592f3a2-5321-4edb-b858-4d24a7f109b3-cilium-config-path\") on node \"10.0.0.96\" DevicePath \"\"" Jan 20 00:45:32.773728 kubelet[1775]: I0120 00:45:32.773632 1775 scope.go:117] "RemoveContainer" containerID="54af49b9f9d70de20d7b422c4d8db65a5c0ca269259cc1b9313dce13b47d7d20" Jan 20 00:45:32.777638 containerd[1457]: time="2026-01-20T00:45:32.777576082Z" level=info msg="RemoveContainer for 
\"54af49b9f9d70de20d7b422c4d8db65a5c0ca269259cc1b9313dce13b47d7d20\"" Jan 20 00:45:32.781035 systemd[1]: Removed slice kubepods-burstable-podf592f3a2_5321_4edb_b858_4d24a7f109b3.slice - libcontainer container kubepods-burstable-podf592f3a2_5321_4edb_b858_4d24a7f109b3.slice. Jan 20 00:45:32.781181 systemd[1]: kubepods-burstable-podf592f3a2_5321_4edb_b858_4d24a7f109b3.slice: Consumed 9.606s CPU time. Jan 20 00:45:32.783520 containerd[1457]: time="2026-01-20T00:45:32.783469572Z" level=info msg="RemoveContainer for \"54af49b9f9d70de20d7b422c4d8db65a5c0ca269259cc1b9313dce13b47d7d20\" returns successfully" Jan 20 00:45:32.784197 kubelet[1775]: I0120 00:45:32.784150 1775 scope.go:117] "RemoveContainer" containerID="d0a3f0991f23615afe16832246773bf14e42cc1b03327c3690938fa89034dd7c" Jan 20 00:45:32.786561 containerd[1457]: time="2026-01-20T00:45:32.786501257Z" level=info msg="RemoveContainer for \"d0a3f0991f23615afe16832246773bf14e42cc1b03327c3690938fa89034dd7c\"" Jan 20 00:45:32.791778 containerd[1457]: time="2026-01-20T00:45:32.791605176Z" level=info msg="RemoveContainer for \"d0a3f0991f23615afe16832246773bf14e42cc1b03327c3690938fa89034dd7c\" returns successfully" Jan 20 00:45:32.792279 kubelet[1775]: I0120 00:45:32.792194 1775 scope.go:117] "RemoveContainer" containerID="cacb61809b2ec8ab0b18bd6696065834d2407243d24efbe115235cab2312bb38" Jan 20 00:45:32.793907 containerd[1457]: time="2026-01-20T00:45:32.793773100Z" level=info msg="RemoveContainer for \"cacb61809b2ec8ab0b18bd6696065834d2407243d24efbe115235cab2312bb38\"" Jan 20 00:45:32.802747 containerd[1457]: time="2026-01-20T00:45:32.802650058Z" level=info msg="RemoveContainer for \"cacb61809b2ec8ab0b18bd6696065834d2407243d24efbe115235cab2312bb38\" returns successfully" Jan 20 00:45:32.803226 kubelet[1775]: I0120 00:45:32.803068 1775 scope.go:117] "RemoveContainer" containerID="b4f806c89ad88eec04f8bada8a30bee37564aa0d4a4552f219a31664de103cb9" Jan 20 00:45:32.804970 containerd[1457]: time="2026-01-20T00:45:32.804918119Z" 
level=info msg="RemoveContainer for \"b4f806c89ad88eec04f8bada8a30bee37564aa0d4a4552f219a31664de103cb9\"" Jan 20 00:45:32.810865 containerd[1457]: time="2026-01-20T00:45:32.810731985Z" level=info msg="RemoveContainer for \"b4f806c89ad88eec04f8bada8a30bee37564aa0d4a4552f219a31664de103cb9\" returns successfully" Jan 20 00:45:32.811297 kubelet[1775]: I0120 00:45:32.811248 1775 scope.go:117] "RemoveContainer" containerID="ec30e075cf7c9217e38420b2940da2b15b42fa6fe9fafbb83551965d2a946d3d" Jan 20 00:45:32.812725 containerd[1457]: time="2026-01-20T00:45:32.812636099Z" level=info msg="RemoveContainer for \"ec30e075cf7c9217e38420b2940da2b15b42fa6fe9fafbb83551965d2a946d3d\"" Jan 20 00:45:32.816206 containerd[1457]: time="2026-01-20T00:45:32.816118717Z" level=info msg="RemoveContainer for \"ec30e075cf7c9217e38420b2940da2b15b42fa6fe9fafbb83551965d2a946d3d\" returns successfully" Jan 20 00:45:32.816490 kubelet[1775]: I0120 00:45:32.816456 1775 scope.go:117] "RemoveContainer" containerID="54af49b9f9d70de20d7b422c4d8db65a5c0ca269259cc1b9313dce13b47d7d20" Jan 20 00:45:32.816917 containerd[1457]: time="2026-01-20T00:45:32.816782401Z" level=error msg="ContainerStatus for \"54af49b9f9d70de20d7b422c4d8db65a5c0ca269259cc1b9313dce13b47d7d20\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"54af49b9f9d70de20d7b422c4d8db65a5c0ca269259cc1b9313dce13b47d7d20\": not found" Jan 20 00:45:32.817232 kubelet[1775]: E0120 00:45:32.817159 1775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"54af49b9f9d70de20d7b422c4d8db65a5c0ca269259cc1b9313dce13b47d7d20\": not found" containerID="54af49b9f9d70de20d7b422c4d8db65a5c0ca269259cc1b9313dce13b47d7d20" Jan 20 00:45:32.817316 kubelet[1775]: I0120 00:45:32.817260 1775 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"54af49b9f9d70de20d7b422c4d8db65a5c0ca269259cc1b9313dce13b47d7d20"} err="failed to get container status \"54af49b9f9d70de20d7b422c4d8db65a5c0ca269259cc1b9313dce13b47d7d20\": rpc error: code = NotFound desc = an error occurred when try to find container \"54af49b9f9d70de20d7b422c4d8db65a5c0ca269259cc1b9313dce13b47d7d20\": not found" Jan 20 00:45:32.817376 kubelet[1775]: I0120 00:45:32.817319 1775 scope.go:117] "RemoveContainer" containerID="d0a3f0991f23615afe16832246773bf14e42cc1b03327c3690938fa89034dd7c" Jan 20 00:45:32.817635 containerd[1457]: time="2026-01-20T00:45:32.817582248Z" level=error msg="ContainerStatus for \"d0a3f0991f23615afe16832246773bf14e42cc1b03327c3690938fa89034dd7c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d0a3f0991f23615afe16832246773bf14e42cc1b03327c3690938fa89034dd7c\": not found" Jan 20 00:45:32.817974 kubelet[1775]: E0120 00:45:32.817906 1775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d0a3f0991f23615afe16832246773bf14e42cc1b03327c3690938fa89034dd7c\": not found" containerID="d0a3f0991f23615afe16832246773bf14e42cc1b03327c3690938fa89034dd7c" Jan 20 00:45:32.817974 kubelet[1775]: I0120 00:45:32.817956 1775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d0a3f0991f23615afe16832246773bf14e42cc1b03327c3690938fa89034dd7c"} err="failed to get container status \"d0a3f0991f23615afe16832246773bf14e42cc1b03327c3690938fa89034dd7c\": rpc error: code = NotFound desc = an error occurred when try to find container \"d0a3f0991f23615afe16832246773bf14e42cc1b03327c3690938fa89034dd7c\": not found" Jan 20 00:45:32.818040 kubelet[1775]: I0120 00:45:32.817982 1775 scope.go:117] "RemoveContainer" containerID="cacb61809b2ec8ab0b18bd6696065834d2407243d24efbe115235cab2312bb38" Jan 20 00:45:32.818246 containerd[1457]: 
time="2026-01-20T00:45:32.818198247Z" level=error msg="ContainerStatus for \"cacb61809b2ec8ab0b18bd6696065834d2407243d24efbe115235cab2312bb38\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cacb61809b2ec8ab0b18bd6696065834d2407243d24efbe115235cab2312bb38\": not found" Jan 20 00:45:32.818364 kubelet[1775]: E0120 00:45:32.818322 1775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cacb61809b2ec8ab0b18bd6696065834d2407243d24efbe115235cab2312bb38\": not found" containerID="cacb61809b2ec8ab0b18bd6696065834d2407243d24efbe115235cab2312bb38" Jan 20 00:45:32.818395 kubelet[1775]: I0120 00:45:32.818375 1775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cacb61809b2ec8ab0b18bd6696065834d2407243d24efbe115235cab2312bb38"} err="failed to get container status \"cacb61809b2ec8ab0b18bd6696065834d2407243d24efbe115235cab2312bb38\": rpc error: code = NotFound desc = an error occurred when try to find container \"cacb61809b2ec8ab0b18bd6696065834d2407243d24efbe115235cab2312bb38\": not found" Jan 20 00:45:32.818421 kubelet[1775]: I0120 00:45:32.818398 1775 scope.go:117] "RemoveContainer" containerID="b4f806c89ad88eec04f8bada8a30bee37564aa0d4a4552f219a31664de103cb9" Jan 20 00:45:32.818677 containerd[1457]: time="2026-01-20T00:45:32.818642033Z" level=error msg="ContainerStatus for \"b4f806c89ad88eec04f8bada8a30bee37564aa0d4a4552f219a31664de103cb9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b4f806c89ad88eec04f8bada8a30bee37564aa0d4a4552f219a31664de103cb9\": not found" Jan 20 00:45:32.818911 kubelet[1775]: E0120 00:45:32.818853 1775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b4f806c89ad88eec04f8bada8a30bee37564aa0d4a4552f219a31664de103cb9\": not 
found" containerID="b4f806c89ad88eec04f8bada8a30bee37564aa0d4a4552f219a31664de103cb9" Jan 20 00:45:32.818911 kubelet[1775]: I0120 00:45:32.818872 1775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b4f806c89ad88eec04f8bada8a30bee37564aa0d4a4552f219a31664de103cb9"} err="failed to get container status \"b4f806c89ad88eec04f8bada8a30bee37564aa0d4a4552f219a31664de103cb9\": rpc error: code = NotFound desc = an error occurred when try to find container \"b4f806c89ad88eec04f8bada8a30bee37564aa0d4a4552f219a31664de103cb9\": not found" Jan 20 00:45:32.818911 kubelet[1775]: I0120 00:45:32.818886 1775 scope.go:117] "RemoveContainer" containerID="ec30e075cf7c9217e38420b2940da2b15b42fa6fe9fafbb83551965d2a946d3d" Jan 20 00:45:32.819102 containerd[1457]: time="2026-01-20T00:45:32.819066106Z" level=error msg="ContainerStatus for \"ec30e075cf7c9217e38420b2940da2b15b42fa6fe9fafbb83551965d2a946d3d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ec30e075cf7c9217e38420b2940da2b15b42fa6fe9fafbb83551965d2a946d3d\": not found" Jan 20 00:45:32.819289 kubelet[1775]: E0120 00:45:32.819248 1775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ec30e075cf7c9217e38420b2940da2b15b42fa6fe9fafbb83551965d2a946d3d\": not found" containerID="ec30e075cf7c9217e38420b2940da2b15b42fa6fe9fafbb83551965d2a946d3d" Jan 20 00:45:32.819395 kubelet[1775]: I0120 00:45:32.819299 1775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ec30e075cf7c9217e38420b2940da2b15b42fa6fe9fafbb83551965d2a946d3d"} err="failed to get container status \"ec30e075cf7c9217e38420b2940da2b15b42fa6fe9fafbb83551965d2a946d3d\": rpc error: code = NotFound desc = an error occurred when try to find container \"ec30e075cf7c9217e38420b2940da2b15b42fa6fe9fafbb83551965d2a946d3d\": not found" Jan 20 
00:45:32.927192 kubelet[1775]: E0120 00:45:32.927020 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:45:33.093299 systemd[1]: var-lib-kubelet-pods-f592f3a2\x2d5321\x2d4edb\x2db858\x2d4d24a7f109b3-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 20 00:45:33.093490 systemd[1]: var-lib-kubelet-pods-f592f3a2\x2d5321\x2d4edb\x2db858\x2d4d24a7f109b3-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 20 00:45:33.726283 kubelet[1775]: I0120 00:45:33.726060 1775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f592f3a2-5321-4edb-b858-4d24a7f109b3" path="/var/lib/kubelet/pods/f592f3a2-5321-4edb-b858-4d24a7f109b3/volumes" Jan 20 00:45:33.928236 kubelet[1775]: E0120 00:45:33.927778 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:45:34.929285 kubelet[1775]: E0120 00:45:34.929088 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:45:35.055077 systemd[1]: Created slice kubepods-burstable-pode7a093a9_a656_4500_b2f2_d7787c7431f0.slice - libcontainer container kubepods-burstable-pode7a093a9_a656_4500_b2f2_d7787c7431f0.slice. Jan 20 00:45:35.088084 systemd[1]: Created slice kubepods-besteffort-pod865b3a10_f6b8_4d0a_b372_85ae16759b89.slice - libcontainer container kubepods-besteffort-pod865b3a10_f6b8_4d0a_b372_85ae16759b89.slice. 
Jan 20 00:45:35.133054 kubelet[1775]: I0120 00:45:35.132423 1775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e7a093a9-a656-4500-b2f2-d7787c7431f0-host-proc-sys-kernel\") pod \"cilium-l85lt\" (UID: \"e7a093a9-a656-4500-b2f2-d7787c7431f0\") " pod="kube-system/cilium-l85lt"
Jan 20 00:45:35.133054 kubelet[1775]: I0120 00:45:35.132552 1775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e7a093a9-a656-4500-b2f2-d7787c7431f0-bpf-maps\") pod \"cilium-l85lt\" (UID: \"e7a093a9-a656-4500-b2f2-d7787c7431f0\") " pod="kube-system/cilium-l85lt"
Jan 20 00:45:35.133054 kubelet[1775]: I0120 00:45:35.132578 1775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e7a093a9-a656-4500-b2f2-d7787c7431f0-etc-cni-netd\") pod \"cilium-l85lt\" (UID: \"e7a093a9-a656-4500-b2f2-d7787c7431f0\") " pod="kube-system/cilium-l85lt"
Jan 20 00:45:35.133054 kubelet[1775]: I0120 00:45:35.132603 1775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e7a093a9-a656-4500-b2f2-d7787c7431f0-cilium-config-path\") pod \"cilium-l85lt\" (UID: \"e7a093a9-a656-4500-b2f2-d7787c7431f0\") " pod="kube-system/cilium-l85lt"
Jan 20 00:45:35.133054 kubelet[1775]: I0120 00:45:35.132622 1775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e7a093a9-a656-4500-b2f2-d7787c7431f0-cilium-ipsec-secrets\") pod \"cilium-l85lt\" (UID: \"e7a093a9-a656-4500-b2f2-d7787c7431f0\") " pod="kube-system/cilium-l85lt"
Jan 20 00:45:35.133401 kubelet[1775]: I0120 00:45:35.132643 1775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgf52\" (UniqueName: \"kubernetes.io/projected/e7a093a9-a656-4500-b2f2-d7787c7431f0-kube-api-access-jgf52\") pod \"cilium-l85lt\" (UID: \"e7a093a9-a656-4500-b2f2-d7787c7431f0\") " pod="kube-system/cilium-l85lt"
Jan 20 00:45:35.133401 kubelet[1775]: I0120 00:45:35.132665 1775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xw8f\" (UniqueName: \"kubernetes.io/projected/865b3a10-f6b8-4d0a-b372-85ae16759b89-kube-api-access-4xw8f\") pod \"cilium-operator-6f9c7c5859-cgf9v\" (UID: \"865b3a10-f6b8-4d0a-b372-85ae16759b89\") " pod="kube-system/cilium-operator-6f9c7c5859-cgf9v"
Jan 20 00:45:35.133401 kubelet[1775]: I0120 00:45:35.132945 1775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e7a093a9-a656-4500-b2f2-d7787c7431f0-cilium-cgroup\") pod \"cilium-l85lt\" (UID: \"e7a093a9-a656-4500-b2f2-d7787c7431f0\") " pod="kube-system/cilium-l85lt"
Jan 20 00:45:35.133401 kubelet[1775]: I0120 00:45:35.132977 1775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e7a093a9-a656-4500-b2f2-d7787c7431f0-cni-path\") pod \"cilium-l85lt\" (UID: \"e7a093a9-a656-4500-b2f2-d7787c7431f0\") " pod="kube-system/cilium-l85lt"
Jan 20 00:45:35.133401 kubelet[1775]: I0120 00:45:35.132997 1775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e7a093a9-a656-4500-b2f2-d7787c7431f0-xtables-lock\") pod \"cilium-l85lt\" (UID: \"e7a093a9-a656-4500-b2f2-d7787c7431f0\") " pod="kube-system/cilium-l85lt"
Jan 20 00:45:35.133578 kubelet[1775]: I0120 00:45:35.133016 1775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e7a093a9-a656-4500-b2f2-d7787c7431f0-hubble-tls\") pod \"cilium-l85lt\" (UID: \"e7a093a9-a656-4500-b2f2-d7787c7431f0\") " pod="kube-system/cilium-l85lt"
Jan 20 00:45:35.133578 kubelet[1775]: I0120 00:45:35.133036 1775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e7a093a9-a656-4500-b2f2-d7787c7431f0-hostproc\") pod \"cilium-l85lt\" (UID: \"e7a093a9-a656-4500-b2f2-d7787c7431f0\") " pod="kube-system/cilium-l85lt"
Jan 20 00:45:35.133578 kubelet[1775]: I0120 00:45:35.133048 1775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e7a093a9-a656-4500-b2f2-d7787c7431f0-clustermesh-secrets\") pod \"cilium-l85lt\" (UID: \"e7a093a9-a656-4500-b2f2-d7787c7431f0\") " pod="kube-system/cilium-l85lt"
Jan 20 00:45:35.133578 kubelet[1775]: I0120 00:45:35.133078 1775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/865b3a10-f6b8-4d0a-b372-85ae16759b89-cilium-config-path\") pod \"cilium-operator-6f9c7c5859-cgf9v\" (UID: \"865b3a10-f6b8-4d0a-b372-85ae16759b89\") " pod="kube-system/cilium-operator-6f9c7c5859-cgf9v"
Jan 20 00:45:35.133578 kubelet[1775]: I0120 00:45:35.133094 1775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e7a093a9-a656-4500-b2f2-d7787c7431f0-cilium-run\") pod \"cilium-l85lt\" (UID: \"e7a093a9-a656-4500-b2f2-d7787c7431f0\") " pod="kube-system/cilium-l85lt"
Jan 20 00:45:35.134020 kubelet[1775]: I0120 00:45:35.133107 1775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e7a093a9-a656-4500-b2f2-d7787c7431f0-lib-modules\") pod \"cilium-l85lt\" (UID: \"e7a093a9-a656-4500-b2f2-d7787c7431f0\") " pod="kube-system/cilium-l85lt"
Jan 20 00:45:35.134020 kubelet[1775]: I0120 00:45:35.133125 1775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e7a093a9-a656-4500-b2f2-d7787c7431f0-host-proc-sys-net\") pod \"cilium-l85lt\" (UID: \"e7a093a9-a656-4500-b2f2-d7787c7431f0\") " pod="kube-system/cilium-l85lt"
Jan 20 00:45:35.389435 kubelet[1775]: E0120 00:45:35.389348 1775 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 00:45:35.390592 containerd[1457]: time="2026-01-20T00:45:35.390519140Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-l85lt,Uid:e7a093a9-a656-4500-b2f2-d7787c7431f0,Namespace:kube-system,Attempt:0,}"
Jan 20 00:45:35.395120 kubelet[1775]: E0120 00:45:35.395001 1775 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 00:45:35.396027 containerd[1457]: time="2026-01-20T00:45:35.395883451Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-cgf9v,Uid:865b3a10-f6b8-4d0a-b372-85ae16759b89,Namespace:kube-system,Attempt:0,}"
Jan 20 00:45:35.452154 containerd[1457]: time="2026-01-20T00:45:35.451908546Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 20 00:45:35.452154 containerd[1457]: time="2026-01-20T00:45:35.451988295Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 20 00:45:35.452154 containerd[1457]: time="2026-01-20T00:45:35.451999616Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 20 00:45:35.452154 containerd[1457]: time="2026-01-20T00:45:35.452086499Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 20 00:45:35.463199 containerd[1457]: time="2026-01-20T00:45:35.463027502Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 20 00:45:35.464140 containerd[1457]: time="2026-01-20T00:45:35.463980005Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 20 00:45:35.464140 containerd[1457]: time="2026-01-20T00:45:35.464068761Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 20 00:45:35.464381 containerd[1457]: time="2026-01-20T00:45:35.464204544Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 20 00:45:35.487247 systemd[1]: Started cri-containerd-c0e9280d113bd079bc1fb0f72129458d0cef7b6503a5c843fdeb42c565479f9f.scope - libcontainer container c0e9280d113bd079bc1fb0f72129458d0cef7b6503a5c843fdeb42c565479f9f.
Jan 20 00:45:35.492981 systemd[1]: Started cri-containerd-bb44f465f110a0d43adc4e8616052dab22eda1f6736d10d3ae314ec63e1bc71c.scope - libcontainer container bb44f465f110a0d43adc4e8616052dab22eda1f6736d10d3ae314ec63e1bc71c.
Jan 20 00:45:35.529971 containerd[1457]: time="2026-01-20T00:45:35.529124094Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-l85lt,Uid:e7a093a9-a656-4500-b2f2-d7787c7431f0,Namespace:kube-system,Attempt:0,} returns sandbox id \"c0e9280d113bd079bc1fb0f72129458d0cef7b6503a5c843fdeb42c565479f9f\"" Jan 20 00:45:35.531183 kubelet[1775]: E0120 00:45:35.531137 1775 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:45:35.549951 containerd[1457]: time="2026-01-20T00:45:35.549348665Z" level=info msg="CreateContainer within sandbox \"c0e9280d113bd079bc1fb0f72129458d0cef7b6503a5c843fdeb42c565479f9f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 20 00:45:35.557306 kubelet[1775]: E0120 00:45:35.557269 1775 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:45:35.566892 containerd[1457]: time="2026-01-20T00:45:35.566747137Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-cgf9v,Uid:865b3a10-f6b8-4d0a-b372-85ae16759b89,Namespace:kube-system,Attempt:0,} returns sandbox id \"bb44f465f110a0d43adc4e8616052dab22eda1f6736d10d3ae314ec63e1bc71c\"" Jan 20 00:45:35.568153 kubelet[1775]: E0120 00:45:35.568118 1775 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:45:35.569755 containerd[1457]: time="2026-01-20T00:45:35.569616551Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 20 00:45:35.571039 containerd[1457]: time="2026-01-20T00:45:35.570962625Z" level=info msg="CreateContainer within sandbox \"c0e9280d113bd079bc1fb0f72129458d0cef7b6503a5c843fdeb42c565479f9f\" for 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"6bafcfd5a04328c8ae7e818479e2413d2f248ff92356a0bb42747c9931b2a628\"" Jan 20 00:45:35.571876 containerd[1457]: time="2026-01-20T00:45:35.571549624Z" level=info msg="StartContainer for \"6bafcfd5a04328c8ae7e818479e2413d2f248ff92356a0bb42747c9931b2a628\"" Jan 20 00:45:35.595170 containerd[1457]: time="2026-01-20T00:45:35.595101263Z" level=info msg="StopPodSandbox for \"19c68dc05b991880034f8d4cb80e7468ffef0509da4c92928a68d9f2e1039c52\"" Jan 20 00:45:35.595280 containerd[1457]: time="2026-01-20T00:45:35.595239922Z" level=info msg="TearDown network for sandbox \"19c68dc05b991880034f8d4cb80e7468ffef0509da4c92928a68d9f2e1039c52\" successfully" Jan 20 00:45:35.595280 containerd[1457]: time="2026-01-20T00:45:35.595257184Z" level=info msg="StopPodSandbox for \"19c68dc05b991880034f8d4cb80e7468ffef0509da4c92928a68d9f2e1039c52\" returns successfully" Jan 20 00:45:35.596178 containerd[1457]: time="2026-01-20T00:45:35.596120734Z" level=info msg="RemovePodSandbox for \"19c68dc05b991880034f8d4cb80e7468ffef0509da4c92928a68d9f2e1039c52\"" Jan 20 00:45:35.596243 containerd[1457]: time="2026-01-20T00:45:35.596179133Z" level=info msg="Forcibly stopping sandbox \"19c68dc05b991880034f8d4cb80e7468ffef0509da4c92928a68d9f2e1039c52\"" Jan 20 00:45:35.596243 containerd[1457]: time="2026-01-20T00:45:35.596229497Z" level=info msg="TearDown network for sandbox \"19c68dc05b991880034f8d4cb80e7468ffef0509da4c92928a68d9f2e1039c52\" successfully" Jan 20 00:45:35.604328 containerd[1457]: time="2026-01-20T00:45:35.602484289Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"19c68dc05b991880034f8d4cb80e7468ffef0509da4c92928a68d9f2e1039c52\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 20 00:45:35.604328 containerd[1457]: time="2026-01-20T00:45:35.602532980Z" level=info msg="RemovePodSandbox \"19c68dc05b991880034f8d4cb80e7468ffef0509da4c92928a68d9f2e1039c52\" returns successfully" Jan 20 00:45:35.621262 systemd[1]: Started cri-containerd-6bafcfd5a04328c8ae7e818479e2413d2f248ff92356a0bb42747c9931b2a628.scope - libcontainer container 6bafcfd5a04328c8ae7e818479e2413d2f248ff92356a0bb42747c9931b2a628. Jan 20 00:45:35.692902 containerd[1457]: time="2026-01-20T00:45:35.691440650Z" level=info msg="StartContainer for \"6bafcfd5a04328c8ae7e818479e2413d2f248ff92356a0bb42747c9931b2a628\" returns successfully" Jan 20 00:45:35.702048 systemd[1]: cri-containerd-6bafcfd5a04328c8ae7e818479e2413d2f248ff92356a0bb42747c9931b2a628.scope: Deactivated successfully. Jan 20 00:45:35.754156 containerd[1457]: time="2026-01-20T00:45:35.753993899Z" level=info msg="shim disconnected" id=6bafcfd5a04328c8ae7e818479e2413d2f248ff92356a0bb42747c9931b2a628 namespace=k8s.io Jan 20 00:45:35.754156 containerd[1457]: time="2026-01-20T00:45:35.754071844Z" level=warning msg="cleaning up after shim disconnected" id=6bafcfd5a04328c8ae7e818479e2413d2f248ff92356a0bb42747c9931b2a628 namespace=k8s.io Jan 20 00:45:35.754156 containerd[1457]: time="2026-01-20T00:45:35.754087333Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 20 00:45:35.791585 kubelet[1775]: E0120 00:45:35.791481 1775 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:45:35.801553 containerd[1457]: time="2026-01-20T00:45:35.801225649Z" level=info msg="CreateContainer within sandbox \"c0e9280d113bd079bc1fb0f72129458d0cef7b6503a5c843fdeb42c565479f9f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 20 00:45:35.827748 containerd[1457]: time="2026-01-20T00:45:35.827594073Z" level=info msg="CreateContainer within sandbox 
\"c0e9280d113bd079bc1fb0f72129458d0cef7b6503a5c843fdeb42c565479f9f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"fe558030bf3c9a378e04a0f54b5fd3ce357a15f82a38b582d062e42b3959df71\"" Jan 20 00:45:35.828787 containerd[1457]: time="2026-01-20T00:45:35.828609092Z" level=info msg="StartContainer for \"fe558030bf3c9a378e04a0f54b5fd3ce357a15f82a38b582d062e42b3959df71\"" Jan 20 00:45:35.881078 systemd[1]: Started cri-containerd-fe558030bf3c9a378e04a0f54b5fd3ce357a15f82a38b582d062e42b3959df71.scope - libcontainer container fe558030bf3c9a378e04a0f54b5fd3ce357a15f82a38b582d062e42b3959df71. Jan 20 00:45:35.904615 kubelet[1775]: E0120 00:45:35.904461 1775 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 00:45:35.925171 containerd[1457]: time="2026-01-20T00:45:35.925042358Z" level=info msg="StartContainer for \"fe558030bf3c9a378e04a0f54b5fd3ce357a15f82a38b582d062e42b3959df71\" returns successfully" Jan 20 00:45:35.930541 kubelet[1775]: E0120 00:45:35.930421 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:45:35.943990 systemd[1]: cri-containerd-fe558030bf3c9a378e04a0f54b5fd3ce357a15f82a38b582d062e42b3959df71.scope: Deactivated successfully. 
Jan 20 00:45:35.980411 containerd[1457]: time="2026-01-20T00:45:35.980310070Z" level=info msg="shim disconnected" id=fe558030bf3c9a378e04a0f54b5fd3ce357a15f82a38b582d062e42b3959df71 namespace=k8s.io Jan 20 00:45:35.980411 containerd[1457]: time="2026-01-20T00:45:35.980404355Z" level=warning msg="cleaning up after shim disconnected" id=fe558030bf3c9a378e04a0f54b5fd3ce357a15f82a38b582d062e42b3959df71 namespace=k8s.io Jan 20 00:45:35.980786 containerd[1457]: time="2026-01-20T00:45:35.980423070Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 20 00:45:36.520266 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2502355991.mount: Deactivated successfully. Jan 20 00:45:36.802310 kubelet[1775]: E0120 00:45:36.801275 1775 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:45:36.811007 containerd[1457]: time="2026-01-20T00:45:36.810872035Z" level=info msg="CreateContainer within sandbox \"c0e9280d113bd079bc1fb0f72129458d0cef7b6503a5c843fdeb42c565479f9f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 20 00:45:36.851224 containerd[1457]: time="2026-01-20T00:45:36.851122109Z" level=info msg="CreateContainer within sandbox \"c0e9280d113bd079bc1fb0f72129458d0cef7b6503a5c843fdeb42c565479f9f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"abba147c3218d5bc368c0a76e2f0437567edcd73d19ba402557096b550ee83ae\"" Jan 20 00:45:36.852492 containerd[1457]: time="2026-01-20T00:45:36.852391600Z" level=info msg="StartContainer for \"abba147c3218d5bc368c0a76e2f0437567edcd73d19ba402557096b550ee83ae\"" Jan 20 00:45:36.903129 systemd[1]: Started cri-containerd-abba147c3218d5bc368c0a76e2f0437567edcd73d19ba402557096b550ee83ae.scope - libcontainer container abba147c3218d5bc368c0a76e2f0437567edcd73d19ba402557096b550ee83ae. 
Jan 20 00:45:36.931193 kubelet[1775]: E0120 00:45:36.931069 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:45:36.961044 systemd[1]: cri-containerd-abba147c3218d5bc368c0a76e2f0437567edcd73d19ba402557096b550ee83ae.scope: Deactivated successfully. Jan 20 00:45:36.967286 containerd[1457]: time="2026-01-20T00:45:36.967096886Z" level=info msg="StartContainer for \"abba147c3218d5bc368c0a76e2f0437567edcd73d19ba402557096b550ee83ae\" returns successfully" Jan 20 00:45:37.001907 kubelet[1775]: I0120 00:45:37.001747 1775 setters.go:543] "Node became not ready" node="10.0.0.96" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T00:45:37Z","lastTransitionTime":"2026-01-20T00:45:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jan 20 00:45:37.051641 containerd[1457]: time="2026-01-20T00:45:37.051550448Z" level=info msg="shim disconnected" id=abba147c3218d5bc368c0a76e2f0437567edcd73d19ba402557096b550ee83ae namespace=k8s.io Jan 20 00:45:37.051923 containerd[1457]: time="2026-01-20T00:45:37.051647499Z" level=warning msg="cleaning up after shim disconnected" id=abba147c3218d5bc368c0a76e2f0437567edcd73d19ba402557096b550ee83ae namespace=k8s.io Jan 20 00:45:37.051923 containerd[1457]: time="2026-01-20T00:45:37.051660242Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 20 00:45:37.519525 containerd[1457]: time="2026-01-20T00:45:37.519257313Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:45:37.520338 containerd[1457]: time="2026-01-20T00:45:37.520298810Z" level=info msg="stop pulling image 
quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jan 20 00:45:37.521954 containerd[1457]: time="2026-01-20T00:45:37.521776377Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:45:37.524077 containerd[1457]: time="2026-01-20T00:45:37.523987208Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 1.954332665s" Jan 20 00:45:37.524077 containerd[1457]: time="2026-01-20T00:45:37.524038774Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jan 20 00:45:37.547110 containerd[1457]: time="2026-01-20T00:45:37.546968552Z" level=info msg="CreateContainer within sandbox \"bb44f465f110a0d43adc4e8616052dab22eda1f6736d10d3ae314ec63e1bc71c\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 20 00:45:37.569492 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3276496829.mount: Deactivated successfully. 
Jan 20 00:45:37.573599 containerd[1457]: time="2026-01-20T00:45:37.573488409Z" level=info msg="CreateContainer within sandbox \"bb44f465f110a0d43adc4e8616052dab22eda1f6736d10d3ae314ec63e1bc71c\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"816e154a8e11edf21821140bd54bf2d0f713ff8f77276237c36380c023b8a661\"" Jan 20 00:45:37.574754 containerd[1457]: time="2026-01-20T00:45:37.574642018Z" level=info msg="StartContainer for \"816e154a8e11edf21821140bd54bf2d0f713ff8f77276237c36380c023b8a661\"" Jan 20 00:45:37.623215 systemd[1]: Started cri-containerd-816e154a8e11edf21821140bd54bf2d0f713ff8f77276237c36380c023b8a661.scope - libcontainer container 816e154a8e11edf21821140bd54bf2d0f713ff8f77276237c36380c023b8a661. Jan 20 00:45:37.714956 containerd[1457]: time="2026-01-20T00:45:37.714474127Z" level=info msg="StartContainer for \"816e154a8e11edf21821140bd54bf2d0f713ff8f77276237c36380c023b8a661\" returns successfully" Jan 20 00:45:37.812092 kubelet[1775]: E0120 00:45:37.811033 1775 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:45:37.819100 kubelet[1775]: E0120 00:45:37.817785 1775 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:45:37.825046 containerd[1457]: time="2026-01-20T00:45:37.824945876Z" level=info msg="CreateContainer within sandbox \"c0e9280d113bd079bc1fb0f72129458d0cef7b6503a5c843fdeb42c565479f9f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 20 00:45:37.831353 kubelet[1775]: I0120 00:45:37.831204 1775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6f9c7c5859-cgf9v" podStartSLOduration=1.8731304720000002 podStartE2EDuration="3.831186632s" podCreationTimestamp="2026-01-20 00:45:34 +0000 
UTC" firstStartedPulling="2026-01-20 00:45:35.569294541 +0000 UTC m=+60.813178580" lastFinishedPulling="2026-01-20 00:45:37.527350691 +0000 UTC m=+62.771234740" observedRunningTime="2026-01-20 00:45:37.830511507 +0000 UTC m=+63.074395587" watchObservedRunningTime="2026-01-20 00:45:37.831186632 +0000 UTC m=+63.075070721" Jan 20 00:45:37.853103 containerd[1457]: time="2026-01-20T00:45:37.852996543Z" level=info msg="CreateContainer within sandbox \"c0e9280d113bd079bc1fb0f72129458d0cef7b6503a5c843fdeb42c565479f9f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"88c7d2d561a1304511daf88f3cd6aea760f7520d19db3974e7df020451923616\"" Jan 20 00:45:37.854001 containerd[1457]: time="2026-01-20T00:45:37.853945170Z" level=info msg="StartContainer for \"88c7d2d561a1304511daf88f3cd6aea760f7520d19db3974e7df020451923616\"" Jan 20 00:45:37.907140 systemd[1]: Started cri-containerd-88c7d2d561a1304511daf88f3cd6aea760f7520d19db3974e7df020451923616.scope - libcontainer container 88c7d2d561a1304511daf88f3cd6aea760f7520d19db3974e7df020451923616. Jan 20 00:45:37.932109 kubelet[1775]: E0120 00:45:37.932049 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:45:37.946527 systemd[1]: cri-containerd-88c7d2d561a1304511daf88f3cd6aea760f7520d19db3974e7df020451923616.scope: Deactivated successfully. 
Jan 20 00:45:37.952173 containerd[1457]: time="2026-01-20T00:45:37.947400610Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode7a093a9_a656_4500_b2f2_d7787c7431f0.slice/cri-containerd-88c7d2d561a1304511daf88f3cd6aea760f7520d19db3974e7df020451923616.scope/memory.events\": no such file or directory" Jan 20 00:45:37.955669 containerd[1457]: time="2026-01-20T00:45:37.955612498Z" level=info msg="StartContainer for \"88c7d2d561a1304511daf88f3cd6aea760f7520d19db3974e7df020451923616\" returns successfully" Jan 20 00:45:37.985207 containerd[1457]: time="2026-01-20T00:45:37.985079446Z" level=info msg="shim disconnected" id=88c7d2d561a1304511daf88f3cd6aea760f7520d19db3974e7df020451923616 namespace=k8s.io Jan 20 00:45:37.985207 containerd[1457]: time="2026-01-20T00:45:37.985148486Z" level=warning msg="cleaning up after shim disconnected" id=88c7d2d561a1304511daf88f3cd6aea760f7520d19db3974e7df020451923616 namespace=k8s.io Jan 20 00:45:37.985207 containerd[1457]: time="2026-01-20T00:45:37.985157803Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 20 00:45:38.829632 kubelet[1775]: E0120 00:45:38.829451 1775 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:45:38.829632 kubelet[1775]: E0120 00:45:38.829483 1775 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:45:38.846129 containerd[1457]: time="2026-01-20T00:45:38.845985222Z" level=info msg="CreateContainer within sandbox \"c0e9280d113bd079bc1fb0f72129458d0cef7b6503a5c843fdeb42c565479f9f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 20 00:45:38.881602 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount3965158614.mount: Deactivated successfully. Jan 20 00:45:38.888397 containerd[1457]: time="2026-01-20T00:45:38.888196679Z" level=info msg="CreateContainer within sandbox \"c0e9280d113bd079bc1fb0f72129458d0cef7b6503a5c843fdeb42c565479f9f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"1621e94606d9966b886310e8f993cda48b3dbfe6448ecbf224da7a414211516d\"" Jan 20 00:45:38.889294 containerd[1457]: time="2026-01-20T00:45:38.889222166Z" level=info msg="StartContainer for \"1621e94606d9966b886310e8f993cda48b3dbfe6448ecbf224da7a414211516d\"" Jan 20 00:45:38.933289 kubelet[1775]: E0120 00:45:38.933054 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:45:38.941588 systemd[1]: Started cri-containerd-1621e94606d9966b886310e8f993cda48b3dbfe6448ecbf224da7a414211516d.scope - libcontainer container 1621e94606d9966b886310e8f993cda48b3dbfe6448ecbf224da7a414211516d. 
Jan 20 00:45:38.989445 containerd[1457]: time="2026-01-20T00:45:38.989348999Z" level=info msg="StartContainer for \"1621e94606d9966b886310e8f993cda48b3dbfe6448ecbf224da7a414211516d\" returns successfully" Jan 20 00:45:39.573922 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Jan 20 00:45:39.854107 kubelet[1775]: E0120 00:45:39.853498 1775 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:45:39.943327 kubelet[1775]: E0120 00:45:39.943259 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:45:40.950220 kubelet[1775]: E0120 00:45:40.949967 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:45:41.386291 kubelet[1775]: E0120 00:45:41.386198 1775 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:45:41.951129 kubelet[1775]: E0120 00:45:41.951072 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:45:42.952290 kubelet[1775]: E0120 00:45:42.952127 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:45:43.494007 systemd-networkd[1391]: lxc_health: Link UP Jan 20 00:45:43.502369 systemd-networkd[1391]: lxc_health: Gained carrier Jan 20 00:45:43.719611 kubelet[1775]: E0120 00:45:43.719455 1775 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:45:43.837107 systemd[1]: 
run-containerd-runc-k8s.io-1621e94606d9966b886310e8f993cda48b3dbfe6448ecbf224da7a414211516d-runc.dE1LMJ.mount: Deactivated successfully. Jan 20 00:45:43.952883 kubelet[1775]: E0120 00:45:43.952781 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:45:44.953490 kubelet[1775]: E0120 00:45:44.953339 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:45:44.959146 systemd-networkd[1391]: lxc_health: Gained IPv6LL Jan 20 00:45:45.387561 kubelet[1775]: E0120 00:45:45.387460 1775 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:45:45.413861 kubelet[1775]: I0120 00:45:45.413690 1775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-l85lt" podStartSLOduration=11.413671007 podStartE2EDuration="11.413671007s" podCreationTimestamp="2026-01-20 00:45:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 00:45:39.899152286 +0000 UTC m=+65.143036345" watchObservedRunningTime="2026-01-20 00:45:45.413671007 +0000 UTC m=+70.657555046" Jan 20 00:45:45.879684 kubelet[1775]: E0120 00:45:45.879417 1775 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:45:45.953664 kubelet[1775]: E0120 00:45:45.953583 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:45:46.882351 kubelet[1775]: E0120 00:45:46.882235 1775 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 
1.0.0.1 8.8.8.8" Jan 20 00:45:46.954385 kubelet[1775]: E0120 00:45:46.954245 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:45:47.955991 kubelet[1775]: E0120 00:45:47.955559 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:45:48.957257 kubelet[1775]: E0120 00:45:48.956968 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:45:49.958090 kubelet[1775]: E0120 00:45:49.957948 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:45:50.959244 kubelet[1775]: E0120 00:45:50.958941 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:45:51.960447 kubelet[1775]: E0120 00:45:51.960308 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"