Mar 2 13:20:09.563174 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Mar 2 11:01:37 -00 2026
Mar 2 13:20:09.563216 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=5b1ae8ad250cf3ddd00dc7c63ded260e5b82ee29f2cdc578a6ade4cab26e6a0b
Mar 2 13:20:09.563235 kernel: BIOS-provided physical RAM map:
Mar 2 13:20:09.563243 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Mar 2 13:20:09.563252 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Mar 2 13:20:09.563325 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Mar 2 13:20:09.563338 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Mar 2 13:20:09.563347 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Mar 2 13:20:09.563354 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Mar 2 13:20:09.563370 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Mar 2 13:20:09.563380 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar 2 13:20:09.563391 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Mar 2 13:20:09.563401 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Mar 2 13:20:09.563412 kernel: NX (Execute Disable) protection: active
Mar 2 13:20:09.563425 kernel: APIC: Static calls initialized
Mar 2 13:20:09.563625 kernel: SMBIOS 2.8 present.
Mar 2 13:20:09.563638 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Mar 2 13:20:09.563648 kernel: Hypervisor detected: KVM
Mar 2 13:20:09.563658 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 2 13:20:09.563667 kernel: kvm-clock: using sched offset of 6464054815 cycles
Mar 2 13:20:09.563886 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 2 13:20:09.563899 kernel: tsc: Detected 2445.426 MHz processor
Mar 2 13:20:09.563909 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 2 13:20:09.563920 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 2 13:20:09.563937 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Mar 2 13:20:09.563947 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Mar 2 13:20:09.563958 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 2 13:20:09.563967 kernel: Using GB pages for direct mapping
Mar 2 13:20:09.563978 kernel: ACPI: Early table checksum verification disabled
Mar 2 13:20:09.563988 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Mar 2 13:20:09.563998 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 2 13:20:09.564008 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 2 13:20:09.564018 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 2 13:20:09.564032 kernel: ACPI: FACS 0x000000009CFE0000 000040
Mar 2 13:20:09.564042 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 2 13:20:09.564053 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 2 13:20:09.564063 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 2 13:20:09.564190 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 2 13:20:09.564204 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Mar 2 13:20:09.564216 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Mar 2 13:20:09.564235 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Mar 2 13:20:09.564252 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Mar 2 13:20:09.564324 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Mar 2 13:20:09.564335 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Mar 2 13:20:09.564345 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Mar 2 13:20:09.564358 kernel: No NUMA configuration found
Mar 2 13:20:09.564369 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Mar 2 13:20:09.564385 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Mar 2 13:20:09.564397 kernel: Zone ranges:
Mar 2 13:20:09.564407 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 2 13:20:09.564417 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Mar 2 13:20:09.564428 kernel: Normal empty
Mar 2 13:20:09.564438 kernel: Movable zone start for each node
Mar 2 13:20:09.564450 kernel: Early memory node ranges
Mar 2 13:20:09.564460 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Mar 2 13:20:09.564472 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Mar 2 13:20:09.564481 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Mar 2 13:20:09.564498 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 2 13:20:09.564509 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Mar 2 13:20:09.564519 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Mar 2 13:20:09.564530 kernel: ACPI: PM-Timer IO Port: 0x608
Mar 2 13:20:09.564540 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 2 13:20:09.564551 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 2 13:20:09.564562 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Mar 2 13:20:09.564572 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 2 13:20:09.564582 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 2 13:20:09.564596 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 2 13:20:09.564606 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 2 13:20:09.564616 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 2 13:20:09.564627 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Mar 2 13:20:09.564637 kernel: TSC deadline timer available
Mar 2 13:20:09.564647 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Mar 2 13:20:09.564658 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Mar 2 13:20:09.564669 kernel: kvm-guest: KVM setup pv remote TLB flush
Mar 2 13:20:09.564679 kernel: kvm-guest: setup PV sched yield
Mar 2 13:20:09.564692 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Mar 2 13:20:09.564704 kernel: Booting paravirtualized kernel on KVM
Mar 2 13:20:09.564717 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 2 13:20:09.564726 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Mar 2 13:20:09.564736 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288
Mar 2 13:20:09.564748 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152
Mar 2 13:20:09.564758 kernel: pcpu-alloc: [0] 0 1 2 3
Mar 2 13:20:09.564768 kernel: kvm-guest: PV spinlocks enabled
Mar 2 13:20:09.564780 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 2 13:20:09.564918 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=5b1ae8ad250cf3ddd00dc7c63ded260e5b82ee29f2cdc578a6ade4cab26e6a0b
Mar 2 13:20:09.564931 kernel: random: crng init done
Mar 2 13:20:09.564941 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 2 13:20:09.564952 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 2 13:20:09.564962 kernel: Fallback order for Node 0: 0
Mar 2 13:20:09.564973 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Mar 2 13:20:09.564983 kernel: Policy zone: DMA32
Mar 2 13:20:09.564994 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 2 13:20:09.565005 kernel: Memory: 2434608K/2571752K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 136884K reserved, 0K cma-reserved)
Mar 2 13:20:09.565020 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Mar 2 13:20:09.565030 kernel: ftrace: allocating 37996 entries in 149 pages
Mar 2 13:20:09.565041 kernel: ftrace: allocated 149 pages with 4 groups
Mar 2 13:20:09.565052 kernel: Dynamic Preempt: voluntary
Mar 2 13:20:09.565062 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 2 13:20:09.565073 kernel: rcu: RCU event tracing is enabled.
Mar 2 13:20:09.565084 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Mar 2 13:20:09.565094 kernel: Trampoline variant of Tasks RCU enabled.
Mar 2 13:20:09.565104 kernel: Rude variant of Tasks RCU enabled.
Mar 2 13:20:09.565122 kernel: Tracing variant of Tasks RCU enabled.
Mar 2 13:20:09.565132 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 2 13:20:09.565141 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Mar 2 13:20:09.565151 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Mar 2 13:20:09.565161 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 2 13:20:09.565171 kernel: Console: colour VGA+ 80x25
Mar 2 13:20:09.565181 kernel: printk: console [ttyS0] enabled
Mar 2 13:20:09.565192 kernel: ACPI: Core revision 20230628
Mar 2 13:20:09.565203 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Mar 2 13:20:09.565219 kernel: APIC: Switch to symmetric I/O mode setup
Mar 2 13:20:09.565231 kernel: x2apic enabled
Mar 2 13:20:09.565241 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 2 13:20:09.565251 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Mar 2 13:20:09.565314 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Mar 2 13:20:09.565328 kernel: kvm-guest: setup PV IPIs
Mar 2 13:20:09.565339 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Mar 2 13:20:09.565368 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Mar 2 13:20:09.565379 kernel: Calibrating delay loop (skipped) preset value.. 4890.85 BogoMIPS (lpj=2445426)
Mar 2 13:20:09.565388 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Mar 2 13:20:09.565399 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Mar 2 13:20:09.565415 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Mar 2 13:20:09.565427 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 2 13:20:09.565438 kernel: Spectre V2 : Mitigation: Retpolines
Mar 2 13:20:09.565449 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Mar 2 13:20:09.565460 kernel: Speculative Store Bypass: Vulnerable
Mar 2 13:20:09.565477 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Mar 2 13:20:09.565489 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Mar 2 13:20:09.565499 kernel: active return thunk: srso_alias_return_thunk
Mar 2 13:20:09.565510 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Mar 2 13:20:09.565522 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Mar 2 13:20:09.565533 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 2 13:20:09.565544 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 2 13:20:09.565555 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 2 13:20:09.565570 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 2 13:20:09.565581 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 2 13:20:09.565593 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Mar 2 13:20:09.565607 kernel: Freeing SMP alternatives memory: 32K
Mar 2 13:20:09.565617 kernel: pid_max: default: 32768 minimum: 301
Mar 2 13:20:09.565627 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 2 13:20:09.565639 kernel: landlock: Up and running.
Mar 2 13:20:09.565650 kernel: SELinux: Initializing.
Mar 2 13:20:09.565660 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 2 13:20:09.565674 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 2 13:20:09.565688 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Mar 2 13:20:09.565700 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 2 13:20:09.565712 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 2 13:20:09.565723 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 2 13:20:09.565736 kernel: Performance Events: PMU not available due to virtualization, using software events only.
Mar 2 13:20:09.565746 kernel: signal: max sigframe size: 1776
Mar 2 13:20:09.565758 kernel: rcu: Hierarchical SRCU implementation.
Mar 2 13:20:09.565769 kernel: rcu: Max phase no-delay instances is 400.
Mar 2 13:20:09.565784 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Mar 2 13:20:09.565895 kernel: smp: Bringing up secondary CPUs ...
Mar 2 13:20:09.565907 kernel: smpboot: x86: Booting SMP configuration:
Mar 2 13:20:09.565918 kernel: .... node #0, CPUs: #1 #2 #3
Mar 2 13:20:09.565930 kernel: smp: Brought up 1 node, 4 CPUs
Mar 2 13:20:09.565941 kernel: smpboot: Max logical packages: 1
Mar 2 13:20:09.565952 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS)
Mar 2 13:20:09.565963 kernel: devtmpfs: initialized
Mar 2 13:20:09.565973 kernel: x86/mm: Memory block size: 128MB
Mar 2 13:20:09.565990 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 2 13:20:09.566001 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Mar 2 13:20:09.566013 kernel: pinctrl core: initialized pinctrl subsystem
Mar 2 13:20:09.566024 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 2 13:20:09.566035 kernel: audit: initializing netlink subsys (disabled)
Mar 2 13:20:09.566046 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 2 13:20:09.566058 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 2 13:20:09.566070 kernel: audit: type=2000 audit(1772457605.132:1): state=initialized audit_enabled=0 res=1
Mar 2 13:20:09.566082 kernel: cpuidle: using governor menu
Mar 2 13:20:09.566098 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 2 13:20:09.566109 kernel: dca service started, version 1.12.1
Mar 2 13:20:09.566120 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Mar 2 13:20:09.566131 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Mar 2 13:20:09.566142 kernel: PCI: Using configuration type 1 for base access
Mar 2 13:20:09.566153 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 2 13:20:09.566164 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 2 13:20:09.566174 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 2 13:20:09.566185 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 2 13:20:09.566203 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 2 13:20:09.566213 kernel: ACPI: Added _OSI(Module Device)
Mar 2 13:20:09.566224 kernel: ACPI: Added _OSI(Processor Device)
Mar 2 13:20:09.566235 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 2 13:20:09.566245 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 2 13:20:09.566257 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Mar 2 13:20:09.566334 kernel: ACPI: Interpreter enabled
Mar 2 13:20:09.566345 kernel: ACPI: PM: (supports S0 S3 S5)
Mar 2 13:20:09.566356 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 2 13:20:09.566372 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 2 13:20:09.566383 kernel: PCI: Using E820 reservations for host bridge windows
Mar 2 13:20:09.566394 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Mar 2 13:20:09.566405 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 2 13:20:09.566713 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 2 13:20:09.567039 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Mar 2 13:20:09.567235 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Mar 2 13:20:09.567254 kernel: PCI host bridge to bus 0000:00
Mar 2 13:20:09.567510 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 2 13:20:09.567683 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 2 13:20:09.567964 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 2 13:20:09.568135 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Mar 2 13:20:09.568351 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Mar 2 13:20:09.568508 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Mar 2 13:20:09.568745 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 2 13:20:09.569103 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Mar 2 13:20:09.569439 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Mar 2 13:20:09.569627 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Mar 2 13:20:09.570007 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Mar 2 13:20:09.570191 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Mar 2 13:20:09.570435 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 2 13:20:09.570648 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Mar 2 13:20:09.571016 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Mar 2 13:20:09.571218 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Mar 2 13:20:09.571523 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Mar 2 13:20:09.571735 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Mar 2 13:20:09.572052 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Mar 2 13:20:09.572246 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Mar 2 13:20:09.572496 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Mar 2 13:20:09.572965 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Mar 2 13:20:09.573257 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Mar 2 13:20:09.573533 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Mar 2 13:20:09.573695 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Mar 2 13:20:09.573976 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Mar 2 13:20:09.574156 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Mar 2 13:20:09.574417 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Mar 2 13:20:09.574619 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Mar 2 13:20:09.574875 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Mar 2 13:20:09.575052 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Mar 2 13:20:09.575242 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Mar 2 13:20:09.575488 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Mar 2 13:20:09.575514 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 2 13:20:09.575526 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 2 13:20:09.575537 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 2 13:20:09.575547 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 2 13:20:09.575558 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Mar 2 13:20:09.575569 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Mar 2 13:20:09.575580 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Mar 2 13:20:09.575591 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Mar 2 13:20:09.575602 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Mar 2 13:20:09.575616 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Mar 2 13:20:09.575627 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Mar 2 13:20:09.575637 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Mar 2 13:20:09.575647 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Mar 2 13:20:09.575657 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Mar 2 13:20:09.575668 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Mar 2 13:20:09.575679 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Mar 2 13:20:09.575692 kernel: iommu: Default domain type: Translated
Mar 2 13:20:09.575705 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 2 13:20:09.575721 kernel: PCI: Using ACPI for IRQ routing
Mar 2 13:20:09.575731 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 2 13:20:09.575741 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Mar 2 13:20:09.575754 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Mar 2 13:20:09.576079 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Mar 2 13:20:09.576259 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Mar 2 13:20:09.576517 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 2 13:20:09.576536 kernel: vgaarb: loaded
Mar 2 13:20:09.576553 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Mar 2 13:20:09.576565 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Mar 2 13:20:09.576578 kernel: clocksource: Switched to clocksource kvm-clock
Mar 2 13:20:09.576588 kernel: VFS: Disk quotas dquot_6.6.0
Mar 2 13:20:09.576599 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 2 13:20:09.576612 kernel: pnp: PnP ACPI init
Mar 2 13:20:09.576910 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Mar 2 13:20:09.576932 kernel: pnp: PnP ACPI: found 6 devices
Mar 2 13:20:09.576944 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 2 13:20:09.576963 kernel: NET: Registered PF_INET protocol family
Mar 2 13:20:09.576975 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 2 13:20:09.576986 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 2 13:20:09.576997 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 2 13:20:09.577009 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 2 13:20:09.577021 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 2 13:20:09.577031 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 2 13:20:09.577043 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 2 13:20:09.577060 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 2 13:20:09.577072 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 2 13:20:09.577082 kernel: NET: Registered PF_XDP protocol family
Mar 2 13:20:09.577257 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 2 13:20:09.577499 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 2 13:20:09.577667 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 2 13:20:09.577937 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Mar 2 13:20:09.578101 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Mar 2 13:20:09.578338 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Mar 2 13:20:09.578365 kernel: PCI: CLS 0 bytes, default 64
Mar 2 13:20:09.578375 kernel: Initialise system trusted keyrings
Mar 2 13:20:09.578387 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 2 13:20:09.578399 kernel: Key type asymmetric registered
Mar 2 13:20:09.578411 kernel: Asymmetric key parser 'x509' registered
Mar 2 13:20:09.578422 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Mar 2 13:20:09.578434 kernel: io scheduler mq-deadline registered
Mar 2 13:20:09.578445 kernel: io scheduler kyber registered
Mar 2 13:20:09.578457 kernel: io scheduler bfq registered
Mar 2 13:20:09.578473 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 2 13:20:09.578484 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Mar 2 13:20:09.578496 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Mar 2 13:20:09.578508 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Mar 2 13:20:09.578519 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 2 13:20:09.578531 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 2 13:20:09.578543 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 2 13:20:09.578555 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 2 13:20:09.578566 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 2 13:20:09.578583 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Mar 2 13:20:09.578887 kernel: rtc_cmos 00:04: RTC can wake from S4
Mar 2 13:20:09.579072 kernel: rtc_cmos 00:04: registered as rtc0
Mar 2 13:20:09.579249 kernel: rtc_cmos 00:04: setting system clock to 2026-03-02T13:20:08 UTC (1772457608)
Mar 2 13:20:09.579485 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Mar 2 13:20:09.579502 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Mar 2 13:20:09.579515 kernel: NET: Registered PF_INET6 protocol family
Mar 2 13:20:09.579527 kernel: Segment Routing with IPv6
Mar 2 13:20:09.579543 kernel: In-situ OAM (IOAM) with IPv6
Mar 2 13:20:09.579553 kernel: NET: Registered PF_PACKET protocol family
Mar 2 13:20:09.579565 kernel: Key type dns_resolver registered
Mar 2 13:20:09.579576 kernel: IPI shorthand broadcast: enabled
Mar 2 13:20:09.579587 kernel: sched_clock: Marking stable (2797194415, 459088411)->(3738021971, -481739145)
Mar 2 13:20:09.579600 kernel: registered taskstats version 1
Mar 2 13:20:09.579611 kernel: Loading compiled-in X.509 certificates
Mar 2 13:20:09.579620 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: adc4961784537911a77ff0c4d6bd9b9639a51d45'
Mar 2 13:20:09.579631 kernel: Key type .fscrypt registered
Mar 2 13:20:09.579647 kernel: Key type fscrypt-provisioning registered
Mar 2 13:20:09.579658 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 2 13:20:09.579669 kernel: ima: Allocated hash algorithm: sha1
Mar 2 13:20:09.579679 kernel: ima: No architecture policies found
Mar 2 13:20:09.579689 kernel: clk: Disabling unused clocks
Mar 2 13:20:09.579700 kernel: Freeing unused kernel image (initmem) memory: 42892K
Mar 2 13:20:09.579713 kernel: Write protecting the kernel read-only data: 36864k
Mar 2 13:20:09.579725 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Mar 2 13:20:09.579739 kernel: Run /init as init process
Mar 2 13:20:09.579752 kernel: with arguments:
Mar 2 13:20:09.579764 kernel: /init
Mar 2 13:20:09.579774 kernel: with environment:
Mar 2 13:20:09.579785 kernel: HOME=/
Mar 2 13:20:09.579907 kernel: TERM=linux
Mar 2 13:20:09.579925 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 2 13:20:09.579941 systemd[1]: Detected virtualization kvm.
Mar 2 13:20:09.579961 systemd[1]: Detected architecture x86-64.
Mar 2 13:20:09.579973 systemd[1]: Running in initrd.
Mar 2 13:20:09.579984 systemd[1]: No hostname configured, using default hostname.
Mar 2 13:20:09.579996 systemd[1]: Hostname set to .
Mar 2 13:20:09.580008 systemd[1]: Initializing machine ID from VM UUID.
Mar 2 13:20:09.580019 systemd[1]: Queued start job for default target initrd.target.
Mar 2 13:20:09.580032 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 2 13:20:09.580043 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 2 13:20:09.580060 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 2 13:20:09.580072 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 2 13:20:09.580084 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 2 13:20:09.580096 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 2 13:20:09.580110 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 2 13:20:09.580120 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 2 13:20:09.580132 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 2 13:20:09.580150 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 2 13:20:09.580163 systemd[1]: Reached target paths.target - Path Units.
Mar 2 13:20:09.580175 systemd[1]: Reached target slices.target - Slice Units.
Mar 2 13:20:09.580188 systemd[1]: Reached target swap.target - Swaps.
Mar 2 13:20:09.580220 systemd[1]: Reached target timers.target - Timer Units.
Mar 2 13:20:09.580238 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 2 13:20:09.580256 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 2 13:20:09.580357 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 2 13:20:09.580373 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 2 13:20:09.580386 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 2 13:20:09.580397 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 2 13:20:09.580408 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 2 13:20:09.580418 systemd[1]: Reached target sockets.target - Socket Units.
Mar 2 13:20:09.580430 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 2 13:20:09.580445 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 2 13:20:09.580466 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 2 13:20:09.580477 systemd[1]: Starting systemd-fsck-usr.service...
Mar 2 13:20:09.580490 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 2 13:20:09.580504 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 2 13:20:09.580561 systemd-journald[195]: Collecting audit messages is disabled.
Mar 2 13:20:09.580603 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 2 13:20:09.580617 systemd-journald[195]: Journal started
Mar 2 13:20:09.580644 systemd-journald[195]: Runtime Journal (/run/log/journal/6d1a57f59dae44e2a564fdd5d9b5ad43) is 6.0M, max 48.4M, 42.3M free.
Mar 2 13:20:09.589477 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 2 13:20:09.589632 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 2 13:20:09.600151 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 2 13:20:09.612400 systemd[1]: Finished systemd-fsck-usr.service.
Mar 2 13:20:09.628050 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 2 13:20:09.640078 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 2 13:20:09.652720 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 2 13:20:09.677937 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 2 13:20:09.715924 systemd-modules-load[196]: Inserted module 'overlay'
Mar 2 13:20:09.739687 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 2 13:20:09.758546 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 2 13:20:09.806940 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Mar 2 13:20:09.810321 kernel: Bridge firewalling registered Mar 2 13:20:09.810345 systemd-modules-load[196]: Inserted module 'br_netfilter' Mar 2 13:20:09.812089 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 2 13:20:10.190317 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 2 13:20:10.190920 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 2 13:20:10.219576 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 2 13:20:10.254577 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 2 13:20:10.274244 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 2 13:20:10.298642 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 2 13:20:10.317610 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Mar 2 13:20:10.337199 systemd-resolved[226]: Positive Trust Anchors: Mar 2 13:20:10.337329 systemd-resolved[226]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 2 13:20:10.337376 systemd-resolved[226]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 2 13:20:10.340538 systemd-resolved[226]: Defaulting to hostname 'linux'. Mar 2 13:20:10.342129 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 2 13:20:10.362236 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 2 13:20:10.455606 dracut-cmdline[233]: dracut-dracut-053 Mar 2 13:20:10.464434 dracut-cmdline[233]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=5b1ae8ad250cf3ddd00dc7c63ded260e5b82ee29f2cdc578a6ade4cab26e6a0b Mar 2 13:20:10.641074 kernel: SCSI subsystem initialized Mar 2 13:20:10.657913 kernel: Loading iSCSI transport class v2.0-870. Mar 2 13:20:10.680978 kernel: iscsi: registered transport (tcp) Mar 2 13:20:10.720023 kernel: iscsi: registered transport (qla4xxx) Mar 2 13:20:10.720158 kernel: QLogic iSCSI HBA Driver Mar 2 13:20:10.809343 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Mar 2 13:20:10.825099 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... 
Mar 2 13:20:10.889496 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Mar 2 13:20:10.889578 kernel: device-mapper: uevent: version 1.0.3 Mar 2 13:20:10.889599 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Mar 2 13:20:10.959040 kernel: raid6: avx2x4 gen() 23122 MB/s Mar 2 13:20:10.979133 kernel: raid6: avx2x2 gen() 21209 MB/s Mar 2 13:20:10.999997 kernel: raid6: avx2x1 gen() 10111 MB/s Mar 2 13:20:11.000084 kernel: raid6: using algorithm avx2x4 gen() 23122 MB/s Mar 2 13:20:11.027171 kernel: raid6: .... xor() 4621 MB/s, rmw enabled Mar 2 13:20:11.027235 kernel: raid6: using avx2x2 recovery algorithm Mar 2 13:20:11.065991 kernel: xor: automatically using best checksumming function avx Mar 2 13:20:11.450929 kernel: Btrfs loaded, zoned=no, fsverity=no Mar 2 13:20:11.482208 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Mar 2 13:20:11.514742 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 2 13:20:11.540636 systemd-udevd[415]: Using default interface naming scheme 'v255'. Mar 2 13:20:11.550198 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 2 13:20:11.567912 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Mar 2 13:20:11.650603 dracut-pre-trigger[426]: rd.md=0: removing MD RAID activation Mar 2 13:20:11.735371 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Mar 2 13:20:11.766708 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 2 13:20:11.868375 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 2 13:20:11.891200 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Mar 2 13:20:11.941474 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. 
Mar 2 13:20:11.942471 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Mar 2 13:20:11.969659 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 2 13:20:11.978722 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 2 13:20:12.009929 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Mar 2 13:20:12.015505 kernel: cryptd: max_cpu_qlen set to 1000 Mar 2 13:20:12.017397 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Mar 2 13:20:12.037622 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Mar 2 13:20:12.048249 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Mar 2 13:20:12.070493 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Mar 2 13:20:12.070526 kernel: GPT:9289727 != 19775487 Mar 2 13:20:12.070542 kernel: GPT:Alternate GPT header not at the end of the disk. Mar 2 13:20:12.070606 kernel: GPT:9289727 != 19775487 Mar 2 13:20:12.070622 kernel: GPT: Use GNU Parted to correct GPT errors. Mar 2 13:20:12.074921 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 2 13:20:12.095946 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 2 13:20:12.109016 kernel: libata version 3.00 loaded. Mar 2 13:20:12.096167 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 2 13:20:12.109172 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 2 13:20:12.133392 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 2 13:20:12.133645 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 2 13:20:12.148904 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 2 13:20:12.188459 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Mar 2 13:20:12.257583 kernel: ahci 0000:00:1f.2: version 3.0 Mar 2 13:20:12.261465 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Mar 2 13:20:12.261491 kernel: BTRFS: device fsid a0930b2b-aeed-42a5-bf2f-ec141dfc71d3 devid 1 transid 35 /dev/vda3 scanned by (udev-worker) (469) Mar 2 13:20:12.261515 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Mar 2 13:20:12.261758 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Mar 2 13:20:12.262107 kernel: AVX2 version of gcm_enc/dec engaged. Mar 2 13:20:12.262125 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (463) Mar 2 13:20:12.262143 kernel: scsi host0: ahci Mar 2 13:20:12.262670 kernel: scsi host1: ahci Mar 2 13:20:12.242415 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Mar 2 13:20:12.269388 kernel: scsi host2: ahci Mar 2 13:20:12.269662 kernel: scsi host3: ahci Mar 2 13:20:12.280148 kernel: scsi host4: ahci Mar 2 13:20:12.317410 kernel: scsi host5: ahci Mar 2 13:20:12.317661 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 31 Mar 2 13:20:12.317681 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 31 Mar 2 13:20:12.317695 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 31 Mar 2 13:20:12.317710 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 31 Mar 2 13:20:12.317736 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 31 Mar 2 13:20:12.318706 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Mar 2 13:20:12.339957 kernel: AES CTR mode by8 optimization enabled Mar 2 13:20:12.339989 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 31 Mar 2 13:20:12.343493 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. 
Mar 2 13:20:12.352007 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Mar 2 13:20:12.725205 kernel: ata1: SATA link down (SStatus 0 SControl 300) Mar 2 13:20:12.725232 kernel: ata2: SATA link down (SStatus 0 SControl 300) Mar 2 13:20:12.725243 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Mar 2 13:20:12.725253 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Mar 2 13:20:12.725268 kernel: ata3.00: applying bridge limits Mar 2 13:20:12.725358 kernel: ata3.00: configured for UDMA/100 Mar 2 13:20:12.725387 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Mar 2 13:20:12.725628 kernel: ata4: SATA link down (SStatus 0 SControl 300) Mar 2 13:20:12.725647 kernel: ata5: SATA link down (SStatus 0 SControl 300) Mar 2 13:20:12.725661 kernel: ata6: SATA link down (SStatus 0 SControl 300) Mar 2 13:20:12.723866 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 2 13:20:12.742426 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 2 13:20:12.770013 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Mar 2 13:20:12.795641 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Mar 2 13:20:12.795970 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Mar 2 13:20:12.795984 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 2 13:20:12.795995 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 2 13:20:12.796005 disk-uuid[566]: Primary Header is updated. Mar 2 13:20:12.796005 disk-uuid[566]: Secondary Entries is updated. Mar 2 13:20:12.796005 disk-uuid[566]: Secondary Header is updated. Mar 2 13:20:12.797342 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Mar 2 13:20:12.822898 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 2 13:20:12.832393 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Mar 2 13:20:12.834602 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 2 13:20:13.814039 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 2 13:20:13.815168 disk-uuid[567]: The operation has completed successfully. Mar 2 13:20:13.875351 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 2 13:20:13.875665 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 2 13:20:13.922171 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Mar 2 13:20:13.936243 sh[595]: Success Mar 2 13:20:13.968077 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Mar 2 13:20:14.054253 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Mar 2 13:20:14.077012 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 2 13:20:14.084128 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Mar 2 13:20:14.144990 kernel: BTRFS info (device dm-0): first mount of filesystem a0930b2b-aeed-42a5-bf2f-ec141dfc71d3 Mar 2 13:20:14.145034 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Mar 2 13:20:14.145052 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Mar 2 13:20:14.145069 kernel: BTRFS info (device dm-0): disabling log replay at mount time Mar 2 13:20:14.145083 kernel: BTRFS info (device dm-0): using free space tree Mar 2 13:20:14.147407 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 2 13:20:14.155109 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Mar 2 13:20:14.181918 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
Mar 2 13:20:14.192009 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Mar 2 13:20:14.236952 kernel: BTRFS info (device vda6): first mount of filesystem 59abb777-1ea9-43fd-8326-9ccf988e79fa Mar 2 13:20:14.237004 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 2 13:20:14.237024 kernel: BTRFS info (device vda6): using free space tree Mar 2 13:20:14.253935 kernel: BTRFS info (device vda6): auto enabling async discard Mar 2 13:20:14.275904 systemd[1]: mnt-oem.mount: Deactivated successfully. Mar 2 13:20:14.287367 kernel: BTRFS info (device vda6): last unmount of filesystem 59abb777-1ea9-43fd-8326-9ccf988e79fa Mar 2 13:20:14.305050 systemd[1]: Finished ignition-setup.service - Ignition (setup). Mar 2 13:20:14.319140 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Mar 2 13:20:14.426055 ignition[701]: Ignition 2.19.0 Mar 2 13:20:14.426114 ignition[701]: Stage: fetch-offline Mar 2 13:20:14.426184 ignition[701]: no configs at "/usr/lib/ignition/base.d" Mar 2 13:20:14.426205 ignition[701]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 2 13:20:14.426473 ignition[701]: parsed url from cmdline: "" Mar 2 13:20:14.426480 ignition[701]: no config URL provided Mar 2 13:20:14.426491 ignition[701]: reading system config file "/usr/lib/ignition/user.ign" Mar 2 13:20:14.426507 ignition[701]: no config at "/usr/lib/ignition/user.ign" Mar 2 13:20:14.426547 ignition[701]: op(1): [started] loading QEMU firmware config module Mar 2 13:20:14.426555 ignition[701]: op(1): executing: "modprobe" "qemu_fw_cfg" Mar 2 13:20:14.449591 ignition[701]: op(1): [finished] loading QEMU firmware config module Mar 2 13:20:14.450742 ignition[701]: parsing config with SHA512: aba739dffa1f3d1f76b86956adcdb2c486f1f7027c4814d99ded89733ff3c36720b238c049461290b770108b89747068bcd457150ad3afc41837ad718354e84a Mar 2 13:20:14.479777 systemd[1]: Finished parse-ip-for-networkd.service - 
Write systemd-networkd units from cmdline. Mar 2 13:20:14.501533 unknown[701]: fetched base config from "system" Mar 2 13:20:14.501588 unknown[701]: fetched user config from "qemu" Mar 2 13:20:14.502056 ignition[701]: fetch-offline: fetch-offline passed Mar 2 13:20:14.502183 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 2 13:20:14.502169 ignition[701]: Ignition finished successfully Mar 2 13:20:14.537690 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Mar 2 13:20:14.542542 systemd-networkd[783]: lo: Link UP Mar 2 13:20:14.542547 systemd-networkd[783]: lo: Gained carrier Mar 2 13:20:14.544892 systemd-networkd[783]: Enumeration completed Mar 2 13:20:14.546173 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 2 13:20:14.547403 systemd-networkd[783]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 2 13:20:14.547409 systemd-networkd[783]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 2 13:20:14.550148 systemd-networkd[783]: eth0: Link UP Mar 2 13:20:14.550153 systemd-networkd[783]: eth0: Gained carrier Mar 2 13:20:14.550162 systemd-networkd[783]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 2 13:20:14.573031 systemd[1]: Reached target network.target - Network. Mar 2 13:20:14.643698 ignition[786]: Ignition 2.19.0 Mar 2 13:20:14.581917 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Mar 2 13:20:14.643708 ignition[786]: Stage: kargs Mar 2 13:20:14.584563 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Mar 2 13:20:14.644039 ignition[786]: no configs at "/usr/lib/ignition/base.d" Mar 2 13:20:14.651238 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
Mar 2 13:20:14.644052 ignition[786]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 2 13:20:14.692458 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Mar 2 13:20:14.645098 ignition[786]: kargs: kargs passed Mar 2 13:20:14.698509 systemd-networkd[783]: eth0: DHCPv4 address 10.0.0.122/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 2 13:20:14.645161 ignition[786]: Ignition finished successfully Mar 2 13:20:14.735252 systemd[1]: Finished ignition-disks.service - Ignition (disks). Mar 2 13:20:14.731385 ignition[794]: Ignition 2.19.0 Mar 2 13:20:14.744201 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Mar 2 13:20:14.731393 ignition[794]: Stage: disks Mar 2 13:20:14.761216 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 2 13:20:14.731634 ignition[794]: no configs at "/usr/lib/ignition/base.d" Mar 2 13:20:14.770337 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 2 13:20:14.731655 ignition[794]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 2 13:20:14.777346 systemd[1]: Reached target sysinit.target - System Initialization. Mar 2 13:20:14.732419 ignition[794]: disks: disks passed Mar 2 13:20:14.784375 systemd[1]: Reached target basic.target - Basic System. Mar 2 13:20:14.732465 ignition[794]: Ignition finished successfully Mar 2 13:20:14.815022 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Mar 2 13:20:14.895757 systemd-fsck[805]: ROOT: clean, 14/553520 files, 52654/553472 blocks Mar 2 13:20:14.896458 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Mar 2 13:20:14.933018 systemd[1]: Mounting sysroot.mount - /sysroot... Mar 2 13:20:15.163101 kernel: EXT4-fs (vda9): mounted filesystem 84e86976-7918-44d3-a6f5-d0f90ce6c152 r/w with ordered data mode. Quota mode: none. Mar 2 13:20:15.167077 systemd[1]: Mounted sysroot.mount - /sysroot. 
Mar 2 13:20:15.179579 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Mar 2 13:20:15.209397 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 2 13:20:15.222329 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Mar 2 13:20:15.243037 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (813) Mar 2 13:20:15.223023 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Mar 2 13:20:15.274662 kernel: BTRFS info (device vda6): first mount of filesystem 59abb777-1ea9-43fd-8326-9ccf988e79fa Mar 2 13:20:15.274694 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 2 13:20:15.274705 kernel: BTRFS info (device vda6): using free space tree Mar 2 13:20:15.223089 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 2 13:20:15.297990 kernel: BTRFS info (device vda6): auto enabling async discard Mar 2 13:20:15.223124 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Mar 2 13:20:15.315052 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Mar 2 13:20:15.341670 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Mar 2 13:20:15.372687 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Mar 2 13:20:15.496722 initrd-setup-root[837]: cut: /sysroot/etc/passwd: No such file or directory Mar 2 13:20:15.518453 initrd-setup-root[844]: cut: /sysroot/etc/group: No such file or directory Mar 2 13:20:15.529711 initrd-setup-root[851]: cut: /sysroot/etc/shadow: No such file or directory Mar 2 13:20:15.538617 initrd-setup-root[858]: cut: /sysroot/etc/gshadow: No such file or directory Mar 2 13:20:15.812055 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. 
Mar 2 13:20:15.842204 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Mar 2 13:20:15.843989 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Mar 2 13:20:15.869587 systemd[1]: sysroot-oem.mount: Deactivated successfully. Mar 2 13:20:15.886007 kernel: BTRFS info (device vda6): last unmount of filesystem 59abb777-1ea9-43fd-8326-9ccf988e79fa Mar 2 13:20:15.915436 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Mar 2 13:20:15.951414 ignition[927]: INFO : Ignition 2.19.0 Mar 2 13:20:15.951414 ignition[927]: INFO : Stage: mount Mar 2 13:20:15.951414 ignition[927]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 2 13:20:15.951414 ignition[927]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 2 13:20:15.983054 ignition[927]: INFO : mount: mount passed Mar 2 13:20:15.983054 ignition[927]: INFO : Ignition finished successfully Mar 2 13:20:15.988716 systemd[1]: Finished ignition-mount.service - Ignition (mount). Mar 2 13:20:16.018954 systemd[1]: Starting ignition-files.service - Ignition (files)... Mar 2 13:20:16.181594 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 2 13:20:16.207071 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (939) Mar 2 13:20:16.218372 kernel: BTRFS info (device vda6): first mount of filesystem 59abb777-1ea9-43fd-8326-9ccf988e79fa Mar 2 13:20:16.218444 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 2 13:20:16.218457 kernel: BTRFS info (device vda6): using free space tree Mar 2 13:20:16.242444 kernel: BTRFS info (device vda6): auto enabling async discard Mar 2 13:20:16.246634 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Mar 2 13:20:16.315068 ignition[955]: INFO : Ignition 2.19.0 Mar 2 13:20:16.315068 ignition[955]: INFO : Stage: files Mar 2 13:20:16.325429 ignition[955]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 2 13:20:16.325429 ignition[955]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 2 13:20:16.325429 ignition[955]: DEBUG : files: compiled without relabeling support, skipping Mar 2 13:20:16.325429 ignition[955]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 2 13:20:16.325429 ignition[955]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 2 13:20:16.366511 ignition[955]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 2 13:20:16.375131 ignition[955]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 2 13:20:16.388388 unknown[955]: wrote ssh authorized keys file for user: core Mar 2 13:20:16.395786 ignition[955]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 2 13:20:16.408091 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Mar 2 13:20:16.422661 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Mar 2 13:20:16.422661 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" Mar 2 13:20:16.448713 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Mar 2 13:20:16.448713 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw" Mar 2 13:20:16.473623 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] 
writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw" Mar 2 13:20:16.473623 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw" Mar 2 13:20:16.486113 systemd-networkd[783]: eth0: Gained IPv6LL Mar 2 13:20:16.512534 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://extensions.flatcar.org/extensions/kubernetes-v1.35.1-x86-64.raw: attempt #1 Mar 2 13:20:16.792419 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Mar 2 13:20:17.191564 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw" Mar 2 13:20:17.191564 ignition[955]: INFO : files: op(7): [started] processing unit "coreos-metadata.service" Mar 2 13:20:17.218091 ignition[955]: INFO : files: op(7): op(8): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Mar 2 13:20:17.218091 ignition[955]: INFO : files: op(7): op(8): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Mar 2 13:20:17.218091 ignition[955]: INFO : files: op(7): [finished] processing unit "coreos-metadata.service" Mar 2 13:20:17.218091 ignition[955]: INFO : files: op(9): [started] setting preset to disabled for "coreos-metadata.service" Mar 2 13:20:17.267076 ignition[955]: INFO : files: op(9): op(a): [started] removing enablement symlink(s) for "coreos-metadata.service" Mar 2 13:20:17.283954 ignition[955]: INFO : files: op(9): op(a): [finished] removing enablement symlink(s) for "coreos-metadata.service" Mar 2 13:20:17.283954 ignition[955]: INFO : files: op(9): [finished] setting preset to disabled for "coreos-metadata.service" Mar 2 13:20:17.283954 ignition[955]: INFO : files: 
createResultFile: createFiles: op(b): [started] writing file "/sysroot/etc/.ignition-result.json" Mar 2 13:20:17.283954 ignition[955]: INFO : files: createResultFile: createFiles: op(b): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 2 13:20:17.283954 ignition[955]: INFO : files: files passed Mar 2 13:20:17.283954 ignition[955]: INFO : Ignition finished successfully Mar 2 13:20:17.274071 systemd[1]: Finished ignition-files.service - Ignition (files). Mar 2 13:20:17.333375 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Mar 2 13:20:17.349016 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Mar 2 13:20:17.363730 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 2 13:20:17.423215 initrd-setup-root-after-ignition[983]: grep: /sysroot/oem/oem-release: No such file or directory Mar 2 13:20:17.363985 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Mar 2 13:20:17.440785 initrd-setup-root-after-ignition[985]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 2 13:20:17.440785 initrd-setup-root-after-ignition[985]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Mar 2 13:20:17.390963 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 2 13:20:17.470663 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 2 13:20:17.412055 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Mar 2 13:20:17.471368 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Mar 2 13:20:17.527549 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 2 13:20:17.527988 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. 
Mar 2 13:20:17.544044 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Mar 2 13:20:17.557676 systemd[1]: Reached target initrd.target - Initrd Default Target. Mar 2 13:20:17.563710 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Mar 2 13:20:17.589407 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Mar 2 13:20:17.612442 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 2 13:20:17.631778 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Mar 2 13:20:17.663451 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Mar 2 13:20:17.681604 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 2 13:20:17.701536 systemd[1]: Stopped target timers.target - Timer Units. Mar 2 13:20:17.715231 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 2 13:20:17.721764 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 2 13:20:17.739951 systemd[1]: Stopped target initrd.target - Initrd Default Target. Mar 2 13:20:17.752621 systemd[1]: Stopped target basic.target - Basic System. Mar 2 13:20:17.764121 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Mar 2 13:20:17.780433 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Mar 2 13:20:17.799976 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Mar 2 13:20:17.814127 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Mar 2 13:20:17.827378 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Mar 2 13:20:17.843247 systemd[1]: Stopped target sysinit.target - System Initialization. Mar 2 13:20:17.858502 systemd[1]: Stopped target local-fs.target - Local File Systems. 
Mar 2 13:20:17.875113 systemd[1]: Stopped target swap.target - Swaps. Mar 2 13:20:17.887530 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 2 13:20:17.896404 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Mar 2 13:20:17.911522 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Mar 2 13:20:17.928762 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 2 13:20:17.937038 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Mar 2 13:20:17.952611 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 2 13:20:17.970171 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 2 13:20:17.970493 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Mar 2 13:20:17.990137 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 2 13:20:17.999213 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Mar 2 13:20:18.014637 systemd[1]: Stopped target paths.target - Path Units. Mar 2 13:20:18.027691 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 2 13:20:18.034606 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 2 13:20:18.054196 systemd[1]: Stopped target slices.target - Slice Units. Mar 2 13:20:18.067422 systemd[1]: Stopped target sockets.target - Socket Units. Mar 2 13:20:18.078053 systemd[1]: iscsid.socket: Deactivated successfully. Mar 2 13:20:18.078204 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Mar 2 13:20:18.099643 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 2 13:20:18.107945 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 2 13:20:18.121002 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. 
Mar 2 13:20:18.128991 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 2 13:20:18.150901 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 2 13:20:18.151583 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 2 13:20:18.189481 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 2 13:20:18.206420 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 2 13:20:18.213094 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 2 13:20:18.243436 ignition[1010]: INFO : Ignition 2.19.0
Mar 2 13:20:18.243436 ignition[1010]: INFO : Stage: umount
Mar 2 13:20:18.243436 ignition[1010]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 2 13:20:18.243436 ignition[1010]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 2 13:20:18.243436 ignition[1010]: INFO : umount: umount passed
Mar 2 13:20:18.243436 ignition[1010]: INFO : Ignition finished successfully
Mar 2 13:20:18.213412 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 2 13:20:18.223232 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 2 13:20:18.223497 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 2 13:20:18.247543 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 2 13:20:18.247952 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 2 13:20:18.258003 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 2 13:20:18.261954 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 2 13:20:18.262178 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 2 13:20:18.277230 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 2 13:20:18.277457 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 2 13:20:18.293483 systemd[1]: Stopped target network.target - Network.
Mar 2 13:20:18.305459 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 2 13:20:18.305563 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 2 13:20:18.318187 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 2 13:20:18.318264 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 2 13:20:18.328226 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 2 13:20:18.328390 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 2 13:20:18.333460 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 2 13:20:18.333551 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 2 13:20:18.341089 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 2 13:20:18.341175 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 2 13:20:18.350617 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 2 13:20:18.363722 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 2 13:20:18.373969 systemd-networkd[783]: eth0: DHCPv6 lease lost
Mar 2 13:20:18.381757 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 2 13:20:18.382068 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 2 13:20:18.400124 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 2 13:20:18.401045 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 2 13:20:18.413178 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 2 13:20:18.413276 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 2 13:20:18.467003 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 2 13:20:18.477655 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 2 13:20:18.477742 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 2 13:20:18.478077 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 2 13:20:18.478127 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 2 13:20:18.484712 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 2 13:20:18.484767 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 2 13:20:18.486745 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 2 13:20:18.486906 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 2 13:20:18.487916 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 2 13:20:18.518605 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 2 13:20:18.518958 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 2 13:20:18.547700 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 2 13:20:18.548189 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 2 13:20:18.559167 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 2 13:20:18.559247 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 2 13:20:18.572944 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 2 13:20:18.573017 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 2 13:20:18.580108 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 2 13:20:18.917083 systemd-journald[195]: Received SIGTERM from PID 1 (systemd).
Mar 2 13:20:18.580200 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 2 13:20:18.597515 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 2 13:20:18.597611 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 2 13:20:18.612748 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 2 13:20:18.612946 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 2 13:20:18.641749 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 2 13:20:18.652560 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 2 13:20:18.652662 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 2 13:20:18.676024 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Mar 2 13:20:18.676145 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 2 13:20:18.694523 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 2 13:20:18.694629 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 2 13:20:18.720728 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 2 13:20:18.722600 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 2 13:20:18.742781 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 2 13:20:18.743118 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 2 13:20:18.757508 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 2 13:20:18.809771 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 2 13:20:18.843268 systemd[1]: Switching root.
Mar 2 13:20:19.066543 systemd-journald[195]: Journal stopped
Mar 2 13:20:21.302725 kernel: SELinux: policy capability network_peer_controls=1
Mar 2 13:20:21.302971 kernel: SELinux: policy capability open_perms=1
Mar 2 13:20:21.302998 kernel: SELinux: policy capability extended_socket_class=1
Mar 2 13:20:21.303015 kernel: SELinux: policy capability always_check_network=0
Mar 2 13:20:21.303031 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 2 13:20:21.303046 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 2 13:20:21.303067 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 2 13:20:21.303084 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 2 13:20:21.303099 kernel: audit: type=1403 audit(1772457619.186:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 2 13:20:21.303123 systemd[1]: Successfully loaded SELinux policy in 92.299ms.
Mar 2 13:20:21.303153 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 24.743ms.
Mar 2 13:20:21.303171 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 2 13:20:21.303188 systemd[1]: Detected virtualization kvm.
Mar 2 13:20:21.303205 systemd[1]: Detected architecture x86-64.
Mar 2 13:20:21.303221 systemd[1]: Detected first boot.
Mar 2 13:20:21.303238 systemd[1]: Initializing machine ID from VM UUID.
Mar 2 13:20:21.303261 zram_generator::config[1056]: No configuration found.
Mar 2 13:20:21.303279 systemd[1]: Populated /etc with preset unit settings.
Mar 2 13:20:21.303444 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 2 13:20:21.303466 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 2 13:20:21.303483 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 2 13:20:21.303506 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 2 13:20:21.303523 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 2 13:20:21.303542 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 2 13:20:21.303559 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 2 13:20:21.303576 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 2 13:20:21.303593 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 2 13:20:21.303610 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 2 13:20:21.303627 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 2 13:20:21.303643 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 2 13:20:21.303665 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 2 13:20:21.303681 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 2 13:20:21.303698 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 2 13:20:21.303718 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 2 13:20:21.303734 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 2 13:20:21.303752 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 2 13:20:21.303770 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 2 13:20:21.303787 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 2 13:20:21.303915 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 2 13:20:21.303937 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 2 13:20:21.303953 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 2 13:20:21.303969 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 2 13:20:21.303986 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 2 13:20:21.304002 systemd[1]: Reached target slices.target - Slice Units.
Mar 2 13:20:21.304020 systemd[1]: Reached target swap.target - Swaps.
Mar 2 13:20:21.304036 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 2 13:20:21.304055 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 2 13:20:21.304071 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 2 13:20:21.304089 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 2 13:20:21.304106 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 2 13:20:21.304123 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 2 13:20:21.304140 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 2 13:20:21.304156 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 2 13:20:21.304172 systemd[1]: Mounting media.mount - External Media Directory...
Mar 2 13:20:21.304196 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 2 13:20:21.304216 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 2 13:20:21.304232 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 2 13:20:21.304249 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 2 13:20:21.304267 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 2 13:20:21.304284 systemd[1]: Reached target machines.target - Containers.
Mar 2 13:20:21.304373 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 2 13:20:21.304390 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 2 13:20:21.304406 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 2 13:20:21.304423 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 2 13:20:21.304443 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 2 13:20:21.304462 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 2 13:20:21.304477 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 2 13:20:21.304494 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 2 13:20:21.304510 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 2 13:20:21.304533 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 2 13:20:21.304550 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 2 13:20:21.304566 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 2 13:20:21.304585 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 2 13:20:21.304604 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 2 13:20:21.304619 kernel: fuse: init (API version 7.39)
Mar 2 13:20:21.304635 kernel: loop: module loaded
Mar 2 13:20:21.304650 kernel: ACPI: bus type drm_connector registered
Mar 2 13:20:21.304666 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 2 13:20:21.304683 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 2 13:20:21.304750 systemd-journald[1140]: Collecting audit messages is disabled.
Mar 2 13:20:21.304788 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 2 13:20:21.304918 systemd-journald[1140]: Journal started
Mar 2 13:20:21.304945 systemd-journald[1140]: Runtime Journal (/run/log/journal/6d1a57f59dae44e2a564fdd5d9b5ad43) is 6.0M, max 48.4M, 42.3M free.
Mar 2 13:20:20.289927 systemd[1]: Queued start job for default target multi-user.target.
Mar 2 13:20:20.324951 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Mar 2 13:20:20.326247 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 2 13:20:20.327101 systemd[1]: systemd-journald.service: Consumed 2.990s CPU time.
Mar 2 13:20:21.364596 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 2 13:20:21.397368 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 2 13:20:21.418981 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 2 13:20:21.419085 systemd[1]: Stopped verity-setup.service.
Mar 2 13:20:21.443047 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 2 13:20:21.450497 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 2 13:20:21.457900 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 2 13:20:21.467924 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 2 13:20:21.481474 systemd[1]: Mounted media.mount - External Media Directory.
Mar 2 13:20:21.489784 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 2 13:20:21.497478 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 2 13:20:21.505087 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 2 13:20:21.511951 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 2 13:20:21.520476 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 2 13:20:21.529245 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 2 13:20:21.529702 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 2 13:20:21.538484 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 2 13:20:21.538998 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 2 13:20:21.551057 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 2 13:20:21.551639 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 2 13:20:21.565039 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 2 13:20:21.565375 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 2 13:20:21.578028 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 2 13:20:21.578484 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 2 13:20:21.589936 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 2 13:20:21.590393 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 2 13:20:21.599378 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 2 13:20:21.609665 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 2 13:20:21.619264 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 2 13:20:21.656034 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 2 13:20:21.680189 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 2 13:20:21.703521 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 2 13:20:21.713069 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 2 13:20:21.713134 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 2 13:20:21.723640 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Mar 2 13:20:21.735787 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 2 13:20:21.747612 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 2 13:20:21.757077 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 2 13:20:21.762752 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 2 13:20:21.776636 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 2 13:20:21.784453 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 2 13:20:21.791536 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 2 13:20:21.800662 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 2 13:20:21.805187 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 2 13:20:21.805527 systemd-journald[1140]: Time spent on flushing to /var/log/journal/6d1a57f59dae44e2a564fdd5d9b5ad43 is 55.156ms for 926 entries.
Mar 2 13:20:21.805527 systemd-journald[1140]: System Journal (/var/log/journal/6d1a57f59dae44e2a564fdd5d9b5ad43) is 8.0M, max 195.6M, 187.6M free.
Mar 2 13:20:21.889177 systemd-journald[1140]: Received client request to flush runtime journal.
Mar 2 13:20:21.827154 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 2 13:20:21.852506 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 2 13:20:21.871004 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 2 13:20:21.882477 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 2 13:20:21.899035 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 2 13:20:21.909169 kernel: loop0: detected capacity change from 0 to 140768
Mar 2 13:20:21.918691 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 2 13:20:21.930997 systemd-tmpfiles[1172]: ACLs are not supported, ignoring.
Mar 2 13:20:21.931024 systemd-tmpfiles[1172]: ACLs are not supported, ignoring.
Mar 2 13:20:21.931550 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 2 13:20:21.942947 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 2 13:20:21.952141 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 2 13:20:21.961653 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 2 13:20:21.977939 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 2 13:20:21.979097 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 2 13:20:21.998462 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Mar 2 13:20:22.021989 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 2 13:20:22.045276 kernel: loop1: detected capacity change from 0 to 217752
Mar 2 13:20:22.044690 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Mar 2 13:20:22.079398 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 2 13:20:22.081067 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Mar 2 13:20:22.108053 udevadm[1189]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Mar 2 13:20:22.119656 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 2 13:20:22.146144 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 2 13:20:22.185911 systemd-tmpfiles[1193]: ACLs are not supported, ignoring.
Mar 2 13:20:22.186419 systemd-tmpfiles[1193]: ACLs are not supported, ignoring.
Mar 2 13:20:22.188984 kernel: loop2: detected capacity change from 0 to 142488
Mar 2 13:20:22.196683 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 2 13:20:22.275137 kernel: loop3: detected capacity change from 0 to 140768
Mar 2 13:20:22.320487 kernel: loop4: detected capacity change from 0 to 217752
Mar 2 13:20:22.379973 kernel: loop5: detected capacity change from 0 to 142488
Mar 2 13:20:22.427545 (sd-merge)[1197]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Mar 2 13:20:22.428742 (sd-merge)[1197]: Merged extensions into '/usr'.
Mar 2 13:20:22.434757 systemd[1]: Reloading requested from client PID 1170 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 2 13:20:22.435210 systemd[1]: Reloading...
Mar 2 13:20:22.577955 zram_generator::config[1226]: No configuration found.
Mar 2 13:20:22.758428 ldconfig[1165]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 2 13:20:22.845626 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 2 13:20:22.909222 systemd[1]: Reloading finished in 473 ms.
Mar 2 13:20:22.963403 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 2 13:20:22.973277 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 2 13:20:22.987053 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 2 13:20:23.025502 systemd[1]: Starting ensure-sysext.service...
Mar 2 13:20:23.049044 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 2 13:20:23.068604 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 2 13:20:23.083712 systemd[1]: Reloading requested from client PID 1261 ('systemctl') (unit ensure-sysext.service)...
Mar 2 13:20:23.083786 systemd[1]: Reloading...
Mar 2 13:20:23.111642 systemd-tmpfiles[1262]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 2 13:20:23.112491 systemd-tmpfiles[1262]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 2 13:20:23.114284 systemd-tmpfiles[1262]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 2 13:20:23.118446 systemd-tmpfiles[1262]: ACLs are not supported, ignoring.
Mar 2 13:20:23.118641 systemd-tmpfiles[1262]: ACLs are not supported, ignoring.
Mar 2 13:20:23.127099 systemd-tmpfiles[1262]: Detected autofs mount point /boot during canonicalization of boot.
Mar 2 13:20:23.127116 systemd-tmpfiles[1262]: Skipping /boot
Mar 2 13:20:23.129508 systemd-udevd[1263]: Using default interface naming scheme 'v255'.
Mar 2 13:20:23.168483 systemd-tmpfiles[1262]: Detected autofs mount point /boot during canonicalization of boot.
Mar 2 13:20:23.168597 systemd-tmpfiles[1262]: Skipping /boot
Mar 2 13:20:23.209205 zram_generator::config[1286]: No configuration found.
Mar 2 13:20:23.372086 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1292)
Mar 2 13:20:23.525192 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Mar 2 13:20:23.525382 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Mar 2 13:20:23.517096 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 2 13:20:23.545090 kernel: ACPI: button: Power Button [PWRF]
Mar 2 13:20:23.545181 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Mar 2 13:20:23.545560 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Mar 2 13:20:23.633612 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Mar 2 13:20:23.637942 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Mar 2 13:20:23.638176 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 2 13:20:23.654590 systemd[1]: Reloading finished in 570 ms.
Mar 2 13:20:23.663143 kernel: mousedev: PS/2 mouse device common for all mice
Mar 2 13:20:23.691675 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 2 13:20:23.721011 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 2 13:20:23.779553 systemd[1]: Finished ensure-sysext.service.
Mar 2 13:20:23.829621 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 2 13:20:23.979890 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Mar 2 13:20:23.998127 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 2 13:20:24.013109 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 2 13:20:24.019762 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 2 13:20:24.038997 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 2 13:20:24.166634 augenrules[1378]: No rules
Mar 2 13:20:24.171180 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 2 13:20:24.189949 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 2 13:20:24.201183 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 2 13:20:24.206653 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 2 13:20:24.221454 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 2 13:20:24.259057 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 2 13:20:24.285541 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 2 13:20:24.299278 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Mar 2 13:20:24.320629 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 2 13:20:24.337110 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 2 13:20:24.353575 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 2 13:20:24.359478 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Mar 2 13:20:24.372049 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 2 13:20:24.384415 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 2 13:20:24.384747 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 2 13:20:24.396711 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 2 13:20:24.397526 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 2 13:20:24.409285 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 2 13:20:24.409996 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 2 13:20:24.423673 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 2 13:20:24.424363 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 2 13:20:24.433389 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 2 13:20:24.446894 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 2 13:20:24.480440 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 2 13:20:24.482510 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 2 13:20:24.551084 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 2 13:20:24.568218 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 2 13:20:24.568448 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 2 13:20:24.569586 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 2 13:20:24.676562 kernel: kvm_amd: TSC scaling supported Mar 2 13:20:24.676692 kernel: kvm_amd: Nested Virtualization enabled Mar 2 13:20:24.676716 kernel: kvm_amd: Nested Paging enabled Mar 2 13:20:24.676740 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Mar 2 13:20:24.676758 kernel: kvm_amd: PMU virtualization is disabled Mar 2 13:20:25.096605 systemd-networkd[1386]: lo: Link UP Mar 2 13:20:25.097171 systemd-networkd[1386]: lo: Gained carrier Mar 2 13:20:25.103267 systemd-networkd[1386]: Enumeration completed Mar 2 13:20:25.107752 systemd-networkd[1386]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 2 13:20:25.107766 systemd-networkd[1386]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 2 13:20:25.114585 systemd-networkd[1386]: eth0: Link UP Mar 2 13:20:25.114654 systemd-networkd[1386]: eth0: Gained carrier Mar 2 13:20:25.114687 systemd-networkd[1386]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 2 13:20:25.127185 systemd-resolved[1388]: Positive Trust Anchors: Mar 2 13:20:25.127735 systemd-resolved[1388]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 2 13:20:25.128101 systemd-resolved[1388]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 2 13:20:25.144909 systemd-resolved[1388]: Defaulting to hostname 'linux'. 
Mar 2 13:20:25.240914 systemd[1]: Started systemd-userdbd.service - User Database Manager. Mar 2 13:20:25.248005 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Mar 2 13:20:25.249019 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 2 13:20:25.249624 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 2 13:20:25.250592 systemd[1]: Finished systemd-update-done.service - Update is Completed. Mar 2 13:20:25.266961 systemd[1]: Reached target network.target - Network. Mar 2 13:20:25.273215 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 2 13:20:25.273431 systemd[1]: Reached target time-set.target - System Time Set. Mar 2 13:20:25.373210 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Mar 2 13:20:25.386077 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 2 13:20:25.471188 systemd-networkd[1386]: eth0: DHCPv4 address 10.0.0.122/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 2 13:20:25.473289 systemd-timesyncd[1389]: Network configuration changed, trying to establish connection. Mar 2 13:20:25.478073 systemd-timesyncd[1389]: Contacted time server 10.0.0.1:123 (10.0.0.1). Mar 2 13:20:25.478250 systemd-timesyncd[1389]: Initial clock synchronization to Mon 2026-03-02 13:20:25.608655 UTC. Mar 2 13:20:25.528427 kernel: EDAC MC: Ver: 3.0.0 Mar 2 13:20:25.570597 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Mar 2 13:20:25.595501 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Mar 2 13:20:25.626054 lvm[1422]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 2 13:20:25.669217 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Mar 2 13:20:25.681452 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. 
Mar 2 13:20:25.690210 systemd[1]: Reached target sysinit.target - System Initialization. Mar 2 13:20:25.699720 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Mar 2 13:20:25.707050 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Mar 2 13:20:25.715095 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 2 13:20:25.723421 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 2 13:20:25.732131 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Mar 2 13:20:25.740052 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 2 13:20:25.740165 systemd[1]: Reached target paths.target - Path Units. Mar 2 13:20:25.747394 systemd[1]: Reached target timers.target - Timer Units. Mar 2 13:20:25.756740 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 2 13:20:25.768243 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 2 13:20:25.787261 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Mar 2 13:20:25.805155 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Mar 2 13:20:25.815557 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 2 13:20:25.828294 lvm[1426]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 2 13:20:25.837568 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 2 13:20:25.847200 systemd[1]: Reached target sockets.target - Socket Units. Mar 2 13:20:25.854712 systemd[1]: Reached target basic.target - Basic System. Mar 2 13:20:25.864523 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. 
Mar 2 13:20:25.865164 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 2 13:20:25.889457 systemd[1]: Starting containerd.service - containerd container runtime... Mar 2 13:20:25.904961 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 2 13:20:25.925112 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 2 13:20:25.936729 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Mar 2 13:20:25.941170 jq[1430]: false Mar 2 13:20:25.945602 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 2 13:20:25.948957 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Mar 2 13:20:25.962707 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Mar 2 13:20:25.966678 dbus-daemon[1429]: [system] SELinux support is enabled Mar 2 13:20:25.978748 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Mar 2 13:20:25.981044 extend-filesystems[1431]: Found loop3 Mar 2 13:20:25.981044 extend-filesystems[1431]: Found loop4 Mar 2 13:20:25.981044 extend-filesystems[1431]: Found loop5 Mar 2 13:20:25.981044 extend-filesystems[1431]: Found sr0 Mar 2 13:20:25.981044 extend-filesystems[1431]: Found vda Mar 2 13:20:25.981044 extend-filesystems[1431]: Found vda1 Mar 2 13:20:25.981044 extend-filesystems[1431]: Found vda2 Mar 2 13:20:25.981044 extend-filesystems[1431]: Found vda3 Mar 2 13:20:25.981044 extend-filesystems[1431]: Found usr Mar 2 13:20:25.981044 extend-filesystems[1431]: Found vda4 Mar 2 13:20:25.981044 extend-filesystems[1431]: Found vda6 Mar 2 13:20:25.981044 extend-filesystems[1431]: Found vda7 Mar 2 13:20:25.981044 extend-filesystems[1431]: Found vda9 Mar 2 13:20:25.981044 extend-filesystems[1431]: Checking size of /dev/vda9 Mar 2 13:20:26.195313 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Mar 2 13:20:26.195357 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1303) Mar 2 13:20:26.022532 systemd[1]: Starting systemd-logind.service - User Login Management... Mar 2 13:20:26.195609 extend-filesystems[1431]: Resized partition /dev/vda9 Mar 2 13:20:26.090497 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 2 13:20:26.208385 extend-filesystems[1449]: resize2fs 1.47.1 (20-May-2024) Mar 2 13:20:26.091411 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 2 13:20:26.270106 dbus-daemon[1429]: [system] Successfully activated service 'org.freedesktop.systemd1' Mar 2 13:20:26.350598 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Mar 2 13:20:26.113937 systemd[1]: Starting update-engine.service - Update Engine... 
Mar 2 13:20:26.367311 extend-filesystems[1449]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Mar 2 13:20:26.367311 extend-filesystems[1449]: old_desc_blocks = 1, new_desc_blocks = 1 Mar 2 13:20:26.367311 extend-filesystems[1449]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Mar 2 13:20:26.404315 update_engine[1450]: I20260302 13:20:26.137133 1450 main.cc:92] Flatcar Update Engine starting Mar 2 13:20:26.404315 update_engine[1450]: I20260302 13:20:26.139274 1450 update_check_scheduler.cc:74] Next update check in 6m22s Mar 2 13:20:26.144035 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Mar 2 13:20:26.420606 extend-filesystems[1431]: Resized filesystem in /dev/vda9 Mar 2 13:20:26.434789 jq[1452]: true Mar 2 13:20:26.158669 systemd[1]: Started dbus.service - D-Bus System Message Bus. Mar 2 13:20:26.173373 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Mar 2 13:20:26.450746 jq[1454]: true Mar 2 13:20:26.184776 systemd-logind[1444]: Watching system buttons on /dev/input/event1 (Power Button) Mar 2 13:20:26.184945 systemd-logind[1444]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Mar 2 13:20:26.185971 systemd-logind[1444]: New seat seat0. Mar 2 13:20:26.223527 systemd[1]: Started systemd-logind.service - User Login Management. Mar 2 13:20:26.232349 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 2 13:20:26.232764 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 2 13:20:26.233455 systemd[1]: motdgen.service: Deactivated successfully. Mar 2 13:20:26.233800 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Mar 2 13:20:26.240487 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 2 13:20:26.240926 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Mar 2 13:20:26.300696 systemd[1]: Started update-engine.service - Update Engine. Mar 2 13:20:26.333305 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 2 13:20:26.333499 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Mar 2 13:20:26.349643 (ntainerd)[1459]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 2 13:20:26.350314 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 2 13:20:26.350485 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Mar 2 13:20:26.406467 systemd[1]: Started locksmithd.service - Cluster reboot manager. Mar 2 13:20:26.429463 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 2 13:20:26.429753 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 2 13:20:26.468511 sshd_keygen[1446]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 2 13:20:26.474698 bash[1477]: Updated "/home/core/.ssh/authorized_keys" Mar 2 13:20:26.476534 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 2 13:20:26.507272 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Mar 2 13:20:26.521625 locksmithd[1473]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 2 13:20:26.534105 systemd-networkd[1386]: eth0: Gained IPv6LL Mar 2 13:20:26.541086 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 2 13:20:26.553136 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. 
Mar 2 13:20:26.571998 systemd[1]: Reached target network-online.target - Network is Online. Mar 2 13:20:26.597321 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Mar 2 13:20:26.609190 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 2 13:20:26.620399 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 2 13:20:26.635421 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 2 13:20:26.648426 systemd[1]: Started sshd@0-10.0.0.122:22-10.0.0.1:34862.service - OpenSSH per-connection server daemon (10.0.0.1:34862). Mar 2 13:20:26.663252 systemd[1]: issuegen.service: Deactivated successfully. Mar 2 13:20:26.663741 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 2 13:20:26.709370 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 2 13:20:26.723093 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 2 13:20:26.732716 systemd[1]: coreos-metadata.service: Deactivated successfully. Mar 2 13:20:26.733257 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Mar 2 13:20:26.747505 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 2 13:20:26.758631 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 2 13:20:26.795778 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 2 13:20:26.810118 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Mar 2 13:20:26.821094 systemd[1]: Reached target getty.target - Login Prompts. 
Mar 2 13:20:26.879632 containerd[1459]: time="2026-03-02T13:20:26.877295582Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Mar 2 13:20:26.907115 sshd[1508]: Accepted publickey for core from 10.0.0.1 port 34862 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM Mar 2 13:20:26.908944 sshd[1508]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:20:26.934355 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 2 13:20:26.939044 containerd[1459]: time="2026-03-02T13:20:26.934677963Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Mar 2 13:20:26.949067 containerd[1459]: time="2026-03-02T13:20:26.948324708Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Mar 2 13:20:26.949067 containerd[1459]: time="2026-03-02T13:20:26.948888830Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Mar 2 13:20:26.949067 containerd[1459]: time="2026-03-02T13:20:26.948925947Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Mar 2 13:20:26.949267 containerd[1459]: time="2026-03-02T13:20:26.949140642Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Mar 2 13:20:26.949267 containerd[1459]: time="2026-03-02T13:20:26.949158982Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Mar 2 13:20:26.949267 containerd[1459]: time="2026-03-02T13:20:26.949224794Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Mar 2 13:20:26.949267 containerd[1459]: time="2026-03-02T13:20:26.949236841Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Mar 2 13:20:26.950680 containerd[1459]: time="2026-03-02T13:20:26.949524140Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 2 13:20:26.950680 containerd[1459]: time="2026-03-02T13:20:26.950548824Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Mar 2 13:20:26.950680 containerd[1459]: time="2026-03-02T13:20:26.950650298Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Mar 2 13:20:26.950680 containerd[1459]: time="2026-03-02T13:20:26.950665755Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Mar 2 13:20:26.950989 containerd[1459]: time="2026-03-02T13:20:26.950914797Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Mar 2 13:20:26.951375 containerd[1459]: time="2026-03-02T13:20:26.951323366Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Mar 2 13:20:26.951698 containerd[1459]: time="2026-03-02T13:20:26.951543958Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 2 13:20:26.952002 containerd[1459]: time="2026-03-02T13:20:26.951695684Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Mar 2 13:20:26.952002 containerd[1459]: time="2026-03-02T13:20:26.951956529Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Mar 2 13:20:26.952440 containerd[1459]: time="2026-03-02T13:20:26.952151470Z" level=info msg="metadata content store policy set" policy=shared Mar 2 13:20:26.964056 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 2 13:20:26.979167 systemd-logind[1444]: New session 1 of user core. Mar 2 13:20:26.989402 containerd[1459]: time="2026-03-02T13:20:26.989130625Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Mar 2 13:20:26.989499 containerd[1459]: time="2026-03-02T13:20:26.989437018Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Mar 2 13:20:26.989499 containerd[1459]: time="2026-03-02T13:20:26.989469828Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Mar 2 13:20:26.989592 containerd[1459]: time="2026-03-02T13:20:26.989498462Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Mar 2 13:20:26.989592 containerd[1459]: time="2026-03-02T13:20:26.989521852Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Mar 2 13:20:26.989998 containerd[1459]: time="2026-03-02T13:20:26.989760447Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." 
type=io.containerd.monitor.v1 Mar 2 13:20:26.990531 containerd[1459]: time="2026-03-02T13:20:26.990291444Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Mar 2 13:20:26.990715 containerd[1459]: time="2026-03-02T13:20:26.990642092Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Mar 2 13:20:26.990758 containerd[1459]: time="2026-03-02T13:20:26.990718525Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Mar 2 13:20:26.990758 containerd[1459]: time="2026-03-02T13:20:26.990739298Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Mar 2 13:20:26.990758 containerd[1459]: time="2026-03-02T13:20:26.990756863Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Mar 2 13:20:26.990968 containerd[1459]: time="2026-03-02T13:20:26.990781485Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Mar 2 13:20:26.990968 containerd[1459]: time="2026-03-02T13:20:26.990797066Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Mar 2 13:20:26.990968 containerd[1459]: time="2026-03-02T13:20:26.990908965Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Mar 2 13:20:26.990968 containerd[1459]: time="2026-03-02T13:20:26.990930940Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Mar 2 13:20:26.990968 containerd[1459]: time="2026-03-02T13:20:26.990948302Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." 
type=io.containerd.service.v1 Mar 2 13:20:26.990968 containerd[1459]: time="2026-03-02T13:20:26.990968057Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Mar 2 13:20:26.991122 containerd[1459]: time="2026-03-02T13:20:26.990984115Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Mar 2 13:20:26.991122 containerd[1459]: time="2026-03-02T13:20:26.991008523Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Mar 2 13:20:26.991122 containerd[1459]: time="2026-03-02T13:20:26.991024877Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Mar 2 13:20:26.991122 containerd[1459]: time="2026-03-02T13:20:26.991040253Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Mar 2 13:20:26.991122 containerd[1459]: time="2026-03-02T13:20:26.991055731Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Mar 2 13:20:26.991122 containerd[1459]: time="2026-03-02T13:20:26.991072768Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Mar 2 13:20:26.991122 containerd[1459]: time="2026-03-02T13:20:26.991091850Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Mar 2 13:20:26.991122 containerd[1459]: time="2026-03-02T13:20:26.991107124Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Mar 2 13:20:26.991122 containerd[1459]: time="2026-03-02T13:20:26.991123498Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Mar 2 13:20:26.992161 containerd[1459]: time="2026-03-02T13:20:26.991140361Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." 
type=io.containerd.grpc.v1 Mar 2 13:20:26.992161 containerd[1459]: time="2026-03-02T13:20:26.991160198Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Mar 2 13:20:26.992161 containerd[1459]: time="2026-03-02T13:20:26.991177101Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Mar 2 13:20:26.992161 containerd[1459]: time="2026-03-02T13:20:26.991268197Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Mar 2 13:20:26.992161 containerd[1459]: time="2026-03-02T13:20:26.991292829Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Mar 2 13:20:26.992161 containerd[1459]: time="2026-03-02T13:20:26.991314683Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Mar 2 13:20:26.992161 containerd[1459]: time="2026-03-02T13:20:26.991341738Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Mar 2 13:20:26.992161 containerd[1459]: time="2026-03-02T13:20:26.991356768Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Mar 2 13:20:26.992161 containerd[1459]: time="2026-03-02T13:20:26.991511284Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Mar 2 13:20:26.992161 containerd[1459]: time="2026-03-02T13:20:26.991567574Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Mar 2 13:20:26.992161 containerd[1459]: time="2026-03-02T13:20:26.991589213Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Mar 2 13:20:26.992161 containerd[1459]: time="2026-03-02T13:20:26.991603153Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Mar 2 13:20:26.992161 containerd[1459]: time="2026-03-02T13:20:26.991621972Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Mar 2 13:20:26.992780 containerd[1459]: time="2026-03-02T13:20:26.991637174Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Mar 2 13:20:26.992780 containerd[1459]: time="2026-03-02T13:20:26.991654302Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Mar 2 13:20:26.992780 containerd[1459]: time="2026-03-02T13:20:26.991677346Z" level=info msg="NRI interface is disabled by configuration." Mar 2 13:20:26.992780 containerd[1459]: time="2026-03-02T13:20:26.991694841Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Mar 2 13:20:26.994179 containerd[1459]: time="2026-03-02T13:20:26.994031010Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Mar 2 13:20:26.994470 containerd[1459]: time="2026-03-02T13:20:26.994182164Z" level=info msg="Connect containerd service"
Mar 2 13:20:26.994470 containerd[1459]: time="2026-03-02T13:20:26.994241440Z" level=info msg="using legacy CRI server"
Mar 2 13:20:26.994470 containerd[1459]: time="2026-03-02T13:20:26.994253781Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Mar 2 13:20:26.994470 containerd[1459]: time="2026-03-02T13:20:26.994424100Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Mar 2 13:20:26.997922 containerd[1459]: time="2026-03-02T13:20:26.995472662Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 2 13:20:26.997922 containerd[1459]: time="2026-03-02T13:20:26.996075674Z" level=info msg="Start subscribing containerd event"
Mar 2 13:20:26.997922 containerd[1459]: time="2026-03-02T13:20:26.996130457Z" level=info msg="Start recovering state"
Mar 2 13:20:26.997922 containerd[1459]: time="2026-03-02T13:20:26.996201920Z" level=info msg="Start event monitor"
Mar 2 13:20:26.997922 containerd[1459]: time="2026-03-02T13:20:26.996216543Z" level=info msg="Start snapshots syncer"
Mar 2 13:20:26.997922 containerd[1459]: time="2026-03-02T13:20:26.996231807Z" level=info msg="Start cni network conf syncer for default"
Mar 2 13:20:26.997922 containerd[1459]: time="2026-03-02T13:20:26.996244241Z" level=info msg="Start streaming server"
Mar 2 13:20:27.002628 containerd[1459]: time="2026-03-02T13:20:27.000769407Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Mar 2 13:20:27.002628 containerd[1459]: time="2026-03-02T13:20:27.001235417Z" level=info msg=serving... address=/run/containerd/containerd.sock
Mar 2 13:20:27.005122 containerd[1459]: time="2026-03-02T13:20:27.004666259Z" level=info msg="containerd successfully booted in 0.142043s"
Mar 2 13:20:27.004942 systemd[1]: Started containerd.service - containerd container runtime.
Mar 2 13:20:27.025528 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Mar 2 13:20:27.050556 systemd[1]: Starting user@500.service - User Manager for UID 500...
Mar 2 13:20:27.095281 (systemd)[1534]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Mar 2 13:20:27.336338 systemd[1534]: Queued start job for default target default.target.
Mar 2 13:20:27.348635 systemd[1534]: Created slice app.slice - User Application Slice.
Mar 2 13:20:27.348738 systemd[1534]: Reached target paths.target - Paths.
Mar 2 13:20:27.348760 systemd[1534]: Reached target timers.target - Timers.
Mar 2 13:20:27.360689 systemd[1534]: Starting dbus.socket - D-Bus User Message Bus Socket...
Mar 2 13:20:27.393055 systemd[1534]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Mar 2 13:20:27.393259 systemd[1534]: Reached target sockets.target - Sockets.
Mar 2 13:20:27.393281 systemd[1534]: Reached target basic.target - Basic System.
Mar 2 13:20:27.393343 systemd[1534]: Reached target default.target - Main User Target.
Mar 2 13:20:27.393400 systemd[1534]: Startup finished in 269ms.
Mar 2 13:20:27.394125 systemd[1]: Started user@500.service - User Manager for UID 500.
Mar 2 13:20:27.422558 systemd[1]: Started session-1.scope - Session 1 of User core.
Mar 2 13:20:27.536735 systemd[1]: Started sshd@1-10.0.0.122:22-10.0.0.1:34868.service - OpenSSH per-connection server daemon (10.0.0.1:34868).
Mar 2 13:20:27.642619 sshd[1545]: Accepted publickey for core from 10.0.0.1 port 34868 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM
Mar 2 13:20:27.648737 sshd[1545]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:20:27.669287 systemd-logind[1444]: New session 2 of user core.
Mar 2 13:20:27.679561 systemd[1]: Started session-2.scope - Session 2 of User core.
Mar 2 13:20:27.775927 sshd[1545]: pam_unix(sshd:session): session closed for user core
Mar 2 13:20:27.787204 systemd[1]: sshd@1-10.0.0.122:22-10.0.0.1:34868.service: Deactivated successfully.
Mar 2 13:20:27.790199 systemd[1]: session-2.scope: Deactivated successfully.
Mar 2 13:20:27.795759 systemd-logind[1444]: Session 2 logged out. Waiting for processes to exit.
Mar 2 13:20:27.808220 systemd[1]: Started sshd@2-10.0.0.122:22-10.0.0.1:34876.service - OpenSSH per-connection server daemon (10.0.0.1:34876).
Mar 2 13:20:27.822231 systemd-logind[1444]: Removed session 2.
Mar 2 13:20:27.858632 sshd[1552]: Accepted publickey for core from 10.0.0.1 port 34876 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM
Mar 2 13:20:27.862757 sshd[1552]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:20:27.882078 systemd-logind[1444]: New session 3 of user core.
Mar 2 13:20:27.898224 systemd[1]: Started session-3.scope - Session 3 of User core.
Mar 2 13:20:27.980334 sshd[1552]: pam_unix(sshd:session): session closed for user core
Mar 2 13:20:27.987309 systemd[1]: sshd@2-10.0.0.122:22-10.0.0.1:34876.service: Deactivated successfully.
Mar 2 13:20:27.992965 systemd[1]: session-3.scope: Deactivated successfully.
Mar 2 13:20:27.996600 systemd-logind[1444]: Session 3 logged out. Waiting for processes to exit.
Mar 2 13:20:28.001706 systemd-logind[1444]: Removed session 3.
Mar 2 13:20:28.580560 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 2 13:20:28.591498 systemd[1]: Reached target multi-user.target - Multi-User System.
Mar 2 13:20:28.592032 (kubelet)[1563]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 2 13:20:28.603633 systemd[1]: Startup finished in 3.068s (kernel) + 10.238s (initrd) + 9.499s (userspace) = 22.807s.
Mar 2 13:20:29.627890 kubelet[1563]: E0302 13:20:29.625411 1563 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 2 13:20:29.636174 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 2 13:20:29.636521 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 2 13:20:29.637410 systemd[1]: kubelet.service: Consumed 1.548s CPU time.
Mar 2 13:20:38.162240 systemd[1]: Started sshd@3-10.0.0.122:22-10.0.0.1:51842.service - OpenSSH per-connection server daemon (10.0.0.1:51842).
Mar 2 13:20:38.468576 sshd[1577]: Accepted publickey for core from 10.0.0.1 port 51842 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM
Mar 2 13:20:38.471936 sshd[1577]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:20:38.518430 systemd-logind[1444]: New session 4 of user core.
Mar 2 13:20:38.561046 systemd[1]: Started session-4.scope - Session 4 of User core.
Mar 2 13:20:38.792772 sshd[1577]: pam_unix(sshd:session): session closed for user core
Mar 2 13:20:38.862474 systemd[1]: sshd@3-10.0.0.122:22-10.0.0.1:51842.service: Deactivated successfully.
Mar 2 13:20:38.882537 systemd[1]: session-4.scope: Deactivated successfully.
Mar 2 13:20:38.954448 systemd-logind[1444]: Session 4 logged out. Waiting for processes to exit.
Mar 2 13:20:38.986100 systemd[1]: Started sshd@4-10.0.0.122:22-10.0.0.1:36234.service - OpenSSH per-connection server daemon (10.0.0.1:36234).
Mar 2 13:20:39.029952 systemd-logind[1444]: Removed session 4.
Mar 2 13:20:39.381408 sshd[1584]: Accepted publickey for core from 10.0.0.1 port 36234 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM
Mar 2 13:20:39.392715 sshd[1584]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:20:39.475121 systemd-logind[1444]: New session 5 of user core.
Mar 2 13:20:39.492356 systemd[1]: Started session-5.scope - Session 5 of User core.
Mar 2 13:20:39.677672 sshd[1584]: pam_unix(sshd:session): session closed for user core
Mar 2 13:20:39.743633 systemd[1]: sshd@4-10.0.0.122:22-10.0.0.1:36234.service: Deactivated successfully.
Mar 2 13:20:39.752239 systemd[1]: session-5.scope: Deactivated successfully.
Mar 2 13:20:39.765351 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Mar 2 13:20:39.770734 systemd-logind[1444]: Session 5 logged out. Waiting for processes to exit.
Mar 2 13:20:39.796380 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 2 13:20:39.842190 systemd[1]: Started sshd@5-10.0.0.122:22-10.0.0.1:36244.service - OpenSSH per-connection server daemon (10.0.0.1:36244).
Mar 2 13:20:39.863601 systemd-logind[1444]: Removed session 5.
Mar 2 13:20:39.976178 sshd[1593]: Accepted publickey for core from 10.0.0.1 port 36244 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM
Mar 2 13:20:39.979208 sshd[1593]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:20:40.012080 systemd-logind[1444]: New session 6 of user core.
Mar 2 13:20:40.044064 systemd[1]: Started session-6.scope - Session 6 of User core.
Mar 2 13:20:40.184315 sshd[1593]: pam_unix(sshd:session): session closed for user core
Mar 2 13:20:40.245443 systemd[1]: Started sshd@6-10.0.0.122:22-10.0.0.1:36256.service - OpenSSH per-connection server daemon (10.0.0.1:36256).
Mar 2 13:20:40.251397 systemd[1]: sshd@5-10.0.0.122:22-10.0.0.1:36244.service: Deactivated successfully.
Mar 2 13:20:40.256623 systemd[1]: session-6.scope: Deactivated successfully.
Mar 2 13:20:40.267659 systemd-logind[1444]: Session 6 logged out. Waiting for processes to exit.
Mar 2 13:20:40.282602 systemd-logind[1444]: Removed session 6.
Mar 2 13:20:40.438727 sshd[1600]: Accepted publickey for core from 10.0.0.1 port 36256 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM
Mar 2 13:20:40.442666 sshd[1600]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:20:40.459009 systemd-logind[1444]: New session 7 of user core.
Mar 2 13:20:40.486616 systemd[1]: Started session-7.scope - Session 7 of User core.
Mar 2 13:20:40.736692 sudo[1607]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Mar 2 13:20:40.739670 sudo[1607]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 2 13:20:40.747455 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 2 13:20:40.785387 (kubelet)[1611]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 2 13:20:40.808730 sudo[1607]: pam_unix(sudo:session): session closed for user root
Mar 2 13:20:40.845175 sshd[1600]: pam_unix(sshd:session): session closed for user core
Mar 2 13:20:41.039682 systemd[1]: sshd@6-10.0.0.122:22-10.0.0.1:36256.service: Deactivated successfully.
Mar 2 13:20:41.056223 systemd[1]: session-7.scope: Deactivated successfully.
Mar 2 13:20:41.069720 systemd-logind[1444]: Session 7 logged out. Waiting for processes to exit.
Mar 2 13:20:41.136273 systemd[1]: Started sshd@7-10.0.0.122:22-10.0.0.1:36258.service - OpenSSH per-connection server daemon (10.0.0.1:36258).
Mar 2 13:20:41.158189 systemd-logind[1444]: Removed session 7.
Mar 2 13:20:41.386353 sshd[1622]: Accepted publickey for core from 10.0.0.1 port 36258 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM
Mar 2 13:20:41.395302 sshd[1622]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:20:41.452418 systemd-logind[1444]: New session 8 of user core.
Mar 2 13:20:41.462192 systemd[1]: Started session-8.scope - Session 8 of User core.
Mar 2 13:20:41.602167 kubelet[1611]: E0302 13:20:41.594776 1611 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 2 13:20:41.633776 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 2 13:20:41.634325 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 2 13:20:41.661644 sudo[1629]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Mar 2 13:20:41.662345 sudo[1629]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 2 13:20:41.688297 sudo[1629]: pam_unix(sudo:session): session closed for user root
Mar 2 13:20:41.733945 sudo[1627]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Mar 2 13:20:41.734577 sudo[1627]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 2 13:20:41.850152 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Mar 2 13:20:41.868421 auditctl[1632]: No rules
Mar 2 13:20:41.869424 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 2 13:20:41.870055 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Mar 2 13:20:41.889048 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Mar 2 13:20:41.962065 kernel: hrtimer: interrupt took 25850154 ns
Mar 2 13:20:42.299542 augenrules[1650]: No rules
Mar 2 13:20:42.302937 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Mar 2 13:20:42.307039 sudo[1627]: pam_unix(sudo:session): session closed for user root
Mar 2 13:20:42.362023 sshd[1622]: pam_unix(sshd:session): session closed for user core
Mar 2 13:20:42.440594 systemd[1]: sshd@7-10.0.0.122:22-10.0.0.1:36258.service: Deactivated successfully.
Mar 2 13:20:42.446287 systemd[1]: session-8.scope: Deactivated successfully.
Mar 2 13:20:42.471413 systemd-logind[1444]: Session 8 logged out. Waiting for processes to exit.
Mar 2 13:20:42.573579 systemd[1]: Started sshd@8-10.0.0.122:22-10.0.0.1:36272.service - OpenSSH per-connection server daemon (10.0.0.1:36272).
Mar 2 13:20:42.594443 systemd-logind[1444]: Removed session 8.
Mar 2 13:20:42.939469 sshd[1658]: Accepted publickey for core from 10.0.0.1 port 36272 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM
Mar 2 13:20:42.975116 sshd[1658]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:20:43.055560 systemd-logind[1444]: New session 9 of user core.
Mar 2 13:20:43.080773 systemd[1]: Started session-9.scope - Session 9 of User core.
Mar 2 13:20:43.255556 sudo[1661]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Mar 2 13:20:43.267175 sudo[1661]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 2 13:20:43.398705 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Mar 2 13:20:43.543609 systemd[1]: coreos-metadata.service: Deactivated successfully.
Mar 2 13:20:43.544186 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Mar 2 13:20:47.348267 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 2 13:20:47.371212 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 2 13:20:47.512310 systemd[1]: Reloading requested from client PID 1713 ('systemctl') (unit session-9.scope)...
Mar 2 13:20:47.512432 systemd[1]: Reloading...
Mar 2 13:20:47.677193 zram_generator::config[1749]: No configuration found.
Mar 2 13:20:47.940582 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 2 13:20:48.144732 systemd[1]: Reloading finished in 631 ms.
Mar 2 13:20:48.292360 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Mar 2 13:20:48.292761 systemd[1]: kubelet.service: Failed with result 'signal'.
Mar 2 13:20:48.293386 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 2 13:20:48.305020 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 2 13:20:48.602432 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 2 13:20:48.610983 (kubelet)[1799]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 2 13:20:49.436085 kubelet[1799]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 2 13:20:49.731780 kubelet[1799]: I0302 13:20:49.731336 1799 server.go:525] "Kubelet version" kubeletVersion="v1.35.1"
Mar 2 13:20:49.731780 kubelet[1799]: I0302 13:20:49.731455 1799 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 2 13:20:49.733721 kubelet[1799]: I0302 13:20:49.733535 1799 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Mar 2 13:20:49.733721 kubelet[1799]: I0302 13:20:49.733612 1799 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 2 13:20:49.734374 kubelet[1799]: I0302 13:20:49.734262 1799 server.go:951] "Client rotation is on, will bootstrap in background"
Mar 2 13:20:49.761098 kubelet[1799]: I0302 13:20:49.760557 1799 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 2 13:20:49.773528 kubelet[1799]: E0302 13:20:49.773408 1799 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Mar 2 13:20:49.773528 kubelet[1799]: I0302 13:20:49.773608 1799 server.go:1395] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config."
Mar 2 13:20:49.789082 kubelet[1799]: I0302 13:20:49.788029 1799 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Mar 2 13:20:49.789358 kubelet[1799]: I0302 13:20:49.789261 1799 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 2 13:20:49.790203 kubelet[1799]: I0302 13:20:49.789309 1799 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.0.0.122","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 2 13:20:49.790203 kubelet[1799]: I0302 13:20:49.789558 1799 topology_manager.go:143] "Creating topology manager with none policy"
Mar 2 13:20:49.790203 kubelet[1799]: I0302 13:20:49.789574 1799 container_manager_linux.go:308] "Creating device plugin manager"
Mar 2 13:20:49.790203 kubelet[1799]: I0302 13:20:49.789722 1799 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager"
Mar 2 13:20:49.889127 kubelet[1799]: I0302 13:20:49.886197 1799 state_mem.go:41] "Initialized" logger="CPUManager state memory"
Mar 2 13:20:49.889127 kubelet[1799]: I0302 13:20:49.886718 1799 kubelet.go:482] "Attempting to sync node with API server"
Mar 2 13:20:49.889127 kubelet[1799]: I0302 13:20:49.886738 1799 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 2 13:20:49.889127 kubelet[1799]: I0302 13:20:49.886788 1799 kubelet.go:394] "Adding apiserver pod source"
Mar 2 13:20:49.889127 kubelet[1799]: I0302 13:20:49.887029 1799 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 2 13:20:49.889127 kubelet[1799]: E0302 13:20:49.888293 1799 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 2 13:20:49.889127 kubelet[1799]: E0302 13:20:49.888466 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 2 13:20:49.894651 kubelet[1799]: I0302 13:20:49.894334 1799 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Mar 2 13:20:49.900478 kubelet[1799]: I0302 13:20:49.898706 1799 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 2 13:20:49.900478 kubelet[1799]: I0302 13:20:49.899243 1799 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Mar 2 13:20:49.900478 kubelet[1799]: W0302 13:20:49.899378 1799 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Mar 2 13:20:49.910482 kubelet[1799]: I0302 13:20:49.910442 1799 server.go:1257] "Started kubelet"
Mar 2 13:20:49.912649 kubelet[1799]: I0302 13:20:49.912584 1799 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 2 13:20:49.917103 kubelet[1799]: I0302 13:20:49.916448 1799 server_v1.go:49] "podresources" method="list" useActivePods=true
Mar 2 13:20:49.918256 kubelet[1799]: I0302 13:20:49.917412 1799 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer"
Mar 2 13:20:49.921335 kubelet[1799]: I0302 13:20:49.916162 1799 server.go:182] "Starting to listen" address="0.0.0.0" port=10250
Mar 2 13:20:49.928766 kubelet[1799]: I0302 13:20:49.927612 1799 server.go:317] "Adding debug handlers to kubelet server"
Mar 2 13:20:49.929646 kubelet[1799]: I0302 13:20:49.929524 1799 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 2 13:20:49.941376 kubelet[1799]: I0302 13:20:49.941279 1799 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 2 13:20:49.945308 kubelet[1799]: I0302 13:20:49.945226 1799 factory.go:223] Registration of the systemd container factory successfully
Mar 2 13:20:49.945513 kubelet[1799]: I0302 13:20:49.945422 1799 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 2 13:20:49.946389 kubelet[1799]: I0302 13:20:49.946304 1799 volume_manager.go:311] "Starting Kubelet Volume Manager"
Mar 2 13:20:49.946580 kubelet[1799]: I0302 13:20:49.946499 1799 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Mar 2 13:20:49.946656 kubelet[1799]: I0302 13:20:49.946635 1799 reconciler.go:29] "Reconciler: start to sync state"
Mar 2 13:20:49.947110 kubelet[1799]: E0302 13:20:49.946961 1799 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"10.0.0.122\" not found"
Mar 2 13:20:49.951614 kubelet[1799]: I0302 13:20:49.951495 1799 factory.go:223] Registration of the containerd container factory successfully
Mar 2 13:20:49.969369 kubelet[1799]: E0302 13:20:49.968964 1799 kubelet.go:1656] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 2 13:20:49.988553 kubelet[1799]: I0302 13:20:49.988341 1799 cpu_manager.go:225] "Starting" policy="none"
Mar 2 13:20:49.988553 kubelet[1799]: I0302 13:20:49.988426 1799 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Mar 2 13:20:49.988553 kubelet[1799]: I0302 13:20:49.988450 1799 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory"
Mar 2 13:20:50.002207 kubelet[1799]: I0302 13:20:49.999387 1799 policy_none.go:50] "Start"
Mar 2 13:20:50.002207 kubelet[1799]: I0302 13:20:49.999705 1799 memory_manager.go:187] "Starting memorymanager" policy="None"
Mar 2 13:20:50.002207 kubelet[1799]: I0302 13:20:49.999726 1799 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Mar 2 13:20:50.004291 kubelet[1799]: E0302 13:20:50.003658 1799 nodelease.go:50] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.122\" not found" node="10.0.0.122"
Mar 2 13:20:50.006423 kubelet[1799]: I0302 13:20:50.006356 1799 policy_none.go:44] "Start"
Mar 2 13:20:50.027784 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Mar 2 13:20:50.047382 kubelet[1799]: E0302 13:20:50.047306 1799 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"10.0.0.122\" not found"
Mar 2 13:20:50.050973 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Mar 2 13:20:50.061582 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Mar 2 13:20:50.075358 kubelet[1799]: E0302 13:20:50.074956 1799 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Mar 2 13:20:50.075358 kubelet[1799]: I0302 13:20:50.075236 1799 eviction_manager.go:194] "Eviction manager: starting control loop"
Mar 2 13:20:50.075358 kubelet[1799]: I0302 13:20:50.075248 1799 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 2 13:20:50.076023 kubelet[1799]: I0302 13:20:50.076004 1799 plugin_manager.go:121] "Starting Kubelet Plugin Manager"
Mar 2 13:20:50.080597 kubelet[1799]: E0302 13:20:50.079636 1799 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Mar 2 13:20:50.080597 kubelet[1799]: E0302 13:20:50.079998 1799 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.122\" not found"
Mar 2 13:20:50.116037 sudo[1661]: pam_unix(sudo:session): session closed for user root
Mar 2 13:20:50.122595 sshd[1658]: pam_unix(sshd:session): session closed for user core
Mar 2 13:20:50.132542 systemd[1]: sshd@8-10.0.0.122:22-10.0.0.1:36272.service: Deactivated successfully.
Mar 2 13:20:50.135955 kubelet[1799]: I0302 13:20:50.135721 1799 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Mar 2 13:20:50.139627 systemd[1]: session-9.scope: Deactivated successfully.
Mar 2 13:20:50.140410 systemd[1]: session-9.scope: Consumed 3.728s CPU time, 78.1M memory peak, 0B memory swap peak.
Mar 2 13:20:50.142380 kubelet[1799]: I0302 13:20:50.142298 1799 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Mar 2 13:20:50.142380 kubelet[1799]: I0302 13:20:50.142376 1799 status_manager.go:249] "Starting to sync pod status with apiserver"
Mar 2 13:20:50.142530 kubelet[1799]: I0302 13:20:50.142408 1799 kubelet.go:2501] "Starting kubelet main sync loop"
Mar 2 13:20:50.142576 kubelet[1799]: E0302 13:20:50.142527 1799 kubelet.go:2525] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Mar 2 13:20:50.148530 systemd-logind[1444]: Session 9 logged out. Waiting for processes to exit.
Mar 2 13:20:50.154719 systemd-logind[1444]: Removed session 9.
Mar 2 13:20:50.182335 kubelet[1799]: I0302 13:20:50.182208 1799 kubelet_node_status.go:74] "Attempting to register node" node="10.0.0.122"
Mar 2 13:20:50.195443 kubelet[1799]: I0302 13:20:50.195271 1799 kubelet_node_status.go:77] "Successfully registered node" node="10.0.0.122"
Mar 2 13:20:50.195443 kubelet[1799]: E0302 13:20:50.195326 1799 kubelet_node_status.go:474] "Error updating node status, will retry" err="error getting node \"10.0.0.122\": node \"10.0.0.122\" not found"
Mar 2 13:20:50.221483 kubelet[1799]: I0302 13:20:50.221270 1799 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24"
Mar 2 13:20:50.222344 containerd[1459]: time="2026-03-02T13:20:50.222075332Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Mar 2 13:20:50.222996 kubelet[1799]: I0302 13:20:50.222633 1799 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24"
Mar 2 13:20:50.738185 kubelet[1799]: I0302 13:20:50.738020 1799 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Mar 2 13:20:50.739231 kubelet[1799]: I0302 13:20:50.738492 1799 reflector.go:578] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service" err="very short watch: k8s.io/client-go/informers/factory.go:161: Unexpected watch close - watch lasted less than a second and no items received"
Mar 2 13:20:50.739231 kubelet[1799]: I0302 13:20:50.738610 1799 reflector.go:578] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver" err="very short watch: k8s.io/client-go/informers/factory.go:161: Unexpected watch close - watch lasted less than a second and no items received"
Mar 2 13:20:50.739231 kubelet[1799]: I0302 13:20:50.738997 1799 reflector.go:578] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.RuntimeClass" err="very short watch: k8s.io/client-go/informers/factory.go:161: Unexpected watch close - watch lasted less than a second and no items received"
Mar 2 13:20:50.888973 kubelet[1799]: E0302 13:20:50.888732 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 2 13:20:50.888973 kubelet[1799]: I0302 13:20:50.888984 1799 apiserver.go:52] "Watching apiserver"
Mar 2 13:20:50.933279 systemd[1]: Created slice kubepods-burstable-pod6030bdb5_024c_411d_863f_2a21e280ca68.slice - libcontainer container kubepods-burstable-pod6030bdb5_024c_411d_863f_2a21e280ca68.slice.
Mar 2 13:20:50.949507 kubelet[1799]: I0302 13:20:50.948777 1799 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Mar 2 13:20:50.953604 kubelet[1799]: I0302 13:20:50.953492 1799 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6030bdb5-024c-411d-863f-2a21e280ca68-etc-cni-netd\") pod \"cilium-8kns8\" (UID: \"6030bdb5-024c-411d-863f-2a21e280ca68\") " pod="kube-system/cilium-8kns8"
Mar 2 13:20:50.954184 kubelet[1799]: I0302 13:20:50.954075 1799 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6030bdb5-024c-411d-863f-2a21e280ca68-xtables-lock\") pod \"cilium-8kns8\" (UID: \"6030bdb5-024c-411d-863f-2a21e280ca68\") " pod="kube-system/cilium-8kns8"
Mar 2 13:20:50.954939 kubelet[1799]: I0302 13:20:50.954359 1799 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6030bdb5-024c-411d-863f-2a21e280ca68-hubble-tls\") pod \"cilium-8kns8\" (UID: \"6030bdb5-024c-411d-863f-2a21e280ca68\") " pod="kube-system/cilium-8kns8"
Mar 2 13:20:50.955640 kubelet[1799]: I0302 13:20:50.955435 1799 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9555d311-5719-4132-a8c7-b5935050ab5f-kube-proxy\") pod \"kube-proxy-mxdhf\" (UID: \"9555d311-5719-4132-a8c7-b5935050ab5f\") " pod="kube-system/kube-proxy-mxdhf"
Mar 2 13:20:50.956114 kubelet[1799]: I0302 13:20:50.955738 1799 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9555d311-5719-4132-a8c7-b5935050ab5f-lib-modules\") pod \"kube-proxy-mxdhf\" (UID: \"9555d311-5719-4132-a8c7-b5935050ab5f\") " pod="kube-system/kube-proxy-mxdhf"
Mar 2 13:20:50.956114 kubelet[1799]: I0302 13:20:50.955950 1799 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmcrt\" (UniqueName: \"kubernetes.io/projected/9555d311-5719-4132-a8c7-b5935050ab5f-kube-api-access-fmcrt\") pod \"kube-proxy-mxdhf\" (UID: \"9555d311-5719-4132-a8c7-b5935050ab5f\") " pod="kube-system/kube-proxy-mxdhf"
Mar 2 13:20:50.956114 kubelet[1799]: I0302 13:20:50.956045 1799 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6030bdb5-024c-411d-863f-2a21e280ca68-cilium-run\") pod \"cilium-8kns8\" (UID: \"6030bdb5-024c-411d-863f-2a21e280ca68\") " pod="kube-system/cilium-8kns8"
Mar 2 13:20:50.956114 kubelet[1799]: I0302 13:20:50.956080 1799 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6030bdb5-024c-411d-863f-2a21e280ca68-hostproc\") pod \"cilium-8kns8\" (UID: \"6030bdb5-024c-411d-863f-2a21e280ca68\") " pod="kube-system/cilium-8kns8"
Mar 2 13:20:50.956279 kubelet[1799]: I0302 13:20:50.956178 1799 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6030bdb5-024c-411d-863f-2a21e280ca68-lib-modules\") pod \"cilium-8kns8\" (UID: \"6030bdb5-024c-411d-863f-2a21e280ca68\") " pod="kube-system/cilium-8kns8"
Mar 2 13:20:50.956279 kubelet[1799]: I0302 13:20:50.956209 1799 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6030bdb5-024c-411d-863f-2a21e280ca68-clustermesh-secrets\") pod \"cilium-8kns8\" (UID: \"6030bdb5-024c-411d-863f-2a21e280ca68\") " pod="kube-system/cilium-8kns8"
Mar 2 13:20:50.956780 kubelet[1799]: I0302 13:20:50.956488 1799 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6030bdb5-024c-411d-863f-2a21e280ca68-host-proc-sys-net\") pod \"cilium-8kns8\" (UID: \"6030bdb5-024c-411d-863f-2a21e280ca68\") " pod="kube-system/cilium-8kns8"
Mar 2 13:20:50.956780 kubelet[1799]: I0302 13:20:50.956709 1799 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6030bdb5-024c-411d-863f-2a21e280ca68-bpf-maps\") pod \"cilium-8kns8\" (UID: \"6030bdb5-024c-411d-863f-2a21e280ca68\") " pod="kube-system/cilium-8kns8"
Mar 2 13:20:50.956780 kubelet[1799]: I0302 13:20:50.956742 1799 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6030bdb5-024c-411d-863f-2a21e280ca68-cilium-cgroup\") pod \"cilium-8kns8\" (UID: \"6030bdb5-024c-411d-863f-2a21e280ca68\") " pod="kube-system/cilium-8kns8"
Mar 2 13:20:50.956780 kubelet[1799]: I0302 13:20:50.956767 1799 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjzj6\" (UniqueName: \"kubernetes.io/projected/6030bdb5-024c-411d-863f-2a21e280ca68-kube-api-access-xjzj6\") pod \"cilium-8kns8\" (UID: \"6030bdb5-024c-411d-863f-2a21e280ca68\") " pod="kube-system/cilium-8kns8"
Mar 2 13:20:50.957073 kubelet[1799]: I0302 13:20:50.956991 1799 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6030bdb5-024c-411d-863f-2a21e280ca68-cilium-config-path\") pod \"cilium-8kns8\" (UID: \"6030bdb5-024c-411d-863f-2a21e280ca68\") " pod="kube-system/cilium-8kns8"
Mar 2 13:20:50.957073 kubelet[1799]: I0302 13:20:50.957021 1799 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9555d311-5719-4132-a8c7-b5935050ab5f-xtables-lock\") pod \"kube-proxy-mxdhf\" (UID: \"9555d311-5719-4132-a8c7-b5935050ab5f\") " pod="kube-system/kube-proxy-mxdhf"
Mar 2 13:20:50.957073 kubelet[1799]: I0302 13:20:50.957044 1799 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6030bdb5-024c-411d-863f-2a21e280ca68-host-proc-sys-kernel\") pod \"cilium-8kns8\" (UID: \"6030bdb5-024c-411d-863f-2a21e280ca68\") " pod="kube-system/cilium-8kns8"
Mar 2 13:20:50.957073 kubelet[1799]: I0302 13:20:50.957067 1799 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6030bdb5-024c-411d-863f-2a21e280ca68-cni-path\") pod \"cilium-8kns8\" (UID: \"6030bdb5-024c-411d-863f-2a21e280ca68\") " pod="kube-system/cilium-8kns8"
Mar 2 13:20:50.960774 systemd[1]: Created slice kubepods-besteffort-pod9555d311_5719_4132_a8c7_b5935050ab5f.slice - libcontainer container kubepods-besteffort-pod9555d311_5719_4132_a8c7_b5935050ab5f.slice.
Mar 2 13:20:51.261022 kubelet[1799]: E0302 13:20:51.260156 1799 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:20:51.262626 containerd[1459]: time="2026-03-02T13:20:51.261536280Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8kns8,Uid:6030bdb5-024c-411d-863f-2a21e280ca68,Namespace:kube-system,Attempt:0,}"
Mar 2 13:20:51.284440 kubelet[1799]: E0302 13:20:51.284310 1799 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:20:51.285343 containerd[1459]: time="2026-03-02T13:20:51.285224715Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mxdhf,Uid:9555d311-5719-4132-a8c7-b5935050ab5f,Namespace:kube-system,Attempt:0,}"
Mar 2 13:20:51.889279 kubelet[1799]: E0302 13:20:51.889125 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 2 13:20:52.164189 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1304247701.mount: Deactivated successfully.
Mar 2 13:20:52.179931 containerd[1459]: time="2026-03-02T13:20:52.179338054Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 2 13:20:52.184217 containerd[1459]: time="2026-03-02T13:20:52.184062456Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 2 13:20:52.186203 containerd[1459]: time="2026-03-02T13:20:52.186076463Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Mar 2 13:20:52.187659 containerd[1459]: time="2026-03-02T13:20:52.187491322Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Mar 2 13:20:52.189055 containerd[1459]: time="2026-03-02T13:20:52.188960390Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 2 13:20:52.193987 containerd[1459]: time="2026-03-02T13:20:52.193578941Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 2 13:20:52.195863 containerd[1459]: time="2026-03-02T13:20:52.195626341Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 910.257946ms"
Mar 2 13:20:52.199673 containerd[1459]: time="2026-03-02T13:20:52.199528907Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 937.923427ms"
Mar 2 13:20:52.341598 containerd[1459]: time="2026-03-02T13:20:52.340931074Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 2 13:20:52.341598 containerd[1459]: time="2026-03-02T13:20:52.341239927Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 2 13:20:52.341598 containerd[1459]: time="2026-03-02T13:20:52.341263112Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 2 13:20:52.342303 containerd[1459]: time="2026-03-02T13:20:52.341756985Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 2 13:20:52.345146 containerd[1459]: time="2026-03-02T13:20:52.344024060Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 2 13:20:52.345146 containerd[1459]: time="2026-03-02T13:20:52.344103570Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 2 13:20:52.345146 containerd[1459]: time="2026-03-02T13:20:52.344126353Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 2 13:20:52.345146 containerd[1459]: time="2026-03-02T13:20:52.344683295Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 2 13:20:52.455395 systemd[1]: Started cri-containerd-26737f0cd6e352f4ff53e8359ea8c5cd1ee8ffb88db85b61d267f98182ffa8e2.scope - libcontainer container 26737f0cd6e352f4ff53e8359ea8c5cd1ee8ffb88db85b61d267f98182ffa8e2.
Mar 2 13:20:52.458702 systemd[1]: Started cri-containerd-a3087c42abbc4bb2b2555c297dd139c8b6f929ab33ab008b33dff22b1337d4e5.scope - libcontainer container a3087c42abbc4bb2b2555c297dd139c8b6f929ab33ab008b33dff22b1337d4e5.
Mar 2 13:20:52.530382 containerd[1459]: time="2026-03-02T13:20:52.530346040Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8kns8,Uid:6030bdb5-024c-411d-863f-2a21e280ca68,Namespace:kube-system,Attempt:0,} returns sandbox id \"a3087c42abbc4bb2b2555c297dd139c8b6f929ab33ab008b33dff22b1337d4e5\""
Mar 2 13:20:52.533089 kubelet[1799]: E0302 13:20:52.532654 1799 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:20:52.538040 containerd[1459]: time="2026-03-02T13:20:52.537996054Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Mar 2 13:20:52.541312 containerd[1459]: time="2026-03-02T13:20:52.541281576Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mxdhf,Uid:9555d311-5719-4132-a8c7-b5935050ab5f,Namespace:kube-system,Attempt:0,} returns sandbox id \"26737f0cd6e352f4ff53e8359ea8c5cd1ee8ffb88db85b61d267f98182ffa8e2\""
Mar 2 13:20:52.544225 kubelet[1799]: E0302 13:20:52.544038 1799 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:20:52.891116 kubelet[1799]: E0302 13:20:52.890431 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 2 13:20:53.891281 kubelet[1799]: E0302 13:20:53.890933 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 2 13:20:54.892217 kubelet[1799]: E0302 13:20:54.891203 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 2 13:20:55.892761 kubelet[1799]: E0302 13:20:55.892467 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 2 13:20:56.893709 kubelet[1799]: E0302 13:20:56.893581 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 2 13:20:57.691530 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1970901458.mount: Deactivated successfully.
Mar 2 13:20:57.894291 kubelet[1799]: E0302 13:20:57.893985 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 2 13:20:58.894370 kubelet[1799]: E0302 13:20:58.894254 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 2 13:20:59.895202 kubelet[1799]: E0302 13:20:59.895153 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 2 13:21:00.896192 kubelet[1799]: E0302 13:21:00.895933 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 2 13:21:01.896614 kubelet[1799]: E0302 13:21:01.896437 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 2 13:21:02.185144 containerd[1459]: time="2026-03-02T13:21:02.183302986Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:21:02.191358 containerd[1459]: time="2026-03-02T13:21:02.190504314Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503"
Mar 2 13:21:02.194652 containerd[1459]: time="2026-03-02T13:21:02.194046458Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:21:02.197046 containerd[1459]: time="2026-03-02T13:21:02.196733168Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 9.65860308s"
Mar 2 13:21:02.197046 containerd[1459]: time="2026-03-02T13:21:02.196983016Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Mar 2 13:21:02.211152 containerd[1459]: time="2026-03-02T13:21:02.210153475Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.2\""
Mar 2 13:21:02.224428 containerd[1459]: time="2026-03-02T13:21:02.221350769Z" level=info msg="CreateContainer within sandbox \"a3087c42abbc4bb2b2555c297dd139c8b6f929ab33ab008b33dff22b1337d4e5\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Mar 2 13:21:02.262634 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2683659928.mount: Deactivated successfully.
Mar 2 13:21:02.286319 containerd[1459]: time="2026-03-02T13:21:02.286123252Z" level=info msg="CreateContainer within sandbox \"a3087c42abbc4bb2b2555c297dd139c8b6f929ab33ab008b33dff22b1337d4e5\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"db6ffbe8c661dd7e342ba34b5aa9522ff9da87b1ebfb5cbf2a40462ffef1e163\""
Mar 2 13:21:02.291329 containerd[1459]: time="2026-03-02T13:21:02.290594442Z" level=info msg="StartContainer for \"db6ffbe8c661dd7e342ba34b5aa9522ff9da87b1ebfb5cbf2a40462ffef1e163\""
Mar 2 13:21:02.385490 systemd[1]: Started cri-containerd-db6ffbe8c661dd7e342ba34b5aa9522ff9da87b1ebfb5cbf2a40462ffef1e163.scope - libcontainer container db6ffbe8c661dd7e342ba34b5aa9522ff9da87b1ebfb5cbf2a40462ffef1e163.
Mar 2 13:21:02.469649 containerd[1459]: time="2026-03-02T13:21:02.468765636Z" level=info msg="StartContainer for \"db6ffbe8c661dd7e342ba34b5aa9522ff9da87b1ebfb5cbf2a40462ffef1e163\" returns successfully"
Mar 2 13:21:02.499139 systemd[1]: cri-containerd-db6ffbe8c661dd7e342ba34b5aa9522ff9da87b1ebfb5cbf2a40462ffef1e163.scope: Deactivated successfully.
Mar 2 13:21:02.739626 containerd[1459]: time="2026-03-02T13:21:02.739106721Z" level=info msg="shim disconnected" id=db6ffbe8c661dd7e342ba34b5aa9522ff9da87b1ebfb5cbf2a40462ffef1e163 namespace=k8s.io
Mar 2 13:21:02.739626 containerd[1459]: time="2026-03-02T13:21:02.739189276Z" level=warning msg="cleaning up after shim disconnected" id=db6ffbe8c661dd7e342ba34b5aa9522ff9da87b1ebfb5cbf2a40462ffef1e163 namespace=k8s.io
Mar 2 13:21:02.739626 containerd[1459]: time="2026-03-02T13:21:02.739206670Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 2 13:21:02.900629 kubelet[1799]: E0302 13:21:02.899229 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 2 13:21:03.220983 kubelet[1799]: E0302 13:21:03.220536 1799 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:21:03.240261 containerd[1459]: time="2026-03-02T13:21:03.237264635Z" level=info msg="CreateContainer within sandbox \"a3087c42abbc4bb2b2555c297dd139c8b6f929ab33ab008b33dff22b1337d4e5\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Mar 2 13:21:03.264662 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-db6ffbe8c661dd7e342ba34b5aa9522ff9da87b1ebfb5cbf2a40462ffef1e163-rootfs.mount: Deactivated successfully.
Mar 2 13:21:03.356258 containerd[1459]: time="2026-03-02T13:21:03.356191358Z" level=info msg="CreateContainer within sandbox \"a3087c42abbc4bb2b2555c297dd139c8b6f929ab33ab008b33dff22b1337d4e5\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"6a614089cefb2d36ac153fa463bbd760db401ca97ef306ff1e8c529f19cc94e1\""
Mar 2 13:21:03.384652 containerd[1459]: time="2026-03-02T13:21:03.380180866Z" level=info msg="StartContainer for \"6a614089cefb2d36ac153fa463bbd760db401ca97ef306ff1e8c529f19cc94e1\""
Mar 2 13:21:03.484285 systemd[1]: Started cri-containerd-6a614089cefb2d36ac153fa463bbd760db401ca97ef306ff1e8c529f19cc94e1.scope - libcontainer container 6a614089cefb2d36ac153fa463bbd760db401ca97ef306ff1e8c529f19cc94e1.
Mar 2 13:21:03.588719 containerd[1459]: time="2026-03-02T13:21:03.588437132Z" level=info msg="StartContainer for \"6a614089cefb2d36ac153fa463bbd760db401ca97ef306ff1e8c529f19cc94e1\" returns successfully"
Mar 2 13:21:03.631529 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 2 13:21:03.633529 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 2 13:21:03.633634 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Mar 2 13:21:03.660453 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 2 13:21:03.663275 systemd[1]: cri-containerd-6a614089cefb2d36ac153fa463bbd760db401ca97ef306ff1e8c529f19cc94e1.scope: Deactivated successfully.
Mar 2 13:21:03.716501 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 2 13:21:03.805346 containerd[1459]: time="2026-03-02T13:21:03.805078212Z" level=info msg="shim disconnected" id=6a614089cefb2d36ac153fa463bbd760db401ca97ef306ff1e8c529f19cc94e1 namespace=k8s.io
Mar 2 13:21:03.805346 containerd[1459]: time="2026-03-02T13:21:03.805205474Z" level=warning msg="cleaning up after shim disconnected" id=6a614089cefb2d36ac153fa463bbd760db401ca97ef306ff1e8c529f19cc94e1 namespace=k8s.io
Mar 2 13:21:03.805346 containerd[1459]: time="2026-03-02T13:21:03.805223259Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 2 13:21:03.901237 kubelet[1799]: E0302 13:21:03.901181 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 2 13:21:04.233568 kubelet[1799]: E0302 13:21:04.233368 1799 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:21:04.248724 containerd[1459]: time="2026-03-02T13:21:04.248580891Z" level=info msg="CreateContainer within sandbox \"a3087c42abbc4bb2b2555c297dd139c8b6f929ab33ab008b33dff22b1337d4e5\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Mar 2 13:21:04.263533 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6a614089cefb2d36ac153fa463bbd760db401ca97ef306ff1e8c529f19cc94e1-rootfs.mount: Deactivated successfully.
Mar 2 13:21:04.305531 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4211096801.mount: Deactivated successfully.
Mar 2 13:21:04.323663 containerd[1459]: time="2026-03-02T13:21:04.323603197Z" level=info msg="CreateContainer within sandbox \"a3087c42abbc4bb2b2555c297dd139c8b6f929ab33ab008b33dff22b1337d4e5\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"716d9ebbabdd275f31ae090d9f2b8d41a446f10ab7800ab8818b74dbea856ed1\""
Mar 2 13:21:04.327931 containerd[1459]: time="2026-03-02T13:21:04.325237905Z" level=info msg="StartContainer for \"716d9ebbabdd275f31ae090d9f2b8d41a446f10ab7800ab8818b74dbea856ed1\""
Mar 2 13:21:04.431349 systemd[1]: Started cri-containerd-716d9ebbabdd275f31ae090d9f2b8d41a446f10ab7800ab8818b74dbea856ed1.scope - libcontainer container 716d9ebbabdd275f31ae090d9f2b8d41a446f10ab7800ab8818b74dbea856ed1.
Mar 2 13:21:04.508003 systemd[1]: cri-containerd-716d9ebbabdd275f31ae090d9f2b8d41a446f10ab7800ab8818b74dbea856ed1.scope: Deactivated successfully.
Mar 2 13:21:04.511237 containerd[1459]: time="2026-03-02T13:21:04.511001461Z" level=info msg="StartContainer for \"716d9ebbabdd275f31ae090d9f2b8d41a446f10ab7800ab8818b74dbea856ed1\" returns successfully"
Mar 2 13:21:04.668248 containerd[1459]: time="2026-03-02T13:21:04.667564223Z" level=info msg="shim disconnected" id=716d9ebbabdd275f31ae090d9f2b8d41a446f10ab7800ab8818b74dbea856ed1 namespace=k8s.io
Mar 2 13:21:04.668248 containerd[1459]: time="2026-03-02T13:21:04.667688539Z" level=warning msg="cleaning up after shim disconnected" id=716d9ebbabdd275f31ae090d9f2b8d41a446f10ab7800ab8818b74dbea856ed1 namespace=k8s.io
Mar 2 13:21:04.668248 containerd[1459]: time="2026-03-02T13:21:04.667702116Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 2 13:21:04.904461 kubelet[1799]: E0302 13:21:04.904408 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 2 13:21:05.252481 kubelet[1799]: E0302 13:21:05.248076 1799 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:21:05.260254 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-716d9ebbabdd275f31ae090d9f2b8d41a446f10ab7800ab8818b74dbea856ed1-rootfs.mount: Deactivated successfully.
Mar 2 13:21:05.260419 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2888108716.mount: Deactivated successfully.
Mar 2 13:21:05.272478 containerd[1459]: time="2026-03-02T13:21:05.272356646Z" level=info msg="CreateContainer within sandbox \"a3087c42abbc4bb2b2555c297dd139c8b6f929ab33ab008b33dff22b1337d4e5\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 2 13:21:05.321345 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1326467032.mount: Deactivated successfully.
Mar 2 13:21:05.338456 containerd[1459]: time="2026-03-02T13:21:05.337684956Z" level=info msg="CreateContainer within sandbox \"a3087c42abbc4bb2b2555c297dd139c8b6f929ab33ab008b33dff22b1337d4e5\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"9fd2349742b8325e606b73a53f662455a5589dfc662dc89ad4fced5de49e9723\""
Mar 2 13:21:05.339547 containerd[1459]: time="2026-03-02T13:21:05.339392523Z" level=info msg="StartContainer for \"9fd2349742b8325e606b73a53f662455a5589dfc662dc89ad4fced5de49e9723\""
Mar 2 13:21:05.425667 systemd[1]: Started cri-containerd-9fd2349742b8325e606b73a53f662455a5589dfc662dc89ad4fced5de49e9723.scope - libcontainer container 9fd2349742b8325e606b73a53f662455a5589dfc662dc89ad4fced5de49e9723.
Mar 2 13:21:05.519435 systemd[1]: cri-containerd-9fd2349742b8325e606b73a53f662455a5589dfc662dc89ad4fced5de49e9723.scope: Deactivated successfully.
Mar 2 13:21:05.528255 containerd[1459]: time="2026-03-02T13:21:05.526302987Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6030bdb5_024c_411d_863f_2a21e280ca68.slice/cri-containerd-9fd2349742b8325e606b73a53f662455a5589dfc662dc89ad4fced5de49e9723.scope/cgroup.events\": no such file or directory"
Mar 2 13:21:05.531095 containerd[1459]: time="2026-03-02T13:21:05.531015719Z" level=info msg="StartContainer for \"9fd2349742b8325e606b73a53f662455a5589dfc662dc89ad4fced5de49e9723\" returns successfully"
Mar 2 13:21:05.696738 containerd[1459]: time="2026-03-02T13:21:05.696665409Z" level=info msg="shim disconnected" id=9fd2349742b8325e606b73a53f662455a5589dfc662dc89ad4fced5de49e9723 namespace=k8s.io
Mar 2 13:21:05.697461 containerd[1459]: time="2026-03-02T13:21:05.697429569Z" level=warning msg="cleaning up after shim disconnected" id=9fd2349742b8325e606b73a53f662455a5589dfc662dc89ad4fced5de49e9723 namespace=k8s.io
Mar 2 13:21:05.697552 containerd[1459]: time="2026-03-02T13:21:05.697531931Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 2 13:21:05.905977 kubelet[1799]: E0302 13:21:05.905193 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 2 13:21:06.260648 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9fd2349742b8325e606b73a53f662455a5589dfc662dc89ad4fced5de49e9723-rootfs.mount: Deactivated successfully.
Mar 2 13:21:06.267418 kubelet[1799]: E0302 13:21:06.267121 1799 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:21:06.290122 containerd[1459]: time="2026-03-02T13:21:06.288119189Z" level=info msg="CreateContainer within sandbox \"a3087c42abbc4bb2b2555c297dd139c8b6f929ab33ab008b33dff22b1337d4e5\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 2 13:21:06.375320 containerd[1459]: time="2026-03-02T13:21:06.375181314Z" level=info msg="CreateContainer within sandbox \"a3087c42abbc4bb2b2555c297dd139c8b6f929ab33ab008b33dff22b1337d4e5\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"2bec111d4552c820e451987606c0110a7d8fc70d3bb027c1adebf3f1969591f6\""
Mar 2 13:21:06.376965 containerd[1459]: time="2026-03-02T13:21:06.376638985Z" level=info msg="StartContainer for \"2bec111d4552c820e451987606c0110a7d8fc70d3bb027c1adebf3f1969591f6\""
Mar 2 13:21:06.444062 containerd[1459]: time="2026-03-02T13:21:06.442402689Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:21:06.446195 containerd[1459]: time="2026-03-02T13:21:06.445769732Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.35.2: active requests=0, bytes read=25685312"
Mar 2 13:21:06.449134 containerd[1459]: time="2026-03-02T13:21:06.449028694Z" level=info msg="ImageCreate event name:\"sha256:3c471cf273e44f68c91b48985c27627d581915b9ee5e72f7227bbf2146008b5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:21:06.454670 containerd[1459]: time="2026-03-02T13:21:06.454564018Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:015265214cc874b593a7adccdcfe4ac15d2b8e9ae89881bdcd5bcb99d42e1862\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:21:06.457784 containerd[1459]: time="2026-03-02T13:21:06.457543009Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.35.2\" with image id \"sha256:3c471cf273e44f68c91b48985c27627d581915b9ee5e72f7227bbf2146008b5e\", repo tag \"registry.k8s.io/kube-proxy:v1.35.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:015265214cc874b593a7adccdcfe4ac15d2b8e9ae89881bdcd5bcb99d42e1862\", size \"25684331\" in 4.247032625s"
Mar 2 13:21:06.457784 containerd[1459]: time="2026-03-02T13:21:06.457572818Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.2\" returns image reference \"sha256:3c471cf273e44f68c91b48985c27627d581915b9ee5e72f7227bbf2146008b5e\""
Mar 2 13:21:06.478215 systemd[1]: Started cri-containerd-2bec111d4552c820e451987606c0110a7d8fc70d3bb027c1adebf3f1969591f6.scope - libcontainer container 2bec111d4552c820e451987606c0110a7d8fc70d3bb027c1adebf3f1969591f6.
Mar 2 13:21:06.480442 containerd[1459]: time="2026-03-02T13:21:06.480310879Z" level=info msg="CreateContainer within sandbox \"26737f0cd6e352f4ff53e8359ea8c5cd1ee8ffb88db85b61d267f98182ffa8e2\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Mar 2 13:21:06.540264 containerd[1459]: time="2026-03-02T13:21:06.540086174Z" level=info msg="CreateContainer within sandbox \"26737f0cd6e352f4ff53e8359ea8c5cd1ee8ffb88db85b61d267f98182ffa8e2\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"68c3887e093328ca952d5a71efbc1a07ea3599a0aee1bd1894897441c924a8f6\""
Mar 2 13:21:06.541512 containerd[1459]: time="2026-03-02T13:21:06.541254811Z" level=info msg="StartContainer for \"68c3887e093328ca952d5a71efbc1a07ea3599a0aee1bd1894897441c924a8f6\""
Mar 2 13:21:06.572125 containerd[1459]: time="2026-03-02T13:21:06.572075661Z" level=info msg="StartContainer for \"2bec111d4552c820e451987606c0110a7d8fc70d3bb027c1adebf3f1969591f6\" returns successfully"
Mar 2 13:21:06.662419 systemd[1]: Started cri-containerd-68c3887e093328ca952d5a71efbc1a07ea3599a0aee1bd1894897441c924a8f6.scope - libcontainer container 68c3887e093328ca952d5a71efbc1a07ea3599a0aee1bd1894897441c924a8f6.
Mar 2 13:21:06.777335 containerd[1459]: time="2026-03-02T13:21:06.777272233Z" level=info msg="StartContainer for \"68c3887e093328ca952d5a71efbc1a07ea3599a0aee1bd1894897441c924a8f6\" returns successfully"
Mar 2 13:21:06.905322 kubelet[1799]: I0302 13:21:06.905284 1799 kubelet_node_status.go:427] "Fast updating node status as it just became ready"
Mar 2 13:21:06.907267 kubelet[1799]: E0302 13:21:06.906686 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 2 13:21:07.262558 systemd[1]: run-containerd-runc-k8s.io-2bec111d4552c820e451987606c0110a7d8fc70d3bb027c1adebf3f1969591f6-runc.O6kfY6.mount: Deactivated successfully.
Mar 2 13:21:07.276459 kubelet[1799]: E0302 13:21:07.275459 1799 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:21:07.282004 kubelet[1799]: E0302 13:21:07.281521 1799 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:21:07.314557 kubelet[1799]: I0302 13:21:07.314359 1799 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-mxdhf" podStartSLOduration=3.395725015 podStartE2EDuration="17.314339135s" podCreationTimestamp="2026-03-02 13:20:50 +0000 UTC" firstStartedPulling="2026-03-02 13:20:52.54574355 +0000 UTC m=+3.472327778" lastFinishedPulling="2026-03-02 13:21:06.46435767 +0000 UTC m=+17.390941898" observedRunningTime="2026-03-02 13:21:07.314262979 +0000 UTC m=+18.240847207" watchObservedRunningTime="2026-03-02 13:21:07.314339135 +0000 UTC m=+18.240923373"
Mar 2 13:21:07.359302 kubelet[1799]: I0302 13:21:07.358920 1799 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/cilium-8kns8" podStartSLOduration=3.622226732 podStartE2EDuration="17.358754669s" podCreationTimestamp="2026-03-02 13:20:50 +0000 UTC" firstStartedPulling="2026-03-02 13:20:52.536087513 +0000 UTC m=+3.462671751" lastFinishedPulling="2026-03-02 13:21:06.27261545 +0000 UTC m=+17.199199688" observedRunningTime="2026-03-02 13:21:07.348238826 +0000 UTC m=+18.274823074" watchObservedRunningTime="2026-03-02 13:21:07.358754669 +0000 UTC m=+18.285338937"
Mar 2 13:21:07.908330 kubelet[1799]: E0302 13:21:07.908073 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 2 13:21:08.298164 kubelet[1799]: E0302 13:21:08.298033 1799 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:21:08.300639 kubelet[1799]: E0302 13:21:08.300133 1799 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:21:08.910230 kubelet[1799]: E0302 13:21:08.909514 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 2 13:21:09.303321 kubelet[1799]: E0302 13:21:09.302594 1799 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:21:09.889157 kubelet[1799]: E0302 13:21:09.888202 1799 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 2 13:21:09.910450 kubelet[1799]: E0302 13:21:09.910378 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 2 13:21:10.912343 kubelet[1799]: E0302 13:21:10.911604 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 2 13:21:11.802645 update_engine[1450]: I20260302 13:21:11.801997 1450 update_attempter.cc:509] Updating boot flags...
Mar 2 13:21:11.911969 kubelet[1799]: E0302 13:21:11.911930 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 2 13:21:11.939503 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2504)
Mar 2 13:21:12.016953 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2502)
Mar 2 13:21:12.120140 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2502)
Mar 2 13:21:12.694125 systemd[1]: Created slice kubepods-besteffort-podbb8a2d95_096c_467a_8f33_5bee2a3db108.slice - libcontainer container kubepods-besteffort-podbb8a2d95_096c_467a_8f33_5bee2a3db108.slice.
Mar 2 13:21:12.803008 kubelet[1799]: I0302 13:21:12.801670 1799 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtsm4\" (UniqueName: \"kubernetes.io/projected/bb8a2d95-096c-467a-8f33-5bee2a3db108-kube-api-access-rtsm4\") pod \"nginx-deployment-6cc69d4fc7-nkzwj\" (UID: \"bb8a2d95-096c-467a-8f33-5bee2a3db108\") " pod="default/nginx-deployment-6cc69d4fc7-nkzwj"
Mar 2 13:21:12.916718 kubelet[1799]: E0302 13:21:12.914017 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 2 13:21:13.008719 containerd[1459]: time="2026-03-02T13:21:13.008396451Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6cc69d4fc7-nkzwj,Uid:bb8a2d95-096c-467a-8f33-5bee2a3db108,Namespace:default,Attempt:0,}"
Mar 2 13:21:13.916010 kubelet[1799]: E0302 13:21:13.915607 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 2 13:21:14.917231 kubelet[1799]: E0302 13:21:14.916706 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 2 13:21:15.935676 kubelet[1799]: E0302 13:21:15.933728 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 2 13:21:16.955365 kubelet[1799]: E0302 13:21:16.946021 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 2 13:21:17.981352 kubelet[1799]: E0302 13:21:17.980448 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 2 13:21:18.991655 kubelet[1799]: E0302 13:21:18.982269 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 2 13:21:19.984242 kubelet[1799]: E0302 13:21:19.984046 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 2 13:21:20.997014 kubelet[1799]: E0302 13:21:20.995016 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 2 13:21:22.000128 kubelet[1799]: E0302 13:21:21.996752 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 2 13:21:22.999985 kubelet[1799]: E0302 13:21:22.999613 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 2 13:21:24.000645 kubelet[1799]: E0302 13:21:24.000060 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 2 13:21:25.000568 kubelet[1799]: E0302 13:21:25.000390 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 2 13:21:26.001623
kubelet[1799]: E0302 13:21:26.001242 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 2 13:21:27.012050 kubelet[1799]: E0302 13:21:27.008385 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 2 13:21:28.010715 kubelet[1799]: E0302 13:21:28.008970 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 2 13:21:29.011217 kubelet[1799]: E0302 13:21:29.010722 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 2 13:21:29.887953 kubelet[1799]: E0302 13:21:29.887615 1799 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 2 13:21:30.012867 kubelet[1799]: E0302 13:21:30.012581 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 2 13:21:31.013129 kubelet[1799]: E0302 13:21:31.012977 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 2 13:21:32.013937 kubelet[1799]: E0302 13:21:32.013380 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 2 13:21:33.016548 kubelet[1799]: E0302 13:21:33.016467 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 2 13:21:34.017275 kubelet[1799]: E0302 13:21:34.017008 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 2 13:21:35.020249 kubelet[1799]: E0302 13:21:35.018115 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 2 13:21:36.019315 kubelet[1799]: E0302 
13:21:36.018414 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 2 13:21:36.660788 kubelet[1799]: E0302 13:21:36.660628 1799 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:21:37.019571 kubelet[1799]: E0302 13:21:37.019220 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 2 13:21:38.022340 kubelet[1799]: E0302 13:21:38.019632 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 2 13:21:39.022497 kubelet[1799]: E0302 13:21:39.022179 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 2 13:21:40.025637 kubelet[1799]: E0302 13:21:40.022599 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 2 13:21:41.023978 kubelet[1799]: E0302 13:21:41.023658 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 2 13:21:42.027716 kubelet[1799]: E0302 13:21:42.025163 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 2 13:21:42.694944 kernel: Initializing XFRM netlink socket Mar 2 13:21:43.035626 kubelet[1799]: E0302 13:21:43.035320 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 2 13:21:44.043121 kubelet[1799]: E0302 13:21:44.042674 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 2 13:21:45.045085 kubelet[1799]: E0302 13:21:45.044458 1799 file_linux.go:61] "Unable to read config path" err="path 
does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 2 13:21:45.300626 systemd-networkd[1386]: cilium_host: Link UP Mar 2 13:21:45.301139 systemd-networkd[1386]: cilium_net: Link UP Mar 2 13:21:45.301145 systemd-networkd[1386]: cilium_net: Gained carrier Mar 2 13:21:45.301504 systemd-networkd[1386]: cilium_host: Gained carrier Mar 2 13:21:45.304005 systemd-networkd[1386]: cilium_host: Gained IPv6LL Mar 2 13:21:45.459492 systemd-networkd[1386]: cilium_net: Gained IPv6LL Mar 2 13:21:45.718572 systemd-networkd[1386]: cilium_vxlan: Link UP Mar 2 13:21:45.718952 systemd-networkd[1386]: cilium_vxlan: Gained carrier Mar 2 13:21:46.059748 kubelet[1799]: E0302 13:21:46.057670 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 2 13:21:47.061673 kubelet[1799]: E0302 13:21:47.059058 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 2 13:21:47.110082 systemd-networkd[1386]: cilium_vxlan: Gained IPv6LL Mar 2 13:21:47.471923 kernel: NET: Registered PF_ALG protocol family Mar 2 13:21:48.061614 kubelet[1799]: E0302 13:21:48.061527 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 2 13:21:49.063287 kubelet[1799]: E0302 13:21:49.063235 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 2 13:21:49.586449 systemd-networkd[1386]: lxc_health: Link UP Mar 2 13:21:49.611538 systemd-networkd[1386]: lxc_health: Gained carrier Mar 2 13:21:49.888271 kubelet[1799]: E0302 13:21:49.888175 1799 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 2 13:21:50.065169 kubelet[1799]: E0302 13:21:50.065090 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 2 
13:21:50.542960 containerd[1459]: level=warning msg="Errors encountered while deleting endpoint" error="[DELETE /endpoint/{id}][404] deleteEndpointIdNotFound " subsys=cilium-cni Mar 2 13:21:50.544444 systemd[1]: run-netns-cni\x2d53cf985a\x2d60ac\x2d6d76\x2dd30a\x2d7f8d20e2f2a4.mount: Deactivated successfully. Mar 2 13:21:50.548918 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-88417ac300e287eef333a8ece4c4df56949bf0703ddc438110ac16eadffc200d-shm.mount: Deactivated successfully. Mar 2 13:21:50.553512 containerd[1459]: time="2026-03-02T13:21:50.553189560Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6cc69d4fc7-nkzwj,Uid:bb8a2d95-096c-467a-8f33-5bee2a3db108,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"88417ac300e287eef333a8ece4c4df56949bf0703ddc438110ac16eadffc200d\": plugin type=\"cilium-cni\" name=\"cilium\" failed (add): unable to connect to Cilium daemon: failed to create cilium agent client after 30.000000 seconds timeout: Get \"http:///var/run/cilium/cilium.sock/v1/config\": dial unix /var/run/cilium/cilium.sock: connect: no such file or directory\nIs the agent running?" Mar 2 13:21:50.557116 kubelet[1799]: E0302 13:21:50.555952 1799 log.go:32] "RunPodSandbox from runtime service failed" err=< Mar 2 13:21:50.557116 kubelet[1799]: rpc error: code = Unknown desc = failed to setup network for sandbox "88417ac300e287eef333a8ece4c4df56949bf0703ddc438110ac16eadffc200d": plugin type="cilium-cni" name="cilium" failed (add): unable to connect to Cilium daemon: failed to create cilium agent client after 30.000000 seconds timeout: Get "http:///var/run/cilium/cilium.sock/v1/config": dial unix /var/run/cilium/cilium.sock: connect: no such file or directory Mar 2 13:21:50.557116 kubelet[1799]: Is the agent running? 
Mar 2 13:21:50.557116 kubelet[1799]: > Mar 2 13:21:50.557116 kubelet[1799]: E0302 13:21:50.556081 1799 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err=< Mar 2 13:21:50.557116 kubelet[1799]: rpc error: code = Unknown desc = failed to setup network for sandbox "88417ac300e287eef333a8ece4c4df56949bf0703ddc438110ac16eadffc200d": plugin type="cilium-cni" name="cilium" failed (add): unable to connect to Cilium daemon: failed to create cilium agent client after 30.000000 seconds timeout: Get "http:///var/run/cilium/cilium.sock/v1/config": dial unix /var/run/cilium/cilium.sock: connect: no such file or directory Mar 2 13:21:50.557116 kubelet[1799]: Is the agent running? Mar 2 13:21:50.557116 kubelet[1799]: > pod="default/nginx-deployment-6cc69d4fc7-nkzwj" Mar 2 13:21:50.557116 kubelet[1799]: E0302 13:21:50.556107 1799 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err=< Mar 2 13:21:50.557116 kubelet[1799]: rpc error: code = Unknown desc = failed to setup network for sandbox "88417ac300e287eef333a8ece4c4df56949bf0703ddc438110ac16eadffc200d": plugin type="cilium-cni" name="cilium" failed (add): unable to connect to Cilium daemon: failed to create cilium agent client after 30.000000 seconds timeout: Get "http:///var/run/cilium/cilium.sock/v1/config": dial unix /var/run/cilium/cilium.sock: connect: no such file or directory Mar 2 13:21:50.557116 kubelet[1799]: Is the agent running? 
Mar 2 13:21:50.557116 kubelet[1799]: > pod="default/nginx-deployment-6cc69d4fc7-nkzwj" Mar 2 13:21:50.557562 kubelet[1799]: E0302 13:21:50.556300 1799 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-6cc69d4fc7-nkzwj_default(bb8a2d95-096c-467a-8f33-5bee2a3db108)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-6cc69d4fc7-nkzwj_default(bb8a2d95-096c-467a-8f33-5bee2a3db108)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"88417ac300e287eef333a8ece4c4df56949bf0703ddc438110ac16eadffc200d\\\": plugin type=\\\"cilium-cni\\\" name=\\\"cilium\\\" failed (add): unable to connect to Cilium daemon: failed to create cilium agent client after 30.000000 seconds timeout: Get \\\"http:///var/run/cilium/cilium.sock/v1/config\\\": dial unix /var/run/cilium/cilium.sock: connect: no such file or directory\\nIs the agent running?\"" pod="default/nginx-deployment-6cc69d4fc7-nkzwj" podUID="bb8a2d95-096c-467a-8f33-5bee2a3db108" Mar 2 13:21:50.687685 kubelet[1799]: E0302 13:21:50.685369 1799 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:21:50.747199 kubelet[1799]: E0302 13:21:50.747136 1799 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:21:51.066243 kubelet[1799]: E0302 13:21:51.066010 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 2 13:21:51.204641 systemd-networkd[1386]: lxc_health: Gained IPv6LL Mar 2 13:21:52.068441 kubelet[1799]: E0302 13:21:52.067057 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 2 13:21:53.068532 kubelet[1799]: E0302 
13:21:53.068301 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 2 13:21:54.070104 kubelet[1799]: E0302 13:21:54.069756 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 2 13:21:55.071493 kubelet[1799]: E0302 13:21:55.071303 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 2 13:21:56.072538 kubelet[1799]: E0302 13:21:56.072293 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 2 13:21:57.073993 kubelet[1799]: E0302 13:21:57.073713 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 2 13:21:58.075330 kubelet[1799]: E0302 13:21:58.075016 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 2 13:21:59.076531 kubelet[1799]: E0302 13:21:59.076378 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 2 13:22:00.076930 kubelet[1799]: E0302 13:22:00.076579 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 2 13:22:01.078171 kubelet[1799]: E0302 13:22:01.077968 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 2 13:22:02.079207 kubelet[1799]: E0302 13:22:02.078680 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 2 13:22:02.151625 containerd[1459]: time="2026-03-02T13:22:02.151507608Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6cc69d4fc7-nkzwj,Uid:bb8a2d95-096c-467a-8f33-5bee2a3db108,Namespace:default,Attempt:0,}" 
Mar 2 13:22:02.657429 systemd-networkd[1386]: lxcf3d8a4bc50b6: Link UP Mar 2 13:22:02.668627 kernel: eth0: renamed from tmpd6ada Mar 2 13:22:02.680017 systemd-networkd[1386]: lxcf3d8a4bc50b6: Gained carrier Mar 2 13:22:03.080768 kubelet[1799]: E0302 13:22:03.080413 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 2 13:22:03.395901 containerd[1459]: time="2026-03-02T13:22:03.395447332Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 2 13:22:03.395901 containerd[1459]: time="2026-03-02T13:22:03.395722417Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 2 13:22:03.396509 containerd[1459]: time="2026-03-02T13:22:03.396015744Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:22:03.399369 containerd[1459]: time="2026-03-02T13:22:03.399045100Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:22:03.478297 systemd[1]: Started cri-containerd-d6ada95b42f9549cf4fa4208071d77db9f7a27ceec13f6feec3eb61c933e60c4.scope - libcontainer container d6ada95b42f9549cf4fa4208071d77db9f7a27ceec13f6feec3eb61c933e60c4. 
Mar 2 13:22:03.507548 systemd-resolved[1388]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 2 13:22:03.602228 containerd[1459]: time="2026-03-02T13:22:03.602065359Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6cc69d4fc7-nkzwj,Uid:bb8a2d95-096c-467a-8f33-5bee2a3db108,Namespace:default,Attempt:0,} returns sandbox id \"d6ada95b42f9549cf4fa4208071d77db9f7a27ceec13f6feec3eb61c933e60c4\"" Mar 2 13:22:03.605136 containerd[1459]: time="2026-03-02T13:22:03.605095534Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Mar 2 13:22:04.003510 systemd-networkd[1386]: lxcf3d8a4bc50b6: Gained IPv6LL Mar 2 13:22:04.081778 kubelet[1799]: E0302 13:22:04.081537 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 2 13:22:05.173455 kubelet[1799]: E0302 13:22:05.119350 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 2 13:22:06.149578 kubelet[1799]: E0302 13:22:06.149076 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 2 13:22:07.150378 kubelet[1799]: E0302 13:22:07.150119 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 2 13:22:08.150595 kubelet[1799]: E0302 13:22:08.150520 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 2 13:22:09.151423 kubelet[1799]: E0302 13:22:09.151195 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 2 13:22:09.601405 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2381380481.mount: Deactivated successfully. 
Mar 2 13:22:09.887914 kubelet[1799]: E0302 13:22:09.887880 1799 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 2 13:22:10.152190 kubelet[1799]: E0302 13:22:10.151971 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 2 13:22:10.975521 containerd[1459]: time="2026-03-02T13:22:10.975337749Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:22:10.976725 containerd[1459]: time="2026-03-02T13:22:10.976593207Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=63907328" Mar 2 13:22:10.978483 containerd[1459]: time="2026-03-02T13:22:10.978297699Z" level=info msg="ImageCreate event name:\"sha256:1f1a56031783bd6c9b1c02e432c6eabf091ec9558780f69aadad131b0d641a21\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:22:10.982089 containerd[1459]: time="2026-03-02T13:22:10.981781957Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:94d84a302e569aca6fb7eed139af2d59a3cba208311ad18b69a7d799472c2b22\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:22:10.983952 containerd[1459]: time="2026-03-02T13:22:10.983722948Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:1f1a56031783bd6c9b1c02e432c6eabf091ec9558780f69aadad131b0d641a21\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:94d84a302e569aca6fb7eed139af2d59a3cba208311ad18b69a7d799472c2b22\", size \"63907206\" in 7.378575863s" Mar 2 13:22:10.983952 containerd[1459]: time="2026-03-02T13:22:10.983893528Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:1f1a56031783bd6c9b1c02e432c6eabf091ec9558780f69aadad131b0d641a21\"" Mar 2 13:22:10.991096 containerd[1459]: 
time="2026-03-02T13:22:10.990982908Z" level=info msg="CreateContainer within sandbox \"d6ada95b42f9549cf4fa4208071d77db9f7a27ceec13f6feec3eb61c933e60c4\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Mar 2 13:22:11.014982 containerd[1459]: time="2026-03-02T13:22:11.014745534Z" level=info msg="CreateContainer within sandbox \"d6ada95b42f9549cf4fa4208071d77db9f7a27ceec13f6feec3eb61c933e60c4\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"bb5a5087cd0faa6f1255815212b80f57fcd8d9e43f3e7a58adb4085f2f2dae4e\"" Mar 2 13:22:11.016496 containerd[1459]: time="2026-03-02T13:22:11.016151107Z" level=info msg="StartContainer for \"bb5a5087cd0faa6f1255815212b80f57fcd8d9e43f3e7a58adb4085f2f2dae4e\"" Mar 2 13:22:11.072210 systemd[1]: Started cri-containerd-bb5a5087cd0faa6f1255815212b80f57fcd8d9e43f3e7a58adb4085f2f2dae4e.scope - libcontainer container bb5a5087cd0faa6f1255815212b80f57fcd8d9e43f3e7a58adb4085f2f2dae4e. Mar 2 13:22:11.120599 containerd[1459]: time="2026-03-02T13:22:11.120492622Z" level=info msg="StartContainer for \"bb5a5087cd0faa6f1255815212b80f57fcd8d9e43f3e7a58adb4085f2f2dae4e\" returns successfully" Mar 2 13:22:11.153625 kubelet[1799]: E0302 13:22:11.153397 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 2 13:22:12.153717 kubelet[1799]: E0302 13:22:12.153605 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 2 13:22:13.155101 kubelet[1799]: E0302 13:22:13.154671 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 2 13:22:14.156162 kubelet[1799]: E0302 13:22:14.155924 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 2 13:22:15.157416 kubelet[1799]: E0302 13:22:15.157252 1799 file_linux.go:61] "Unable to read config path" err="path does not 
exist, ignoring" path="/etc/kubernetes/manifests" Mar 2 13:22:16.149286 kubelet[1799]: E0302 13:22:16.149168 1799 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:22:16.157631 kubelet[1799]: E0302 13:22:16.157485 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 2 13:22:16.899434 kubelet[1799]: I0302 13:22:16.899294 1799 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="default/nginx-deployment-6cc69d4fc7-nkzwj" podStartSLOduration=57.518526008 podStartE2EDuration="1m4.899271563s" podCreationTimestamp="2026-03-02 13:21:12 +0000 UTC" firstStartedPulling="2026-03-02 13:22:03.604440512 +0000 UTC m=+74.531024741" lastFinishedPulling="2026-03-02 13:22:10.985186068 +0000 UTC m=+81.911770296" observedRunningTime="2026-03-02 13:22:11.176468331 +0000 UTC m=+82.103052559" watchObservedRunningTime="2026-03-02 13:22:16.899271563 +0000 UTC m=+87.825855791" Mar 2 13:22:16.910058 kubelet[1799]: I0302 13:22:16.909993 1799 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/e30059cb-b7e3-4276-8e81-700d2f2e8cc2-data\") pod \"nfs-server-provisioner-0\" (UID: \"e30059cb-b7e3-4276-8e81-700d2f2e8cc2\") " pod="default/nfs-server-provisioner-0" Mar 2 13:22:16.910058 kubelet[1799]: I0302 13:22:16.910059 1799 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrxzv\" (UniqueName: \"kubernetes.io/projected/e30059cb-b7e3-4276-8e81-700d2f2e8cc2-kube-api-access-wrxzv\") pod \"nfs-server-provisioner-0\" (UID: \"e30059cb-b7e3-4276-8e81-700d2f2e8cc2\") " pod="default/nfs-server-provisioner-0" Mar 2 13:22:16.914381 systemd[1]: Created slice kubepods-besteffort-pode30059cb_b7e3_4276_8e81_700d2f2e8cc2.slice - libcontainer 
container kubepods-besteffort-pode30059cb_b7e3_4276_8e81_700d2f2e8cc2.slice. Mar 2 13:22:17.158313 kubelet[1799]: E0302 13:22:17.157919 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 2 13:22:17.223674 containerd[1459]: time="2026-03-02T13:22:17.223574297Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:e30059cb-b7e3-4276-8e81-700d2f2e8cc2,Namespace:default,Attempt:0,}" Mar 2 13:22:17.286711 systemd-networkd[1386]: lxc9a2442da2120: Link UP Mar 2 13:22:17.299013 kernel: eth0: renamed from tmp53e5d Mar 2 13:22:17.313168 systemd-networkd[1386]: lxc9a2442da2120: Gained carrier Mar 2 13:22:17.645213 containerd[1459]: time="2026-03-02T13:22:17.644999195Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 2 13:22:17.646135 containerd[1459]: time="2026-03-02T13:22:17.645700776Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 2 13:22:17.647736 containerd[1459]: time="2026-03-02T13:22:17.647595170Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:22:17.648345 containerd[1459]: time="2026-03-02T13:22:17.647941462Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:22:17.692361 systemd[1]: Started cri-containerd-53e5d985be753123b27e63f472f395e40beeea03bd70f2dacf0aa0425510fd8e.scope - libcontainer container 53e5d985be753123b27e63f472f395e40beeea03bd70f2dacf0aa0425510fd8e. 
Mar 2 13:22:17.724071 systemd-resolved[1388]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 2 13:22:17.801384 containerd[1459]: time="2026-03-02T13:22:17.801145106Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:e30059cb-b7e3-4276-8e81-700d2f2e8cc2,Namespace:default,Attempt:0,} returns sandbox id \"53e5d985be753123b27e63f472f395e40beeea03bd70f2dacf0aa0425510fd8e\"" Mar 2 13:22:17.804582 containerd[1459]: time="2026-03-02T13:22:17.804539504Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Mar 2 13:22:18.158584 kubelet[1799]: E0302 13:22:18.158351 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 2 13:22:18.403665 systemd-networkd[1386]: lxc9a2442da2120: Gained IPv6LL Mar 2 13:22:19.159782 kubelet[1799]: E0302 13:22:19.159582 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 2 13:22:20.160950 kubelet[1799]: E0302 13:22:20.160771 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 2 13:22:20.241049 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount352423038.mount: Deactivated successfully. 
Mar 2 13:22:21.161767 kubelet[1799]: E0302 13:22:21.161472 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 2 13:22:22.161919 kubelet[1799]: E0302 13:22:22.161618 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 2 13:22:22.508500 containerd[1459]: time="2026-03-02T13:22:22.508202209Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:22:22.509908 containerd[1459]: time="2026-03-02T13:22:22.509867492Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039406" Mar 2 13:22:22.511653 containerd[1459]: time="2026-03-02T13:22:22.511504576Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:22:22.514664 containerd[1459]: time="2026-03-02T13:22:22.514589667Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:22:22.517076 containerd[1459]: time="2026-03-02T13:22:22.517002502Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 4.712327599s" Mar 2 13:22:22.517076 containerd[1459]: time="2026-03-02T13:22:22.517066820Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns 
image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Mar 2 13:22:22.524494 containerd[1459]: time="2026-03-02T13:22:22.524304540Z" level=info msg="CreateContainer within sandbox \"53e5d985be753123b27e63f472f395e40beeea03bd70f2dacf0aa0425510fd8e\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Mar 2 13:22:22.544428 containerd[1459]: time="2026-03-02T13:22:22.544303596Z" level=info msg="CreateContainer within sandbox \"53e5d985be753123b27e63f472f395e40beeea03bd70f2dacf0aa0425510fd8e\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"a7633f9ac8c51f94fb20398d35a8db1976cdd22a2d36f9b7d1545a636c961cb3\"" Mar 2 13:22:22.545449 containerd[1459]: time="2026-03-02T13:22:22.545344617Z" level=info msg="StartContainer for \"a7633f9ac8c51f94fb20398d35a8db1976cdd22a2d36f9b7d1545a636c961cb3\"" Mar 2 13:22:22.646003 systemd[1]: Started cri-containerd-a7633f9ac8c51f94fb20398d35a8db1976cdd22a2d36f9b7d1545a636c961cb3.scope - libcontainer container a7633f9ac8c51f94fb20398d35a8db1976cdd22a2d36f9b7d1545a636c961cb3. 
Mar 2 13:22:22.727232 containerd[1459]: time="2026-03-02T13:22:22.726700570Z" level=info msg="StartContainer for \"a7633f9ac8c51f94fb20398d35a8db1976cdd22a2d36f9b7d1545a636c961cb3\" returns successfully"
Mar 2 13:22:23.162541 kubelet[1799]: E0302 13:22:23.162361 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 2 13:22:24.164191 kubelet[1799]: E0302 13:22:24.164006 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 2 13:22:25.165327 kubelet[1799]: E0302 13:22:25.165142 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 2 13:22:26.165762 kubelet[1799]: E0302 13:22:26.165551 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 2 13:22:27.169934 kubelet[1799]: E0302 13:22:27.169587 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 2 13:22:28.174415 kubelet[1799]: E0302 13:22:28.174288 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 2 13:22:28.193256 kubelet[1799]: I0302 13:22:28.193029 1799 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=7.478138239 podStartE2EDuration="12.192986866s" podCreationTimestamp="2026-03-02 13:22:16 +0000 UTC" firstStartedPulling="2026-03-02 13:22:17.803532285 +0000 UTC m=+88.730116514" lastFinishedPulling="2026-03-02 13:22:22.518380913 +0000 UTC m=+93.444965141" observedRunningTime="2026-03-02 13:22:23.213546934 +0000 UTC m=+94.140131173" watchObservedRunningTime="2026-03-02 13:22:28.192986866 +0000 UTC m=+99.119571113"
Mar 2 13:22:28.385944 systemd[1]: Created slice kubepods-besteffort-pod9d1a7caf_45be_4888_a547_01846a3c8a01.slice - libcontainer container kubepods-besteffort-pod9d1a7caf_45be_4888_a547_01846a3c8a01.slice.
Mar 2 13:22:28.459347 kubelet[1799]: I0302 13:22:28.454103 1799 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-72b4233c-4903-453d-a1b9-45c0f7bfb426\" (UniqueName: \"kubernetes.io/nfs/9d1a7caf-45be-4888-a547-01846a3c8a01-pvc-72b4233c-4903-453d-a1b9-45c0f7bfb426\") pod \"test-pod-1\" (UID: \"9d1a7caf-45be-4888-a547-01846a3c8a01\") " pod="default/test-pod-1"
Mar 2 13:22:28.466422 kubelet[1799]: I0302 13:22:28.466178 1799 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5l7v\" (UniqueName: \"kubernetes.io/projected/9d1a7caf-45be-4888-a547-01846a3c8a01-kube-api-access-d5l7v\") pod \"test-pod-1\" (UID: \"9d1a7caf-45be-4888-a547-01846a3c8a01\") " pod="default/test-pod-1"
Mar 2 13:22:28.866917 kernel: FS-Cache: Loaded
Mar 2 13:22:29.180397 kubelet[1799]: E0302 13:22:29.179490 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 2 13:22:29.191591 kernel: RPC: Registered named UNIX socket transport module.
Mar 2 13:22:29.192614 kernel: RPC: Registered udp transport module.
Mar 2 13:22:29.192672 kernel: RPC: Registered tcp transport module.
Mar 2 13:22:29.198415 kernel: RPC: Registered tcp-with-tls transport module.
Mar 2 13:22:29.200460 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Mar 2 13:22:29.690313 kernel: NFS: Registering the id_resolver key type
Mar 2 13:22:29.690576 kernel: Key type id_resolver registered
Mar 2 13:22:29.690614 kernel: Key type id_legacy registered
Mar 2 13:22:29.774977 nfsidmap[3264]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain'
Mar 2 13:22:29.791756 nfsidmap[3267]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain'
Mar 2 13:22:29.887345 kubelet[1799]: E0302 13:22:29.887249 1799 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 2 13:22:29.961438 containerd[1459]: time="2026-03-02T13:22:29.960258019Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:9d1a7caf-45be-4888-a547-01846a3c8a01,Namespace:default,Attempt:0,}"
Mar 2 13:22:30.068449 systemd-networkd[1386]: lxc39aa8277770a: Link UP
Mar 2 13:22:30.086265 kernel: eth0: renamed from tmp5a17d
Mar 2 13:22:30.098323 systemd-networkd[1386]: lxc39aa8277770a: Gained carrier
Mar 2 13:22:30.184407 kubelet[1799]: E0302 13:22:30.184361 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 2 13:22:30.580051 containerd[1459]: time="2026-03-02T13:22:30.577465533Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 2 13:22:30.580051 containerd[1459]: time="2026-03-02T13:22:30.577644342Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 2 13:22:30.580051 containerd[1459]: time="2026-03-02T13:22:30.577730280Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 2 13:22:30.580051 containerd[1459]: time="2026-03-02T13:22:30.578575187Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 2 13:22:30.651466 systemd[1]: Started cri-containerd-5a17d432d48edad5dcaad9bfd86d8ab578b657a56c89bfcb446bb2bff2305e22.scope - libcontainer container 5a17d432d48edad5dcaad9bfd86d8ab578b657a56c89bfcb446bb2bff2305e22.
Mar 2 13:22:30.680545 systemd-resolved[1388]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Mar 2 13:22:30.755255 containerd[1459]: time="2026-03-02T13:22:30.755096546Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:9d1a7caf-45be-4888-a547-01846a3c8a01,Namespace:default,Attempt:0,} returns sandbox id \"5a17d432d48edad5dcaad9bfd86d8ab578b657a56c89bfcb446bb2bff2305e22\""
Mar 2 13:22:30.764041 containerd[1459]: time="2026-03-02T13:22:30.763761828Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Mar 2 13:22:30.901572 containerd[1459]: time="2026-03-02T13:22:30.901269810Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:22:30.903462 containerd[1459]: time="2026-03-02T13:22:30.903278897Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61"
Mar 2 13:22:30.918330 containerd[1459]: time="2026-03-02T13:22:30.918151913Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:1f1a56031783bd6c9b1c02e432c6eabf091ec9558780f69aadad131b0d641a21\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:94d84a302e569aca6fb7eed139af2d59a3cba208311ad18b69a7d799472c2b22\", size \"63907206\" in 154.103426ms"
Mar 2 13:22:30.918330 containerd[1459]: time="2026-03-02T13:22:30.918237791Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:1f1a56031783bd6c9b1c02e432c6eabf091ec9558780f69aadad131b0d641a21\""
Mar 2 13:22:30.941312 containerd[1459]: time="2026-03-02T13:22:30.941166851Z" level=info msg="CreateContainer within sandbox \"5a17d432d48edad5dcaad9bfd86d8ab578b657a56c89bfcb446bb2bff2305e22\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Mar 2 13:22:30.989531 containerd[1459]: time="2026-03-02T13:22:30.989350738Z" level=info msg="CreateContainer within sandbox \"5a17d432d48edad5dcaad9bfd86d8ab578b657a56c89bfcb446bb2bff2305e22\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"ad2b92431d967fd2bf87e56467ebc7cd654ab8e78f12f7d69cf7bd69c13b0e9c\""
Mar 2 13:22:30.992058 containerd[1459]: time="2026-03-02T13:22:30.991984574Z" level=info msg="StartContainer for \"ad2b92431d967fd2bf87e56467ebc7cd654ab8e78f12f7d69cf7bd69c13b0e9c\""
Mar 2 13:22:31.171766 systemd[1]: Started cri-containerd-ad2b92431d967fd2bf87e56467ebc7cd654ab8e78f12f7d69cf7bd69c13b0e9c.scope - libcontainer container ad2b92431d967fd2bf87e56467ebc7cd654ab8e78f12f7d69cf7bd69c13b0e9c.
Mar 2 13:22:31.186547 kubelet[1799]: E0302 13:22:31.186446 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 2 13:22:31.205289 systemd-networkd[1386]: lxc39aa8277770a: Gained IPv6LL
Mar 2 13:22:31.288970 containerd[1459]: time="2026-03-02T13:22:31.288722679Z" level=info msg="StartContainer for \"ad2b92431d967fd2bf87e56467ebc7cd654ab8e78f12f7d69cf7bd69c13b0e9c\" returns successfully"
Mar 2 13:22:32.187420 kubelet[1799]: E0302 13:22:32.187097 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 2 13:22:32.319149 kubelet[1799]: I0302 13:22:32.318291 1799 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=15.160156511 podStartE2EDuration="15.318274056s" podCreationTimestamp="2026-03-02 13:22:17 +0000 UTC" firstStartedPulling="2026-03-02 13:22:30.762109333 +0000 UTC m=+101.688693561" lastFinishedPulling="2026-03-02 13:22:30.920226878 +0000 UTC m=+101.846811106" observedRunningTime="2026-03-02 13:22:32.318100667 +0000 UTC m=+103.244684916" watchObservedRunningTime="2026-03-02 13:22:32.318274056 +0000 UTC m=+103.244858304"
Mar 2 13:22:33.188227 kubelet[1799]: E0302 13:22:33.188006 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 2 13:22:34.189344 kubelet[1799]: E0302 13:22:34.189131 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 2 13:22:34.824727 systemd[1]: run-containerd-runc-k8s.io-2bec111d4552c820e451987606c0110a7d8fc70d3bb027c1adebf3f1969591f6-runc.w6WMd7.mount: Deactivated successfully.
Mar 2 13:22:34.847776 containerd[1459]: time="2026-03-02T13:22:34.847723398Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 2 13:22:34.861290 containerd[1459]: time="2026-03-02T13:22:34.861196946Z" level=info msg="StopContainer for \"2bec111d4552c820e451987606c0110a7d8fc70d3bb027c1adebf3f1969591f6\" with timeout 2 (s)"
Mar 2 13:22:34.861880 containerd[1459]: time="2026-03-02T13:22:34.861721325Z" level=info msg="Stop container \"2bec111d4552c820e451987606c0110a7d8fc70d3bb027c1adebf3f1969591f6\" with signal terminated"
Mar 2 13:22:34.879400 systemd-networkd[1386]: lxc_health: Link DOWN
Mar 2 13:22:34.879410 systemd-networkd[1386]: lxc_health: Lost carrier
Mar 2 13:22:34.914251 systemd[1]: cri-containerd-2bec111d4552c820e451987606c0110a7d8fc70d3bb027c1adebf3f1969591f6.scope: Deactivated successfully.
Mar 2 13:22:34.914907 systemd[1]: cri-containerd-2bec111d4552c820e451987606c0110a7d8fc70d3bb027c1adebf3f1969591f6.scope: Consumed 18.112s CPU time.
Mar 2 13:22:34.963179 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2bec111d4552c820e451987606c0110a7d8fc70d3bb027c1adebf3f1969591f6-rootfs.mount: Deactivated successfully.
Mar 2 13:22:34.977537 containerd[1459]: time="2026-03-02T13:22:34.977265482Z" level=info msg="shim disconnected" id=2bec111d4552c820e451987606c0110a7d8fc70d3bb027c1adebf3f1969591f6 namespace=k8s.io
Mar 2 13:22:34.977537 containerd[1459]: time="2026-03-02T13:22:34.977380054Z" level=warning msg="cleaning up after shim disconnected" id=2bec111d4552c820e451987606c0110a7d8fc70d3bb027c1adebf3f1969591f6 namespace=k8s.io
Mar 2 13:22:34.977537 containerd[1459]: time="2026-03-02T13:22:34.977393248Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 2 13:22:35.011344 containerd[1459]: time="2026-03-02T13:22:35.011159265Z" level=info msg="StopContainer for \"2bec111d4552c820e451987606c0110a7d8fc70d3bb027c1adebf3f1969591f6\" returns successfully"
Mar 2 13:22:35.012742 containerd[1459]: time="2026-03-02T13:22:35.012485189Z" level=info msg="StopPodSandbox for \"a3087c42abbc4bb2b2555c297dd139c8b6f929ab33ab008b33dff22b1337d4e5\""
Mar 2 13:22:35.012742 containerd[1459]: time="2026-03-02T13:22:35.012571449Z" level=info msg="Container to stop \"2bec111d4552c820e451987606c0110a7d8fc70d3bb027c1adebf3f1969591f6\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 2 13:22:35.012742 containerd[1459]: time="2026-03-02T13:22:35.012591766Z" level=info msg="Container to stop \"db6ffbe8c661dd7e342ba34b5aa9522ff9da87b1ebfb5cbf2a40462ffef1e163\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 2 13:22:35.012742 containerd[1459]: time="2026-03-02T13:22:35.012609740Z" level=info msg="Container to stop \"6a614089cefb2d36ac153fa463bbd760db401ca97ef306ff1e8c529f19cc94e1\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 2 13:22:35.012742 containerd[1459]: time="2026-03-02T13:22:35.012624517Z" level=info msg="Container to stop \"716d9ebbabdd275f31ae090d9f2b8d41a446f10ab7800ab8818b74dbea856ed1\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 2 13:22:35.012742 containerd[1459]: time="2026-03-02T13:22:35.012704725Z" level=info msg="Container to stop \"9fd2349742b8325e606b73a53f662455a5589dfc662dc89ad4fced5de49e9723\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 2 13:22:35.016319 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a3087c42abbc4bb2b2555c297dd139c8b6f929ab33ab008b33dff22b1337d4e5-shm.mount: Deactivated successfully.
Mar 2 13:22:35.027763 systemd[1]: cri-containerd-a3087c42abbc4bb2b2555c297dd139c8b6f929ab33ab008b33dff22b1337d4e5.scope: Deactivated successfully.
Mar 2 13:22:35.068758 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a3087c42abbc4bb2b2555c297dd139c8b6f929ab33ab008b33dff22b1337d4e5-rootfs.mount: Deactivated successfully.
Mar 2 13:22:35.078325 containerd[1459]: time="2026-03-02T13:22:35.077784079Z" level=info msg="shim disconnected" id=a3087c42abbc4bb2b2555c297dd139c8b6f929ab33ab008b33dff22b1337d4e5 namespace=k8s.io
Mar 2 13:22:35.080120 containerd[1459]: time="2026-03-02T13:22:35.079923160Z" level=warning msg="cleaning up after shim disconnected" id=a3087c42abbc4bb2b2555c297dd139c8b6f929ab33ab008b33dff22b1337d4e5 namespace=k8s.io
Mar 2 13:22:35.080120 containerd[1459]: time="2026-03-02T13:22:35.079989743Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 2 13:22:35.115004 containerd[1459]: time="2026-03-02T13:22:35.114917552Z" level=info msg="TearDown network for sandbox \"a3087c42abbc4bb2b2555c297dd139c8b6f929ab33ab008b33dff22b1337d4e5\" successfully"
Mar 2 13:22:35.115004 containerd[1459]: time="2026-03-02T13:22:35.114983705Z" level=info msg="StopPodSandbox for \"a3087c42abbc4bb2b2555c297dd139c8b6f929ab33ab008b33dff22b1337d4e5\" returns successfully"
Mar 2 13:22:35.190511 kubelet[1799]: E0302 13:22:35.190326 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 2 13:22:35.214612 kubelet[1799]: E0302 13:22:35.214378 1799 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Mar 2 13:22:35.279155 kubelet[1799]: I0302 13:22:35.278967 1799 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/6030bdb5-024c-411d-863f-2a21e280ca68-etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6030bdb5-024c-411d-863f-2a21e280ca68-etc-cni-netd\") pod \"6030bdb5-024c-411d-863f-2a21e280ca68\" (UID: \"6030bdb5-024c-411d-863f-2a21e280ca68\") "
Mar 2 13:22:35.279155 kubelet[1799]: I0302 13:22:35.279074 1799 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/6030bdb5-024c-411d-863f-2a21e280ca68-hostproc\" (UniqueName: \"kubernetes.io/host-path/6030bdb5-024c-411d-863f-2a21e280ca68-hostproc\") pod \"6030bdb5-024c-411d-863f-2a21e280ca68\" (UID: \"6030bdb5-024c-411d-863f-2a21e280ca68\") "
Mar 2 13:22:35.279155 kubelet[1799]: I0302 13:22:35.279093 1799 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6030bdb5-024c-411d-863f-2a21e280ca68-etc-cni-netd" pod "6030bdb5-024c-411d-863f-2a21e280ca68" (UID: "6030bdb5-024c-411d-863f-2a21e280ca68"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 2 13:22:35.279155 kubelet[1799]: I0302 13:22:35.279134 1799 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6030bdb5-024c-411d-863f-2a21e280ca68-lib-modules" pod "6030bdb5-024c-411d-863f-2a21e280ca68" (UID: "6030bdb5-024c-411d-863f-2a21e280ca68"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 2 13:22:35.279155 kubelet[1799]: I0302 13:22:35.279103 1799 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/6030bdb5-024c-411d-863f-2a21e280ca68-lib-modules\" (UniqueName: \"kubernetes.io/host-path/6030bdb5-024c-411d-863f-2a21e280ca68-lib-modules\") pod \"6030bdb5-024c-411d-863f-2a21e280ca68\" (UID: \"6030bdb5-024c-411d-863f-2a21e280ca68\") "
Mar 2 13:22:35.279538 kubelet[1799]: I0302 13:22:35.279163 1799 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6030bdb5-024c-411d-863f-2a21e280ca68-hostproc" pod "6030bdb5-024c-411d-863f-2a21e280ca68" (UID: "6030bdb5-024c-411d-863f-2a21e280ca68"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 2 13:22:35.279538 kubelet[1799]: I0302 13:22:35.279175 1799 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/6030bdb5-024c-411d-863f-2a21e280ca68-bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6030bdb5-024c-411d-863f-2a21e280ca68-bpf-maps\") pod \"6030bdb5-024c-411d-863f-2a21e280ca68\" (UID: \"6030bdb5-024c-411d-863f-2a21e280ca68\") "
Mar 2 13:22:35.279538 kubelet[1799]: I0302 13:22:35.279202 1799 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/6030bdb5-024c-411d-863f-2a21e280ca68-cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6030bdb5-024c-411d-863f-2a21e280ca68-cilium-cgroup\") pod \"6030bdb5-024c-411d-863f-2a21e280ca68\" (UID: \"6030bdb5-024c-411d-863f-2a21e280ca68\") "
Mar 2 13:22:35.279538 kubelet[1799]: I0302 13:22:35.279232 1799 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/6030bdb5-024c-411d-863f-2a21e280ca68-cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6030bdb5-024c-411d-863f-2a21e280ca68-cilium-config-path\") pod \"6030bdb5-024c-411d-863f-2a21e280ca68\" (UID: \"6030bdb5-024c-411d-863f-2a21e280ca68\") "
Mar 2 13:22:35.279538 kubelet[1799]: I0302 13:22:35.279262 1799 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/secret/6030bdb5-024c-411d-863f-2a21e280ca68-clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6030bdb5-024c-411d-863f-2a21e280ca68-clustermesh-secrets\") pod \"6030bdb5-024c-411d-863f-2a21e280ca68\" (UID: \"6030bdb5-024c-411d-863f-2a21e280ca68\") "
Mar 2 13:22:35.279922 kubelet[1799]: I0302 13:22:35.279284 1799 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/6030bdb5-024c-411d-863f-2a21e280ca68-host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6030bdb5-024c-411d-863f-2a21e280ca68-host-proc-sys-net\") pod \"6030bdb5-024c-411d-863f-2a21e280ca68\" (UID: \"6030bdb5-024c-411d-863f-2a21e280ca68\") "
Mar 2 13:22:35.279922 kubelet[1799]: I0302 13:22:35.279306 1799 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/6030bdb5-024c-411d-863f-2a21e280ca68-host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6030bdb5-024c-411d-863f-2a21e280ca68-host-proc-sys-kernel\") pod \"6030bdb5-024c-411d-863f-2a21e280ca68\" (UID: \"6030bdb5-024c-411d-863f-2a21e280ca68\") "
Mar 2 13:22:35.279922 kubelet[1799]: I0302 13:22:35.279334 1799 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/6030bdb5-024c-411d-863f-2a21e280ca68-xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6030bdb5-024c-411d-863f-2a21e280ca68-xtables-lock\") pod \"6030bdb5-024c-411d-863f-2a21e280ca68\" (UID: \"6030bdb5-024c-411d-863f-2a21e280ca68\") "
Mar 2 13:22:35.279922 kubelet[1799]: I0302 13:22:35.279368 1799 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/6030bdb5-024c-411d-863f-2a21e280ca68-hubble-tls\" (UniqueName: \"kubernetes.io/projected/6030bdb5-024c-411d-863f-2a21e280ca68-hubble-tls\") pod \"6030bdb5-024c-411d-863f-2a21e280ca68\" (UID: \"6030bdb5-024c-411d-863f-2a21e280ca68\") "
Mar 2 13:22:35.281747 kubelet[1799]: I0302 13:22:35.281721 1799 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6030bdb5-024c-411d-863f-2a21e280ca68-host-proc-sys-kernel" pod "6030bdb5-024c-411d-863f-2a21e280ca68" (UID: "6030bdb5-024c-411d-863f-2a21e280ca68"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 2 13:22:35.283180 kubelet[1799]: I0302 13:22:35.282156 1799 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6030bdb5-024c-411d-863f-2a21e280ca68-bpf-maps" pod "6030bdb5-024c-411d-863f-2a21e280ca68" (UID: "6030bdb5-024c-411d-863f-2a21e280ca68"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 2 13:22:35.283180 kubelet[1799]: I0302 13:22:35.282181 1799 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6030bdb5-024c-411d-863f-2a21e280ca68-host-proc-sys-net" pod "6030bdb5-024c-411d-863f-2a21e280ca68" (UID: "6030bdb5-024c-411d-863f-2a21e280ca68"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 2 13:22:35.283180 kubelet[1799]: I0302 13:22:35.282206 1799 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6030bdb5-024c-411d-863f-2a21e280ca68-xtables-lock" pod "6030bdb5-024c-411d-863f-2a21e280ca68" (UID: "6030bdb5-024c-411d-863f-2a21e280ca68"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 2 13:22:35.283180 kubelet[1799]: I0302 13:22:35.282224 1799 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6030bdb5-024c-411d-863f-2a21e280ca68-cilium-cgroup" pod "6030bdb5-024c-411d-863f-2a21e280ca68" (UID: "6030bdb5-024c-411d-863f-2a21e280ca68"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 2 13:22:35.283180 kubelet[1799]: I0302 13:22:35.282273 1799 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6030bdb5-024c-411d-863f-2a21e280ca68-cilium-run" pod "6030bdb5-024c-411d-863f-2a21e280ca68" (UID: "6030bdb5-024c-411d-863f-2a21e280ca68"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 2 13:22:35.283381 kubelet[1799]: I0302 13:22:35.282255 1799 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/6030bdb5-024c-411d-863f-2a21e280ca68-cilium-run\" (UniqueName: \"kubernetes.io/host-path/6030bdb5-024c-411d-863f-2a21e280ca68-cilium-run\") pod \"6030bdb5-024c-411d-863f-2a21e280ca68\" (UID: \"6030bdb5-024c-411d-863f-2a21e280ca68\") "
Mar 2 13:22:35.283381 kubelet[1799]: I0302 13:22:35.282523 1799 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/6030bdb5-024c-411d-863f-2a21e280ca68-cni-path\" (UniqueName: \"kubernetes.io/host-path/6030bdb5-024c-411d-863f-2a21e280ca68-cni-path\") pod \"6030bdb5-024c-411d-863f-2a21e280ca68\" (UID: \"6030bdb5-024c-411d-863f-2a21e280ca68\") "
Mar 2 13:22:35.283381 kubelet[1799]: I0302 13:22:35.282555 1799 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/6030bdb5-024c-411d-863f-2a21e280ca68-kube-api-access-xjzj6\" (UniqueName: \"kubernetes.io/projected/6030bdb5-024c-411d-863f-2a21e280ca68-kube-api-access-xjzj6\") pod \"6030bdb5-024c-411d-863f-2a21e280ca68\" (UID: \"6030bdb5-024c-411d-863f-2a21e280ca68\") "
Mar 2 13:22:35.283381 kubelet[1799]: I0302 13:22:35.282603 1799 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6030bdb5-024c-411d-863f-2a21e280ca68-xtables-lock\") on node \"10.0.0.122\" DevicePath \"\""
Mar 2 13:22:35.283381 kubelet[1799]: I0302 13:22:35.282618 1799 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6030bdb5-024c-411d-863f-2a21e280ca68-cilium-run\") on node \"10.0.0.122\" DevicePath \"\""
Mar 2 13:22:35.283381 kubelet[1799]: I0302 13:22:35.282630 1799 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6030bdb5-024c-411d-863f-2a21e280ca68-etc-cni-netd\") on node \"10.0.0.122\" DevicePath \"\""
Mar 2 13:22:35.283381 kubelet[1799]: I0302 13:22:35.282719 1799 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6030bdb5-024c-411d-863f-2a21e280ca68-hostproc\") on node \"10.0.0.122\" DevicePath \"\""
Mar 2 13:22:35.283754 kubelet[1799]: I0302 13:22:35.282732 1799 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6030bdb5-024c-411d-863f-2a21e280ca68-lib-modules\") on node \"10.0.0.122\" DevicePath \"\""
Mar 2 13:22:35.283754 kubelet[1799]: I0302 13:22:35.282743 1799 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6030bdb5-024c-411d-863f-2a21e280ca68-bpf-maps\") on node \"10.0.0.122\" DevicePath \"\""
Mar 2 13:22:35.283754 kubelet[1799]: I0302 13:22:35.282757 1799 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6030bdb5-024c-411d-863f-2a21e280ca68-cilium-cgroup\") on node \"10.0.0.122\" DevicePath \"\""
Mar 2 13:22:35.283754 kubelet[1799]: I0302 13:22:35.282769 1799 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6030bdb5-024c-411d-863f-2a21e280ca68-host-proc-sys-net\") on node \"10.0.0.122\" DevicePath \"\""
Mar 2 13:22:35.283754 kubelet[1799]: I0302 13:22:35.282785 1799 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6030bdb5-024c-411d-863f-2a21e280ca68-host-proc-sys-kernel\") on node \"10.0.0.122\" DevicePath \"\""
Mar 2 13:22:35.284319 kubelet[1799]: I0302 13:22:35.284243 1799 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6030bdb5-024c-411d-863f-2a21e280ca68-cni-path" pod "6030bdb5-024c-411d-863f-2a21e280ca68" (UID: "6030bdb5-024c-411d-863f-2a21e280ca68"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 2 13:22:35.284621 kubelet[1799]: I0302 13:22:35.284376 1799 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6030bdb5-024c-411d-863f-2a21e280ca68-cilium-config-path" pod "6030bdb5-024c-411d-863f-2a21e280ca68" (UID: "6030bdb5-024c-411d-863f-2a21e280ca68"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 2 13:22:35.286908 kubelet[1799]: I0302 13:22:35.286739 1799 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6030bdb5-024c-411d-863f-2a21e280ca68-hubble-tls" pod "6030bdb5-024c-411d-863f-2a21e280ca68" (UID: "6030bdb5-024c-411d-863f-2a21e280ca68"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 2 13:22:35.286908 kubelet[1799]: I0302 13:22:35.286743 1799 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6030bdb5-024c-411d-863f-2a21e280ca68-clustermesh-secrets" pod "6030bdb5-024c-411d-863f-2a21e280ca68" (UID: "6030bdb5-024c-411d-863f-2a21e280ca68"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Mar 2 13:22:35.290398 kubelet[1799]: I0302 13:22:35.290270 1799 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6030bdb5-024c-411d-863f-2a21e280ca68-kube-api-access-xjzj6" pod "6030bdb5-024c-411d-863f-2a21e280ca68" (UID: "6030bdb5-024c-411d-863f-2a21e280ca68"). InnerVolumeSpecName "kube-api-access-xjzj6". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 2 13:22:35.312486 kubelet[1799]: I0302 13:22:35.312384 1799 scope.go:122] "RemoveContainer" containerID="2bec111d4552c820e451987606c0110a7d8fc70d3bb027c1adebf3f1969591f6"
Mar 2 13:22:35.315226 containerd[1459]: time="2026-03-02T13:22:35.314733459Z" level=info msg="RemoveContainer for \"2bec111d4552c820e451987606c0110a7d8fc70d3bb027c1adebf3f1969591f6\""
Mar 2 13:22:35.321198 containerd[1459]: time="2026-03-02T13:22:35.321084157Z" level=info msg="RemoveContainer for \"2bec111d4552c820e451987606c0110a7d8fc70d3bb027c1adebf3f1969591f6\" returns successfully"
Mar 2 13:22:35.321324 systemd[1]: Removed slice kubepods-burstable-pod6030bdb5_024c_411d_863f_2a21e280ca68.slice - libcontainer container kubepods-burstable-pod6030bdb5_024c_411d_863f_2a21e280ca68.slice.
Mar 2 13:22:35.322057 kubelet[1799]: I0302 13:22:35.321739 1799 scope.go:122] "RemoveContainer" containerID="9fd2349742b8325e606b73a53f662455a5589dfc662dc89ad4fced5de49e9723"
Mar 2 13:22:35.322171 systemd[1]: kubepods-burstable-pod6030bdb5_024c_411d_863f_2a21e280ca68.slice: Consumed 18.364s CPU time.
Mar 2 13:22:35.324521 containerd[1459]: time="2026-03-02T13:22:35.324147668Z" level=info msg="RemoveContainer for \"9fd2349742b8325e606b73a53f662455a5589dfc662dc89ad4fced5de49e9723\""
Mar 2 13:22:35.329405 containerd[1459]: time="2026-03-02T13:22:35.329113863Z" level=info msg="RemoveContainer for \"9fd2349742b8325e606b73a53f662455a5589dfc662dc89ad4fced5de49e9723\" returns successfully"
Mar 2 13:22:35.329474 kubelet[1799]: I0302 13:22:35.329459 1799 scope.go:122] "RemoveContainer" containerID="716d9ebbabdd275f31ae090d9f2b8d41a446f10ab7800ab8818b74dbea856ed1"
Mar 2 13:22:35.331368 containerd[1459]: time="2026-03-02T13:22:35.331149543Z" level=info msg="RemoveContainer for \"716d9ebbabdd275f31ae090d9f2b8d41a446f10ab7800ab8818b74dbea856ed1\""
Mar 2 13:22:35.337935 containerd[1459]: time="2026-03-02T13:22:35.337596769Z" level=info msg="RemoveContainer for \"716d9ebbabdd275f31ae090d9f2b8d41a446f10ab7800ab8818b74dbea856ed1\" returns successfully"
Mar 2 13:22:35.338938 kubelet[1799]: I0302 13:22:35.338590 1799 scope.go:122] "RemoveContainer" containerID="6a614089cefb2d36ac153fa463bbd760db401ca97ef306ff1e8c529f19cc94e1"
Mar 2 13:22:35.340702 containerd[1459]: time="2026-03-02T13:22:35.340593979Z" level=info msg="RemoveContainer for \"6a614089cefb2d36ac153fa463bbd760db401ca97ef306ff1e8c529f19cc94e1\""
Mar 2 13:22:35.346418 containerd[1459]: time="2026-03-02T13:22:35.346229451Z" level=info msg="RemoveContainer for \"6a614089cefb2d36ac153fa463bbd760db401ca97ef306ff1e8c529f19cc94e1\" returns successfully"
Mar 2 13:22:35.346587 kubelet[1799]: I0302 13:22:35.346559 1799 scope.go:122] "RemoveContainer" containerID="db6ffbe8c661dd7e342ba34b5aa9522ff9da87b1ebfb5cbf2a40462ffef1e163"
Mar 2 13:22:35.349198 containerd[1459]: time="2026-03-02T13:22:35.349102374Z" level=info msg="RemoveContainer for \"db6ffbe8c661dd7e342ba34b5aa9522ff9da87b1ebfb5cbf2a40462ffef1e163\""
Mar 2 13:22:35.356224 containerd[1459]: time="2026-03-02T13:22:35.356130983Z" level=info msg="RemoveContainer for \"db6ffbe8c661dd7e342ba34b5aa9522ff9da87b1ebfb5cbf2a40462ffef1e163\" returns successfully"
Mar 2 13:22:35.356732 kubelet[1799]: I0302 13:22:35.356588 1799 scope.go:122] "RemoveContainer" containerID="2bec111d4552c820e451987606c0110a7d8fc70d3bb027c1adebf3f1969591f6"
Mar 2 13:22:35.357444 containerd[1459]: time="2026-03-02T13:22:35.357189195Z" level=error msg="ContainerStatus for \"2bec111d4552c820e451987606c0110a7d8fc70d3bb027c1adebf3f1969591f6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2bec111d4552c820e451987606c0110a7d8fc70d3bb027c1adebf3f1969591f6\": not found"
Mar 2 13:22:35.357632 kubelet[1799]: E0302 13:22:35.357608 1799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2bec111d4552c820e451987606c0110a7d8fc70d3bb027c1adebf3f1969591f6\": not found" containerID="2bec111d4552c820e451987606c0110a7d8fc70d3bb027c1adebf3f1969591f6"
Mar 2 13:22:35.357990 kubelet[1799]: I0302 13:22:35.357714 1799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2bec111d4552c820e451987606c0110a7d8fc70d3bb027c1adebf3f1969591f6"} err="failed to get container status \"2bec111d4552c820e451987606c0110a7d8fc70d3bb027c1adebf3f1969591f6\": rpc error: code = NotFound desc = an error occurred when try to find container \"2bec111d4552c820e451987606c0110a7d8fc70d3bb027c1adebf3f1969591f6\": not found"
Mar 2 13:22:35.357990 kubelet[1799]: I0302 13:22:35.357906 1799 scope.go:122] "RemoveContainer" containerID="9fd2349742b8325e606b73a53f662455a5589dfc662dc89ad4fced5de49e9723"
Mar 2 13:22:35.358349 containerd[1459]: time="2026-03-02T13:22:35.358274930Z" level=error msg="ContainerStatus for \"9fd2349742b8325e606b73a53f662455a5589dfc662dc89ad4fced5de49e9723\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9fd2349742b8325e606b73a53f662455a5589dfc662dc89ad4fced5de49e9723\": not found"
Mar 2 13:22:35.358520 kubelet[1799]: E0302 13:22:35.358445 1799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9fd2349742b8325e606b73a53f662455a5589dfc662dc89ad4fced5de49e9723\": not found" containerID="9fd2349742b8325e606b73a53f662455a5589dfc662dc89ad4fced5de49e9723"
Mar 2 13:22:35.358575 kubelet[1799]: I0302 13:22:35.358516 1799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9fd2349742b8325e606b73a53f662455a5589dfc662dc89ad4fced5de49e9723"} err="failed to get container status \"9fd2349742b8325e606b73a53f662455a5589dfc662dc89ad4fced5de49e9723\": rpc error: code = NotFound desc = an error occurred when try to find container \"9fd2349742b8325e606b73a53f662455a5589dfc662dc89ad4fced5de49e9723\": not found"
Mar 2 13:22:35.358575 kubelet[1799]: I0302 13:22:35.358534 1799 scope.go:122] "RemoveContainer" containerID="716d9ebbabdd275f31ae090d9f2b8d41a446f10ab7800ab8818b74dbea856ed1"
Mar 2 13:22:35.359624 containerd[1459]: time="2026-03-02T13:22:35.359548265Z" level=error msg="ContainerStatus for \"716d9ebbabdd275f31ae090d9f2b8d41a446f10ab7800ab8818b74dbea856ed1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"716d9ebbabdd275f31ae090d9f2b8d41a446f10ab7800ab8818b74dbea856ed1\": not found"
Mar 2 13:22:35.359991 kubelet[1799]: E0302 13:22:35.359947 1799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"716d9ebbabdd275f31ae090d9f2b8d41a446f10ab7800ab8818b74dbea856ed1\": not found" containerID="716d9ebbabdd275f31ae090d9f2b8d41a446f10ab7800ab8818b74dbea856ed1"
Mar 2 13:22:35.360026 kubelet[1799]: I0302 13:22:35.359994 1799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"716d9ebbabdd275f31ae090d9f2b8d41a446f10ab7800ab8818b74dbea856ed1"} err="failed to get container status \"716d9ebbabdd275f31ae090d9f2b8d41a446f10ab7800ab8818b74dbea856ed1\": rpc error: code = NotFound desc = an error occurred when try to find container \"716d9ebbabdd275f31ae090d9f2b8d41a446f10ab7800ab8818b74dbea856ed1\": not found"
Mar 2 13:22:35.360059 kubelet[1799]: I0302 13:22:35.360027 1799 scope.go:122] "RemoveContainer" containerID="6a614089cefb2d36ac153fa463bbd760db401ca97ef306ff1e8c529f19cc94e1"
Mar 2 13:22:35.360525 containerd[1459]: time="2026-03-02T13:22:35.360293763Z" level=error msg="ContainerStatus for \"6a614089cefb2d36ac153fa463bbd760db401ca97ef306ff1e8c529f19cc94e1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6a614089cefb2d36ac153fa463bbd760db401ca97ef306ff1e8c529f19cc94e1\": not found"
Mar 2 13:22:35.360574 kubelet[1799]: E0302 13:22:35.360520 1799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6a614089cefb2d36ac153fa463bbd760db401ca97ef306ff1e8c529f19cc94e1\": not found" containerID="6a614089cefb2d36ac153fa463bbd760db401ca97ef306ff1e8c529f19cc94e1"
Mar 2 13:22:35.360574 kubelet[1799]: I0302 13:22:35.360546 1799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6a614089cefb2d36ac153fa463bbd760db401ca97ef306ff1e8c529f19cc94e1"} err="failed to get container status \"6a614089cefb2d36ac153fa463bbd760db401ca97ef306ff1e8c529f19cc94e1\": rpc error: code = NotFound desc = an error occurred when try to find container \"6a614089cefb2d36ac153fa463bbd760db401ca97ef306ff1e8c529f19cc94e1\": not found"
Mar 2 13:22:35.360574 kubelet[1799]: I0302 13:22:35.360567 1799 scope.go:122] "RemoveContainer" containerID="db6ffbe8c661dd7e342ba34b5aa9522ff9da87b1ebfb5cbf2a40462ffef1e163"
Mar 2 13:22:35.361146 containerd[1459]: time="2026-03-02T13:22:35.361109720Z" level=error msg="ContainerStatus for \"db6ffbe8c661dd7e342ba34b5aa9522ff9da87b1ebfb5cbf2a40462ffef1e163\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"db6ffbe8c661dd7e342ba34b5aa9522ff9da87b1ebfb5cbf2a40462ffef1e163\": not found"
Mar 2 13:22:35.361220 kubelet[1799]: E0302 13:22:35.361204 1799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"db6ffbe8c661dd7e342ba34b5aa9522ff9da87b1ebfb5cbf2a40462ffef1e163\": not found" containerID="db6ffbe8c661dd7e342ba34b5aa9522ff9da87b1ebfb5cbf2a40462ffef1e163"
Mar 2 13:22:35.361254 kubelet[1799]: I0302 13:22:35.361226 1799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"db6ffbe8c661dd7e342ba34b5aa9522ff9da87b1ebfb5cbf2a40462ffef1e163"} err="failed to get container status \"db6ffbe8c661dd7e342ba34b5aa9522ff9da87b1ebfb5cbf2a40462ffef1e163\": rpc error: code = NotFound desc = an error occurred when try to find container \"db6ffbe8c661dd7e342ba34b5aa9522ff9da87b1ebfb5cbf2a40462ffef1e163\": not found"
Mar 2 13:22:35.384290 kubelet[1799]: I0302 13:22:35.384063 1799 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6030bdb5-024c-411d-863f-2a21e280ca68-cilium-config-path\") on node \"10.0.0.122\" DevicePath \"\""
Mar 2 13:22:35.384290 kubelet[1799]: I0302 13:22:35.384200 1799 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6030bdb5-024c-411d-863f-2a21e280ca68-clustermesh-secrets\") on node \"10.0.0.122\" DevicePath \"\""
Mar 2 13:22:35.384290 kubelet[1799]: I0302 13:22:35.384214 1799 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6030bdb5-024c-411d-863f-2a21e280ca68-hubble-tls\") on node \"10.0.0.122\" DevicePath \"\""
Mar 2 13:22:35.384290 kubelet[1799]: I0302 13:22:35.384225 1799 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6030bdb5-024c-411d-863f-2a21e280ca68-cni-path\") on node \"10.0.0.122\" DevicePath \"\""
Mar 2 13:22:35.384290 kubelet[1799]: I0302 13:22:35.384237 1799 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xjzj6\" (UniqueName: \"kubernetes.io/projected/6030bdb5-024c-411d-863f-2a21e280ca68-kube-api-access-xjzj6\") on node \"10.0.0.122\" DevicePath \"\""
Mar 2 13:22:35.819043 systemd[1]: var-lib-kubelet-pods-6030bdb5\x2d024c\x2d411d\x2d863f\x2d2a21e280ca68-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxjzj6.mount: Deactivated successfully.
Mar 2 13:22:35.819205 systemd[1]: var-lib-kubelet-pods-6030bdb5\x2d024c\x2d411d\x2d863f\x2d2a21e280ca68-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Mar 2 13:22:35.819278 systemd[1]: var-lib-kubelet-pods-6030bdb5\x2d024c\x2d411d\x2d863f\x2d2a21e280ca68-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Mar 2 13:22:36.148020 kubelet[1799]: I0302 13:22:36.147776 1799 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="6030bdb5-024c-411d-863f-2a21e280ca68" path="/var/lib/kubelet/pods/6030bdb5-024c-411d-863f-2a21e280ca68/volumes"
Mar 2 13:22:36.191323 kubelet[1799]: E0302 13:22:36.191170 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 2 13:22:37.191696 kubelet[1799]: E0302 13:22:37.191447 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 2 13:22:37.662518 systemd[1]: Created slice kubepods-besteffort-podebafba4a_5e57_4473_8a47_241e5c8d3f69.slice - libcontainer container kubepods-besteffort-podebafba4a_5e57_4473_8a47_241e5c8d3f69.slice.
Mar 2 13:22:37.671434 systemd[1]: Created slice kubepods-burstable-pod75b03dda_5908_49d2_972a_0878fb0384b7.slice - libcontainer container kubepods-burstable-pod75b03dda_5908_49d2_972a_0878fb0384b7.slice.
Mar 2 13:22:37.803241 kubelet[1799]: I0302 13:22:37.802956 1799 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/75b03dda-5908-49d2-972a-0878fb0384b7-clustermesh-secrets\") pod \"cilium-t9q9s\" (UID: \"75b03dda-5908-49d2-972a-0878fb0384b7\") " pod="kube-system/cilium-t9q9s"
Mar 2 13:22:37.803241 kubelet[1799]: I0302 13:22:37.803063 1799 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/75b03dda-5908-49d2-972a-0878fb0384b7-cilium-ipsec-secrets\") pod \"cilium-t9q9s\" (UID: \"75b03dda-5908-49d2-972a-0878fb0384b7\") " pod="kube-system/cilium-t9q9s"
Mar 2 13:22:37.803241 kubelet[1799]: I0302 13:22:37.803095 1799 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/75b03dda-5908-49d2-972a-0878fb0384b7-host-proc-sys-net\") pod \"cilium-t9q9s\" (UID: \"75b03dda-5908-49d2-972a-0878fb0384b7\") " pod="kube-system/cilium-t9q9s"
Mar 2 13:22:37.803241 kubelet[1799]: I0302 13:22:37.803122 1799 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ebafba4a-5e57-4473-8a47-241e5c8d3f69-cilium-config-path\") pod \"cilium-operator-78cf5644cb-tg9np\" (UID: \"ebafba4a-5e57-4473-8a47-241e5c8d3f69\") " pod="kube-system/cilium-operator-78cf5644cb-tg9np"
Mar 2 13:22:37.803241 kubelet[1799]: I0302 13:22:37.803151 1799 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/75b03dda-5908-49d2-972a-0878fb0384b7-etc-cni-netd\") pod \"cilium-t9q9s\" (UID: \"75b03dda-5908-49d2-972a-0878fb0384b7\") " pod="kube-system/cilium-t9q9s"
Mar 2 13:22:37.803482 kubelet[1799]: I0302 13:22:37.803182 1799 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7gxs\" (UniqueName: \"kubernetes.io/projected/ebafba4a-5e57-4473-8a47-241e5c8d3f69-kube-api-access-z7gxs\") pod \"cilium-operator-78cf5644cb-tg9np\" (UID: \"ebafba4a-5e57-4473-8a47-241e5c8d3f69\") " pod="kube-system/cilium-operator-78cf5644cb-tg9np"
Mar 2 13:22:37.803482 kubelet[1799]: I0302 13:22:37.803204 1799 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/75b03dda-5908-49d2-972a-0878fb0384b7-cilium-run\") pod \"cilium-t9q9s\" (UID: \"75b03dda-5908-49d2-972a-0878fb0384b7\") " pod="kube-system/cilium-t9q9s"
Mar 2 13:22:37.803482 kubelet[1799]: I0302 13:22:37.803224 1799 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/75b03dda-5908-49d2-972a-0878fb0384b7-xtables-lock\") pod \"cilium-t9q9s\" (UID: \"75b03dda-5908-49d2-972a-0878fb0384b7\") " pod="kube-system/cilium-t9q9s"
Mar 2 13:22:37.803482 kubelet[1799]: I0302 13:22:37.803248 1799 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/75b03dda-5908-49d2-972a-0878fb0384b7-host-proc-sys-kernel\") pod \"cilium-t9q9s\" (UID: \"75b03dda-5908-49d2-972a-0878fb0384b7\") " pod="kube-system/cilium-t9q9s"
Mar 2 13:22:37.803482 kubelet[1799]: I0302 13:22:37.803345 1799 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sd674\" (UniqueName: \"kubernetes.io/projected/75b03dda-5908-49d2-972a-0878fb0384b7-kube-api-access-sd674\") pod \"cilium-t9q9s\" (UID: \"75b03dda-5908-49d2-972a-0878fb0384b7\") " pod="kube-system/cilium-t9q9s"
Mar 2 13:22:37.803702 kubelet[1799]: I0302 13:22:37.803391 1799 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/75b03dda-5908-49d2-972a-0878fb0384b7-bpf-maps\") pod \"cilium-t9q9s\" (UID: \"75b03dda-5908-49d2-972a-0878fb0384b7\") " pod="kube-system/cilium-t9q9s"
Mar 2 13:22:37.803702 kubelet[1799]: I0302 13:22:37.803419 1799 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/75b03dda-5908-49d2-972a-0878fb0384b7-hostproc\") pod \"cilium-t9q9s\" (UID: \"75b03dda-5908-49d2-972a-0878fb0384b7\") " pod="kube-system/cilium-t9q9s"
Mar 2 13:22:37.803702 kubelet[1799]: I0302 13:22:37.803554 1799 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/75b03dda-5908-49d2-972a-0878fb0384b7-lib-modules\") pod \"cilium-t9q9s\" (UID: \"75b03dda-5908-49d2-972a-0878fb0384b7\") " pod="kube-system/cilium-t9q9s"
Mar 2 13:22:37.803702 kubelet[1799]: I0302 13:22:37.803600 1799 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/75b03dda-5908-49d2-972a-0878fb0384b7-cilium-config-path\") pod \"cilium-t9q9s\" (UID: \"75b03dda-5908-49d2-972a-0878fb0384b7\") " pod="kube-system/cilium-t9q9s"
Mar 2 13:22:37.803702 kubelet[1799]: I0302 13:22:37.803618 1799 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/75b03dda-5908-49d2-972a-0878fb0384b7-hubble-tls\") pod \"cilium-t9q9s\" (UID: \"75b03dda-5908-49d2-972a-0878fb0384b7\") " pod="kube-system/cilium-t9q9s"
Mar 2 13:22:37.803934 kubelet[1799]: I0302 13:22:37.803733 1799 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/75b03dda-5908-49d2-972a-0878fb0384b7-cilium-cgroup\") pod \"cilium-t9q9s\" (UID: \"75b03dda-5908-49d2-972a-0878fb0384b7\") " pod="kube-system/cilium-t9q9s"
Mar 2 13:22:37.803934 kubelet[1799]: I0302 13:22:37.803766 1799 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/75b03dda-5908-49d2-972a-0878fb0384b7-cni-path\") pod \"cilium-t9q9s\" (UID: \"75b03dda-5908-49d2-972a-0878fb0384b7\") " pod="kube-system/cilium-t9q9s"
Mar 2 13:22:37.972678 kubelet[1799]: E0302 13:22:37.972383 1799 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:22:37.973762 containerd[1459]: time="2026-03-02T13:22:37.973462762Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-78cf5644cb-tg9np,Uid:ebafba4a-5e57-4473-8a47-241e5c8d3f69,Namespace:kube-system,Attempt:0,}"
Mar 2 13:22:37.998947 kubelet[1799]: E0302 13:22:37.998501 1799 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:22:37.999338 containerd[1459]: time="2026-03-02T13:22:37.999243186Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-t9q9s,Uid:75b03dda-5908-49d2-972a-0878fb0384b7,Namespace:kube-system,Attempt:0,}"
Mar 2 13:22:38.018976 containerd[1459]: time="2026-03-02T13:22:38.017344590Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 2 13:22:38.018976 containerd[1459]: time="2026-03-02T13:22:38.017421743Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 2 13:22:38.018976 containerd[1459]: time="2026-03-02T13:22:38.017452599Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 2 13:22:38.018976 containerd[1459]: time="2026-03-02T13:22:38.017986417Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 2 13:22:38.058393 containerd[1459]: time="2026-03-02T13:22:38.056161735Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 2 13:22:38.058393 containerd[1459]: time="2026-03-02T13:22:38.056383515Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 2 13:22:38.058393 containerd[1459]: time="2026-03-02T13:22:38.056777744Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 2 13:22:38.058393 containerd[1459]: time="2026-03-02T13:22:38.057570981Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 2 13:22:38.061329 systemd[1]: Started cri-containerd-8e4eaa7d3cdd077fc5471d8f17dfdfa3b14e64c708448384a2ee7243475552e5.scope - libcontainer container 8e4eaa7d3cdd077fc5471d8f17dfdfa3b14e64c708448384a2ee7243475552e5.
Mar 2 13:22:38.107194 systemd[1]: Started cri-containerd-485d2fe85b7491a1c4a649d2a5bed313bb80ffec74a9f83cd0160d4347c81a34.scope - libcontainer container 485d2fe85b7491a1c4a649d2a5bed313bb80ffec74a9f83cd0160d4347c81a34.
Mar 2 13:22:38.135360 containerd[1459]: time="2026-03-02T13:22:38.135253591Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-78cf5644cb-tg9np,Uid:ebafba4a-5e57-4473-8a47-241e5c8d3f69,Namespace:kube-system,Attempt:0,} returns sandbox id \"8e4eaa7d3cdd077fc5471d8f17dfdfa3b14e64c708448384a2ee7243475552e5\""
Mar 2 13:22:38.141958 kubelet[1799]: E0302 13:22:38.141444 1799 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:22:38.146518 containerd[1459]: time="2026-03-02T13:22:38.146181393Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Mar 2 13:22:38.165674 containerd[1459]: time="2026-03-02T13:22:38.165245701Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-t9q9s,Uid:75b03dda-5908-49d2-972a-0878fb0384b7,Namespace:kube-system,Attempt:0,} returns sandbox id \"485d2fe85b7491a1c4a649d2a5bed313bb80ffec74a9f83cd0160d4347c81a34\""
Mar 2 13:22:38.167119 kubelet[1799]: E0302 13:22:38.167029 1799 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:22:38.176239 containerd[1459]: time="2026-03-02T13:22:38.175959798Z" level=info msg="CreateContainer within sandbox \"485d2fe85b7491a1c4a649d2a5bed313bb80ffec74a9f83cd0160d4347c81a34\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Mar 2 13:22:38.191521 containerd[1459]: time="2026-03-02T13:22:38.191336500Z" level=info msg="CreateContainer within sandbox \"485d2fe85b7491a1c4a649d2a5bed313bb80ffec74a9f83cd0160d4347c81a34\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"5c4d7bc3f9ded20e9bbe5813fc1ebafb33ca12a4785661d67d63cee90567cd20\""
Mar 2 13:22:38.191801 kubelet[1799]: E0302 13:22:38.191765 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 2 13:22:38.192325 containerd[1459]: time="2026-03-02T13:22:38.192253846Z" level=info msg="StartContainer for \"5c4d7bc3f9ded20e9bbe5813fc1ebafb33ca12a4785661d67d63cee90567cd20\""
Mar 2 13:22:38.248368 systemd[1]: Started cri-containerd-5c4d7bc3f9ded20e9bbe5813fc1ebafb33ca12a4785661d67d63cee90567cd20.scope - libcontainer container 5c4d7bc3f9ded20e9bbe5813fc1ebafb33ca12a4785661d67d63cee90567cd20.
Mar 2 13:22:38.294241 containerd[1459]: time="2026-03-02T13:22:38.294155611Z" level=info msg="StartContainer for \"5c4d7bc3f9ded20e9bbe5813fc1ebafb33ca12a4785661d67d63cee90567cd20\" returns successfully"
Mar 2 13:22:38.308984 systemd[1]: cri-containerd-5c4d7bc3f9ded20e9bbe5813fc1ebafb33ca12a4785661d67d63cee90567cd20.scope: Deactivated successfully.
Mar 2 13:22:38.326355 kubelet[1799]: E0302 13:22:38.325698 1799 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:22:38.360222 containerd[1459]: time="2026-03-02T13:22:38.360166484Z" level=info msg="shim disconnected" id=5c4d7bc3f9ded20e9bbe5813fc1ebafb33ca12a4785661d67d63cee90567cd20 namespace=k8s.io
Mar 2 13:22:38.360501 containerd[1459]: time="2026-03-02T13:22:38.360424562Z" level=warning msg="cleaning up after shim disconnected" id=5c4d7bc3f9ded20e9bbe5813fc1ebafb33ca12a4785661d67d63cee90567cd20 namespace=k8s.io
Mar 2 13:22:38.360501 containerd[1459]: time="2026-03-02T13:22:38.360486356Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 2 13:22:38.383949 containerd[1459]: time="2026-03-02T13:22:38.383949198Z" level=warning msg="cleanup warnings time=\"2026-03-02T13:22:38Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Mar 2 13:22:39.193110 kubelet[1799]: E0302 13:22:39.193024 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 2 13:22:39.335213 kubelet[1799]: E0302 13:22:39.335098 1799 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:22:39.343161 containerd[1459]: time="2026-03-02T13:22:39.343017581Z" level=info msg="CreateContainer within sandbox \"485d2fe85b7491a1c4a649d2a5bed313bb80ffec74a9f83cd0160d4347c81a34\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Mar 2 13:22:39.362442 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2530973047.mount: Deactivated successfully.
Mar 2 13:22:39.366480 containerd[1459]: time="2026-03-02T13:22:39.366407031Z" level=info msg="CreateContainer within sandbox \"485d2fe85b7491a1c4a649d2a5bed313bb80ffec74a9f83cd0160d4347c81a34\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"f5dcf8aa84f74203c72f86b27e24beb203072b2233c7297f1c7427e2413a8dc5\""
Mar 2 13:22:39.367683 containerd[1459]: time="2026-03-02T13:22:39.367594934Z" level=info msg="StartContainer for \"f5dcf8aa84f74203c72f86b27e24beb203072b2233c7297f1c7427e2413a8dc5\""
Mar 2 13:22:39.433601 systemd[1]: Started cri-containerd-f5dcf8aa84f74203c72f86b27e24beb203072b2233c7297f1c7427e2413a8dc5.scope - libcontainer container f5dcf8aa84f74203c72f86b27e24beb203072b2233c7297f1c7427e2413a8dc5.
Mar 2 13:22:39.496452 systemd[1]: cri-containerd-f5dcf8aa84f74203c72f86b27e24beb203072b2233c7297f1c7427e2413a8dc5.scope: Deactivated successfully.
Mar 2 13:22:39.505508 containerd[1459]: time="2026-03-02T13:22:39.505259485Z" level=info msg="StartContainer for \"f5dcf8aa84f74203c72f86b27e24beb203072b2233c7297f1c7427e2413a8dc5\" returns successfully"
Mar 2 13:22:39.562069 containerd[1459]: time="2026-03-02T13:22:39.561976367Z" level=info msg="shim disconnected" id=f5dcf8aa84f74203c72f86b27e24beb203072b2233c7297f1c7427e2413a8dc5 namespace=k8s.io
Mar 2 13:22:39.562069 containerd[1459]: time="2026-03-02T13:22:39.562040475Z" level=warning msg="cleaning up after shim disconnected" id=f5dcf8aa84f74203c72f86b27e24beb203072b2233c7297f1c7427e2413a8dc5 namespace=k8s.io
Mar 2 13:22:39.562069 containerd[1459]: time="2026-03-02T13:22:39.562050725Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 2 13:22:39.583169 containerd[1459]: time="2026-03-02T13:22:39.583074240Z" level=warning msg="cleanup warnings time=\"2026-03-02T13:22:39Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Mar 2 13:22:39.645388 containerd[1459]: time="2026-03-02T13:22:39.645193142Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:22:39.646690 containerd[1459]: time="2026-03-02T13:22:39.646576872Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
Mar 2 13:22:39.647781 containerd[1459]: time="2026-03-02T13:22:39.647698108Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:22:39.649570 containerd[1459]: time="2026-03-02T13:22:39.649503912Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 1.503275211s"
Mar 2 13:22:39.649678 containerd[1459]: time="2026-03-02T13:22:39.649565606Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Mar 2 13:22:39.656401 containerd[1459]: time="2026-03-02T13:22:39.656339605Z" level=info msg="CreateContainer within sandbox \"8e4eaa7d3cdd077fc5471d8f17dfdfa3b14e64c708448384a2ee7243475552e5\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Mar 2 13:22:39.685121 containerd[1459]: time="2026-03-02T13:22:39.684977252Z" level=info msg="CreateContainer within sandbox \"8e4eaa7d3cdd077fc5471d8f17dfdfa3b14e64c708448384a2ee7243475552e5\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"8f4d8b9caa24154dde1204e2a011a19a8325f5fbb0b6c52f495ef54bc488c40b\""
Mar 2 13:22:39.686233 containerd[1459]: time="2026-03-02T13:22:39.686156495Z" level=info msg="StartContainer for \"8f4d8b9caa24154dde1204e2a011a19a8325f5fbb0b6c52f495ef54bc488c40b\""
Mar 2 13:22:39.732043 systemd[1]: Started cri-containerd-8f4d8b9caa24154dde1204e2a011a19a8325f5fbb0b6c52f495ef54bc488c40b.scope - libcontainer container 8f4d8b9caa24154dde1204e2a011a19a8325f5fbb0b6c52f495ef54bc488c40b.
Mar 2 13:22:39.771138 containerd[1459]: time="2026-03-02T13:22:39.770734912Z" level=info msg="StartContainer for \"8f4d8b9caa24154dde1204e2a011a19a8325f5fbb0b6c52f495ef54bc488c40b\" returns successfully"
Mar 2 13:22:39.914435 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f5dcf8aa84f74203c72f86b27e24beb203072b2233c7297f1c7427e2413a8dc5-rootfs.mount: Deactivated successfully.
Mar 2 13:22:40.193941 kubelet[1799]: E0302 13:22:40.193697 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 2 13:22:40.216595 kubelet[1799]: E0302 13:22:40.216498 1799 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Mar 2 13:22:40.341223 kubelet[1799]: E0302 13:22:40.340906 1799 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:22:40.344099 kubelet[1799]: E0302 13:22:40.344017 1799 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:22:40.352298 containerd[1459]: time="2026-03-02T13:22:40.352194499Z" level=info msg="CreateContainer within sandbox \"485d2fe85b7491a1c4a649d2a5bed313bb80ffec74a9f83cd0160d4347c81a34\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Mar 2 13:22:40.355137 kubelet[1799]: I0302 13:22:40.354912 1799 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/cilium-operator-78cf5644cb-tg9np" podStartSLOduration=1.8487345 podStartE2EDuration="3.354774938s" podCreationTimestamp="2026-03-02 13:22:37 +0000 UTC" firstStartedPulling="2026-03-02 13:22:38.144689846 +0000 UTC m=+109.071274084" lastFinishedPulling="2026-03-02 13:22:39.650730294 +0000 UTC m=+110.577314522" observedRunningTime="2026-03-02 13:22:40.354476563 +0000 UTC m=+111.281060841" watchObservedRunningTime="2026-03-02 13:22:40.354774938 +0000 UTC m=+111.281359166"
Mar 2 13:22:40.380705 containerd[1459]: time="2026-03-02T13:22:40.380267028Z" level=info msg="CreateContainer within sandbox \"485d2fe85b7491a1c4a649d2a5bed313bb80ffec74a9f83cd0160d4347c81a34\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"40ad80162cf62f7939858cd4c174b54a95e2f3fb9987c1e84473f0a126ac0c02\""
Mar 2 13:22:40.381458 containerd[1459]: time="2026-03-02T13:22:40.381299321Z" level=info msg="StartContainer for \"40ad80162cf62f7939858cd4c174b54a95e2f3fb9987c1e84473f0a126ac0c02\""
Mar 2 13:22:40.428156 systemd[1]: Started cri-containerd-40ad80162cf62f7939858cd4c174b54a95e2f3fb9987c1e84473f0a126ac0c02.scope - libcontainer container 40ad80162cf62f7939858cd4c174b54a95e2f3fb9987c1e84473f0a126ac0c02.
Mar 2 13:22:40.479396 containerd[1459]: time="2026-03-02T13:22:40.476429977Z" level=info msg="StartContainer for \"40ad80162cf62f7939858cd4c174b54a95e2f3fb9987c1e84473f0a126ac0c02\" returns successfully"
Mar 2 13:22:40.479395 systemd[1]: cri-containerd-40ad80162cf62f7939858cd4c174b54a95e2f3fb9987c1e84473f0a126ac0c02.scope: Deactivated successfully.
Mar 2 13:22:40.525519 containerd[1459]: time="2026-03-02T13:22:40.525418572Z" level=info msg="shim disconnected" id=40ad80162cf62f7939858cd4c174b54a95e2f3fb9987c1e84473f0a126ac0c02 namespace=k8s.io
Mar 2 13:22:40.525519 containerd[1459]: time="2026-03-02T13:22:40.525500664Z" level=warning msg="cleaning up after shim disconnected" id=40ad80162cf62f7939858cd4c174b54a95e2f3fb9987c1e84473f0a126ac0c02 namespace=k8s.io
Mar 2 13:22:40.525519 containerd[1459]: time="2026-03-02T13:22:40.525510262Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 2 13:22:40.913781 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-40ad80162cf62f7939858cd4c174b54a95e2f3fb9987c1e84473f0a126ac0c02-rootfs.mount: Deactivated successfully.
Mar 2 13:22:41.194979 kubelet[1799]: E0302 13:22:41.194603 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 2 13:22:41.350402 kubelet[1799]: E0302 13:22:41.350326 1799 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:22:41.350759 kubelet[1799]: E0302 13:22:41.350565 1799 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:22:41.356374 containerd[1459]: time="2026-03-02T13:22:41.356294850Z" level=info msg="CreateContainer within sandbox \"485d2fe85b7491a1c4a649d2a5bed313bb80ffec74a9f83cd0160d4347c81a34\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 2 13:22:41.382314 containerd[1459]: time="2026-03-02T13:22:41.382231992Z" level=info msg="CreateContainer within sandbox \"485d2fe85b7491a1c4a649d2a5bed313bb80ffec74a9f83cd0160d4347c81a34\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"2d88c5da68a57c198ad5502317b8c1d490d7b641294af14789d409797f00290d\""
Mar 2 13:22:41.383397 containerd[1459]: time="2026-03-02T13:22:41.383223181Z" level=info msg="StartContainer for \"2d88c5da68a57c198ad5502317b8c1d490d7b641294af14789d409797f00290d\""
Mar 2 13:22:41.431073 systemd[1]: Started cri-containerd-2d88c5da68a57c198ad5502317b8c1d490d7b641294af14789d409797f00290d.scope - libcontainer container 2d88c5da68a57c198ad5502317b8c1d490d7b641294af14789d409797f00290d.
Mar 2 13:22:41.466571 systemd[1]: cri-containerd-2d88c5da68a57c198ad5502317b8c1d490d7b641294af14789d409797f00290d.scope: Deactivated successfully.
Mar 2 13:22:41.469386 containerd[1459]: time="2026-03-02T13:22:41.469190553Z" level=info msg="StartContainer for \"2d88c5da68a57c198ad5502317b8c1d490d7b641294af14789d409797f00290d\" returns successfully"
Mar 2 13:22:41.506937 containerd[1459]: time="2026-03-02T13:22:41.506598571Z" level=info msg="shim disconnected" id=2d88c5da68a57c198ad5502317b8c1d490d7b641294af14789d409797f00290d namespace=k8s.io
Mar 2 13:22:41.506937 containerd[1459]: time="2026-03-02T13:22:41.506769487Z" level=warning msg="cleaning up after shim disconnected" id=2d88c5da68a57c198ad5502317b8c1d490d7b641294af14789d409797f00290d namespace=k8s.io
Mar 2 13:22:41.506937 containerd[1459]: time="2026-03-02T13:22:41.506883929Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 2 13:22:41.533693 containerd[1459]: time="2026-03-02T13:22:41.533525531Z" level=warning msg="cleanup warnings time=\"2026-03-02T13:22:41Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Mar 2 13:22:41.913706 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2d88c5da68a57c198ad5502317b8c1d490d7b641294af14789d409797f00290d-rootfs.mount: Deactivated successfully.
Mar 2 13:22:42.196030 kubelet[1799]: E0302 13:22:42.195508 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 2 13:22:42.357563 kubelet[1799]: E0302 13:22:42.357380 1799 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:22:42.364300 containerd[1459]: time="2026-03-02T13:22:42.364119365Z" level=info msg="CreateContainer within sandbox \"485d2fe85b7491a1c4a649d2a5bed313bb80ffec74a9f83cd0160d4347c81a34\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 2 13:22:42.389617 containerd[1459]: time="2026-03-02T13:22:42.389512005Z" level=info msg="CreateContainer within sandbox \"485d2fe85b7491a1c4a649d2a5bed313bb80ffec74a9f83cd0160d4347c81a34\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"d8d9470a92105f30acd064a5c0bfb20e76df14305d5da2b4685a42f28cdd2d0b\""
Mar 2 13:22:42.391190 containerd[1459]: time="2026-03-02T13:22:42.391031791Z" level=info msg="StartContainer for \"d8d9470a92105f30acd064a5c0bfb20e76df14305d5da2b4685a42f28cdd2d0b\""
Mar 2 13:22:42.452198 systemd[1]: Started cri-containerd-d8d9470a92105f30acd064a5c0bfb20e76df14305d5da2b4685a42f28cdd2d0b.scope - libcontainer container d8d9470a92105f30acd064a5c0bfb20e76df14305d5da2b4685a42f28cdd2d0b.
Mar 2 13:22:42.498874 containerd[1459]: time="2026-03-02T13:22:42.498534698Z" level=info msg="StartContainer for \"d8d9470a92105f30acd064a5c0bfb20e76df14305d5da2b4685a42f28cdd2d0b\" returns successfully"
Mar 2 13:22:42.914029 systemd[1]: run-containerd-runc-k8s.io-d8d9470a92105f30acd064a5c0bfb20e76df14305d5da2b4685a42f28cdd2d0b-runc.qsk08P.mount: Deactivated successfully.
Mar 2 13:22:43.066988 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Mar 2 13:22:43.196741 kubelet[1799]: E0302 13:22:43.196421 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 2 13:22:43.278321 kubelet[1799]: I0302 13:22:43.278197 1799 setters.go:546] "Node became not ready" node="10.0.0.122" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-02T13:22:43Z","lastTransitionTime":"2026-03-02T13:22:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Mar 2 13:22:43.369506 kubelet[1799]: E0302 13:22:43.368781 1799 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:22:44.196941 kubelet[1799]: E0302 13:22:44.196636 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 2 13:22:44.370560 kubelet[1799]: E0302 13:22:44.370448 1799 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:22:45.197646 kubelet[1799]: E0302 13:22:45.197577 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 2 13:22:46.199064 kubelet[1799]: E0302 13:22:46.199004 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 2 13:22:47.049039 systemd-networkd[1386]: lxc_health: Link UP
Mar 2 13:22:47.060006 systemd-networkd[1386]: lxc_health: Gained carrier
Mar 2 13:22:47.199896 kubelet[1799]: E0302 13:22:47.199741 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 2 13:22:47.997973 kubelet[1799]: E0302 13:22:47.997486 1799 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:22:48.020466 kubelet[1799]: I0302 13:22:48.020158 1799 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/cilium-t9q9s" podStartSLOduration=11.020143147 podStartE2EDuration="11.020143147s" podCreationTimestamp="2026-03-02 13:22:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-02 13:22:43.394985657 +0000 UTC m=+114.321569895" watchObservedRunningTime="2026-03-02 13:22:48.020143147 +0000 UTC m=+118.946727375"
Mar 2 13:22:48.201225 kubelet[1799]: E0302 13:22:48.200923 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 2 13:22:48.382293 kubelet[1799]: E0302 13:22:48.381459 1799 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:22:48.547414 systemd-networkd[1386]: lxc_health: Gained IPv6LL
Mar 2 13:22:48.697766 systemd[1]: run-containerd-runc-k8s.io-d8d9470a92105f30acd064a5c0bfb20e76df14305d5da2b4685a42f28cdd2d0b-runc.Ota8mW.mount: Deactivated successfully.
Mar 2 13:22:49.202198 kubelet[1799]: E0302 13:22:49.202152 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 2 13:22:49.384373 kubelet[1799]: E0302 13:22:49.384066 1799 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:22:49.887349 kubelet[1799]: E0302 13:22:49.887223 1799 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 2 13:22:49.962595 containerd[1459]: time="2026-03-02T13:22:49.962353518Z" level=info msg="StopPodSandbox for \"a3087c42abbc4bb2b2555c297dd139c8b6f929ab33ab008b33dff22b1337d4e5\""
Mar 2 13:22:49.962595 containerd[1459]: time="2026-03-02T13:22:49.962489319Z" level=info msg="TearDown network for sandbox \"a3087c42abbc4bb2b2555c297dd139c8b6f929ab33ab008b33dff22b1337d4e5\" successfully"
Mar 2 13:22:49.962595 containerd[1459]: time="2026-03-02T13:22:49.962510258Z" level=info msg="StopPodSandbox for \"a3087c42abbc4bb2b2555c297dd139c8b6f929ab33ab008b33dff22b1337d4e5\" returns successfully"
Mar 2 13:22:49.963459 containerd[1459]: time="2026-03-02T13:22:49.963149865Z" level=info msg="RemovePodSandbox for \"a3087c42abbc4bb2b2555c297dd139c8b6f929ab33ab008b33dff22b1337d4e5\""
Mar 2 13:22:49.963459 containerd[1459]: time="2026-03-02T13:22:49.963183026Z" level=info msg="Forcibly stopping sandbox \"a3087c42abbc4bb2b2555c297dd139c8b6f929ab33ab008b33dff22b1337d4e5\""
Mar 2 13:22:49.963459 containerd[1459]: time="2026-03-02T13:22:49.963253436Z" level=info msg="TearDown network for sandbox \"a3087c42abbc4bb2b2555c297dd139c8b6f929ab33ab008b33dff22b1337d4e5\" successfully"
Mar 2 13:22:49.982918 containerd[1459]: time="2026-03-02T13:22:49.981015556Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a3087c42abbc4bb2b2555c297dd139c8b6f929ab33ab008b33dff22b1337d4e5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 2 13:22:49.982918 containerd[1459]: time="2026-03-02T13:22:49.981136320Z" level=info msg="RemovePodSandbox \"a3087c42abbc4bb2b2555c297dd139c8b6f929ab33ab008b33dff22b1337d4e5\" returns successfully"
Mar 2 13:22:50.204106 kubelet[1799]: E0302 13:22:50.203959 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 2 13:22:50.869538 systemd[1]: run-containerd-runc-k8s.io-d8d9470a92105f30acd064a5c0bfb20e76df14305d5da2b4685a42f28cdd2d0b-runc.0qc5ld.mount: Deactivated successfully.
Mar 2 13:22:50.975876 kubelet[1799]: E0302 13:22:50.972937 1799 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:55482->127.0.0.1:33589: write tcp 127.0.0.1:55482->127.0.0.1:33589: write: broken pipe
Mar 2 13:22:51.206356 kubelet[1799]: E0302 13:22:51.205691 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 2 13:22:52.207068 kubelet[1799]: E0302 13:22:52.206899 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 2 13:22:53.044522 systemd[1]: run-containerd-runc-k8s.io-d8d9470a92105f30acd064a5c0bfb20e76df14305d5da2b4685a42f28cdd2d0b-runc.lzIBbD.mount: Deactivated successfully.
Mar 2 13:22:53.208461 kubelet[1799]: E0302 13:22:53.208300 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 2 13:22:54.209333 kubelet[1799]: E0302 13:22:54.209262 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"