Jan 24 00:54:10.059724 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 23 22:35:12 -00 2026
Jan 24 00:54:10.059745 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f12f29e7047038507ed004e06b4d83ecb1c13bb716c56fe3c96b88d906e8eff2
Jan 24 00:54:10.059756 kernel: BIOS-provided physical RAM map:
Jan 24 00:54:10.059762 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 24 00:54:10.059767 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 24 00:54:10.059773 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 24 00:54:10.059779 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Jan 24 00:54:10.059785 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Jan 24 00:54:10.059790 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 24 00:54:10.059798 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jan 24 00:54:10.059803 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 24 00:54:10.059809 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 24 00:54:10.059814 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 24 00:54:10.059820 kernel: NX (Execute Disable) protection: active
Jan 24 00:54:10.059826 kernel: APIC: Static calls initialized
Jan 24 00:54:10.059835 kernel: SMBIOS 2.8 present.
Jan 24 00:54:10.059841 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Jan 24 00:54:10.059847 kernel: Hypervisor detected: KVM
Jan 24 00:54:10.059852 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 24 00:54:10.059858 kernel: kvm-clock: using sched offset of 6940905571 cycles
Jan 24 00:54:10.059864 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 24 00:54:10.059870 kernel: tsc: Detected 2445.426 MHz processor
Jan 24 00:54:10.059877 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 24 00:54:10.059883 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 24 00:54:10.059891 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Jan 24 00:54:10.059898 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 24 00:54:10.059904 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 24 00:54:10.059909 kernel: Using GB pages for direct mapping
Jan 24 00:54:10.059915 kernel: ACPI: Early table checksum verification disabled
Jan 24 00:54:10.059921 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Jan 24 00:54:10.059927 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 24 00:54:10.059933 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 24 00:54:10.059939 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 24 00:54:10.059947 kernel: ACPI: FACS 0x000000009CFE0000 000040
Jan 24 00:54:10.059953 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 24 00:54:10.059959 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 24 00:54:10.059965 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 24 00:54:10.059971 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 24 00:54:10.059977 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Jan 24 00:54:10.059983 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Jan 24 00:54:10.059994 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Jan 24 00:54:10.060010 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Jan 24 00:54:10.060021 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Jan 24 00:54:10.060031 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Jan 24 00:54:10.060111 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Jan 24 00:54:10.060126 kernel: No NUMA configuration found
Jan 24 00:54:10.060137 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Jan 24 00:54:10.060154 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Jan 24 00:54:10.060165 kernel: Zone ranges:
Jan 24 00:54:10.060174 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 24 00:54:10.060180 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Jan 24 00:54:10.060186 kernel: Normal empty
Jan 24 00:54:10.060192 kernel: Movable zone start for each node
Jan 24 00:54:10.060199 kernel: Early memory node ranges
Jan 24 00:54:10.060205 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 24 00:54:10.060211 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Jan 24 00:54:10.060217 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Jan 24 00:54:10.060270 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 24 00:54:10.060283 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 24 00:54:10.060294 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Jan 24 00:54:10.060305 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 24 00:54:10.060317 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 24 00:54:10.060325 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 24 00:54:10.060332 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 24 00:54:10.060338 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 24 00:54:10.060344 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 24 00:54:10.060354 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 24 00:54:10.060360 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 24 00:54:10.060366 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 24 00:54:10.060372 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 24 00:54:10.060378 kernel: TSC deadline timer available
Jan 24 00:54:10.060385 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Jan 24 00:54:10.060391 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 24 00:54:10.060397 kernel: kvm-guest: KVM setup pv remote TLB flush
Jan 24 00:54:10.060403 kernel: kvm-guest: setup PV sched yield
Jan 24 00:54:10.060412 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jan 24 00:54:10.060418 kernel: Booting paravirtualized kernel on KVM
Jan 24 00:54:10.060424 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 24 00:54:10.060431 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jan 24 00:54:10.060437 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288
Jan 24 00:54:10.060443 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152
Jan 24 00:54:10.060449 kernel: pcpu-alloc: [0] 0 1 2 3
Jan 24 00:54:10.060456 kernel: kvm-guest: PV spinlocks enabled
Jan 24 00:54:10.060462 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 24 00:54:10.060472 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f12f29e7047038507ed004e06b4d83ecb1c13bb716c56fe3c96b88d906e8eff2
Jan 24 00:54:10.060478 kernel: random: crng init done
Jan 24 00:54:10.060484 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 24 00:54:10.060491 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 24 00:54:10.060497 kernel: Fallback order for Node 0: 0
Jan 24 00:54:10.060503 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Jan 24 00:54:10.060509 kernel: Policy zone: DMA32
Jan 24 00:54:10.060515 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 24 00:54:10.060522 kernel: Memory: 2434608K/2571752K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42884K init, 2312K bss, 136884K reserved, 0K cma-reserved)
Jan 24 00:54:10.060530 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 24 00:54:10.060536 kernel: ftrace: allocating 37989 entries in 149 pages
Jan 24 00:54:10.060543 kernel: ftrace: allocated 149 pages with 4 groups
Jan 24 00:54:10.060549 kernel: Dynamic Preempt: voluntary
Jan 24 00:54:10.060555 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 24 00:54:10.060562 kernel: rcu: RCU event tracing is enabled.
Jan 24 00:54:10.060568 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 24 00:54:10.060575 kernel: Trampoline variant of Tasks RCU enabled.
Jan 24 00:54:10.060581 kernel: Rude variant of Tasks RCU enabled.
Jan 24 00:54:10.060589 kernel: Tracing variant of Tasks RCU enabled.
Jan 24 00:54:10.060596 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 24 00:54:10.060602 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 24 00:54:10.060608 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jan 24 00:54:10.060614 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 24 00:54:10.060620 kernel: Console: colour VGA+ 80x25
Jan 24 00:54:10.060626 kernel: printk: console [ttyS0] enabled
Jan 24 00:54:10.060632 kernel: ACPI: Core revision 20230628
Jan 24 00:54:10.060639 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 24 00:54:10.060647 kernel: APIC: Switch to symmetric I/O mode setup
Jan 24 00:54:10.060653 kernel: x2apic enabled
Jan 24 00:54:10.060660 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 24 00:54:10.060668 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jan 24 00:54:10.060679 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jan 24 00:54:10.060691 kernel: kvm-guest: setup PV IPIs
Jan 24 00:54:10.060701 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 24 00:54:10.060729 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 24 00:54:10.060739 kernel: Calibrating delay loop (skipped) preset value.. 4890.85 BogoMIPS (lpj=2445426)
Jan 24 00:54:10.060753 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 24 00:54:10.060762 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 24 00:54:10.060779 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 24 00:54:10.060791 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 24 00:54:10.060801 kernel: Spectre V2 : Mitigation: Retpolines
Jan 24 00:54:10.060813 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 24 00:54:10.060824 kernel: Speculative Store Bypass: Vulnerable
Jan 24 00:54:10.060839 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 24 00:54:10.060852 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 24 00:54:10.060863 kernel: active return thunk: srso_alias_return_thunk
Jan 24 00:54:10.060874 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 24 00:54:10.060885 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Jan 24 00:54:10.060897 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 24 00:54:10.060908 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 24 00:54:10.060919 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 24 00:54:10.060933 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 24 00:54:10.060944 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 24 00:54:10.060955 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 24 00:54:10.060966 kernel: Freeing SMP alternatives memory: 32K
Jan 24 00:54:10.060977 kernel: pid_max: default: 32768 minimum: 301
Jan 24 00:54:10.060988 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 24 00:54:10.061000 kernel: landlock: Up and running.
Jan 24 00:54:10.061011 kernel: SELinux: Initializing.
Jan 24 00:54:10.061025 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 24 00:54:10.061039 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 24 00:54:10.061112 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Jan 24 00:54:10.061119 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 24 00:54:10.061126 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 24 00:54:10.061133 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 24 00:54:10.061139 kernel: Performance Events: PMU not available due to virtualization, using software events only.
Jan 24 00:54:10.061146 kernel: signal: max sigframe size: 1776
Jan 24 00:54:10.061152 kernel: rcu: Hierarchical SRCU implementation.
Jan 24 00:54:10.061159 kernel: rcu: Max phase no-delay instances is 400.
Jan 24 00:54:10.061169 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 24 00:54:10.061175 kernel: smp: Bringing up secondary CPUs ...
Jan 24 00:54:10.061182 kernel: smpboot: x86: Booting SMP configuration:
Jan 24 00:54:10.061188 kernel: .... node #0, CPUs: #1 #2 #3
Jan 24 00:54:10.061195 kernel: smp: Brought up 1 node, 4 CPUs
Jan 24 00:54:10.061201 kernel: smpboot: Max logical packages: 1
Jan 24 00:54:10.061208 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS)
Jan 24 00:54:10.061215 kernel: devtmpfs: initialized
Jan 24 00:54:10.061251 kernel: x86/mm: Memory block size: 128MB
Jan 24 00:54:10.061262 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 24 00:54:10.061268 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 24 00:54:10.061275 kernel: pinctrl core: initialized pinctrl subsystem
Jan 24 00:54:10.061282 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 24 00:54:10.061288 kernel: audit: initializing netlink subsys (disabled)
Jan 24 00:54:10.061295 kernel: audit: type=2000 audit(1769216047.991:1): state=initialized audit_enabled=0 res=1
Jan 24 00:54:10.061301 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 24 00:54:10.061308 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 24 00:54:10.061314 kernel: cpuidle: using governor menu
Jan 24 00:54:10.061323 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 24 00:54:10.061330 kernel: dca service started, version 1.12.1
Jan 24 00:54:10.061336 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Jan 24 00:54:10.061343 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jan 24 00:54:10.061349 kernel: PCI: Using configuration type 1 for base access
Jan 24 00:54:10.061356 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 24 00:54:10.061362 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 24 00:54:10.061369 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 24 00:54:10.061375 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 24 00:54:10.061384 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 24 00:54:10.061391 kernel: ACPI: Added _OSI(Module Device)
Jan 24 00:54:10.061397 kernel: ACPI: Added _OSI(Processor Device)
Jan 24 00:54:10.061404 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 24 00:54:10.061410 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 24 00:54:10.061417 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 24 00:54:10.061423 kernel: ACPI: Interpreter enabled
Jan 24 00:54:10.061429 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 24 00:54:10.061436 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 24 00:54:10.061445 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 24 00:54:10.061451 kernel: PCI: Using E820 reservations for host bridge windows
Jan 24 00:54:10.061458 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 24 00:54:10.061464 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 24 00:54:10.061656 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 24 00:54:10.061789 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 24 00:54:10.061912 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 24 00:54:10.061926 kernel: PCI host bridge to bus 0000:00
Jan 24 00:54:10.062105 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 24 00:54:10.062319 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 24 00:54:10.062437 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 24 00:54:10.062547 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Jan 24 00:54:10.062655 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 24 00:54:10.062762 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Jan 24 00:54:10.062877 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 24 00:54:10.063013 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jan 24 00:54:10.063264 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Jan 24 00:54:10.063398 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Jan 24 00:54:10.063517 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Jan 24 00:54:10.063636 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Jan 24 00:54:10.063753 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 24 00:54:10.063890 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Jan 24 00:54:10.064009 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Jan 24 00:54:10.064259 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Jan 24 00:54:10.064400 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Jan 24 00:54:10.064543 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Jan 24 00:54:10.064724 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Jan 24 00:54:10.064889 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Jan 24 00:54:10.065297 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Jan 24 00:54:10.065460 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jan 24 00:54:10.065584 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Jan 24 00:54:10.065702 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Jan 24 00:54:10.065821 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Jan 24 00:54:10.065940 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Jan 24 00:54:10.066156 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jan 24 00:54:10.066343 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 24 00:54:10.066502 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jan 24 00:54:10.066660 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Jan 24 00:54:10.066781 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Jan 24 00:54:10.066908 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jan 24 00:54:10.067027 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Jan 24 00:54:10.067040 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 24 00:54:10.067104 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 24 00:54:10.067116 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 24 00:54:10.067129 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 24 00:54:10.067141 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 24 00:54:10.067148 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 24 00:54:10.067154 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 24 00:54:10.067161 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 24 00:54:10.067168 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 24 00:54:10.067178 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 24 00:54:10.067185 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 24 00:54:10.067191 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 24 00:54:10.067198 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 24 00:54:10.067205 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 24 00:54:10.067212 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 24 00:54:10.067218 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 24 00:54:10.067254 kernel: iommu: Default domain type: Translated
Jan 24 00:54:10.067261 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 24 00:54:10.067271 kernel: PCI: Using ACPI for IRQ routing
Jan 24 00:54:10.067277 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 24 00:54:10.067284 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 24 00:54:10.067291 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Jan 24 00:54:10.067424 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 24 00:54:10.067544 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 24 00:54:10.067662 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 24 00:54:10.067671 kernel: vgaarb: loaded
Jan 24 00:54:10.067681 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 24 00:54:10.067688 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 24 00:54:10.067695 kernel: clocksource: Switched to clocksource kvm-clock
Jan 24 00:54:10.067702 kernel: VFS: Disk quotas dquot_6.6.0
Jan 24 00:54:10.067708 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 24 00:54:10.067715 kernel: pnp: PnP ACPI init
Jan 24 00:54:10.067862 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 24 00:54:10.067883 kernel: pnp: PnP ACPI: found 6 devices
Jan 24 00:54:10.067894 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 24 00:54:10.067912 kernel: NET: Registered PF_INET protocol family
Jan 24 00:54:10.067924 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 24 00:54:10.067935 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 24 00:54:10.067948 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 24 00:54:10.067959 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 24 00:54:10.067970 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 24 00:54:10.067982 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 24 00:54:10.067993 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 24 00:54:10.068008 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 24 00:54:10.068020 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 24 00:54:10.068032 kernel: NET: Registered PF_XDP protocol family
Jan 24 00:54:10.068281 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 24 00:54:10.068478 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 24 00:54:10.068593 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 24 00:54:10.068702 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Jan 24 00:54:10.068811 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 24 00:54:10.068947 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Jan 24 00:54:10.068964 kernel: PCI: CLS 0 bytes, default 64
Jan 24 00:54:10.068971 kernel: Initialise system trusted keyrings
Jan 24 00:54:10.068978 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 24 00:54:10.068985 kernel: Key type asymmetric registered
Jan 24 00:54:10.068991 kernel: Asymmetric key parser 'x509' registered
Jan 24 00:54:10.068998 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 24 00:54:10.069005 kernel: io scheduler mq-deadline registered
Jan 24 00:54:10.069011 kernel: io scheduler kyber registered
Jan 24 00:54:10.069018 kernel: io scheduler bfq registered
Jan 24 00:54:10.069027 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 24 00:54:10.069035 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 24 00:54:10.069100 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 24 00:54:10.069109 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jan 24 00:54:10.069116 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 24 00:54:10.069123 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 24 00:54:10.069130 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 24 00:54:10.069136 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 24 00:54:10.069143 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 24 00:54:10.069321 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 24 00:54:10.069333 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 24 00:54:10.069447 kernel: rtc_cmos 00:04: registered as rtc0
Jan 24 00:54:10.069561 kernel: rtc_cmos 00:04: setting system clock to 2026-01-24T00:54:09 UTC (1769216049)
Jan 24 00:54:10.069673 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jan 24 00:54:10.069683 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 24 00:54:10.069690 kernel: NET: Registered PF_INET6 protocol family
Jan 24 00:54:10.069697 kernel: Segment Routing with IPv6
Jan 24 00:54:10.069708 kernel: In-situ OAM (IOAM) with IPv6
Jan 24 00:54:10.069715 kernel: NET: Registered PF_PACKET protocol family
Jan 24 00:54:10.069722 kernel: Key type dns_resolver registered
Jan 24 00:54:10.069728 kernel: IPI shorthand broadcast: enabled
Jan 24 00:54:10.069735 kernel: sched_clock: Marking stable (1291022317, 536606302)->(2393042134, -565413515)
Jan 24 00:54:10.069742 kernel: registered taskstats version 1
Jan 24 00:54:10.069748 kernel: Loading compiled-in X.509 certificates
Jan 24 00:54:10.069755 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: 6e114855f6cf7a40074d93a4383c22d00e384634'
Jan 24 00:54:10.069762 kernel: Key type .fscrypt registered
Jan 24 00:54:10.069770 kernel: Key type fscrypt-provisioning registered
Jan 24 00:54:10.069777 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 24 00:54:10.069789 kernel: ima: Allocated hash algorithm: sha1
Jan 24 00:54:10.069801 kernel: ima: No architecture policies found
Jan 24 00:54:10.069814 kernel: clk: Disabling unused clocks
Jan 24 00:54:10.069824 kernel: Freeing unused kernel image (initmem) memory: 42884K
Jan 24 00:54:10.069835 kernel: Write protecting the kernel read-only data: 36864k
Jan 24 00:54:10.069848 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Jan 24 00:54:10.069865 kernel: Run /init as init process
Jan 24 00:54:10.069877 kernel: with arguments:
Jan 24 00:54:10.069890 kernel: /init
Jan 24 00:54:10.069902 kernel: with environment:
Jan 24 00:54:10.069911 kernel: HOME=/
Jan 24 00:54:10.069922 kernel: TERM=linux
Jan 24 00:54:10.069937 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 24 00:54:10.069951 systemd[1]: Detected virtualization kvm.
Jan 24 00:54:10.069970 systemd[1]: Detected architecture x86-64.
Jan 24 00:54:10.069981 systemd[1]: Running in initrd.
Jan 24 00:54:10.069994 systemd[1]: No hostname configured, using default hostname.
Jan 24 00:54:10.070004 systemd[1]: Hostname set to .
Jan 24 00:54:10.070017 systemd[1]: Initializing machine ID from VM UUID.
Jan 24 00:54:10.070029 systemd[1]: Queued start job for default target initrd.target.
Jan 24 00:54:10.070164 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 24 00:54:10.070184 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 24 00:54:10.070202 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 24 00:54:10.070214 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 24 00:54:10.070268 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 24 00:54:10.070282 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 24 00:54:10.070296 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 24 00:54:10.070309 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 24 00:54:10.070320 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 24 00:54:10.070338 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 24 00:54:10.070351 systemd[1]: Reached target paths.target - Path Units.
Jan 24 00:54:10.070363 systemd[1]: Reached target slices.target - Slice Units.
Jan 24 00:54:10.070375 systemd[1]: Reached target swap.target - Swaps.
Jan 24 00:54:10.070406 systemd[1]: Reached target timers.target - Timer Units.
Jan 24 00:54:10.070425 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 24 00:54:10.070441 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 24 00:54:10.070454 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 24 00:54:10.070468 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 24 00:54:10.070479 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 24 00:54:10.070491 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 24 00:54:10.070504 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 24 00:54:10.070516 systemd[1]: Reached target sockets.target - Socket Units.
Jan 24 00:54:10.070528 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 24 00:54:10.070541 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 24 00:54:10.070559 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 24 00:54:10.070570 systemd[1]: Starting systemd-fsck-usr.service...
Jan 24 00:54:10.070582 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 24 00:54:10.070594 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 24 00:54:10.070606 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 24 00:54:10.070650 systemd-journald[195]: Collecting audit messages is disabled.
Jan 24 00:54:10.070687 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 24 00:54:10.070701 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 24 00:54:10.070715 systemd-journald[195]: Journal started
Jan 24 00:54:10.070744 systemd-journald[195]: Runtime Journal (/run/log/journal/f80a5cfd815a4dc59c06a0c823ceb0dd) is 6.0M, max 48.4M, 42.3M free.
Jan 24 00:54:10.076651 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 24 00:54:10.079719 systemd[1]: Finished systemd-fsck-usr.service.
Jan 24 00:54:10.083795 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 24 00:54:10.083894 systemd-modules-load[196]: Inserted module 'overlay'
Jan 24 00:54:10.088208 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 24 00:54:10.101852 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 24 00:54:10.109519 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 24 00:54:10.116782 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 24 00:54:10.136123 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 24 00:54:10.140732 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 24 00:54:10.147124 kernel: Bridge firewalling registered
Jan 24 00:54:10.147108 systemd-modules-load[196]: Inserted module 'br_netfilter'
Jan 24 00:54:10.148112 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 24 00:54:10.292757 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 24 00:54:10.295661 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 24 00:54:10.307104 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 24 00:54:10.327208 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 24 00:54:10.338321 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 24 00:54:10.341345 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 24 00:54:10.348889 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 24 00:54:10.367365 dracut-cmdline[232]: dracut-dracut-053
Jan 24 00:54:10.370014 dracut-cmdline[232]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f12f29e7047038507ed004e06b4d83ecb1c13bb716c56fe3c96b88d906e8eff2
Jan 24 00:54:10.397361 systemd-resolved[226]: Positive Trust Anchors:
Jan 24 00:54:10.397402 systemd-resolved[226]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 24 00:54:10.397450 systemd-resolved[226]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 24 00:54:10.401311 systemd-resolved[226]: Defaulting to hostname 'linux'.
Jan 24 00:54:10.402974 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 24 00:54:10.406440 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 24 00:54:10.455122 kernel: SCSI subsystem initialized
Jan 24 00:54:10.465139 kernel: Loading iSCSI transport class v2.0-870.
Jan 24 00:54:10.477135 kernel: iscsi: registered transport (tcp)
Jan 24 00:54:10.499506 kernel: iscsi: registered transport (qla4xxx)
Jan 24 00:54:10.499559 kernel: QLogic iSCSI HBA Driver
Jan 24 00:54:10.552531 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 24 00:54:10.570347 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 24 00:54:10.611157 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 24 00:54:10.611217 kernel: device-mapper: uevent: version 1.0.3
Jan 24 00:54:10.614714 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 24 00:54:10.664181 kernel: raid6: avx2x4 gen() 31968 MB/s
Jan 24 00:54:10.683132 kernel: raid6: avx2x2 gen() 23709 MB/s
Jan 24 00:54:10.702601 kernel: raid6: avx2x1 gen() 16088 MB/s
Jan 24 00:54:10.702693 kernel: raid6: using algorithm avx2x4 gen() 31968 MB/s
Jan 24 00:54:10.723006 kernel: raid6: .... xor() 4845 MB/s, rmw enabled
Jan 24 00:54:10.723137 kernel: raid6: using avx2x2 recovery algorithm
Jan 24 00:54:10.751169 kernel: xor: automatically using best checksumming function avx
Jan 24 00:54:10.927162 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 24 00:54:10.939622 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 24 00:54:10.950392 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 24 00:54:10.967869 systemd-udevd[414]: Using default interface naming scheme 'v255'.
Jan 24 00:54:10.974467 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 24 00:54:10.990337 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 24 00:54:11.012596 dracut-pre-trigger[426]: rd.md=0: removing MD RAID activation
Jan 24 00:54:11.052656 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 24 00:54:11.073440 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 24 00:54:11.147479 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 24 00:54:11.159305 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 24 00:54:11.175450 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 24 00:54:11.182205 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 24 00:54:11.189257 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 24 00:54:11.196224 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 24 00:54:11.212278 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 24 00:54:11.223628 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Jan 24 00:54:11.223849 kernel: cryptd: max_cpu_qlen set to 1000
Jan 24 00:54:11.230773 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jan 24 00:54:11.231555 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 24 00:54:11.246806 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 24 00:54:11.246828 kernel: GPT:9289727 != 19775487
Jan 24 00:54:11.246839 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 24 00:54:11.246849 kernel: GPT:9289727 != 19775487
Jan 24 00:54:11.246858 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 24 00:54:11.246867 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 24 00:54:11.244886 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 24 00:54:11.244985 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 24 00:54:11.257917 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 24 00:54:11.261121 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 24 00:54:11.261360 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 24 00:54:11.268016 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 24 00:54:11.283087 kernel: libata version 3.00 loaded.
Jan 24 00:54:11.288149 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 24 00:54:11.300138 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 24 00:54:11.300169 kernel: ahci 0000:00:1f.2: version 3.0
Jan 24 00:54:11.303080 kernel: AES CTR mode by8 optimization enabled
Jan 24 00:54:11.303102 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jan 24 00:54:11.316113 kernel: BTRFS: device fsid b9d3569e-180c-420c-96ec-490d7c970b80 devid 1 transid 33 /dev/vda3 scanned by (udev-worker) (470)
Jan 24 00:54:11.316143 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Jan 24 00:54:11.319091 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jan 24 00:54:11.320829 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 24 00:54:11.473159 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (473)
Jan 24 00:54:11.473188 kernel: scsi host0: ahci
Jan 24 00:54:11.473433 kernel: scsi host1: ahci
Jan 24 00:54:11.476580 kernel: scsi host2: ahci
Jan 24 00:54:11.476734 kernel: scsi host3: ahci
Jan 24 00:54:11.476890 kernel: scsi host4: ahci
Jan 24 00:54:11.477040 kernel: scsi host5: ahci
Jan 24 00:54:11.477292 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Jan 24 00:54:11.477304 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Jan 24 00:54:11.477313 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Jan 24 00:54:11.477323 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Jan 24 00:54:11.477332 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Jan 24 00:54:11.477346 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Jan 24 00:54:11.329515 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 24 00:54:11.473434 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 24 00:54:11.487168 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 24 00:54:11.491570 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 24 00:54:11.499602 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 24 00:54:11.520335 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 24 00:54:11.523160 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 24 00:54:11.542326 disk-uuid[558]: Primary Header is updated.
Jan 24 00:54:11.542326 disk-uuid[558]: Secondary Entries is updated.
Jan 24 00:54:11.542326 disk-uuid[558]: Secondary Header is updated.
Jan 24 00:54:11.551285 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 24 00:54:11.551468 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 24 00:54:11.562807 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 24 00:54:11.644767 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jan 24 00:54:11.644818 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jan 24 00:54:11.650288 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jan 24 00:54:11.650310 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jan 24 00:54:11.651155 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jan 24 00:54:11.653178 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jan 24 00:54:11.655095 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 24 00:54:11.658260 kernel: ata3.00: applying bridge limits
Jan 24 00:54:11.659972 kernel: ata3.00: configured for UDMA/100
Jan 24 00:54:11.664221 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jan 24 00:54:11.712561 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 24 00:54:11.712785 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 24 00:54:11.725164 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Jan 24 00:54:12.562136 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 24 00:54:12.562589 disk-uuid[563]: The operation has completed successfully.
Jan 24 00:54:12.594466 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 24 00:54:12.594688 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 24 00:54:12.622306 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 24 00:54:12.631182 sh[594]: Success
Jan 24 00:54:12.650130 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Jan 24 00:54:12.693478 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 24 00:54:12.711311 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 24 00:54:12.713800 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 24 00:54:12.738358 kernel: BTRFS info (device dm-0): first mount of filesystem b9d3569e-180c-420c-96ec-490d7c970b80
Jan 24 00:54:12.738400 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 24 00:54:12.738419 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 24 00:54:12.742261 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 24 00:54:12.745136 kernel: BTRFS info (device dm-0): using free space tree
Jan 24 00:54:12.755288 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 24 00:54:12.757427 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 24 00:54:12.768281 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 24 00:54:12.771868 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 24 00:54:12.787654 kernel: BTRFS info (device vda6): first mount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8
Jan 24 00:54:12.787695 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 24 00:54:12.787714 kernel: BTRFS info (device vda6): using free space tree
Jan 24 00:54:12.794121 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 24 00:54:12.807502 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 24 00:54:12.814353 kernel: BTRFS info (device vda6): last unmount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8
Jan 24 00:54:12.821965 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 24 00:54:12.836363 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 24 00:54:13.236639 ignition[698]: Ignition 2.19.0
Jan 24 00:54:13.236653 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 24 00:54:13.236665 ignition[698]: Stage: fetch-offline
Jan 24 00:54:13.236768 ignition[698]: no configs at "/usr/lib/ignition/base.d"
Jan 24 00:54:13.236798 ignition[698]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 24 00:54:13.236982 ignition[698]: parsed url from cmdline: ""
Jan 24 00:54:13.236990 ignition[698]: no config URL provided
Jan 24 00:54:13.237000 ignition[698]: reading system config file "/usr/lib/ignition/user.ign"
Jan 24 00:54:13.258818 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 24 00:54:13.237015 ignition[698]: no config at "/usr/lib/ignition/user.ign"
Jan 24 00:54:13.237094 ignition[698]: op(1): [started] loading QEMU firmware config module
Jan 24 00:54:13.237100 ignition[698]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jan 24 00:54:13.287445 ignition[698]: op(1): [finished] loading QEMU firmware config module
Jan 24 00:54:13.289160 ignition[698]: parsing config with SHA512: 49533dd5cabd61dfccbd6ca5bdc794a2e6fe16f0b4f086a17c0aaa46b8a5f421e68cfcf3d846fb0c15dc795fe10775caf24eb07ace9a3ebf86614d2442d6b10f
Jan 24 00:54:13.296683 systemd-networkd[782]: lo: Link UP
Jan 24 00:54:13.296703 systemd-networkd[782]: lo: Gained carrier
Jan 24 00:54:13.298786 systemd-networkd[782]: Enumeration completed
Jan 24 00:54:13.298879 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 24 00:54:13.309487 ignition[698]: fetch-offline: fetch-offline passed
Jan 24 00:54:13.301378 systemd[1]: Reached target network.target - Network.
Jan 24 00:54:13.309631 ignition[698]: Ignition finished successfully
Jan 24 00:54:13.302780 systemd-networkd[782]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 24 00:54:13.302785 systemd-networkd[782]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 24 00:54:13.304003 systemd-networkd[782]: eth0: Link UP
Jan 24 00:54:13.304007 systemd-networkd[782]: eth0: Gained carrier
Jan 24 00:54:13.304014 systemd-networkd[782]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 24 00:54:13.309129 unknown[698]: fetched base config from "system"
Jan 24 00:54:13.309141 unknown[698]: fetched user config from "qemu"
Jan 24 00:54:13.314458 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 24 00:54:13.319738 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jan 24 00:54:13.336415 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 24 00:54:13.668329 systemd-networkd[782]: eth0: DHCPv4 address 10.0.0.107/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 24 00:54:13.693646 ignition[785]: Ignition 2.19.0
Jan 24 00:54:13.693684 ignition[785]: Stage: kargs
Jan 24 00:54:13.694100 ignition[785]: no configs at "/usr/lib/ignition/base.d"
Jan 24 00:54:13.694122 ignition[785]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 24 00:54:13.695739 ignition[785]: kargs: kargs passed
Jan 24 00:54:13.695805 ignition[785]: Ignition finished successfully
Jan 24 00:54:13.713623 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 24 00:54:13.730442 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 24 00:54:13.800874 ignition[794]: Ignition 2.19.0
Jan 24 00:54:13.800899 ignition[794]: Stage: disks
Jan 24 00:54:13.801169 ignition[794]: no configs at "/usr/lib/ignition/base.d"
Jan 24 00:54:13.801183 ignition[794]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 24 00:54:13.802021 ignition[794]: disks: disks passed
Jan 24 00:54:13.802126 ignition[794]: Ignition finished successfully
Jan 24 00:54:13.819123 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 24 00:54:13.822611 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 24 00:54:13.834741 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 24 00:54:13.836019 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 24 00:54:13.838834 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 24 00:54:13.861431 systemd[1]: Reached target basic.target - Basic System.
Jan 24 00:54:13.884454 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 24 00:54:13.968900 systemd-fsck[803]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 24 00:54:13.985462 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 24 00:54:14.007517 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 24 00:54:14.178107 kernel: EXT4-fs (vda9): mounted filesystem a752e1f1-ddf3-43b9-88e7-8cc533707c34 r/w with ordered data mode. Quota mode: none.
Jan 24 00:54:14.178607 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 24 00:54:14.180790 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 24 00:54:14.193234 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 24 00:54:14.197139 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 24 00:54:14.202765 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 24 00:54:14.241362 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (811)
Jan 24 00:54:14.241421 kernel: BTRFS info (device vda6): first mount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8
Jan 24 00:54:14.241439 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 24 00:54:14.241454 kernel: BTRFS info (device vda6): using free space tree
Jan 24 00:54:14.241464 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 24 00:54:14.202825 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 24 00:54:14.202858 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 24 00:54:14.214219 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 24 00:54:14.243882 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 24 00:54:14.272516 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 24 00:54:14.337145 initrd-setup-root[835]: cut: /sysroot/etc/passwd: No such file or directory
Jan 24 00:54:14.343359 initrd-setup-root[842]: cut: /sysroot/etc/group: No such file or directory
Jan 24 00:54:14.350606 initrd-setup-root[849]: cut: /sysroot/etc/shadow: No such file or directory
Jan 24 00:54:14.360571 initrd-setup-root[856]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 24 00:54:14.492641 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 24 00:54:14.502495 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 24 00:54:14.508673 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 24 00:54:14.526424 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 24 00:54:14.532910 kernel: BTRFS info (device vda6): last unmount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8
Jan 24 00:54:14.632554 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 24 00:54:14.668113 ignition[925]: INFO : Ignition 2.19.0
Jan 24 00:54:14.668113 ignition[925]: INFO : Stage: mount
Jan 24 00:54:14.673155 ignition[925]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 24 00:54:14.673155 ignition[925]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 24 00:54:14.673155 ignition[925]: INFO : mount: mount passed
Jan 24 00:54:14.673155 ignition[925]: INFO : Ignition finished successfully
Jan 24 00:54:14.686741 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 24 00:54:14.702294 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 24 00:54:15.091384 systemd-networkd[782]: eth0: Gained IPv6LL
Jan 24 00:54:15.192387 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 24 00:54:15.204156 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (938)
Jan 24 00:54:15.209177 kernel: BTRFS info (device vda6): first mount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8
Jan 24 00:54:15.209214 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 24 00:54:15.209232 kernel: BTRFS info (device vda6): using free space tree
Jan 24 00:54:15.217174 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 24 00:54:15.220006 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 24 00:54:15.271214 ignition[955]: INFO : Ignition 2.19.0
Jan 24 00:54:15.271214 ignition[955]: INFO : Stage: files
Jan 24 00:54:15.276143 ignition[955]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 24 00:54:15.276143 ignition[955]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 24 00:54:15.276143 ignition[955]: DEBUG : files: compiled without relabeling support, skipping
Jan 24 00:54:15.276143 ignition[955]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 24 00:54:15.276143 ignition[955]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 24 00:54:15.294617 ignition[955]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 24 00:54:15.294617 ignition[955]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 24 00:54:15.294617 ignition[955]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 24 00:54:15.294617 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh"
Jan 24 00:54:15.294617 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh"
Jan 24 00:54:15.294617 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 24 00:54:15.294617 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 24 00:54:15.294617 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Jan 24 00:54:15.294617 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Jan 24 00:54:15.294617 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Jan 24 00:54:15.294617 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1
Jan 24 00:54:15.277731 unknown[955]: wrote ssh authorized keys file for user: core
Jan 24 00:54:15.566842 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Jan 24 00:54:16.058012 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Jan 24 00:54:16.058012 ignition[955]: INFO : files: op(7): [started] processing unit "coreos-metadata.service"
Jan 24 00:54:16.071171 ignition[955]: INFO : files: op(7): op(8): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 24 00:54:16.079350 ignition[955]: INFO : files: op(7): op(8): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 24 00:54:16.079350 ignition[955]: INFO : files: op(7): [finished] processing unit "coreos-metadata.service"
Jan 24 00:54:16.090697 ignition[955]: INFO : files: op(9): [started] setting preset to disabled for "coreos-metadata.service"
Jan 24 00:54:16.131667 ignition[955]: INFO : files: op(9): op(a): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jan 24 00:54:16.148176 ignition[955]: INFO : files: op(9): op(a): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jan 24 00:54:16.154541 ignition[955]: INFO : files: op(9): [finished] setting preset to disabled for "coreos-metadata.service"
Jan 24 00:54:16.154541 ignition[955]: INFO : files: createResultFile: createFiles: op(b): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 24 00:54:16.154541 ignition[955]: INFO : files: createResultFile: createFiles: op(b): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 24 00:54:16.154541 ignition[955]: INFO : files: files passed
Jan 24 00:54:16.154541 ignition[955]: INFO : Ignition finished successfully
Jan 24 00:54:16.159196 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 24 00:54:16.177561 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 24 00:54:16.192686 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 24 00:54:16.200596 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 24 00:54:16.203379 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 24 00:54:16.210543 initrd-setup-root-after-ignition[982]: grep: /sysroot/oem/oem-release: No such file or directory
Jan 24 00:54:16.214680 initrd-setup-root-after-ignition[984]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 24 00:54:16.214680 initrd-setup-root-after-ignition[984]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 24 00:54:16.223397 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 24 00:54:16.228299 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 24 00:54:16.237162 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 24 00:54:16.254309 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 24 00:54:16.301426 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 24 00:54:16.305309 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 24 00:54:16.316346 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 24 00:54:16.324457 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 24 00:54:16.332100 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 24 00:54:16.347389 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 24 00:54:16.374240 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 24 00:54:16.390425 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 24 00:54:16.405678 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 24 00:54:16.413027 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 24 00:54:16.419989 systemd[1]: Stopped target timers.target - Timer Units.
Jan 24 00:54:16.425647 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 24 00:54:16.428332 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 24 00:54:16.435218 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 24 00:54:16.440874 systemd[1]: Stopped target basic.target - Basic System.
Jan 24 00:54:16.446234 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 24 00:54:16.452509 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 24 00:54:16.458834 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 24 00:54:16.465360 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 24 00:54:16.471370 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 24 00:54:16.477953 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 24 00:54:16.483577 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 24 00:54:16.488912 systemd[1]: Stopped target swap.target - Swaps.
Jan 24 00:54:16.493316 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 24 00:54:16.495948 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 24 00:54:16.501784 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 24 00:54:16.510015 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 24 00:54:16.518937 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 24 00:54:16.522715 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 24 00:54:16.533173 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 24 00:54:16.536977 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 24 00:54:16.545595 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 24 00:54:16.549793 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 24 00:54:16.558732 systemd[1]: Stopped target paths.target - Path Units.
Jan 24 00:54:16.565450 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 24 00:54:16.569550 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 24 00:54:16.577146 systemd[1]: Stopped target slices.target - Slice Units.
Jan 24 00:54:16.582040 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 24 00:54:16.586870 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 24 00:54:16.589102 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 24 00:54:16.595097 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 24 00:54:16.597589 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 24 00:54:16.605214 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 24 00:54:16.609625 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 24 00:54:16.619241 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 24 00:54:16.622817 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 24 00:54:16.642449 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 24 00:54:16.650113 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 24 00:54:16.653941 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 24 00:54:16.662994 ignition[1011]: INFO : Ignition 2.19.0
Jan 24 00:54:16.662994 ignition[1011]: INFO : Stage: umount
Jan 24 00:54:16.662994 ignition[1011]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 24 00:54:16.662994 ignition[1011]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 24 00:54:16.662994 ignition[1011]: INFO : umount: umount passed
Jan 24 00:54:16.662994 ignition[1011]: INFO : Ignition finished successfully
Jan 24 00:54:16.695527 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 24 00:54:16.702335 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 24 00:54:16.706443 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 24 00:54:16.716243 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 24 00:54:16.719237 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 24 00:54:16.731426 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 24 00:54:16.732351 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 24 00:54:16.732484 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 24 00:54:16.739250 systemd[1]: Stopped target network.target - Network.
Jan 24 00:54:16.744935 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 24 00:54:16.745090 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 24 00:54:16.751348 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 24 00:54:16.751446 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 24 00:54:16.755760 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 24 00:54:16.755838 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 24 00:54:16.772137 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 24 00:54:16.772245 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 24 00:54:16.773927 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 24 00:54:16.781146 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 24 00:54:16.787426 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 24 00:54:16.787576 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 24 00:54:16.789174 systemd-networkd[782]: eth0: DHCPv6 lease lost
Jan 24 00:54:16.795468 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 24 00:54:16.795662 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 24 00:54:16.799610 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 24 00:54:16.799684 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 24 00:54:16.818554 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 24 00:54:16.823789 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 24 00:54:16.823898 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 24 00:54:16.831409 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 24 00:54:16.840449 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 24 00:54:16.840573 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 24 00:54:16.849653 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 24 00:54:16.849790 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 24 00:54:16.854946 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 24 00:54:16.855009 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 24 00:54:16.856013 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 24 00:54:16.856127 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 24 00:54:16.875830 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 24 00:54:16.876025 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 24 00:54:16.878979 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 24 00:54:16.879147 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 24 00:54:16.880321 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 24 00:54:16.880412 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 24 00:54:16.896657 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 24 00:54:16.896953 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 24 00:54:16.906681 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 24 00:54:16.906765 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 24 00:54:16.913462 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 24 00:54:16.913536 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 24 00:54:16.916848 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 24 00:54:16.916931 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 24 00:54:16.927434 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 24 00:54:16.927539 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 24 00:54:16.938731 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 24 00:54:16.938820 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 24 00:54:16.960370 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 24 00:54:16.964658 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 24 00:54:16.964731 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 24 00:54:16.971665 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 24 00:54:16.971740 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 24 00:54:16.974502 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 24 00:54:16.974659 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 24 00:54:16.982827 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 24 00:54:16.991974 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 24 00:54:17.061187 systemd-journald[195]: Received SIGTERM from PID 1 (systemd).
Jan 24 00:54:17.011025 systemd[1]: Switching root.
Jan 24 00:54:17.063420 systemd-journald[195]: Journal stopped
Jan 24 00:54:18.439257 kernel: SELinux: policy capability network_peer_controls=1
Jan 24 00:54:18.439386 kernel: SELinux: policy capability open_perms=1
Jan 24 00:54:18.439405 kernel: SELinux: policy capability extended_socket_class=1
Jan 24 00:54:18.439421 kernel: SELinux: policy capability always_check_network=0
Jan 24 00:54:18.439437 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 24 00:54:18.439452 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 24 00:54:18.439468 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 24 00:54:18.439485 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 24 00:54:18.439507 kernel: audit: type=1403 audit(1769216057.238:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 24 00:54:18.439530 systemd[1]: Successfully loaded SELinux policy in 54.064ms.
Jan 24 00:54:18.439561 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.788ms.
Jan 24 00:54:18.439579 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 24 00:54:18.439596 systemd[1]: Detected virtualization kvm.
Jan 24 00:54:18.439612 systemd[1]: Detected architecture x86-64.
Jan 24 00:54:18.439628 systemd[1]: Detected first boot.
Jan 24 00:54:18.439644 systemd[1]: Initializing machine ID from VM UUID.
Jan 24 00:54:18.439660 zram_generator::config[1055]: No configuration found.
Jan 24 00:54:18.439682 systemd[1]: Populated /etc with preset unit settings.
Jan 24 00:54:18.439699 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 24 00:54:18.439721 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 24 00:54:18.439742 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 24 00:54:18.439761 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 24 00:54:18.439782 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 24 00:54:18.439799 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 24 00:54:18.439815 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 24 00:54:18.439832 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 24 00:54:18.439848 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 24 00:54:18.439865 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 24 00:54:18.439881 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 24 00:54:18.439898 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 24 00:54:18.439914 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 24 00:54:18.439934 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 24 00:54:18.439951 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 24 00:54:18.439968 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 24 00:54:18.439993 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 24 00:54:18.440010 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 24 00:54:18.440026 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 24 00:54:18.440091 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 24 00:54:18.440114 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 24 00:54:18.440132 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 24 00:54:18.440159 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 24 00:54:18.440177 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 24 00:54:18.440194 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 24 00:54:18.440211 systemd[1]: Reached target slices.target - Slice Units.
Jan 24 00:54:18.440227 systemd[1]: Reached target swap.target - Swaps.
Jan 24 00:54:18.440244 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 24 00:54:18.440261 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 24 00:54:18.440282 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 24 00:54:18.440337 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 24 00:54:18.440355 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 24 00:54:18.440372 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 24 00:54:18.440389 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 24 00:54:18.440408 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 24 00:54:18.440425 systemd[1]: Mounting media.mount - External Media Directory...
Jan 24 00:54:18.440442 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 24 00:54:18.440458 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 24 00:54:18.440479 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 24 00:54:18.440496 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 24 00:54:18.440513 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 24 00:54:18.440530 systemd[1]: Reached target machines.target - Containers.
Jan 24 00:54:18.440546 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 24 00:54:18.440565 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 24 00:54:18.440583 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 24 00:54:18.440602 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 24 00:54:18.440620 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 24 00:54:18.440642 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 24 00:54:18.440658 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 24 00:54:18.440675 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 24 00:54:18.440691 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 24 00:54:18.440708 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 24 00:54:18.440724 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 24 00:54:18.440740 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 24 00:54:18.440757 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 24 00:54:18.440776 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 24 00:54:18.440793 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 24 00:54:18.440809 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 24 00:54:18.440826 kernel: ACPI: bus type drm_connector registered
Jan 24 00:54:18.440842 kernel: fuse: init (API version 7.39)
Jan 24 00:54:18.440859 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 24 00:54:18.440877 kernel: loop: module loaded
Jan 24 00:54:18.440892 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 24 00:54:18.440909 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 24 00:54:18.440931 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 24 00:54:18.440977 systemd-journald[1139]: Collecting audit messages is disabled.
Jan 24 00:54:18.441009 systemd[1]: Stopped verity-setup.service.
Jan 24 00:54:18.441027 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 24 00:54:18.441094 systemd-journald[1139]: Journal started
Jan 24 00:54:18.441131 systemd-journald[1139]: Runtime Journal (/run/log/journal/f80a5cfd815a4dc59c06a0c823ceb0dd) is 6.0M, max 48.4M, 42.3M free.
Jan 24 00:54:17.916977 systemd[1]: Queued start job for default target multi-user.target.
Jan 24 00:54:17.940689 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 24 00:54:17.941461 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 24 00:54:17.941876 systemd[1]: systemd-journald.service: Consumed 1.306s CPU time.
Jan 24 00:54:18.455710 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 24 00:54:18.456959 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 24 00:54:18.460812 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 24 00:54:18.465331 systemd[1]: Mounted media.mount - External Media Directory.
Jan 24 00:54:18.469403 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 24 00:54:18.473210 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 24 00:54:18.476927 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 24 00:54:18.480161 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 24 00:54:18.483768 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 24 00:54:18.487713 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 24 00:54:18.488002 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 24 00:54:18.493275 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 24 00:54:18.493598 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 24 00:54:18.497509 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 24 00:54:18.497742 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 24 00:54:18.501928 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 24 00:54:18.502197 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 24 00:54:18.506871 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 24 00:54:18.507120 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 24 00:54:18.511579 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 24 00:54:18.511804 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 24 00:54:18.515927 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 24 00:54:18.520711 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 24 00:54:18.527258 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 24 00:54:18.547985 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 24 00:54:18.559252 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 24 00:54:18.567584 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 24 00:54:18.572821 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 24 00:54:18.572905 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 24 00:54:18.578613 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 24 00:54:18.590599 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 24 00:54:18.599446 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 24 00:54:18.603790 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 24 00:54:18.606216 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 24 00:54:18.612830 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 24 00:54:18.614771 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 24 00:54:18.618953 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 24 00:54:18.625757 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 24 00:54:18.635396 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 24 00:54:18.644627 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 24 00:54:18.896332 systemd-journald[1139]: Time spent on flushing to /var/log/journal/f80a5cfd815a4dc59c06a0c823ceb0dd is 24.369ms for 924 entries.
Jan 24 00:54:18.896332 systemd-journald[1139]: System Journal (/var/log/journal/f80a5cfd815a4dc59c06a0c823ceb0dd) is 8.0M, max 195.6M, 187.6M free.
Jan 24 00:54:18.980875 systemd-journald[1139]: Received client request to flush runtime journal.
Jan 24 00:54:18.980935 kernel: loop0: detected capacity change from 0 to 142488
Jan 24 00:54:18.650345 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 24 00:54:18.892866 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 24 00:54:18.903966 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 24 00:54:18.907531 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 24 00:54:18.914458 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 24 00:54:18.923464 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 24 00:54:18.936674 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 24 00:54:18.965270 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 24 00:54:18.978524 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 24 00:54:18.982317 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 24 00:54:18.989549 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 24 00:54:19.004726 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 24 00:54:19.006159 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 24 00:54:19.028608 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 24 00:54:19.031996 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 24 00:54:19.032916 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 24 00:54:19.038541 udevadm[1179]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jan 24 00:54:19.058147 kernel: loop1: detected capacity change from 0 to 219144
Jan 24 00:54:19.071260 systemd-tmpfiles[1187]: ACLs are not supported, ignoring.
Jan 24 00:54:19.071278 systemd-tmpfiles[1187]: ACLs are not supported, ignoring.
Jan 24 00:54:19.083527 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 24 00:54:19.229502 kernel: loop2: detected capacity change from 0 to 140768
Jan 24 00:54:19.305157 kernel: loop3: detected capacity change from 0 to 142488
Jan 24 00:54:19.377602 kernel: loop4: detected capacity change from 0 to 219144
Jan 24 00:54:19.415238 kernel: loop5: detected capacity change from 0 to 140768
Jan 24 00:54:19.503822 (sd-merge)[1193]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jan 24 00:54:19.511858 (sd-merge)[1193]: Merged extensions into '/usr'.
Jan 24 00:54:19.706805 systemd[1]: Reloading requested from client PID 1169 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 24 00:54:19.707789 systemd[1]: Reloading...
Jan 24 00:54:19.843538 zram_generator::config[1218]: No configuration found.
Jan 24 00:54:20.120700 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 24 00:54:20.122597 ldconfig[1164]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 24 00:54:20.357364 systemd[1]: Reloading finished in 648 ms.
Jan 24 00:54:20.534013 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 24 00:54:20.537982 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 24 00:54:20.578553 systemd[1]: Starting ensure-sysext.service...
Jan 24 00:54:20.583263 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 24 00:54:20.599223 systemd[1]: Reloading requested from client PID 1256 ('systemctl') (unit ensure-sysext.service)...
Jan 24 00:54:20.599243 systemd[1]: Reloading...
Jan 24 00:54:20.623811 systemd-tmpfiles[1257]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 24 00:54:20.625340 systemd-tmpfiles[1257]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 24 00:54:20.626806 systemd-tmpfiles[1257]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 24 00:54:20.627196 systemd-tmpfiles[1257]: ACLs are not supported, ignoring.
Jan 24 00:54:20.627360 systemd-tmpfiles[1257]: ACLs are not supported, ignoring.
Jan 24 00:54:20.631344 systemd-tmpfiles[1257]: Detected autofs mount point /boot during canonicalization of boot.
Jan 24 00:54:20.631369 systemd-tmpfiles[1257]: Skipping /boot
Jan 24 00:54:20.647541 systemd-tmpfiles[1257]: Detected autofs mount point /boot during canonicalization of boot.
Jan 24 00:54:20.647559 systemd-tmpfiles[1257]: Skipping /boot
Jan 24 00:54:20.916910 zram_generator::config[1289]: No configuration found.
Jan 24 00:54:21.160557 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 24 00:54:21.204744 systemd[1]: Reloading finished in 604 ms.
Jan 24 00:54:21.226332 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 24 00:54:21.244886 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 24 00:54:21.288826 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 24 00:54:21.297183 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 24 00:54:21.305378 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 24 00:54:21.314961 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 24 00:54:21.323822 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 24 00:54:21.336811 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 24 00:54:21.360657 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 24 00:54:21.375249 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 24 00:54:21.387578 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 24 00:54:21.387876 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 24 00:54:21.390505 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 24 00:54:21.395765 systemd-udevd[1337]: Using default interface naming scheme 'v255'.
Jan 24 00:54:21.396813 augenrules[1345]: No rules
Jan 24 00:54:21.399334 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 24 00:54:21.411999 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 24 00:54:21.417564 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 24 00:54:21.422360 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 24 00:54:21.426389 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 24 00:54:21.429329 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 24 00:54:21.434581 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 24 00:54:21.439920 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 24 00:54:21.447394 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 24 00:54:21.460985 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 24 00:54:21.465766 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 24 00:54:21.466002 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 24 00:54:21.472859 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 24 00:54:21.473145 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 24 00:54:21.476969 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 24 00:54:21.485823 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 24 00:54:21.501772 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 24 00:54:21.522925 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jan 24 00:54:21.523377 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 24 00:54:21.523610 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 24 00:54:21.663682 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 24 00:54:21.671956 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 24 00:54:21.684279 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 24 00:54:21.700157 systemd-resolved[1333]: Positive Trust Anchors:
Jan 24 00:54:21.700187 systemd-resolved[1333]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 24 00:54:21.700231 systemd-resolved[1333]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 24 00:54:21.701377 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 24 00:54:21.707913 systemd-resolved[1333]: Defaulting to hostname 'linux'.
Jan 24 00:54:21.708849 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 24 00:54:21.714339 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 24 00:54:21.721720 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 24 00:54:21.721767 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 24 00:54:21.722564 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 24 00:54:21.727808 systemd[1]: Finished ensure-sysext.service.
Jan 24 00:54:21.731969 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 24 00:54:21.732452 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 24 00:54:21.736843 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 24 00:54:21.737350 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 24 00:54:21.741206 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 24 00:54:21.741573 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 24 00:54:21.753694 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1362)
Jan 24 00:54:21.754639 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 24 00:54:21.754934 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 24 00:54:21.890139 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Jan 24 00:54:21.897139 kernel: ACPI: button: Power Button [PWRF]
Jan 24 00:54:21.916537 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 24 00:54:21.924255 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 24 00:54:21.924395 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 24 00:54:21.939370 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 24 00:54:21.946488 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 24 00:54:21.980438 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 24 00:54:22.008156 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Jan 24 00:54:22.008252 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Jan 24 00:54:22.008650 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Jan 24 00:54:22.008968 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jan 24 00:54:22.028700 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 24 00:54:22.030414 systemd-networkd[1395]: lo: Link UP
Jan 24 00:54:22.030423 systemd-networkd[1395]: lo: Gained carrier
Jan 24 00:54:22.036615 systemd-networkd[1395]: Enumeration completed
Jan 24 00:54:22.037420 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 24 00:54:22.037839 systemd-networkd[1395]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 24 00:54:22.037846 systemd-networkd[1395]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 24 00:54:22.042524 systemd[1]: Reached target network.target - Network.
Jan 24 00:54:22.045505 systemd-networkd[1395]: eth0: Link UP
Jan 24 00:54:22.045517 systemd-networkd[1395]: eth0: Gained carrier
Jan 24 00:54:22.045540 systemd-networkd[1395]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 24 00:54:22.386261 systemd-networkd[1395]: eth0: DHCPv4 address 10.0.0.107/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 24 00:54:22.387497 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 24 00:54:22.414452 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 24 00:54:22.439131 kernel: mousedev: PS/2 mouse device common for all mice
Jan 24 00:54:22.507686 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 24 00:54:22.510537 systemd[1]: Reached target time-set.target - System Time Set.
Jan 24 00:54:23.218900 systemd-timesyncd[1404]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jan 24 00:54:23.219017 systemd-timesyncd[1404]: Initial clock synchronization to Sat 2026-01-24 00:54:23.218148 UTC.
Jan 24 00:54:23.251780 systemd-resolved[1333]: Clock change detected. Flushing caches.
Jan 24 00:54:23.435445 kernel: kvm_amd: TSC scaling supported
Jan 24 00:54:23.435621 kernel: kvm_amd: Nested Virtualization enabled
Jan 24 00:54:23.435647 kernel: kvm_amd: Nested Paging enabled
Jan 24 00:54:23.437353 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Jan 24 00:54:23.439409 kernel: kvm_amd: PMU virtualization is disabled
Jan 24 00:54:23.499589 kernel: EDAC MC: Ver: 3.0.0
Jan 24 00:54:23.537602 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 24 00:54:23.606215 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 24 00:54:23.638452 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 24 00:54:23.687036 lvm[1421]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 24 00:54:23.768968 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 24 00:54:23.772968 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 24 00:54:23.776108 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 24 00:54:23.779150 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 24 00:54:23.782520 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 24 00:54:23.786138 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 24 00:54:23.789479 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 24 00:54:23.792915 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 24 00:54:23.796248 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 24 00:54:23.796293 systemd[1]: Reached target paths.target - Path Units.
Jan 24 00:54:23.798790 systemd[1]: Reached target timers.target - Timer Units.
Jan 24 00:54:23.803182 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 24 00:54:23.808088 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 24 00:54:23.824192 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 24 00:54:23.829235 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 24 00:54:23.833084 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 24 00:54:23.836416 systemd[1]: Reached target sockets.target - Socket Units.
Jan 24 00:54:23.839297 systemd[1]: Reached target basic.target - Basic System.
Jan 24 00:54:23.842187 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 24 00:54:23.842233 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 24 00:54:23.843641 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 24 00:54:23.844416 lvm[1425]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 24 00:54:23.848280 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 24 00:54:23.854846 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 24 00:54:23.866453 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 24 00:54:23.873686 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 24 00:54:23.877880 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 24 00:54:23.885121 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 24 00:54:23.891356 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 24 00:54:23.898615 extend-filesystems[1429]: Found loop3
Jan 24 00:54:23.898615 extend-filesystems[1429]: Found loop4
Jan 24 00:54:23.898615 extend-filesystems[1429]: Found loop5
Jan 24 00:54:23.898615 extend-filesystems[1429]: Found sr0
Jan 24 00:54:23.898615 extend-filesystems[1429]: Found vda
Jan 24 00:54:23.898615 extend-filesystems[1429]: Found vda1
Jan 24 00:54:23.898615 extend-filesystems[1429]: Found vda2
Jan 24 00:54:23.898615 extend-filesystems[1429]: Found vda3
Jan 24 00:54:23.898615 extend-filesystems[1429]: Found usr
Jan 24 00:54:23.898615 extend-filesystems[1429]: Found vda4
Jan 24 00:54:23.898615 extend-filesystems[1429]: Found vda6
Jan 24 00:54:23.898615 extend-filesystems[1429]: Found vda7
Jan 24 00:54:23.898615 extend-filesystems[1429]: Found vda9
Jan 24 00:54:23.898615 extend-filesystems[1429]: Checking size of /dev/vda9
Jan 24 00:54:24.011863 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Jan 24 00:54:24.011904 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1363)
Jan 24 00:54:24.011957 jq[1428]: false
Jan 24 00:54:24.012273 extend-filesystems[1429]: Resized partition /dev/vda9
Jan 24 00:54:23.905944 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 24 00:54:23.977361 dbus-daemon[1427]: [system] SELinux support is enabled
Jan 24 00:54:24.017878 extend-filesystems[1443]: resize2fs 1.47.1 (20-May-2024)
Jan 24 00:54:23.908988 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 24 00:54:23.909760 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 24 00:54:23.914103 systemd[1]: Starting update-engine.service - Update Engine...
Jan 24 00:54:24.021435 update_engine[1438]: I20260124 00:54:23.976626 1438 main.cc:92] Flatcar Update Engine starting
Jan 24 00:54:24.021435 update_engine[1438]: I20260124 00:54:23.982023 1438 update_check_scheduler.cc:74] Next update check in 10m4s
Jan 24 00:54:23.925370 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 24 00:54:24.023037 jq[1439]: true
Jan 24 00:54:23.930318 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 24 00:54:23.938048 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 24 00:54:23.938800 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 24 00:54:23.939323 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 24 00:54:23.939626 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 24 00:54:23.977919 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 24 00:54:24.016739 systemd[1]: motdgen.service: Deactivated successfully.
Jan 24 00:54:24.017119 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 24 00:54:24.020174 (ntainerd)[1453]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 24 00:54:24.027313 jq[1454]: true
Jan 24 00:54:24.082645 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Jan 24 00:54:24.084118 systemd[1]: Started update-engine.service - Update Engine.
Jan 24 00:54:24.092281 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 24 00:54:24.092322 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 24 00:54:24.118478 extend-filesystems[1443]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jan 24 00:54:24.118478 extend-filesystems[1443]: old_desc_blocks = 1, new_desc_blocks = 1
Jan 24 00:54:24.118478 extend-filesystems[1443]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Jan 24 00:54:24.103686 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 24 00:54:24.141145 extend-filesystems[1429]: Resized filesystem in /dev/vda9
Jan 24 00:54:24.103794 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 24 00:54:24.252464 systemd-networkd[1395]: eth0: Gained IPv6LL
Jan 24 00:54:24.254100 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 24 00:54:24.262327 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 24 00:54:24.263308 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 24 00:54:24.276003 systemd-logind[1435]: Watching system buttons on /dev/input/event1 (Power Button)
Jan 24 00:54:24.276047 systemd-logind[1435]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jan 24 00:54:24.276963 systemd-logind[1435]: New seat seat0.
Jan 24 00:54:24.283671 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 24 00:54:24.290973 systemd[1]: Reached target network-online.target - Network is Online.
Jan 24 00:54:24.327128 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Jan 24 00:54:24.336792 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 24 00:54:24.343934 bash[1479]: Updated "/home/core/.ssh/authorized_keys"
Jan 24 00:54:24.344920 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 24 00:54:24.348711 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 24 00:54:24.354240 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 24 00:54:24.462282 kernel: hrtimer: interrupt took 8722139 ns
Jan 24 00:54:24.520453 sshd_keygen[1455]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 24 00:54:24.519693 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jan 24 00:54:24.527294 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 24 00:54:24.532453 locksmithd[1468]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 24 00:54:24.569014 systemd[1]: coreos-metadata.service: Deactivated successfully.
Jan 24 00:54:24.569329 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Jan 24 00:54:24.575085 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 24 00:54:24.589124 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 24 00:54:24.593931 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jan 24 00:54:24.607857 systemd[1]: issuegen.service: Deactivated successfully.
Jan 24 00:54:24.608227 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 24 00:54:24.767157 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 24 00:54:24.848280 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 24 00:54:24.867634 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jan 24 00:54:24.884251 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jan 24 00:54:24.888124 systemd[1]: Reached target getty.target - Login Prompts.
Jan 24 00:54:25.292492 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jan 24 00:54:25.302938 systemd[1]: Started sshd@0-10.0.0.107:22-10.0.0.1:55398.service - OpenSSH per-connection server daemon (10.0.0.1:55398).
Jan 24 00:54:25.380161 sshd[1525]: Accepted publickey for core from 10.0.0.1 port 55398 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw
Jan 24 00:54:25.381351 sshd[1525]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:54:25.393204 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jan 24 00:54:25.400586 containerd[1453]: time="2026-01-24T00:54:25.398974033Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Jan 24 00:54:25.408736 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jan 24 00:54:25.536798 systemd-logind[1435]: New session 1 of user core.
Jan 24 00:54:25.554510 containerd[1453]: time="2026-01-24T00:54:25.554245635Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jan 24 00:54:25.559167 containerd[1453]: time="2026-01-24T00:54:25.558740683Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.119-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jan 24 00:54:25.559656 containerd[1453]: time="2026-01-24T00:54:25.559289717Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jan 24 00:54:25.559936 containerd[1453]: time="2026-01-24T00:54:25.559806892Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jan 24 00:54:25.561019 containerd[1453]: time="2026-01-24T00:54:25.561000480Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jan 24 00:54:25.561173 containerd[1453]: time="2026-01-24T00:54:25.561156682Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jan 24 00:54:25.561430 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jan 24 00:54:25.562707 containerd[1453]: time="2026-01-24T00:54:25.562520298Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jan 24 00:54:25.562764 containerd[1453]: time="2026-01-24T00:54:25.562750747Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jan 24 00:54:25.563926 containerd[1453]: time="2026-01-24T00:54:25.563816206Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 24 00:54:25.563988 containerd[1453]: time="2026-01-24T00:54:25.563972979Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jan 24 00:54:25.564067 containerd[1453]: time="2026-01-24T00:54:25.564047678Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jan 24 00:54:25.564251 containerd[1453]: time="2026-01-24T00:54:25.564104094Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jan 24 00:54:25.567816 containerd[1453]: time="2026-01-24T00:54:25.567794622Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jan 24 00:54:25.568688 containerd[1453]: time="2026-01-24T00:54:25.568668163Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jan 24 00:54:25.568990 containerd[1453]: time="2026-01-24T00:54:25.568970236Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 24 00:54:25.569112 containerd[1453]: time="2026-01-24T00:54:25.569094879Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jan 24 00:54:25.569261 containerd[1453]: time="2026-01-24T00:54:25.569245280Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jan 24 00:54:25.569416 containerd[1453]: time="2026-01-24T00:54:25.569372257Z" level=info msg="metadata content store policy set" policy=shared
Jan 24 00:54:25.579149 containerd[1453]: time="2026-01-24T00:54:25.579069240Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jan 24 00:54:25.579370 containerd[1453]: time="2026-01-24T00:54:25.579318505Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jan 24 00:54:25.579415 containerd[1453]: time="2026-01-24T00:54:25.579388105Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jan 24 00:54:25.579465 containerd[1453]: time="2026-01-24T00:54:25.579418772Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jan 24 00:54:25.579465 containerd[1453]: time="2026-01-24T00:54:25.579443228Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jan 24 00:54:25.579878 containerd[1453]: time="2026-01-24T00:54:25.579779305Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jan 24 00:54:25.581023 containerd[1453]: time="2026-01-24T00:54:25.580971892Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jan 24 00:54:25.581330 containerd[1453]: time="2026-01-24T00:54:25.581266573Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jan 24 00:54:25.581330 containerd[1453]: time="2026-01-24T00:54:25.581317938Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jan 24 00:54:25.581413 containerd[1453]: time="2026-01-24T00:54:25.581364455Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jan 24 00:54:25.581413 containerd[1453]: time="2026-01-24T00:54:25.581391265Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jan 24 00:54:25.581475 containerd[1453]: time="2026-01-24T00:54:25.581444404Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jan 24 00:54:25.581509 containerd[1453]: time="2026-01-24T00:54:25.581481694Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jan 24 00:54:25.581604 containerd[1453]: time="2026-01-24T00:54:25.581529894Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jan 24 00:54:25.581695 containerd[1453]: time="2026-01-24T00:54:25.581659587Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jan 24 00:54:25.581695 containerd[1453]: time="2026-01-24T00:54:25.581692007Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jan 24 00:54:25.581769 containerd[1453]: time="2026-01-24T00:54:25.581723936Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jan 24 00:54:25.581769 containerd[1453]: time="2026-01-24T00:54:25.581751708Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jan 24 00:54:25.581970 containerd[1453]: time="2026-01-24T00:54:25.581909302Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jan 24 00:54:25.581970 containerd[1453]: time="2026-01-24T00:54:25.581952272Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jan 24 00:54:25.581970 containerd[1453]: time="2026-01-24T00:54:25.581965678Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jan 24 00:54:25.582102 containerd[1453]: time="2026-01-24T00:54:25.582006303Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jan 24 00:54:25.582102 containerd[1453]: time="2026-01-24T00:54:25.582054664Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jan 24 00:54:25.582102 containerd[1453]: time="2026-01-24T00:54:25.582090861Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jan 24 00:54:25.582210 containerd[1453]: time="2026-01-24T00:54:25.582103515Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jan 24 00:54:25.582246 containerd[1453]: time="2026-01-24T00:54:25.582206818Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jan 24 00:54:25.582246 containerd[1453]: time="2026-01-24T00:54:25.582222907Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jan 24 00:54:25.582306 containerd[1453]: time="2026-01-24T00:54:25.582251270Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jan 24 00:54:25.582306 containerd[1453]: time="2026-01-24T00:54:25.582261980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jan 24 00:54:25.582674 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jan 24 00:54:25.585362 containerd[1453]: time="2026-01-24T00:54:25.584978710Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jan 24 00:54:25.585362 containerd[1453]: time="2026-01-24T00:54:25.585187169Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jan 24 00:54:25.585362 containerd[1453]: time="2026-01-24T00:54:25.585251990Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jan 24 00:54:25.585511 containerd[1453]: time="2026-01-24T00:54:25.585470087Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jan 24 00:54:25.585511 containerd[1453]: time="2026-01-24T00:54:25.585503760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jan 24 00:54:25.585627 containerd[1453]: time="2026-01-24T00:54:25.585601692Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jan 24 00:54:25.586412 containerd[1453]: time="2026-01-24T00:54:25.586362755Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jan 24 00:54:25.586465 containerd[1453]: time="2026-01-24T00:54:25.586446381Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jan 24 00:54:25.586504 containerd[1453]: time="2026-01-24T00:54:25.586469544Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jan 24 00:54:25.586504 containerd[1453]: time="2026-01-24T00:54:25.586490453Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jan 24 00:54:25.586630 containerd[1453]: time="2026-01-24T00:54:25.586508747Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jan 24 00:54:25.586630 containerd[1453]: time="2026-01-24T00:54:25.586585661Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jan 24 00:54:25.586710 containerd[1453]: time="2026-01-24T00:54:25.586643709Z" level=info msg="NRI interface is disabled by configuration."
Jan 24 00:54:25.586710 containerd[1453]: time="2026-01-24T00:54:25.586665660Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jan 24 00:54:25.587818 containerd[1453]: time="2026-01-24T00:54:25.587492102Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jan 24 00:54:25.587818 containerd[1453]: time="2026-01-24T00:54:25.587798485Z" level=info msg="Connect containerd service"
Jan 24 00:54:25.588090 containerd[1453]: time="2026-01-24T00:54:25.587934799Z" level=info msg="using legacy CRI server"
Jan 24 00:54:25.588090 containerd[1453]: time="2026-01-24T00:54:25.587959174Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jan 24 00:54:25.588195 containerd[1453]: time="2026-01-24T00:54:25.588165640Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jan 24 00:54:25.589815 containerd[1453]: time="2026-01-24T00:54:25.589717687Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 24 00:54:25.590118 containerd[1453]: time="2026-01-24T00:54:25.589987611Z" level=info msg="Start subscribing containerd event"
Jan 24 00:54:25.590118 containerd[1453]: time="2026-01-24T00:54:25.590074934Z" level=info msg="Start recovering state"
Jan 24 00:54:25.590268 containerd[1453]: time="2026-01-24T00:54:25.590235924Z" level=info msg="Start event monitor"
Jan 24 00:54:25.590308 containerd[1453]: time="2026-01-24T00:54:25.590275989Z" level=info msg="Start snapshots syncer"
Jan 24 00:54:25.590308 containerd[1453]: time="2026-01-24T00:54:25.590299102Z" level=info msg="Start cni network conf syncer for default"
Jan 24 00:54:25.590371 containerd[1453]: time="2026-01-24T00:54:25.590318137Z" level=info msg="Start streaming server"
Jan 24 00:54:25.592068 containerd[1453]: time="2026-01-24T00:54:25.591918174Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jan 24 00:54:25.592210 containerd[1453]: time="2026-01-24T00:54:25.592172810Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jan 24 00:54:25.597132 containerd[1453]: time="2026-01-24T00:54:25.594703133Z" level=info msg="containerd successfully booted in 0.197784s"
Jan 24 00:54:25.596231 (systemd)[1532]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jan 24 00:54:25.594751 systemd[1]: Started containerd.service - containerd container runtime.
Jan 24 00:54:25.821753 systemd[1532]: Queued start job for default target default.target.
Jan 24 00:54:25.833218 systemd[1532]: Created slice app.slice - User Application Slice.
Jan 24 00:54:25.833276 systemd[1532]: Reached target paths.target - Paths.
Jan 24 00:54:25.833290 systemd[1532]: Reached target timers.target - Timers.
Jan 24 00:54:25.835468 systemd[1532]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jan 24 00:54:25.853002 systemd[1532]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jan 24 00:54:25.853212 systemd[1532]: Reached target sockets.target - Sockets. Jan 24 00:54:25.853265 systemd[1532]: Reached target basic.target - Basic System. Jan 24 00:54:25.853427 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 24 00:54:25.854654 systemd[1532]: Reached target default.target - Main User Target. Jan 24 00:54:25.854744 systemd[1532]: Startup finished in 243ms. Jan 24 00:54:25.878789 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 24 00:54:26.132965 systemd[1]: Started sshd@1-10.0.0.107:22-10.0.0.1:55406.service - OpenSSH per-connection server daemon (10.0.0.1:55406). Jan 24 00:54:26.201768 sshd[1544]: Accepted publickey for core from 10.0.0.1 port 55406 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:54:26.206079 sshd[1544]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:54:26.213373 systemd-logind[1435]: New session 2 of user core. Jan 24 00:54:26.223979 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 24 00:54:26.389134 sshd[1544]: pam_unix(sshd:session): session closed for user core Jan 24 00:54:26.397689 systemd[1]: sshd@1-10.0.0.107:22-10.0.0.1:55406.service: Deactivated successfully. Jan 24 00:54:26.400064 systemd[1]: session-2.scope: Deactivated successfully. Jan 24 00:54:26.402073 systemd-logind[1435]: Session 2 logged out. Waiting for processes to exit. Jan 24 00:54:26.409052 systemd[1]: Started sshd@2-10.0.0.107:22-10.0.0.1:55408.service - OpenSSH per-connection server daemon (10.0.0.1:55408). Jan 24 00:54:26.415187 systemd-logind[1435]: Removed session 2. Jan 24 00:54:26.659452 sshd[1551]: Accepted publickey for core from 10.0.0.1 port 55408 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:54:26.667782 sshd[1551]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:54:26.675058 systemd-logind[1435]: New session 3 of user core. 
Jan 24 00:54:26.681768 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 24 00:54:26.894671 sshd[1551]: pam_unix(sshd:session): session closed for user core Jan 24 00:54:26.900743 systemd[1]: sshd@2-10.0.0.107:22-10.0.0.1:55408.service: Deactivated successfully. Jan 24 00:54:26.903385 systemd[1]: session-3.scope: Deactivated successfully. Jan 24 00:54:26.904382 systemd-logind[1435]: Session 3 logged out. Waiting for processes to exit. Jan 24 00:54:26.906098 systemd-logind[1435]: Removed session 3. Jan 24 00:54:28.187189 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:54:28.190964 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 24 00:54:28.192823 (kubelet)[1562]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 24 00:54:28.194356 systemd[1]: Startup finished in 1.441s (kernel) + 7.501s (initrd) + 10.298s (userspace) = 19.241s. Jan 24 00:54:30.452295 kubelet[1562]: E0124 00:54:30.452069 1562 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 24 00:54:30.456000 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 24 00:54:30.456320 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 24 00:54:30.457174 systemd[1]: kubelet.service: Consumed 5.690s CPU time. Jan 24 00:54:36.919636 systemd[1]: Started sshd@3-10.0.0.107:22-10.0.0.1:52332.service - OpenSSH per-connection server daemon (10.0.0.1:52332). 
Jan 24 00:54:36.969752 sshd[1576]: Accepted publickey for core from 10.0.0.1 port 52332 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:54:36.972431 sshd[1576]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:54:36.979275 systemd-logind[1435]: New session 4 of user core. Jan 24 00:54:36.988947 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 24 00:54:37.066733 sshd[1576]: pam_unix(sshd:session): session closed for user core Jan 24 00:54:37.094436 systemd[1]: sshd@3-10.0.0.107:22-10.0.0.1:52332.service: Deactivated successfully. Jan 24 00:54:37.099185 systemd[1]: session-4.scope: Deactivated successfully. Jan 24 00:54:37.101980 systemd-logind[1435]: Session 4 logged out. Waiting for processes to exit. Jan 24 00:54:37.114003 systemd[1]: Started sshd@4-10.0.0.107:22-10.0.0.1:52334.service - OpenSSH per-connection server daemon (10.0.0.1:52334). Jan 24 00:54:37.115439 systemd-logind[1435]: Removed session 4. Jan 24 00:54:37.187370 sshd[1583]: Accepted publickey for core from 10.0.0.1 port 52334 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:54:37.189879 sshd[1583]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:54:37.197258 systemd-logind[1435]: New session 5 of user core. Jan 24 00:54:37.206765 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 24 00:54:37.268814 sshd[1583]: pam_unix(sshd:session): session closed for user core Jan 24 00:54:37.280375 systemd[1]: sshd@4-10.0.0.107:22-10.0.0.1:52334.service: Deactivated successfully. Jan 24 00:54:37.283146 systemd[1]: session-5.scope: Deactivated successfully. Jan 24 00:54:37.287071 systemd-logind[1435]: Session 5 logged out. Waiting for processes to exit. Jan 24 00:54:37.302328 systemd[1]: Started sshd@5-10.0.0.107:22-10.0.0.1:52338.service - OpenSSH per-connection server daemon (10.0.0.1:52338). Jan 24 00:54:37.304525 systemd-logind[1435]: Removed session 5. 
Jan 24 00:54:37.453367 sshd[1590]: Accepted publickey for core from 10.0.0.1 port 52338 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:54:37.456350 sshd[1590]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:54:37.472493 systemd-logind[1435]: New session 6 of user core. Jan 24 00:54:37.483794 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 24 00:54:37.552403 sshd[1590]: pam_unix(sshd:session): session closed for user core Jan 24 00:54:37.566630 systemd[1]: sshd@5-10.0.0.107:22-10.0.0.1:52338.service: Deactivated successfully. Jan 24 00:54:37.569492 systemd[1]: session-6.scope: Deactivated successfully. Jan 24 00:54:37.570595 systemd-logind[1435]: Session 6 logged out. Waiting for processes to exit. Jan 24 00:54:37.587988 systemd[1]: Started sshd@6-10.0.0.107:22-10.0.0.1:52348.service - OpenSSH per-connection server daemon (10.0.0.1:52348). Jan 24 00:54:37.593527 systemd-logind[1435]: Removed session 6. Jan 24 00:54:37.632865 sshd[1597]: Accepted publickey for core from 10.0.0.1 port 52348 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:54:37.635060 sshd[1597]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:54:37.641735 systemd-logind[1435]: New session 7 of user core. Jan 24 00:54:37.651947 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 24 00:54:37.720990 sudo[1600]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 24 00:54:37.721453 sudo[1600]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 24 00:54:37.748641 sudo[1600]: pam_unix(sudo:session): session closed for user root Jan 24 00:54:37.754728 sshd[1597]: pam_unix(sshd:session): session closed for user core Jan 24 00:54:37.783488 systemd[1]: sshd@6-10.0.0.107:22-10.0.0.1:52348.service: Deactivated successfully. Jan 24 00:54:37.786459 systemd[1]: session-7.scope: Deactivated successfully. 
Jan 24 00:54:37.789052 systemd-logind[1435]: Session 7 logged out. Waiting for processes to exit. Jan 24 00:54:37.807813 systemd[1]: Started sshd@7-10.0.0.107:22-10.0.0.1:52356.service - OpenSSH per-connection server daemon (10.0.0.1:52356). Jan 24 00:54:37.810338 systemd-logind[1435]: Removed session 7. Jan 24 00:54:37.843425 sshd[1605]: Accepted publickey for core from 10.0.0.1 port 52356 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:54:37.845638 sshd[1605]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:54:37.851302 systemd-logind[1435]: New session 8 of user core. Jan 24 00:54:37.860994 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 24 00:54:37.924379 sudo[1609]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 24 00:54:37.924869 sudo[1609]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 24 00:54:37.930502 sudo[1609]: pam_unix(sudo:session): session closed for user root Jan 24 00:54:37.950869 sudo[1608]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 24 00:54:37.951531 sudo[1608]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 24 00:54:37.999518 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 24 00:54:38.014900 auditctl[1612]: No rules Jan 24 00:54:38.021080 systemd[1]: audit-rules.service: Deactivated successfully. Jan 24 00:54:38.021753 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 24 00:54:38.045651 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 24 00:54:38.094790 augenrules[1630]: No rules Jan 24 00:54:38.097249 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. 
Jan 24 00:54:38.099165 sudo[1608]: pam_unix(sudo:session): session closed for user root Jan 24 00:54:38.102617 sshd[1605]: pam_unix(sshd:session): session closed for user core Jan 24 00:54:38.116685 systemd[1]: sshd@7-10.0.0.107:22-10.0.0.1:52356.service: Deactivated successfully. Jan 24 00:54:38.118232 systemd[1]: session-8.scope: Deactivated successfully. Jan 24 00:54:38.119994 systemd-logind[1435]: Session 8 logged out. Waiting for processes to exit. Jan 24 00:54:38.121332 systemd[1]: Started sshd@8-10.0.0.107:22-10.0.0.1:52366.service - OpenSSH per-connection server daemon (10.0.0.1:52366). Jan 24 00:54:38.122673 systemd-logind[1435]: Removed session 8. Jan 24 00:54:38.192039 sshd[1638]: Accepted publickey for core from 10.0.0.1 port 52366 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:54:38.194434 sshd[1638]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:54:38.201012 systemd-logind[1435]: New session 9 of user core. Jan 24 00:54:38.210822 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 24 00:54:38.286994 sudo[1641]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 24 00:54:38.287529 sudo[1641]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 24 00:54:38.338674 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 24 00:54:38.386478 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 24 00:54:38.386979 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 24 00:54:40.180634 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:54:40.180867 systemd[1]: kubelet.service: Consumed 5.690s CPU time. Jan 24 00:54:40.200052 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:54:40.236379 systemd[1]: Reloading requested from client PID 1682 ('systemctl') (unit session-9.scope)... 
Jan 24 00:54:40.236418 systemd[1]: Reloading... Jan 24 00:54:40.415816 zram_generator::config[1720]: No configuration found. Jan 24 00:54:40.693219 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 24 00:54:40.827988 systemd[1]: Reloading finished in 590 ms. Jan 24 00:54:40.875686 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 24 00:54:40.875812 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 24 00:54:40.876145 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:54:40.892892 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:54:41.196014 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:54:41.202902 (kubelet)[1765]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 24 00:54:41.294077 kubelet[1765]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 24 00:54:41.294077 kubelet[1765]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 24 00:54:41.294452 kubelet[1765]: I0124 00:54:41.294220 1765 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 24 00:54:42.110269 kubelet[1765]: I0124 00:54:42.110088 1765 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Jan 24 00:54:42.110269 kubelet[1765]: I0124 00:54:42.110202 1765 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 24 00:54:42.114799 kubelet[1765]: I0124 00:54:42.114716 1765 watchdog_linux.go:95] "Systemd watchdog is not enabled" Jan 24 00:54:42.114799 kubelet[1765]: I0124 00:54:42.114761 1765 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 24 00:54:42.115749 kubelet[1765]: I0124 00:54:42.115639 1765 server.go:956] "Client rotation is on, will bootstrap in background" Jan 24 00:54:42.247407 kubelet[1765]: I0124 00:54:42.246857 1765 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 24 00:54:42.261519 kubelet[1765]: E0124 00:54:42.261417 1765 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 24 00:54:42.261686 kubelet[1765]: I0124 00:54:42.261631 1765 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Jan 24 00:54:42.274744 kubelet[1765]: I0124 00:54:42.274663 1765 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Jan 24 00:54:42.275496 kubelet[1765]: I0124 00:54:42.275396 1765 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 24 00:54:42.275799 kubelet[1765]: I0124 00:54:42.275450 1765 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.0.0.107","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 24 00:54:42.276085 kubelet[1765]: I0124 00:54:42.275853 1765 topology_manager.go:138] "Creating topology manager with none policy" Jan 24 00:54:42.276085 
kubelet[1765]: I0124 00:54:42.275870 1765 container_manager_linux.go:306] "Creating device plugin manager" Jan 24 00:54:42.276085 kubelet[1765]: I0124 00:54:42.276081 1765 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Jan 24 00:54:42.281286 kubelet[1765]: I0124 00:54:42.281179 1765 state_mem.go:36] "Initialized new in-memory state store" Jan 24 00:54:42.284002 kubelet[1765]: I0124 00:54:42.283916 1765 kubelet.go:475] "Attempting to sync node with API server" Jan 24 00:54:42.284088 kubelet[1765]: I0124 00:54:42.284053 1765 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 24 00:54:42.284396 kubelet[1765]: I0124 00:54:42.284310 1765 kubelet.go:387] "Adding apiserver pod source" Jan 24 00:54:42.284396 kubelet[1765]: I0124 00:54:42.284394 1765 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 24 00:54:42.284497 kubelet[1765]: E0124 00:54:42.284383 1765 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:54:42.284535 kubelet[1765]: E0124 00:54:42.284526 1765 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:54:42.288713 kubelet[1765]: I0124 00:54:42.288622 1765 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 24 00:54:42.289440 kubelet[1765]: I0124 00:54:42.289419 1765 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 24 00:54:42.289657 kubelet[1765]: I0124 00:54:42.289450 1765 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Jan 24 00:54:42.289716 kubelet[1765]: W0124 00:54:42.289698 1765 probe.go:272] Flexvolume plugin directory at 
/opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 24 00:54:42.291181 kubelet[1765]: E0124 00:54:42.291058 1765 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes \"10.0.0.107\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 24 00:54:42.291181 kubelet[1765]: E0124 00:54:42.291059 1765 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 24 00:54:42.296027 kubelet[1765]: I0124 00:54:42.295929 1765 server.go:1262] "Started kubelet" Jan 24 00:54:42.296600 kubelet[1765]: I0124 00:54:42.296163 1765 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 24 00:54:42.300605 kubelet[1765]: I0124 00:54:42.297782 1765 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 24 00:54:42.300605 kubelet[1765]: I0124 00:54:42.298152 1765 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 24 00:54:42.300605 kubelet[1765]: I0124 00:54:42.298688 1765 server.go:310] "Adding debug handlers to kubelet server" Jan 24 00:54:42.302635 kubelet[1765]: I0124 00:54:42.302534 1765 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 24 00:54:42.303032 kubelet[1765]: I0124 00:54:42.303011 1765 server_v1.go:49] "podresources" method="list" useActivePods=true Jan 24 00:54:42.306233 kubelet[1765]: I0124 00:54:42.306186 1765 volume_manager.go:313] "Starting Kubelet Volume Manager" Jan 24 00:54:42.306668 kubelet[1765]: I0124 00:54:42.306649 1765 server.go:249] "Starting to 
serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 24 00:54:42.306865 kubelet[1765]: E0124 00:54:42.306811 1765 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"10.0.0.107\" not found" Jan 24 00:54:42.307659 kubelet[1765]: I0124 00:54:42.307615 1765 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 24 00:54:42.307900 kubelet[1765]: I0124 00:54:42.307854 1765 reconciler.go:29] "Reconciler: start to sync state" Jan 24 00:54:42.311790 kubelet[1765]: I0124 00:54:42.311727 1765 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 24 00:54:42.312419 kubelet[1765]: E0124 00:54:42.312362 1765 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.107\" not found" node="10.0.0.107" Jan 24 00:54:42.312878 kubelet[1765]: E0124 00:54:42.312854 1765 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 24 00:54:42.315198 kubelet[1765]: I0124 00:54:42.315140 1765 factory.go:223] Registration of the containerd container factory successfully Jan 24 00:54:42.315198 kubelet[1765]: I0124 00:54:42.315175 1765 factory.go:223] Registration of the systemd container factory successfully Jan 24 00:54:42.328080 kubelet[1765]: I0124 00:54:42.328033 1765 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 24 00:54:42.328080 kubelet[1765]: I0124 00:54:42.328066 1765 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 24 00:54:42.328080 kubelet[1765]: I0124 00:54:42.328090 1765 state_mem.go:36] "Initialized new in-memory state store" Jan 24 00:54:42.331351 kubelet[1765]: I0124 00:54:42.331292 1765 policy_none.go:49] "None policy: Start" Jan 24 00:54:42.331351 kubelet[1765]: I0124 00:54:42.331355 1765 memory_manager.go:187] "Starting memorymanager" policy="None" Jan 24 00:54:42.331457 kubelet[1765]: I0124 00:54:42.331373 1765 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Jan 24 00:54:42.333602 kubelet[1765]: I0124 00:54:42.333500 1765 policy_none.go:47] "Start" Jan 24 00:54:42.342200 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 24 00:54:42.370509 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 24 00:54:42.376652 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jan 24 00:54:42.391447 kubelet[1765]: E0124 00:54:42.391365 1765 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 24 00:54:42.392513 kubelet[1765]: I0124 00:54:42.391742 1765 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 24 00:54:42.392513 kubelet[1765]: I0124 00:54:42.391764 1765 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 24 00:54:42.392674 kubelet[1765]: I0124 00:54:42.392620 1765 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 24 00:54:42.394162 kubelet[1765]: E0124 00:54:42.394097 1765 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 24 00:54:42.394219 kubelet[1765]: E0124 00:54:42.394178 1765 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.107\" not found" Jan 24 00:54:42.430068 kubelet[1765]: I0124 00:54:42.429933 1765 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Jan 24 00:54:42.432332 kubelet[1765]: I0124 00:54:42.432230 1765 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Jan 24 00:54:42.432332 kubelet[1765]: I0124 00:54:42.432316 1765 status_manager.go:244] "Starting to sync pod status with apiserver" Jan 24 00:54:42.432487 kubelet[1765]: I0124 00:54:42.432447 1765 kubelet.go:2427] "Starting kubelet main sync loop" Jan 24 00:54:42.433212 kubelet[1765]: E0124 00:54:42.432712 1765 kubelet.go:2451] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Jan 24 00:54:42.506616 kubelet[1765]: I0124 00:54:42.506348 1765 kubelet_node_status.go:75] "Attempting to register node" node="10.0.0.107" Jan 24 00:54:42.513526 kubelet[1765]: I0124 00:54:42.513412 1765 kubelet_node_status.go:78] "Successfully registered node" node="10.0.0.107" Jan 24 00:54:42.513526 kubelet[1765]: E0124 00:54:42.513475 1765 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"10.0.0.107\": node \"10.0.0.107\" not found" Jan 24 00:54:42.525055 kubelet[1765]: E0124 00:54:42.525004 1765 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"10.0.0.107\" not found" Jan 24 00:54:42.625677 kubelet[1765]: E0124 00:54:42.625436 1765 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"10.0.0.107\" not found" Jan 24 00:54:42.716653 sudo[1641]: pam_unix(sudo:session): session closed for user root Jan 24 00:54:42.719627 sshd[1638]: pam_unix(sshd:session): session closed for user core Jan 24 00:54:42.724121 systemd[1]: sshd@8-10.0.0.107:22-10.0.0.1:52366.service: Deactivated successfully. Jan 24 00:54:42.726180 kubelet[1765]: E0124 00:54:42.726065 1765 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"10.0.0.107\" not found" Jan 24 00:54:42.726138 systemd[1]: session-9.scope: Deactivated successfully. Jan 24 00:54:42.726338 systemd[1]: session-9.scope: Consumed 2.105s CPU time, 75.9M memory peak, 0B memory swap peak. Jan 24 00:54:42.727168 systemd-logind[1435]: Session 9 logged out. 
Waiting for processes to exit. Jan 24 00:54:42.728873 systemd-logind[1435]: Removed session 9. Jan 24 00:54:42.829655 kubelet[1765]: E0124 00:54:42.829205 1765 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"10.0.0.107\" not found" Jan 24 00:54:42.930582 kubelet[1765]: E0124 00:54:42.930434 1765 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"10.0.0.107\" not found" Jan 24 00:54:43.031406 kubelet[1765]: E0124 00:54:43.031352 1765 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"10.0.0.107\" not found" Jan 24 00:54:43.139050 kubelet[1765]: E0124 00:54:43.138649 1765 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"10.0.0.107\" not found" Jan 24 00:54:43.139050 kubelet[1765]: I0124 00:54:43.138666 1765 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 24 00:54:43.140505 kubelet[1765]: I0124 00:54:43.140333 1765 reflector.go:568] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" err="very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received" Jan 24 00:54:43.140505 kubelet[1765]: I0124 00:54:43.140333 1765 reflector.go:568] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" err="very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received" Jan 24 00:54:43.243232 kubelet[1765]: E0124 00:54:43.242063 1765 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"10.0.0.107\" not found" Jan 24 00:54:43.286847 kubelet[1765]: E0124 00:54:43.286495 1765 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Jan 24 00:54:43.343228 kubelet[1765]: E0124 00:54:43.343156 1765 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"10.0.0.107\" not found" Jan 24 00:54:43.448791 kubelet[1765]: E0124 00:54:43.448341 1765 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"10.0.0.107\" not found" Jan 24 00:54:43.552627 kubelet[1765]: E0124 00:54:43.551120 1765 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"10.0.0.107\" not found" Jan 24 00:54:43.652507 kubelet[1765]: E0124 00:54:43.652394 1765 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"10.0.0.107\" not found" Jan 24 00:54:43.755153 kubelet[1765]: I0124 00:54:43.755117 1765 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Jan 24 00:54:43.756073 containerd[1453]: time="2026-01-24T00:54:43.755957985Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jan 24 00:54:43.756681 kubelet[1765]: I0124 00:54:43.756437 1765 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Jan 24 00:54:44.286517 kubelet[1765]: I0124 00:54:44.286272 1765 apiserver.go:52] "Watching apiserver" Jan 24 00:54:44.287111 kubelet[1765]: E0124 00:54:44.286852 1765 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:54:44.298348 kubelet[1765]: E0124 00:54:44.298203 1765 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6j86w" podUID="0599f80c-e14c-4f92-8838-e34d8d6742dd" Jan 24 00:54:44.307158 systemd[1]: Created slice kubepods-besteffort-pod0cd55e2e_468f_4b09_9ed5_35cf6e734e76.slice - libcontainer container kubepods-besteffort-pod0cd55e2e_468f_4b09_9ed5_35cf6e734e76.slice. 
Jan 24 00:54:44.308437 kubelet[1765]: I0124 00:54:44.308372 1765 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 24 00:54:44.323456 kubelet[1765]: I0124 00:54:44.323379 1765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/0cd55e2e-468f-4b09-9ed5-35cf6e734e76-cni-net-dir\") pod \"calico-node-mb9tg\" (UID: \"0cd55e2e-468f-4b09-9ed5-35cf6e734e76\") " pod="calico-system/calico-node-mb9tg" Jan 24 00:54:44.323456 kubelet[1765]: I0124 00:54:44.323414 1765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/0cd55e2e-468f-4b09-9ed5-35cf6e734e76-node-certs\") pod \"calico-node-mb9tg\" (UID: \"0cd55e2e-468f-4b09-9ed5-35cf6e734e76\") " pod="calico-system/calico-node-mb9tg" Jan 24 00:54:44.323456 kubelet[1765]: I0124 00:54:44.323435 1765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pg5wq\" (UniqueName: \"kubernetes.io/projected/0cd55e2e-468f-4b09-9ed5-35cf6e734e76-kube-api-access-pg5wq\") pod \"calico-node-mb9tg\" (UID: \"0cd55e2e-468f-4b09-9ed5-35cf6e734e76\") " pod="calico-system/calico-node-mb9tg" Jan 24 00:54:44.323456 kubelet[1765]: I0124 00:54:44.323452 1765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5q7c5\" (UniqueName: \"kubernetes.io/projected/0599f80c-e14c-4f92-8838-e34d8d6742dd-kube-api-access-5q7c5\") pod \"csi-node-driver-6j86w\" (UID: \"0599f80c-e14c-4f92-8838-e34d8d6742dd\") " pod="calico-system/csi-node-driver-6j86w" Jan 24 00:54:44.323760 kubelet[1765]: I0124 00:54:44.323608 1765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6f197284-3b6f-4316-8ffa-bc31197de925-kube-proxy\") pod 
\"kube-proxy-m5p76\" (UID: \"6f197284-3b6f-4316-8ffa-bc31197de925\") " pod="kube-system/kube-proxy-m5p76" Jan 24 00:54:44.323760 kubelet[1765]: I0124 00:54:44.323625 1765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/0cd55e2e-468f-4b09-9ed5-35cf6e734e76-policysync\") pod \"calico-node-mb9tg\" (UID: \"0cd55e2e-468f-4b09-9ed5-35cf6e734e76\") " pod="calico-system/calico-node-mb9tg" Jan 24 00:54:44.323760 kubelet[1765]: I0124 00:54:44.323638 1765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0cd55e2e-468f-4b09-9ed5-35cf6e734e76-xtables-lock\") pod \"calico-node-mb9tg\" (UID: \"0cd55e2e-468f-4b09-9ed5-35cf6e734e76\") " pod="calico-system/calico-node-mb9tg" Jan 24 00:54:44.323760 kubelet[1765]: I0124 00:54:44.323651 1765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0599f80c-e14c-4f92-8838-e34d8d6742dd-kubelet-dir\") pod \"csi-node-driver-6j86w\" (UID: \"0599f80c-e14c-4f92-8838-e34d8d6742dd\") " pod="calico-system/csi-node-driver-6j86w" Jan 24 00:54:44.323760 kubelet[1765]: I0124 00:54:44.323664 1765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/0599f80c-e14c-4f92-8838-e34d8d6742dd-socket-dir\") pod \"csi-node-driver-6j86w\" (UID: \"0599f80c-e14c-4f92-8838-e34d8d6742dd\") " pod="calico-system/csi-node-driver-6j86w" Jan 24 00:54:44.323936 kubelet[1765]: I0124 00:54:44.323695 1765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/0599f80c-e14c-4f92-8838-e34d8d6742dd-varrun\") pod \"csi-node-driver-6j86w\" (UID: \"0599f80c-e14c-4f92-8838-e34d8d6742dd\") " 
pod="calico-system/csi-node-driver-6j86w" Jan 24 00:54:44.323936 kubelet[1765]: I0124 00:54:44.323709 1765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6f197284-3b6f-4316-8ffa-bc31197de925-xtables-lock\") pod \"kube-proxy-m5p76\" (UID: \"6f197284-3b6f-4316-8ffa-bc31197de925\") " pod="kube-system/kube-proxy-m5p76" Jan 24 00:54:44.323936 kubelet[1765]: I0124 00:54:44.323721 1765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/0cd55e2e-468f-4b09-9ed5-35cf6e734e76-cni-bin-dir\") pod \"calico-node-mb9tg\" (UID: \"0cd55e2e-468f-4b09-9ed5-35cf6e734e76\") " pod="calico-system/calico-node-mb9tg" Jan 24 00:54:44.323936 kubelet[1765]: I0124 00:54:44.323766 1765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/0cd55e2e-468f-4b09-9ed5-35cf6e734e76-cni-log-dir\") pod \"calico-node-mb9tg\" (UID: \"0cd55e2e-468f-4b09-9ed5-35cf6e734e76\") " pod="calico-system/calico-node-mb9tg" Jan 24 00:54:44.323936 kubelet[1765]: I0124 00:54:44.323801 1765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/0cd55e2e-468f-4b09-9ed5-35cf6e734e76-flexvol-driver-host\") pod \"calico-node-mb9tg\" (UID: \"0cd55e2e-468f-4b09-9ed5-35cf6e734e76\") " pod="calico-system/calico-node-mb9tg" Jan 24 00:54:44.324156 kubelet[1765]: I0124 00:54:44.323826 1765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0cd55e2e-468f-4b09-9ed5-35cf6e734e76-lib-modules\") pod \"calico-node-mb9tg\" (UID: \"0cd55e2e-468f-4b09-9ed5-35cf6e734e76\") " pod="calico-system/calico-node-mb9tg" Jan 24 00:54:44.324156 kubelet[1765]: 
I0124 00:54:44.323846 1765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0cd55e2e-468f-4b09-9ed5-35cf6e734e76-tigera-ca-bundle\") pod \"calico-node-mb9tg\" (UID: \"0cd55e2e-468f-4b09-9ed5-35cf6e734e76\") " pod="calico-system/calico-node-mb9tg" Jan 24 00:54:44.324156 kubelet[1765]: I0124 00:54:44.323866 1765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0cd55e2e-468f-4b09-9ed5-35cf6e734e76-var-lib-calico\") pod \"calico-node-mb9tg\" (UID: \"0cd55e2e-468f-4b09-9ed5-35cf6e734e76\") " pod="calico-system/calico-node-mb9tg" Jan 24 00:54:44.324156 kubelet[1765]: I0124 00:54:44.323886 1765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/0cd55e2e-468f-4b09-9ed5-35cf6e734e76-var-run-calico\") pod \"calico-node-mb9tg\" (UID: \"0cd55e2e-468f-4b09-9ed5-35cf6e734e76\") " pod="calico-system/calico-node-mb9tg" Jan 24 00:54:44.324156 kubelet[1765]: I0124 00:54:44.323905 1765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6f197284-3b6f-4316-8ffa-bc31197de925-lib-modules\") pod \"kube-proxy-m5p76\" (UID: \"6f197284-3b6f-4316-8ffa-bc31197de925\") " pod="kube-system/kube-proxy-m5p76" Jan 24 00:54:44.324415 kubelet[1765]: I0124 00:54:44.323937 1765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/0599f80c-e14c-4f92-8838-e34d8d6742dd-registration-dir\") pod \"csi-node-driver-6j86w\" (UID: \"0599f80c-e14c-4f92-8838-e34d8d6742dd\") " pod="calico-system/csi-node-driver-6j86w" Jan 24 00:54:44.324415 kubelet[1765]: I0124 00:54:44.324044 1765 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cb96x\" (UniqueName: \"kubernetes.io/projected/6f197284-3b6f-4316-8ffa-bc31197de925-kube-api-access-cb96x\") pod \"kube-proxy-m5p76\" (UID: \"6f197284-3b6f-4316-8ffa-bc31197de925\") " pod="kube-system/kube-proxy-m5p76" Jan 24 00:54:44.330784 systemd[1]: Created slice kubepods-besteffort-pod6f197284_3b6f_4316_8ffa_bc31197de925.slice - libcontainer container kubepods-besteffort-pod6f197284_3b6f_4316_8ffa_bc31197de925.slice. Jan 24 00:54:44.426960 kubelet[1765]: E0124 00:54:44.426911 1765 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:54:44.426960 kubelet[1765]: W0124 00:54:44.426941 1765 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:54:44.427349 kubelet[1765]: E0124 00:54:44.427003 1765 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:54:44.427372 kubelet[1765]: E0124 00:54:44.427352 1765 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:54:44.427372 kubelet[1765]: W0124 00:54:44.427366 1765 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:54:44.427448 kubelet[1765]: E0124 00:54:44.427381 1765 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:54:44.430694 kubelet[1765]: E0124 00:54:44.430647 1765 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:54:44.430694 kubelet[1765]: W0124 00:54:44.430682 1765 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:54:44.430848 kubelet[1765]: E0124 00:54:44.430707 1765 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:54:44.440154 kubelet[1765]: E0124 00:54:44.440058 1765 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:54:44.440154 kubelet[1765]: W0124 00:54:44.440083 1765 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:54:44.440154 kubelet[1765]: E0124 00:54:44.440099 1765 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:54:44.440412 kubelet[1765]: E0124 00:54:44.440328 1765 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:54:44.440412 kubelet[1765]: W0124 00:54:44.440336 1765 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:54:44.440412 kubelet[1765]: E0124 00:54:44.440345 1765 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:54:44.442055 kubelet[1765]: E0124 00:54:44.441951 1765 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:54:44.442055 kubelet[1765]: W0124 00:54:44.442007 1765 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:54:44.442055 kubelet[1765]: E0124 00:54:44.442025 1765 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:54:44.631414 kubelet[1765]: E0124 00:54:44.631197 1765 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:54:44.633200 containerd[1453]: time="2026-01-24T00:54:44.633089222Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-mb9tg,Uid:0cd55e2e-468f-4b09-9ed5-35cf6e734e76,Namespace:calico-system,Attempt:0,}" Jan 24 00:54:44.636507 kubelet[1765]: E0124 00:54:44.636459 1765 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:54:44.637327 containerd[1453]: time="2026-01-24T00:54:44.637259548Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-m5p76,Uid:6f197284-3b6f-4316-8ffa-bc31197de925,Namespace:kube-system,Attempt:0,}" Jan 24 00:54:45.187019 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2345560729.mount: Deactivated successfully. 
Jan 24 00:54:45.195515 containerd[1453]: time="2026-01-24T00:54:45.195424007Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 00:54:45.196701 containerd[1453]: time="2026-01-24T00:54:45.196638113Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 00:54:45.197377 containerd[1453]: time="2026-01-24T00:54:45.197266917Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 24 00:54:45.198327 containerd[1453]: time="2026-01-24T00:54:45.198288744Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 24 00:54:45.211355 containerd[1453]: time="2026-01-24T00:54:45.211271824Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 00:54:45.217428 containerd[1453]: time="2026-01-24T00:54:45.217352995Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 00:54:45.218289 containerd[1453]: time="2026-01-24T00:54:45.218242028Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 584.909291ms" Jan 24 00:54:45.219312 containerd[1453]: 
time="2026-01-24T00:54:45.219278733Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 581.841513ms" Jan 24 00:54:45.288062 kubelet[1765]: E0124 00:54:45.287948 1765 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:54:45.427668 containerd[1453]: time="2026-01-24T00:54:45.426616690Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:54:45.428096 containerd[1453]: time="2026-01-24T00:54:45.427617368Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:54:45.428096 containerd[1453]: time="2026-01-24T00:54:45.427685125Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:54:45.428096 containerd[1453]: time="2026-01-24T00:54:45.427699321Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:54:45.428096 containerd[1453]: time="2026-01-24T00:54:45.427801973Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:54:45.428433 containerd[1453]: time="2026-01-24T00:54:45.428251762Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:54:45.428433 containerd[1453]: time="2026-01-24T00:54:45.428268794Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:54:45.428642 containerd[1453]: time="2026-01-24T00:54:45.428476011Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:54:45.434629 kubelet[1765]: E0124 00:54:45.434468 1765 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6j86w" podUID="0599f80c-e14c-4f92-8838-e34d8d6742dd" Jan 24 00:54:45.503866 systemd[1]: Started cri-containerd-434820f63919083a314c0b444effafc408e39d2c58c4783fda9fbc17158c157d.scope - libcontainer container 434820f63919083a314c0b444effafc408e39d2c58c4783fda9fbc17158c157d. Jan 24 00:54:45.507277 systemd[1]: Started cri-containerd-a3d95cff1ee11e75ee90590dd7e6c6e642532e32f7480d2759c87a8a01cd7ee9.scope - libcontainer container a3d95cff1ee11e75ee90590dd7e6c6e642532e32f7480d2759c87a8a01cd7ee9. 
Jan 24 00:54:45.540591 containerd[1453]: time="2026-01-24T00:54:45.539848191Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-mb9tg,Uid:0cd55e2e-468f-4b09-9ed5-35cf6e734e76,Namespace:calico-system,Attempt:0,} returns sandbox id \"434820f63919083a314c0b444effafc408e39d2c58c4783fda9fbc17158c157d\"" Jan 24 00:54:45.541669 kubelet[1765]: E0124 00:54:45.541607 1765 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:54:45.545100 containerd[1453]: time="2026-01-24T00:54:45.545040414Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 24 00:54:45.551017 containerd[1453]: time="2026-01-24T00:54:45.550905040Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-m5p76,Uid:6f197284-3b6f-4316-8ffa-bc31197de925,Namespace:kube-system,Attempt:0,} returns sandbox id \"a3d95cff1ee11e75ee90590dd7e6c6e642532e32f7480d2759c87a8a01cd7ee9\"" Jan 24 00:54:45.552927 kubelet[1765]: E0124 00:54:45.552895 1765 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:54:45.987499 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1112030658.mount: Deactivated successfully. 
Jan 24 00:54:46.065136 containerd[1453]: time="2026-01-24T00:54:46.065038415Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:54:46.066127 containerd[1453]: time="2026-01-24T00:54:46.066058455Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=5941492" Jan 24 00:54:46.067695 containerd[1453]: time="2026-01-24T00:54:46.067637692Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:54:46.073587 containerd[1453]: time="2026-01-24T00:54:46.073441626Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:54:46.074522 containerd[1453]: time="2026-01-24T00:54:46.074445460Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 529.344583ms" Jan 24 00:54:46.074522 containerd[1453]: time="2026-01-24T00:54:46.074489391Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Jan 24 00:54:46.076366 containerd[1453]: time="2026-01-24T00:54:46.076205365Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\"" Jan 24 00:54:46.081691 containerd[1453]: time="2026-01-24T00:54:46.081622981Z" level=info msg="CreateContainer within sandbox 
\"434820f63919083a314c0b444effafc408e39d2c58c4783fda9fbc17158c157d\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 24 00:54:46.101356 containerd[1453]: time="2026-01-24T00:54:46.101267840Z" level=info msg="CreateContainer within sandbox \"434820f63919083a314c0b444effafc408e39d2c58c4783fda9fbc17158c157d\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"05c8a0c482818df6be91b07e4a546a495de624ffab3e87d114635b56afef2219\"" Jan 24 00:54:46.102628 containerd[1453]: time="2026-01-24T00:54:46.102490321Z" level=info msg="StartContainer for \"05c8a0c482818df6be91b07e4a546a495de624ffab3e87d114635b56afef2219\"" Jan 24 00:54:46.151896 systemd[1]: Started cri-containerd-05c8a0c482818df6be91b07e4a546a495de624ffab3e87d114635b56afef2219.scope - libcontainer container 05c8a0c482818df6be91b07e4a546a495de624ffab3e87d114635b56afef2219. Jan 24 00:54:46.183771 containerd[1453]: time="2026-01-24T00:54:46.183722068Z" level=info msg="StartContainer for \"05c8a0c482818df6be91b07e4a546a495de624ffab3e87d114635b56afef2219\" returns successfully" Jan 24 00:54:46.196671 systemd[1]: cri-containerd-05c8a0c482818df6be91b07e4a546a495de624ffab3e87d114635b56afef2219.scope: Deactivated successfully. 
Jan 24 00:54:46.254183 containerd[1453]: time="2026-01-24T00:54:46.253855943Z" level=info msg="shim disconnected" id=05c8a0c482818df6be91b07e4a546a495de624ffab3e87d114635b56afef2219 namespace=k8s.io Jan 24 00:54:46.254183 containerd[1453]: time="2026-01-24T00:54:46.253971628Z" level=warning msg="cleaning up after shim disconnected" id=05c8a0c482818df6be91b07e4a546a495de624ffab3e87d114635b56afef2219 namespace=k8s.io Jan 24 00:54:46.254183 containerd[1453]: time="2026-01-24T00:54:46.254011523Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 24 00:54:46.288793 kubelet[1765]: E0124 00:54:46.288676 1765 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:54:46.477729 kubelet[1765]: E0124 00:54:46.477618 1765 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:54:47.015358 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1233702862.mount: Deactivated successfully. 
Jan 24 00:54:47.296827 kubelet[1765]: E0124 00:54:47.293409 1765 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:54:47.455094 kubelet[1765]: E0124 00:54:47.453379 1765 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6j86w" podUID="0599f80c-e14c-4f92-8838-e34d8d6742dd" Jan 24 00:54:48.296791 kubelet[1765]: E0124 00:54:48.296452 1765 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:54:48.412618 containerd[1453]: time="2026-01-24T00:54:48.412457788Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:54:48.413242 containerd[1453]: time="2026-01-24T00:54:48.413188848Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.3: active requests=0, bytes read=25965293" Jan 24 00:54:48.420153 containerd[1453]: time="2026-01-24T00:54:48.420068484Z" level=info msg="ImageCreate event name:\"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:54:48.422907 containerd[1453]: time="2026-01-24T00:54:48.422852011Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:54:48.424260 containerd[1453]: time="2026-01-24T00:54:48.424211259Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.3\" with image id \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\", repo tag \"registry.k8s.io/kube-proxy:v1.34.3\", repo digest 
\"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\", size \"25964312\" in 2.347967472s" Jan 24 00:54:48.424395 containerd[1453]: time="2026-01-24T00:54:48.424263677Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\" returns image reference \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\"" Jan 24 00:54:48.425685 containerd[1453]: time="2026-01-24T00:54:48.425644008Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 24 00:54:48.429648 containerd[1453]: time="2026-01-24T00:54:48.429594545Z" level=info msg="CreateContainer within sandbox \"a3d95cff1ee11e75ee90590dd7e6c6e642532e32f7480d2759c87a8a01cd7ee9\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 24 00:54:48.451477 containerd[1453]: time="2026-01-24T00:54:48.451375174Z" level=info msg="CreateContainer within sandbox \"a3d95cff1ee11e75ee90590dd7e6c6e642532e32f7480d2759c87a8a01cd7ee9\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"aca2ca6d9e32a7cf7d9e346a8fb9adefc778edf6440fb7a610e454cbc84afa3f\"" Jan 24 00:54:48.452500 containerd[1453]: time="2026-01-24T00:54:48.452405813Z" level=info msg="StartContainer for \"aca2ca6d9e32a7cf7d9e346a8fb9adefc778edf6440fb7a610e454cbc84afa3f\"" Jan 24 00:54:48.539881 systemd[1]: Started cri-containerd-aca2ca6d9e32a7cf7d9e346a8fb9adefc778edf6440fb7a610e454cbc84afa3f.scope - libcontainer container aca2ca6d9e32a7cf7d9e346a8fb9adefc778edf6440fb7a610e454cbc84afa3f. 
Jan 24 00:54:48.604967 containerd[1453]: time="2026-01-24T00:54:48.604801235Z" level=info msg="StartContainer for \"aca2ca6d9e32a7cf7d9e346a8fb9adefc778edf6440fb7a610e454cbc84afa3f\" returns successfully" Jan 24 00:54:49.579462 kubelet[1765]: E0124 00:54:49.309699 1765 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:54:49.579462 kubelet[1765]: E0124 00:54:49.503640 1765 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6j86w" podUID="0599f80c-e14c-4f92-8838-e34d8d6742dd" Jan 24 00:54:49.594873 kubelet[1765]: E0124 00:54:49.593514 1765 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:54:49.608614 kubelet[1765]: I0124 00:54:49.605702 1765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-m5p76" podStartSLOduration=4.734112593 podStartE2EDuration="7.605670519s" podCreationTimestamp="2026-01-24 00:54:42 +0000 UTC" firstStartedPulling="2026-01-24 00:54:45.553748887 +0000 UTC m=+4.331670445" lastFinishedPulling="2026-01-24 00:54:48.425306811 +0000 UTC m=+7.203228371" observedRunningTime="2026-01-24 00:54:49.60498547 +0000 UTC m=+8.382907029" watchObservedRunningTime="2026-01-24 00:54:49.605670519 +0000 UTC m=+8.383592088" Jan 24 00:54:50.311722 kubelet[1765]: E0124 00:54:50.311199 1765 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:54:50.688817 kubelet[1765]: E0124 00:54:50.688599 1765 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:54:51.314909 kubelet[1765]: E0124 00:54:51.314019 1765 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:54:51.433231 kubelet[1765]: E0124 00:54:51.433129 1765 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6j86w" podUID="0599f80c-e14c-4f92-8838-e34d8d6742dd" Jan 24 00:54:52.369417 kubelet[1765]: E0124 00:54:52.369094 1765 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:54:52.821431 containerd[1453]: time="2026-01-24T00:54:52.821312683Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:54:52.822494 containerd[1453]: time="2026-01-24T00:54:52.822394966Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Jan 24 00:54:52.823594 containerd[1453]: time="2026-01-24T00:54:52.823453949Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:54:52.827010 containerd[1453]: time="2026-01-24T00:54:52.826934328Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:54:52.827971 containerd[1453]: time="2026-01-24T00:54:52.827905972Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", 
repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 4.402221789s" Jan 24 00:54:52.827971 containerd[1453]: time="2026-01-24T00:54:52.827950054Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Jan 24 00:54:52.835072 containerd[1453]: time="2026-01-24T00:54:52.834976911Z" level=info msg="CreateContainer within sandbox \"434820f63919083a314c0b444effafc408e39d2c58c4783fda9fbc17158c157d\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 24 00:54:52.854981 containerd[1453]: time="2026-01-24T00:54:52.854915758Z" level=info msg="CreateContainer within sandbox \"434820f63919083a314c0b444effafc408e39d2c58c4783fda9fbc17158c157d\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"c08d6662a4799491b87ca296ea4a49eed4edbaf2e158750e6da70b8df4afd6cd\"" Jan 24 00:54:52.856125 containerd[1453]: time="2026-01-24T00:54:52.856080231Z" level=info msg="StartContainer for \"c08d6662a4799491b87ca296ea4a49eed4edbaf2e158750e6da70b8df4afd6cd\"" Jan 24 00:54:52.911766 systemd[1]: Started cri-containerd-c08d6662a4799491b87ca296ea4a49eed4edbaf2e158750e6da70b8df4afd6cd.scope - libcontainer container c08d6662a4799491b87ca296ea4a49eed4edbaf2e158750e6da70b8df4afd6cd. 
Jan 24 00:54:52.959233 containerd[1453]: time="2026-01-24T00:54:52.959142409Z" level=info msg="StartContainer for \"c08d6662a4799491b87ca296ea4a49eed4edbaf2e158750e6da70b8df4afd6cd\" returns successfully" Jan 24 00:54:53.369911 kubelet[1765]: E0124 00:54:53.369801 1765 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:54:53.433680 kubelet[1765]: E0124 00:54:53.433491 1765 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6j86w" podUID="0599f80c-e14c-4f92-8838-e34d8d6742dd" Jan 24 00:54:53.610529 kubelet[1765]: E0124 00:54:53.610469 1765 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:54:53.761011 systemd[1]: cri-containerd-c08d6662a4799491b87ca296ea4a49eed4edbaf2e158750e6da70b8df4afd6cd.scope: Deactivated successfully. Jan 24 00:54:53.761440 systemd[1]: cri-containerd-c08d6662a4799491b87ca296ea4a49eed4edbaf2e158750e6da70b8df4afd6cd.scope: Consumed 1.084s CPU time. Jan 24 00:54:53.784102 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c08d6662a4799491b87ca296ea4a49eed4edbaf2e158750e6da70b8df4afd6cd-rootfs.mount: Deactivated successfully. 
Jan 24 00:54:53.821634 kubelet[1765]: I0124 00:54:53.821475 1765 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Jan 24 00:54:53.891575 containerd[1453]: time="2026-01-24T00:54:53.891431662Z" level=info msg="shim disconnected" id=c08d6662a4799491b87ca296ea4a49eed4edbaf2e158750e6da70b8df4afd6cd namespace=k8s.io Jan 24 00:54:53.891575 containerd[1453]: time="2026-01-24T00:54:53.891515328Z" level=warning msg="cleaning up after shim disconnected" id=c08d6662a4799491b87ca296ea4a49eed4edbaf2e158750e6da70b8df4afd6cd namespace=k8s.io Jan 24 00:54:53.892083 containerd[1453]: time="2026-01-24T00:54:53.891532530Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 24 00:54:54.370111 kubelet[1765]: E0124 00:54:54.370007 1765 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:54:54.615694 kubelet[1765]: E0124 00:54:54.615651 1765 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:54:54.617325 containerd[1453]: time="2026-01-24T00:54:54.616910806Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 24 00:54:55.370959 kubelet[1765]: E0124 00:54:55.370814 1765 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:54:55.440901 systemd[1]: Created slice kubepods-besteffort-pod0599f80c_e14c_4f92_8838_e34d8d6742dd.slice - libcontainer container kubepods-besteffort-pod0599f80c_e14c_4f92_8838_e34d8d6742dd.slice. 
Jan 24 00:54:55.447149 containerd[1453]: time="2026-01-24T00:54:55.447037933Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6j86w,Uid:0599f80c-e14c-4f92-8838-e34d8d6742dd,Namespace:calico-system,Attempt:0,}" Jan 24 00:54:55.537649 containerd[1453]: time="2026-01-24T00:54:55.537131815Z" level=error msg="Failed to destroy network for sandbox \"f5e298311cdb224f4c5f821b09dfe2e59ee476ceb821c156dddafeb10b0d0e35\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:54:55.539620 containerd[1453]: time="2026-01-24T00:54:55.538774277Z" level=error msg="encountered an error cleaning up failed sandbox \"f5e298311cdb224f4c5f821b09dfe2e59ee476ceb821c156dddafeb10b0d0e35\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:54:55.539609 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f5e298311cdb224f4c5f821b09dfe2e59ee476ceb821c156dddafeb10b0d0e35-shm.mount: Deactivated successfully. 
Jan 24 00:54:55.540907 containerd[1453]: time="2026-01-24T00:54:55.540261202Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6j86w,Uid:0599f80c-e14c-4f92-8838-e34d8d6742dd,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f5e298311cdb224f4c5f821b09dfe2e59ee476ceb821c156dddafeb10b0d0e35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:54:55.541276 kubelet[1765]: E0124 00:54:55.541219 1765 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5e298311cdb224f4c5f821b09dfe2e59ee476ceb821c156dddafeb10b0d0e35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:54:55.541382 kubelet[1765]: E0124 00:54:55.541324 1765 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5e298311cdb224f4c5f821b09dfe2e59ee476ceb821c156dddafeb10b0d0e35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-6j86w" Jan 24 00:54:55.541435 kubelet[1765]: E0124 00:54:55.541383 1765 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5e298311cdb224f4c5f821b09dfe2e59ee476ceb821c156dddafeb10b0d0e35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-6j86w" Jan 24 00:54:55.541522 
kubelet[1765]: E0124 00:54:55.541456 1765 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-6j86w_calico-system(0599f80c-e14c-4f92-8838-e34d8d6742dd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-6j86w_calico-system(0599f80c-e14c-4f92-8838-e34d8d6742dd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f5e298311cdb224f4c5f821b09dfe2e59ee476ceb821c156dddafeb10b0d0e35\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-6j86w" podUID="0599f80c-e14c-4f92-8838-e34d8d6742dd" Jan 24 00:54:55.626091 kubelet[1765]: I0124 00:54:55.625881 1765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f5e298311cdb224f4c5f821b09dfe2e59ee476ceb821c156dddafeb10b0d0e35" Jan 24 00:54:55.629216 containerd[1453]: time="2026-01-24T00:54:55.628931078Z" level=info msg="StopPodSandbox for \"f5e298311cdb224f4c5f821b09dfe2e59ee476ceb821c156dddafeb10b0d0e35\"" Jan 24 00:54:55.629496 containerd[1453]: time="2026-01-24T00:54:55.629256204Z" level=info msg="Ensure that sandbox f5e298311cdb224f4c5f821b09dfe2e59ee476ceb821c156dddafeb10b0d0e35 in task-service has been cleanup successfully" Jan 24 00:54:55.669112 containerd[1453]: time="2026-01-24T00:54:55.668832738Z" level=error msg="StopPodSandbox for \"f5e298311cdb224f4c5f821b09dfe2e59ee476ceb821c156dddafeb10b0d0e35\" failed" error="failed to destroy network for sandbox \"f5e298311cdb224f4c5f821b09dfe2e59ee476ceb821c156dddafeb10b0d0e35\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:54:55.669478 kubelet[1765]: E0124 00:54:55.669377 1765 log.go:32] "StopPodSandbox from runtime service 
failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f5e298311cdb224f4c5f821b09dfe2e59ee476ceb821c156dddafeb10b0d0e35\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f5e298311cdb224f4c5f821b09dfe2e59ee476ceb821c156dddafeb10b0d0e35" Jan 24 00:54:55.669638 kubelet[1765]: E0124 00:54:55.669497 1765 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f5e298311cdb224f4c5f821b09dfe2e59ee476ceb821c156dddafeb10b0d0e35"} Jan 24 00:54:55.669638 kubelet[1765]: E0124 00:54:55.669628 1765 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0599f80c-e14c-4f92-8838-e34d8d6742dd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f5e298311cdb224f4c5f821b09dfe2e59ee476ceb821c156dddafeb10b0d0e35\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 00:54:55.669779 kubelet[1765]: E0124 00:54:55.669673 1765 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0599f80c-e14c-4f92-8838-e34d8d6742dd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f5e298311cdb224f4c5f821b09dfe2e59ee476ceb821c156dddafeb10b0d0e35\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-6j86w" podUID="0599f80c-e14c-4f92-8838-e34d8d6742dd" Jan 24 00:54:56.371345 kubelet[1765]: E0124 00:54:56.371307 1765 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Jan 24 00:54:57.371848 kubelet[1765]: E0124 00:54:57.371727 1765 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:54:58.029961 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount406961697.mount: Deactivated successfully. Jan 24 00:54:58.285918 containerd[1453]: time="2026-01-24T00:54:58.285694177Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:54:58.286741 containerd[1453]: time="2026-01-24T00:54:58.286667843Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Jan 24 00:54:58.288173 containerd[1453]: time="2026-01-24T00:54:58.288097531Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:54:58.290800 containerd[1453]: time="2026-01-24T00:54:58.290731620Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:54:58.291308 containerd[1453]: time="2026-01-24T00:54:58.291248064Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 3.674287816s" Jan 24 00:54:58.291308 containerd[1453]: time="2026-01-24T00:54:58.291293828Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 24 00:54:58.304694 
containerd[1453]: time="2026-01-24T00:54:58.304619325Z" level=info msg="CreateContainer within sandbox \"434820f63919083a314c0b444effafc408e39d2c58c4783fda9fbc17158c157d\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 24 00:54:58.323409 containerd[1453]: time="2026-01-24T00:54:58.323321649Z" level=info msg="CreateContainer within sandbox \"434820f63919083a314c0b444effafc408e39d2c58c4783fda9fbc17158c157d\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"0561c266dea45e928b57a0fad95308a4e919988f709388a9ce8e456ce6b2bb7c\"" Jan 24 00:54:58.324251 containerd[1453]: time="2026-01-24T00:54:58.324223387Z" level=info msg="StartContainer for \"0561c266dea45e928b57a0fad95308a4e919988f709388a9ce8e456ce6b2bb7c\"" Jan 24 00:54:58.365832 systemd[1]: Started cri-containerd-0561c266dea45e928b57a0fad95308a4e919988f709388a9ce8e456ce6b2bb7c.scope - libcontainer container 0561c266dea45e928b57a0fad95308a4e919988f709388a9ce8e456ce6b2bb7c. Jan 24 00:54:58.373081 kubelet[1765]: E0124 00:54:58.372971 1765 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:54:58.403821 containerd[1453]: time="2026-01-24T00:54:58.403775766Z" level=info msg="StartContainer for \"0561c266dea45e928b57a0fad95308a4e919988f709388a9ce8e456ce6b2bb7c\" returns successfully" Jan 24 00:54:58.534424 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 24 00:54:58.534627 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
Jan 24 00:54:58.639055 kubelet[1765]: E0124 00:54:58.636501 1765 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:54:58.656471 kubelet[1765]: I0124 00:54:58.655916 1765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-mb9tg" podStartSLOduration=3.906545931 podStartE2EDuration="16.655886642s" podCreationTimestamp="2026-01-24 00:54:42 +0000 UTC" firstStartedPulling="2026-01-24 00:54:45.542891102 +0000 UTC m=+4.320812661" lastFinishedPulling="2026-01-24 00:54:58.292231813 +0000 UTC m=+17.070153372" observedRunningTime="2026-01-24 00:54:58.655773819 +0000 UTC m=+17.433695388" watchObservedRunningTime="2026-01-24 00:54:58.655886642 +0000 UTC m=+17.433808221" Jan 24 00:54:58.960249 systemd[1]: Created slice kubepods-besteffort-pod22da69e7_51cf_4c75_8af5_d73bf0580ff6.slice - libcontainer container kubepods-besteffort-pod22da69e7_51cf_4c75_8af5_d73bf0580ff6.slice. 
Jan 24 00:54:59.048640 kubelet[1765]: I0124 00:54:59.048470 1765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hhjl\" (UniqueName: \"kubernetes.io/projected/22da69e7-51cf-4c75-8af5-d73bf0580ff6-kube-api-access-4hhjl\") pod \"nginx-deployment-bb8f74bfb-l9l4l\" (UID: \"22da69e7-51cf-4c75-8af5-d73bf0580ff6\") " pod="default/nginx-deployment-bb8f74bfb-l9l4l" Jan 24 00:54:59.272955 containerd[1453]: time="2026-01-24T00:54:59.272772852Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-bb8f74bfb-l9l4l,Uid:22da69e7-51cf-4c75-8af5-d73bf0580ff6,Namespace:default,Attempt:0,}" Jan 24 00:54:59.373877 kubelet[1765]: E0124 00:54:59.373757 1765 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:54:59.426789 systemd-networkd[1395]: cali534f0215d1d: Link UP Jan 24 00:54:59.427475 systemd-networkd[1395]: cali534f0215d1d: Gained carrier Jan 24 00:54:59.438227 containerd[1453]: 2026-01-24 00:54:59.311 [INFO][2374] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 24 00:54:59.438227 containerd[1453]: 2026-01-24 00:54:59.327 [INFO][2374] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.107-k8s-nginx--deployment--bb8f74bfb--l9l4l-eth0 nginx-deployment-bb8f74bfb- default 22da69e7-51cf-4c75-8af5-d73bf0580ff6 1295 0 2026-01-24 00:54:58 +0000 UTC map[app:nginx pod-template-hash:bb8f74bfb projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.0.0.107 nginx-deployment-bb8f74bfb-l9l4l eth0 default [] [] [kns.default ksa.default.default] cali534f0215d1d [] [] }} ContainerID="82e88e2ad8617ac96d34dfc626dfc71b73de3923a020489e50032c7fef8ce02d" Namespace="default" Pod="nginx-deployment-bb8f74bfb-l9l4l" WorkloadEndpoint="10.0.0.107-k8s-nginx--deployment--bb8f74bfb--l9l4l-" 
Jan 24 00:54:59.438227 containerd[1453]: 2026-01-24 00:54:59.327 [INFO][2374] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="82e88e2ad8617ac96d34dfc626dfc71b73de3923a020489e50032c7fef8ce02d" Namespace="default" Pod="nginx-deployment-bb8f74bfb-l9l4l" WorkloadEndpoint="10.0.0.107-k8s-nginx--deployment--bb8f74bfb--l9l4l-eth0" Jan 24 00:54:59.438227 containerd[1453]: 2026-01-24 00:54:59.367 [INFO][2389] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="82e88e2ad8617ac96d34dfc626dfc71b73de3923a020489e50032c7fef8ce02d" HandleID="k8s-pod-network.82e88e2ad8617ac96d34dfc626dfc71b73de3923a020489e50032c7fef8ce02d" Workload="10.0.0.107-k8s-nginx--deployment--bb8f74bfb--l9l4l-eth0" Jan 24 00:54:59.438227 containerd[1453]: 2026-01-24 00:54:59.368 [INFO][2389] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="82e88e2ad8617ac96d34dfc626dfc71b73de3923a020489e50032c7fef8ce02d" HandleID="k8s-pod-network.82e88e2ad8617ac96d34dfc626dfc71b73de3923a020489e50032c7fef8ce02d" Workload="10.0.0.107-k8s-nginx--deployment--bb8f74bfb--l9l4l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c7080), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.107", "pod":"nginx-deployment-bb8f74bfb-l9l4l", "timestamp":"2026-01-24 00:54:59.367516042 +0000 UTC"}, Hostname:"10.0.0.107", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:54:59.438227 containerd[1453]: 2026-01-24 00:54:59.368 [INFO][2389] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:54:59.438227 containerd[1453]: 2026-01-24 00:54:59.368 [INFO][2389] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:54:59.438227 containerd[1453]: 2026-01-24 00:54:59.368 [INFO][2389] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.107' Jan 24 00:54:59.438227 containerd[1453]: 2026-01-24 00:54:59.379 [INFO][2389] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.82e88e2ad8617ac96d34dfc626dfc71b73de3923a020489e50032c7fef8ce02d" host="10.0.0.107" Jan 24 00:54:59.438227 containerd[1453]: 2026-01-24 00:54:59.385 [INFO][2389] ipam/ipam.go 394: Looking up existing affinities for host host="10.0.0.107" Jan 24 00:54:59.438227 containerd[1453]: 2026-01-24 00:54:59.392 [INFO][2389] ipam/ipam.go 511: Trying affinity for 192.168.117.0/26 host="10.0.0.107" Jan 24 00:54:59.438227 containerd[1453]: 2026-01-24 00:54:59.395 [INFO][2389] ipam/ipam.go 158: Attempting to load block cidr=192.168.117.0/26 host="10.0.0.107" Jan 24 00:54:59.438227 containerd[1453]: 2026-01-24 00:54:59.398 [INFO][2389] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.117.0/26 host="10.0.0.107" Jan 24 00:54:59.438227 containerd[1453]: 2026-01-24 00:54:59.398 [INFO][2389] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.117.0/26 handle="k8s-pod-network.82e88e2ad8617ac96d34dfc626dfc71b73de3923a020489e50032c7fef8ce02d" host="10.0.0.107" Jan 24 00:54:59.438227 containerd[1453]: 2026-01-24 00:54:59.400 [INFO][2389] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.82e88e2ad8617ac96d34dfc626dfc71b73de3923a020489e50032c7fef8ce02d Jan 24 00:54:59.438227 containerd[1453]: 2026-01-24 00:54:59.405 [INFO][2389] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.117.0/26 handle="k8s-pod-network.82e88e2ad8617ac96d34dfc626dfc71b73de3923a020489e50032c7fef8ce02d" host="10.0.0.107" Jan 24 00:54:59.438227 containerd[1453]: 2026-01-24 00:54:59.411 [INFO][2389] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.117.1/26] block=192.168.117.0/26 
handle="k8s-pod-network.82e88e2ad8617ac96d34dfc626dfc71b73de3923a020489e50032c7fef8ce02d" host="10.0.0.107" Jan 24 00:54:59.438227 containerd[1453]: 2026-01-24 00:54:59.411 [INFO][2389] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.117.1/26] handle="k8s-pod-network.82e88e2ad8617ac96d34dfc626dfc71b73de3923a020489e50032c7fef8ce02d" host="10.0.0.107" Jan 24 00:54:59.438227 containerd[1453]: 2026-01-24 00:54:59.411 [INFO][2389] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:54:59.438227 containerd[1453]: 2026-01-24 00:54:59.411 [INFO][2389] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.117.1/26] IPv6=[] ContainerID="82e88e2ad8617ac96d34dfc626dfc71b73de3923a020489e50032c7fef8ce02d" HandleID="k8s-pod-network.82e88e2ad8617ac96d34dfc626dfc71b73de3923a020489e50032c7fef8ce02d" Workload="10.0.0.107-k8s-nginx--deployment--bb8f74bfb--l9l4l-eth0" Jan 24 00:54:59.439314 containerd[1453]: 2026-01-24 00:54:59.417 [INFO][2374] cni-plugin/k8s.go 418: Populated endpoint ContainerID="82e88e2ad8617ac96d34dfc626dfc71b73de3923a020489e50032c7fef8ce02d" Namespace="default" Pod="nginx-deployment-bb8f74bfb-l9l4l" WorkloadEndpoint="10.0.0.107-k8s-nginx--deployment--bb8f74bfb--l9l4l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.107-k8s-nginx--deployment--bb8f74bfb--l9l4l-eth0", GenerateName:"nginx-deployment-bb8f74bfb-", Namespace:"default", SelfLink:"", UID:"22da69e7-51cf-4c75-8af5-d73bf0580ff6", ResourceVersion:"1295", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 54, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"bb8f74bfb", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.107", ContainerID:"", Pod:"nginx-deployment-bb8f74bfb-l9l4l", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.117.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali534f0215d1d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:54:59.439314 containerd[1453]: 2026-01-24 00:54:59.417 [INFO][2374] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.117.1/32] ContainerID="82e88e2ad8617ac96d34dfc626dfc71b73de3923a020489e50032c7fef8ce02d" Namespace="default" Pod="nginx-deployment-bb8f74bfb-l9l4l" WorkloadEndpoint="10.0.0.107-k8s-nginx--deployment--bb8f74bfb--l9l4l-eth0" Jan 24 00:54:59.439314 containerd[1453]: 2026-01-24 00:54:59.417 [INFO][2374] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali534f0215d1d ContainerID="82e88e2ad8617ac96d34dfc626dfc71b73de3923a020489e50032c7fef8ce02d" Namespace="default" Pod="nginx-deployment-bb8f74bfb-l9l4l" WorkloadEndpoint="10.0.0.107-k8s-nginx--deployment--bb8f74bfb--l9l4l-eth0" Jan 24 00:54:59.439314 containerd[1453]: 2026-01-24 00:54:59.427 [INFO][2374] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="82e88e2ad8617ac96d34dfc626dfc71b73de3923a020489e50032c7fef8ce02d" Namespace="default" Pod="nginx-deployment-bb8f74bfb-l9l4l" WorkloadEndpoint="10.0.0.107-k8s-nginx--deployment--bb8f74bfb--l9l4l-eth0" Jan 24 00:54:59.439314 containerd[1453]: 2026-01-24 00:54:59.427 [INFO][2374] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="82e88e2ad8617ac96d34dfc626dfc71b73de3923a020489e50032c7fef8ce02d" Namespace="default" Pod="nginx-deployment-bb8f74bfb-l9l4l" 
WorkloadEndpoint="10.0.0.107-k8s-nginx--deployment--bb8f74bfb--l9l4l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.107-k8s-nginx--deployment--bb8f74bfb--l9l4l-eth0", GenerateName:"nginx-deployment-bb8f74bfb-", Namespace:"default", SelfLink:"", UID:"22da69e7-51cf-4c75-8af5-d73bf0580ff6", ResourceVersion:"1295", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 54, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"bb8f74bfb", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.107", ContainerID:"82e88e2ad8617ac96d34dfc626dfc71b73de3923a020489e50032c7fef8ce02d", Pod:"nginx-deployment-bb8f74bfb-l9l4l", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.117.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali534f0215d1d", MAC:"7a:cf:fc:6a:95:43", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:54:59.439314 containerd[1453]: 2026-01-24 00:54:59.435 [INFO][2374] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="82e88e2ad8617ac96d34dfc626dfc71b73de3923a020489e50032c7fef8ce02d" Namespace="default" Pod="nginx-deployment-bb8f74bfb-l9l4l" WorkloadEndpoint="10.0.0.107-k8s-nginx--deployment--bb8f74bfb--l9l4l-eth0" Jan 24 00:54:59.468244 containerd[1453]: time="2026-01-24T00:54:59.466933216Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:54:59.468244 containerd[1453]: time="2026-01-24T00:54:59.468099752Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:54:59.468244 containerd[1453]: time="2026-01-24T00:54:59.468112445Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:54:59.468478 containerd[1453]: time="2026-01-24T00:54:59.468201579Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:54:59.497722 systemd[1]: Started cri-containerd-82e88e2ad8617ac96d34dfc626dfc71b73de3923a020489e50032c7fef8ce02d.scope - libcontainer container 82e88e2ad8617ac96d34dfc626dfc71b73de3923a020489e50032c7fef8ce02d. Jan 24 00:54:59.509737 systemd-resolved[1333]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 24 00:54:59.536118 containerd[1453]: time="2026-01-24T00:54:59.535928759Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-bb8f74bfb-l9l4l,Uid:22da69e7-51cf-4c75-8af5-d73bf0580ff6,Namespace:default,Attempt:0,} returns sandbox id \"82e88e2ad8617ac96d34dfc626dfc71b73de3923a020489e50032c7fef8ce02d\"" Jan 24 00:54:59.537736 containerd[1453]: time="2026-01-24T00:54:59.537534545Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 24 00:54:59.642657 kubelet[1765]: E0124 00:54:59.640247 1765 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:55:00.196611 kernel: bpftool[2602]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 24 00:55:00.375039 kubelet[1765]: E0124 00:55:00.374679 1765 file_linux.go:61] "Unable to read 
config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:55:00.485940 systemd-networkd[1395]: vxlan.calico: Link UP Jan 24 00:55:00.486004 systemd-networkd[1395]: vxlan.calico: Gained carrier Jan 24 00:55:00.662791 systemd-networkd[1395]: cali534f0215d1d: Gained IPv6LL Jan 24 00:55:01.274487 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2281108954.mount: Deactivated successfully. Jan 24 00:55:01.375619 kubelet[1765]: E0124 00:55:01.375570 1765 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:55:02.263295 systemd-networkd[1395]: vxlan.calico: Gained IPv6LL Jan 24 00:55:02.272717 containerd[1453]: time="2026-01-24T00:55:02.272619625Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:55:02.273903 containerd[1453]: time="2026-01-24T00:55:02.273784114Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=63836480" Jan 24 00:55:02.275458 containerd[1453]: time="2026-01-24T00:55:02.275381077Z" level=info msg="ImageCreate event name:\"sha256:94a30ed18e45363486a55379bd1fbb8c479f36524ae816779f3553afe6b787ed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:55:02.278794 containerd[1453]: time="2026-01-24T00:55:02.278729273Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:32c5137cb8c7cf61e75836f150e983b9be21fecc642ada89fd936c8cd6c0faa0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:55:02.280725 containerd[1453]: time="2026-01-24T00:55:02.280618699Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:94a30ed18e45363486a55379bd1fbb8c479f36524ae816779f3553afe6b787ed\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:32c5137cb8c7cf61e75836f150e983b9be21fecc642ada89fd936c8cd6c0faa0\", 
size \"63836358\" in 2.743003604s" Jan 24 00:55:02.280725 containerd[1453]: time="2026-01-24T00:55:02.280700269Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:94a30ed18e45363486a55379bd1fbb8c479f36524ae816779f3553afe6b787ed\"" Jan 24 00:55:02.284438 kubelet[1765]: E0124 00:55:02.284312 1765 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:55:02.287245 containerd[1453]: time="2026-01-24T00:55:02.287194764Z" level=info msg="CreateContainer within sandbox \"82e88e2ad8617ac96d34dfc626dfc71b73de3923a020489e50032c7fef8ce02d\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Jan 24 00:55:02.301078 containerd[1453]: time="2026-01-24T00:55:02.300923933Z" level=info msg="CreateContainer within sandbox \"82e88e2ad8617ac96d34dfc626dfc71b73de3923a020489e50032c7fef8ce02d\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"f9dbe8c221665acb80164edd1e8d0639262a2936a5e220e36679ec25f2f213fa\"" Jan 24 00:55:02.301749 containerd[1453]: time="2026-01-24T00:55:02.301703681Z" level=info msg="StartContainer for \"f9dbe8c221665acb80164edd1e8d0639262a2936a5e220e36679ec25f2f213fa\"" Jan 24 00:55:02.378175 kubelet[1765]: E0124 00:55:02.376144 1765 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:55:02.397825 systemd[1]: Started cri-containerd-f9dbe8c221665acb80164edd1e8d0639262a2936a5e220e36679ec25f2f213fa.scope - libcontainer container f9dbe8c221665acb80164edd1e8d0639262a2936a5e220e36679ec25f2f213fa. 
Jan 24 00:55:02.502570 containerd[1453]: time="2026-01-24T00:55:02.502421719Z" level=info msg="StartContainer for \"f9dbe8c221665acb80164edd1e8d0639262a2936a5e220e36679ec25f2f213fa\" returns successfully" Jan 24 00:55:03.376814 kubelet[1765]: E0124 00:55:03.376711 1765 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:55:04.377274 kubelet[1765]: E0124 00:55:04.377175 1765 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:55:05.378428 kubelet[1765]: E0124 00:55:05.378286 1765 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:55:05.536202 kubelet[1765]: I0124 00:55:05.536078 1765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-bb8f74bfb-l9l4l" podStartSLOduration=4.790794066 podStartE2EDuration="7.536049615s" podCreationTimestamp="2026-01-24 00:54:58 +0000 UTC" firstStartedPulling="2026-01-24 00:54:59.537218923 +0000 UTC m=+18.315140482" lastFinishedPulling="2026-01-24 00:55:02.282474473 +0000 UTC m=+21.060396031" observedRunningTime="2026-01-24 00:55:02.6630316 +0000 UTC m=+21.440953190" watchObservedRunningTime="2026-01-24 00:55:05.536049615 +0000 UTC m=+24.313971174" Jan 24 00:55:05.548289 systemd[1]: Created slice kubepods-besteffort-pod7eb0cc0a_417e_4e5c_b5c5_df92e03506ab.slice - libcontainer container kubepods-besteffort-pod7eb0cc0a_417e_4e5c_b5c5_df92e03506ab.slice. 
Jan 24 00:55:05.599800 kubelet[1765]: I0124 00:55:05.599706 1765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jghp\" (UniqueName: \"kubernetes.io/projected/7eb0cc0a-417e-4e5c-b5c5-df92e03506ab-kube-api-access-2jghp\") pod \"nfs-server-provisioner-0\" (UID: \"7eb0cc0a-417e-4e5c-b5c5-df92e03506ab\") " pod="default/nfs-server-provisioner-0" Jan 24 00:55:05.599988 kubelet[1765]: I0124 00:55:05.599815 1765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/7eb0cc0a-417e-4e5c-b5c5-df92e03506ab-data\") pod \"nfs-server-provisioner-0\" (UID: \"7eb0cc0a-417e-4e5c-b5c5-df92e03506ab\") " pod="default/nfs-server-provisioner-0" Jan 24 00:55:05.856447 containerd[1453]: time="2026-01-24T00:55:05.856365351Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:7eb0cc0a-417e-4e5c-b5c5-df92e03506ab,Namespace:default,Attempt:0,}" Jan 24 00:55:06.033650 systemd-networkd[1395]: cali60e51b789ff: Link UP Jan 24 00:55:06.034597 systemd-networkd[1395]: cali60e51b789ff: Gained carrier Jan 24 00:55:06.053930 containerd[1453]: 2026-01-24 00:55:05.928 [INFO][2773] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.107-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default 7eb0cc0a-417e-4e5c-b5c5-df92e03506ab 1362 0 2026-01-24 00:55:05 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-7c9b4c458c heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 10.0.0.107 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default 
ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] [] }} ContainerID="3aa8bec3bad6735e7b8bb2315d18d53e45fe76aebd38486000cea65c26cb3f3c" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.107-k8s-nfs--server--provisioner--0-" Jan 24 00:55:06.053930 containerd[1453]: 2026-01-24 00:55:05.929 [INFO][2773] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3aa8bec3bad6735e7b8bb2315d18d53e45fe76aebd38486000cea65c26cb3f3c" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.107-k8s-nfs--server--provisioner--0-eth0" Jan 24 00:55:06.053930 containerd[1453]: 2026-01-24 00:55:05.974 [INFO][2787] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3aa8bec3bad6735e7b8bb2315d18d53e45fe76aebd38486000cea65c26cb3f3c" HandleID="k8s-pod-network.3aa8bec3bad6735e7b8bb2315d18d53e45fe76aebd38486000cea65c26cb3f3c" Workload="10.0.0.107-k8s-nfs--server--provisioner--0-eth0" Jan 24 00:55:06.053930 containerd[1453]: 2026-01-24 00:55:05.974 [INFO][2787] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="3aa8bec3bad6735e7b8bb2315d18d53e45fe76aebd38486000cea65c26cb3f3c" HandleID="k8s-pod-network.3aa8bec3bad6735e7b8bb2315d18d53e45fe76aebd38486000cea65c26cb3f3c" Workload="10.0.0.107-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003b43d0), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.107", "pod":"nfs-server-provisioner-0", "timestamp":"2026-01-24 00:55:05.97416894 +0000 UTC"}, Hostname:"10.0.0.107", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:55:06.053930 containerd[1453]: 2026-01-24 00:55:05.974 [INFO][2787] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:55:06.053930 containerd[1453]: 2026-01-24 00:55:05.974 [INFO][2787] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:55:06.053930 containerd[1453]: 2026-01-24 00:55:05.974 [INFO][2787] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.107' Jan 24 00:55:06.053930 containerd[1453]: 2026-01-24 00:55:05.986 [INFO][2787] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3aa8bec3bad6735e7b8bb2315d18d53e45fe76aebd38486000cea65c26cb3f3c" host="10.0.0.107" Jan 24 00:55:06.053930 containerd[1453]: 2026-01-24 00:55:05.995 [INFO][2787] ipam/ipam.go 394: Looking up existing affinities for host host="10.0.0.107" Jan 24 00:55:06.053930 containerd[1453]: 2026-01-24 00:55:06.003 [INFO][2787] ipam/ipam.go 511: Trying affinity for 192.168.117.0/26 host="10.0.0.107" Jan 24 00:55:06.053930 containerd[1453]: 2026-01-24 00:55:06.006 [INFO][2787] ipam/ipam.go 158: Attempting to load block cidr=192.168.117.0/26 host="10.0.0.107" Jan 24 00:55:06.053930 containerd[1453]: 2026-01-24 00:55:06.009 [INFO][2787] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.117.0/26 host="10.0.0.107" Jan 24 00:55:06.053930 containerd[1453]: 2026-01-24 00:55:06.009 [INFO][2787] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.117.0/26 handle="k8s-pod-network.3aa8bec3bad6735e7b8bb2315d18d53e45fe76aebd38486000cea65c26cb3f3c" host="10.0.0.107" Jan 24 00:55:06.053930 containerd[1453]: 2026-01-24 00:55:06.011 [INFO][2787] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.3aa8bec3bad6735e7b8bb2315d18d53e45fe76aebd38486000cea65c26cb3f3c Jan 24 00:55:06.053930 containerd[1453]: 2026-01-24 00:55:06.017 [INFO][2787] ipam/ipam.go 1246: Writing block in order to claim IPs 
block=192.168.117.0/26 handle="k8s-pod-network.3aa8bec3bad6735e7b8bb2315d18d53e45fe76aebd38486000cea65c26cb3f3c" host="10.0.0.107" Jan 24 00:55:06.053930 containerd[1453]: 2026-01-24 00:55:06.024 [INFO][2787] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.117.2/26] block=192.168.117.0/26 handle="k8s-pod-network.3aa8bec3bad6735e7b8bb2315d18d53e45fe76aebd38486000cea65c26cb3f3c" host="10.0.0.107" Jan 24 00:55:06.053930 containerd[1453]: 2026-01-24 00:55:06.024 [INFO][2787] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.117.2/26] handle="k8s-pod-network.3aa8bec3bad6735e7b8bb2315d18d53e45fe76aebd38486000cea65c26cb3f3c" host="10.0.0.107" Jan 24 00:55:06.053930 containerd[1453]: 2026-01-24 00:55:06.024 [INFO][2787] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:55:06.053930 containerd[1453]: 2026-01-24 00:55:06.024 [INFO][2787] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.117.2/26] IPv6=[] ContainerID="3aa8bec3bad6735e7b8bb2315d18d53e45fe76aebd38486000cea65c26cb3f3c" HandleID="k8s-pod-network.3aa8bec3bad6735e7b8bb2315d18d53e45fe76aebd38486000cea65c26cb3f3c" Workload="10.0.0.107-k8s-nfs--server--provisioner--0-eth0" Jan 24 00:55:06.054747 containerd[1453]: 2026-01-24 00:55:06.028 [INFO][2773] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3aa8bec3bad6735e7b8bb2315d18d53e45fe76aebd38486000cea65c26cb3f3c" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.107-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.107-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"7eb0cc0a-417e-4e5c-b5c5-df92e03506ab", ResourceVersion:"1362", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 55, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-7c9b4c458c", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.107", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.117.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:55:06.054747 containerd[1453]: 2026-01-24 00:55:06.028 [INFO][2773] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.117.2/32] ContainerID="3aa8bec3bad6735e7b8bb2315d18d53e45fe76aebd38486000cea65c26cb3f3c" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.107-k8s-nfs--server--provisioner--0-eth0" Jan 24 00:55:06.054747 containerd[1453]: 2026-01-24 00:55:06.028 [INFO][2773] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="3aa8bec3bad6735e7b8bb2315d18d53e45fe76aebd38486000cea65c26cb3f3c" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.107-k8s-nfs--server--provisioner--0-eth0" Jan 24 00:55:06.054747 containerd[1453]: 2026-01-24 00:55:06.035 [INFO][2773] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3aa8bec3bad6735e7b8bb2315d18d53e45fe76aebd38486000cea65c26cb3f3c" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.107-k8s-nfs--server--provisioner--0-eth0" Jan 24 00:55:06.054969 containerd[1453]: 2026-01-24 00:55:06.036 [INFO][2773] cni-plugin/k8s.go 446: Added 
Mac, interface name, and active container ID to endpoint ContainerID="3aa8bec3bad6735e7b8bb2315d18d53e45fe76aebd38486000cea65c26cb3f3c" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.107-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.107-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"7eb0cc0a-417e-4e5c-b5c5-df92e03506ab", ResourceVersion:"1362", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 55, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-7c9b4c458c", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.107", ContainerID:"3aa8bec3bad6735e7b8bb2315d18d53e45fe76aebd38486000cea65c26cb3f3c", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.117.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"0a:53:10:3e:17:05", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, 
HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:55:06.054969 containerd[1453]: 2026-01-24 00:55:06.051 [INFO][2773] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3aa8bec3bad6735e7b8bb2315d18d53e45fe76aebd38486000cea65c26cb3f3c" Namespace="default" Pod="nfs-server-provisioner-0" 
WorkloadEndpoint="10.0.0.107-k8s-nfs--server--provisioner--0-eth0" Jan 24 00:55:06.085390 containerd[1453]: time="2026-01-24T00:55:06.085215465Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:55:06.085390 containerd[1453]: time="2026-01-24T00:55:06.085340356Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:55:06.085785 containerd[1453]: time="2026-01-24T00:55:06.085393284Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:55:06.085785 containerd[1453]: time="2026-01-24T00:55:06.085475727Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:55:06.115879 systemd[1]: Started cri-containerd-3aa8bec3bad6735e7b8bb2315d18d53e45fe76aebd38486000cea65c26cb3f3c.scope - libcontainer container 3aa8bec3bad6735e7b8bb2315d18d53e45fe76aebd38486000cea65c26cb3f3c. 
Jan 24 00:55:06.134218 systemd-resolved[1333]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 24 00:55:06.171136 containerd[1453]: time="2026-01-24T00:55:06.171037486Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:7eb0cc0a-417e-4e5c-b5c5-df92e03506ab,Namespace:default,Attempt:0,} returns sandbox id \"3aa8bec3bad6735e7b8bb2315d18d53e45fe76aebd38486000cea65c26cb3f3c\"" Jan 24 00:55:06.173455 containerd[1453]: time="2026-01-24T00:55:06.173315620Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Jan 24 00:55:06.378680 kubelet[1765]: E0124 00:55:06.378447 1765 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:55:07.379277 kubelet[1765]: E0124 00:55:07.379209 1765 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:55:07.959825 systemd-networkd[1395]: cali60e51b789ff: Gained IPv6LL Jan 24 00:55:08.264621 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3409531983.mount: Deactivated successfully. Jan 24 00:55:08.379797 kubelet[1765]: E0124 00:55:08.379743 1765 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:55:08.955057 update_engine[1438]: I20260124 00:55:08.954979 1438 update_attempter.cc:509] Updating boot flags... 
Jan 24 00:55:09.012705 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2865) Jan 24 00:55:09.382668 kubelet[1765]: E0124 00:55:09.380134 1765 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:55:09.433660 containerd[1453]: time="2026-01-24T00:55:09.433532678Z" level=info msg="StopPodSandbox for \"f5e298311cdb224f4c5f821b09dfe2e59ee476ceb821c156dddafeb10b0d0e35\"" Jan 24 00:55:09.560645 containerd[1453]: 2026-01-24 00:55:09.507 [INFO][2882] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f5e298311cdb224f4c5f821b09dfe2e59ee476ceb821c156dddafeb10b0d0e35" Jan 24 00:55:09.560645 containerd[1453]: 2026-01-24 00:55:09.507 [INFO][2882] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f5e298311cdb224f4c5f821b09dfe2e59ee476ceb821c156dddafeb10b0d0e35" iface="eth0" netns="/var/run/netns/cni-73bc5e10-a778-81ec-1b8f-f49c10c16ab8" Jan 24 00:55:09.560645 containerd[1453]: 2026-01-24 00:55:09.507 [INFO][2882] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f5e298311cdb224f4c5f821b09dfe2e59ee476ceb821c156dddafeb10b0d0e35" iface="eth0" netns="/var/run/netns/cni-73bc5e10-a778-81ec-1b8f-f49c10c16ab8" Jan 24 00:55:09.560645 containerd[1453]: 2026-01-24 00:55:09.508 [INFO][2882] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="f5e298311cdb224f4c5f821b09dfe2e59ee476ceb821c156dddafeb10b0d0e35" iface="eth0" netns="/var/run/netns/cni-73bc5e10-a778-81ec-1b8f-f49c10c16ab8" Jan 24 00:55:09.560645 containerd[1453]: 2026-01-24 00:55:09.508 [INFO][2882] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f5e298311cdb224f4c5f821b09dfe2e59ee476ceb821c156dddafeb10b0d0e35" Jan 24 00:55:09.560645 containerd[1453]: 2026-01-24 00:55:09.508 [INFO][2882] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f5e298311cdb224f4c5f821b09dfe2e59ee476ceb821c156dddafeb10b0d0e35" Jan 24 00:55:09.560645 containerd[1453]: 2026-01-24 00:55:09.542 [INFO][2892] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f5e298311cdb224f4c5f821b09dfe2e59ee476ceb821c156dddafeb10b0d0e35" HandleID="k8s-pod-network.f5e298311cdb224f4c5f821b09dfe2e59ee476ceb821c156dddafeb10b0d0e35" Workload="10.0.0.107-k8s-csi--node--driver--6j86w-eth0" Jan 24 00:55:09.560645 containerd[1453]: 2026-01-24 00:55:09.543 [INFO][2892] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:55:09.560645 containerd[1453]: 2026-01-24 00:55:09.543 [INFO][2892] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:55:09.560645 containerd[1453]: 2026-01-24 00:55:09.552 [WARNING][2892] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f5e298311cdb224f4c5f821b09dfe2e59ee476ceb821c156dddafeb10b0d0e35" HandleID="k8s-pod-network.f5e298311cdb224f4c5f821b09dfe2e59ee476ceb821c156dddafeb10b0d0e35" Workload="10.0.0.107-k8s-csi--node--driver--6j86w-eth0" Jan 24 00:55:09.560645 containerd[1453]: 2026-01-24 00:55:09.552 [INFO][2892] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f5e298311cdb224f4c5f821b09dfe2e59ee476ceb821c156dddafeb10b0d0e35" HandleID="k8s-pod-network.f5e298311cdb224f4c5f821b09dfe2e59ee476ceb821c156dddafeb10b0d0e35" Workload="10.0.0.107-k8s-csi--node--driver--6j86w-eth0" Jan 24 00:55:09.560645 containerd[1453]: 2026-01-24 00:55:09.554 [INFO][2892] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:55:09.560645 containerd[1453]: 2026-01-24 00:55:09.557 [INFO][2882] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f5e298311cdb224f4c5f821b09dfe2e59ee476ceb821c156dddafeb10b0d0e35" Jan 24 00:55:09.561830 containerd[1453]: time="2026-01-24T00:55:09.561745247Z" level=info msg="TearDown network for sandbox \"f5e298311cdb224f4c5f821b09dfe2e59ee476ceb821c156dddafeb10b0d0e35\" successfully" Jan 24 00:55:09.561830 containerd[1453]: time="2026-01-24T00:55:09.561808785Z" level=info msg="StopPodSandbox for \"f5e298311cdb224f4c5f821b09dfe2e59ee476ceb821c156dddafeb10b0d0e35\" returns successfully" Jan 24 00:55:09.563367 systemd[1]: run-netns-cni\x2d73bc5e10\x2da778\x2d81ec\x2d1b8f\x2df49c10c16ab8.mount: Deactivated successfully. 
Jan 24 00:55:09.578953 containerd[1453]: time="2026-01-24T00:55:09.578866021Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6j86w,Uid:0599f80c-e14c-4f92-8838-e34d8d6742dd,Namespace:calico-system,Attempt:1,}" Jan 24 00:55:09.737465 systemd-networkd[1395]: cali6ecb67f8634: Link UP Jan 24 00:55:09.737843 systemd-networkd[1395]: cali6ecb67f8634: Gained carrier Jan 24 00:55:09.752792 containerd[1453]: 2026-01-24 00:55:09.645 [INFO][2899] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.107-k8s-csi--node--driver--6j86w-eth0 csi-node-driver- calico-system 0599f80c-e14c-4f92-8838-e34d8d6742dd 1393 0 2026-01-24 00:54:42 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 10.0.0.107 csi-node-driver-6j86w eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali6ecb67f8634 [] [] }} ContainerID="acfada9136ac457d8ccabf70d44499eacb36a3bce2c984cb7c861bcb5c320626" Namespace="calico-system" Pod="csi-node-driver-6j86w" WorkloadEndpoint="10.0.0.107-k8s-csi--node--driver--6j86w-" Jan 24 00:55:09.752792 containerd[1453]: 2026-01-24 00:55:09.645 [INFO][2899] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="acfada9136ac457d8ccabf70d44499eacb36a3bce2c984cb7c861bcb5c320626" Namespace="calico-system" Pod="csi-node-driver-6j86w" WorkloadEndpoint="10.0.0.107-k8s-csi--node--driver--6j86w-eth0" Jan 24 00:55:09.752792 containerd[1453]: 2026-01-24 00:55:09.682 [INFO][2914] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="acfada9136ac457d8ccabf70d44499eacb36a3bce2c984cb7c861bcb5c320626" 
HandleID="k8s-pod-network.acfada9136ac457d8ccabf70d44499eacb36a3bce2c984cb7c861bcb5c320626" Workload="10.0.0.107-k8s-csi--node--driver--6j86w-eth0" Jan 24 00:55:09.752792 containerd[1453]: 2026-01-24 00:55:09.682 [INFO][2914] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="acfada9136ac457d8ccabf70d44499eacb36a3bce2c984cb7c861bcb5c320626" HandleID="k8s-pod-network.acfada9136ac457d8ccabf70d44499eacb36a3bce2c984cb7c861bcb5c320626" Workload="10.0.0.107-k8s-csi--node--driver--6j86w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00019f4f0), Attrs:map[string]string{"namespace":"calico-system", "node":"10.0.0.107", "pod":"csi-node-driver-6j86w", "timestamp":"2026-01-24 00:55:09.682698176 +0000 UTC"}, Hostname:"10.0.0.107", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:55:09.752792 containerd[1453]: 2026-01-24 00:55:09.682 [INFO][2914] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:55:09.752792 containerd[1453]: 2026-01-24 00:55:09.683 [INFO][2914] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:55:09.752792 containerd[1453]: 2026-01-24 00:55:09.683 [INFO][2914] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.107' Jan 24 00:55:09.752792 containerd[1453]: 2026-01-24 00:55:09.691 [INFO][2914] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.acfada9136ac457d8ccabf70d44499eacb36a3bce2c984cb7c861bcb5c320626" host="10.0.0.107" Jan 24 00:55:09.752792 containerd[1453]: 2026-01-24 00:55:09.698 [INFO][2914] ipam/ipam.go 394: Looking up existing affinities for host host="10.0.0.107" Jan 24 00:55:09.752792 containerd[1453]: 2026-01-24 00:55:09.705 [INFO][2914] ipam/ipam.go 511: Trying affinity for 192.168.117.0/26 host="10.0.0.107" Jan 24 00:55:09.752792 containerd[1453]: 2026-01-24 00:55:09.711 [INFO][2914] ipam/ipam.go 158: Attempting to load block cidr=192.168.117.0/26 host="10.0.0.107" Jan 24 00:55:09.752792 containerd[1453]: 2026-01-24 00:55:09.714 [INFO][2914] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.117.0/26 host="10.0.0.107" Jan 24 00:55:09.752792 containerd[1453]: 2026-01-24 00:55:09.714 [INFO][2914] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.117.0/26 handle="k8s-pod-network.acfada9136ac457d8ccabf70d44499eacb36a3bce2c984cb7c861bcb5c320626" host="10.0.0.107" Jan 24 00:55:09.752792 containerd[1453]: 2026-01-24 00:55:09.716 [INFO][2914] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.acfada9136ac457d8ccabf70d44499eacb36a3bce2c984cb7c861bcb5c320626 Jan 24 00:55:09.752792 containerd[1453]: 2026-01-24 00:55:09.723 [INFO][2914] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.117.0/26 handle="k8s-pod-network.acfada9136ac457d8ccabf70d44499eacb36a3bce2c984cb7c861bcb5c320626" host="10.0.0.107" Jan 24 00:55:09.752792 containerd[1453]: 2026-01-24 00:55:09.730 [INFO][2914] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.117.3/26] block=192.168.117.0/26 
handle="k8s-pod-network.acfada9136ac457d8ccabf70d44499eacb36a3bce2c984cb7c861bcb5c320626" host="10.0.0.107" Jan 24 00:55:09.752792 containerd[1453]: 2026-01-24 00:55:09.730 [INFO][2914] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.117.3/26] handle="k8s-pod-network.acfada9136ac457d8ccabf70d44499eacb36a3bce2c984cb7c861bcb5c320626" host="10.0.0.107" Jan 24 00:55:09.752792 containerd[1453]: 2026-01-24 00:55:09.730 [INFO][2914] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:55:09.752792 containerd[1453]: 2026-01-24 00:55:09.730 [INFO][2914] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.117.3/26] IPv6=[] ContainerID="acfada9136ac457d8ccabf70d44499eacb36a3bce2c984cb7c861bcb5c320626" HandleID="k8s-pod-network.acfada9136ac457d8ccabf70d44499eacb36a3bce2c984cb7c861bcb5c320626" Workload="10.0.0.107-k8s-csi--node--driver--6j86w-eth0" Jan 24 00:55:09.753759 containerd[1453]: 2026-01-24 00:55:09.733 [INFO][2899] cni-plugin/k8s.go 418: Populated endpoint ContainerID="acfada9136ac457d8ccabf70d44499eacb36a3bce2c984cb7c861bcb5c320626" Namespace="calico-system" Pod="csi-node-driver-6j86w" WorkloadEndpoint="10.0.0.107-k8s-csi--node--driver--6j86w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.107-k8s-csi--node--driver--6j86w-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0599f80c-e14c-4f92-8838-e34d8d6742dd", ResourceVersion:"1393", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 54, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.107", ContainerID:"", Pod:"csi-node-driver-6j86w", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.117.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6ecb67f8634", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:55:09.753759 containerd[1453]: 2026-01-24 00:55:09.733 [INFO][2899] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.117.3/32] ContainerID="acfada9136ac457d8ccabf70d44499eacb36a3bce2c984cb7c861bcb5c320626" Namespace="calico-system" Pod="csi-node-driver-6j86w" WorkloadEndpoint="10.0.0.107-k8s-csi--node--driver--6j86w-eth0" Jan 24 00:55:09.753759 containerd[1453]: 2026-01-24 00:55:09.733 [INFO][2899] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6ecb67f8634 ContainerID="acfada9136ac457d8ccabf70d44499eacb36a3bce2c984cb7c861bcb5c320626" Namespace="calico-system" Pod="csi-node-driver-6j86w" WorkloadEndpoint="10.0.0.107-k8s-csi--node--driver--6j86w-eth0" Jan 24 00:55:09.753759 containerd[1453]: 2026-01-24 00:55:09.736 [INFO][2899] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="acfada9136ac457d8ccabf70d44499eacb36a3bce2c984cb7c861bcb5c320626" Namespace="calico-system" Pod="csi-node-driver-6j86w" WorkloadEndpoint="10.0.0.107-k8s-csi--node--driver--6j86w-eth0" Jan 24 00:55:09.753759 containerd[1453]: 2026-01-24 00:55:09.737 [INFO][2899] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="acfada9136ac457d8ccabf70d44499eacb36a3bce2c984cb7c861bcb5c320626" 
Namespace="calico-system" Pod="csi-node-driver-6j86w" WorkloadEndpoint="10.0.0.107-k8s-csi--node--driver--6j86w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.107-k8s-csi--node--driver--6j86w-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0599f80c-e14c-4f92-8838-e34d8d6742dd", ResourceVersion:"1393", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 54, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.107", ContainerID:"acfada9136ac457d8ccabf70d44499eacb36a3bce2c984cb7c861bcb5c320626", Pod:"csi-node-driver-6j86w", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.117.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6ecb67f8634", MAC:"ee:1c:44:2a:57:a6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:55:09.753759 containerd[1453]: 2026-01-24 00:55:09.750 [INFO][2899] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="acfada9136ac457d8ccabf70d44499eacb36a3bce2c984cb7c861bcb5c320626" Namespace="calico-system" Pod="csi-node-driver-6j86w" 
WorkloadEndpoint="10.0.0.107-k8s-csi--node--driver--6j86w-eth0" Jan 24 00:55:09.784105 containerd[1453]: time="2026-01-24T00:55:09.783653919Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:55:09.784105 containerd[1453]: time="2026-01-24T00:55:09.783737975Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:55:09.784105 containerd[1453]: time="2026-01-24T00:55:09.783749577Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:55:09.784105 containerd[1453]: time="2026-01-24T00:55:09.783854180Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:55:09.821756 systemd[1]: Started cri-containerd-acfada9136ac457d8ccabf70d44499eacb36a3bce2c984cb7c861bcb5c320626.scope - libcontainer container acfada9136ac457d8ccabf70d44499eacb36a3bce2c984cb7c861bcb5c320626. 
Jan 24 00:55:09.835840 systemd-resolved[1333]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 24 00:55:09.856279 containerd[1453]: time="2026-01-24T00:55:09.856201550Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6j86w,Uid:0599f80c-e14c-4f92-8838-e34d8d6742dd,Namespace:calico-system,Attempt:1,} returns sandbox id \"acfada9136ac457d8ccabf70d44499eacb36a3bce2c984cb7c861bcb5c320626\"" Jan 24 00:55:10.271294 containerd[1453]: time="2026-01-24T00:55:10.271166526Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:55:10.272482 containerd[1453]: time="2026-01-24T00:55:10.272380112Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039406" Jan 24 00:55:10.273731 containerd[1453]: time="2026-01-24T00:55:10.273674621Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:55:10.280753 containerd[1453]: time="2026-01-24T00:55:10.280500308Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:55:10.282457 containerd[1453]: time="2026-01-24T00:55:10.282352698Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 4.108984781s" Jan 24 00:55:10.282457 containerd[1453]: 
time="2026-01-24T00:55:10.282438046Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Jan 24 00:55:10.284258 containerd[1453]: time="2026-01-24T00:55:10.283854826Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 24 00:55:10.288113 containerd[1453]: time="2026-01-24T00:55:10.288060103Z" level=info msg="CreateContainer within sandbox \"3aa8bec3bad6735e7b8bb2315d18d53e45fe76aebd38486000cea65c26cb3f3c\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Jan 24 00:55:10.305096 containerd[1453]: time="2026-01-24T00:55:10.305021568Z" level=info msg="CreateContainer within sandbox \"3aa8bec3bad6735e7b8bb2315d18d53e45fe76aebd38486000cea65c26cb3f3c\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"2c50dd8c20436617884862d16626942f7d437edec3fed129abe99204e74cbc36\"" Jan 24 00:55:10.305760 containerd[1453]: time="2026-01-24T00:55:10.305731225Z" level=info msg="StartContainer for \"2c50dd8c20436617884862d16626942f7d437edec3fed129abe99204e74cbc36\"" Jan 24 00:55:10.343630 containerd[1453]: time="2026-01-24T00:55:10.343516454Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:55:10.345851 systemd[1]: Started cri-containerd-2c50dd8c20436617884862d16626942f7d437edec3fed129abe99204e74cbc36.scope - libcontainer container 2c50dd8c20436617884862d16626942f7d437edec3fed129abe99204e74cbc36. 
Jan 24 00:55:10.375633 containerd[1453]: time="2026-01-24T00:55:10.375577042Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 24 00:55:10.376026 containerd[1453]: time="2026-01-24T00:55:10.375814713Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 24 00:55:10.378517 kubelet[1765]: E0124 00:55:10.378439 1765 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:55:10.378690 kubelet[1765]: E0124 00:55:10.378630 1765 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:55:10.378796 kubelet[1765]: E0124 00:55:10.378763 1765 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-6j86w_calico-system(0599f80c-e14c-4f92-8838-e34d8d6742dd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 24 00:55:10.380535 kubelet[1765]: E0124 00:55:10.380364 1765 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Jan 24 00:55:10.381824 containerd[1453]: time="2026-01-24T00:55:10.381485816Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 24 00:55:10.397082 containerd[1453]: time="2026-01-24T00:55:10.396922526Z" level=info msg="StartContainer for \"2c50dd8c20436617884862d16626942f7d437edec3fed129abe99204e74cbc36\" returns successfully" Jan 24 00:55:10.459607 containerd[1453]: time="2026-01-24T00:55:10.459486722Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:55:10.493182 containerd[1453]: time="2026-01-24T00:55:10.493090456Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 24 00:55:10.493388 containerd[1453]: time="2026-01-24T00:55:10.493137286Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 24 00:55:10.493533 kubelet[1765]: E0124 00:55:10.493479 1765 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:55:10.494051 kubelet[1765]: E0124 00:55:10.493587 1765 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:55:10.494051 kubelet[1765]: E0124 00:55:10.493695 1765 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-6j86w_calico-system(0599f80c-e14c-4f92-8838-e34d8d6742dd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 24 00:55:10.494051 kubelet[1765]: E0124 00:55:10.493734 1765 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-6j86w" podUID="0599f80c-e14c-4f92-8838-e34d8d6742dd" Jan 24 00:55:10.678187 kubelet[1765]: E0124 00:55:10.678047 1765 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-6j86w" podUID="0599f80c-e14c-4f92-8838-e34d8d6742dd" Jan 24 00:55:10.688276 kubelet[1765]: I0124 00:55:10.688189 1765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.577318232 podStartE2EDuration="5.688177033s" podCreationTimestamp="2026-01-24 00:55:05 +0000 UTC" firstStartedPulling="2026-01-24 00:55:06.172843326 +0000 UTC m=+24.950764886" lastFinishedPulling="2026-01-24 00:55:10.283702128 +0000 UTC m=+29.061623687" observedRunningTime="2026-01-24 00:55:10.687602754 +0000 UTC m=+29.465524462" watchObservedRunningTime="2026-01-24 00:55:10.688177033 +0000 UTC m=+29.466098592" Jan 24 00:55:11.030911 systemd-networkd[1395]: cali6ecb67f8634: Gained IPv6LL Jan 24 00:55:11.381982 kubelet[1765]: E0124 00:55:11.381706 1765 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:55:11.680823 kubelet[1765]: E0124 00:55:11.680717 1765 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-6j86w" podUID="0599f80c-e14c-4f92-8838-e34d8d6742dd" Jan 24 00:55:12.383135 kubelet[1765]: E0124 00:55:12.382992 1765 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:55:13.384161 kubelet[1765]: E0124 00:55:13.384061 1765 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:55:14.384438 kubelet[1765]: E0124 00:55:14.384285 1765 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:55:15.384575 kubelet[1765]: E0124 00:55:15.384453 1765 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:55:15.678457 systemd[1]: Created slice kubepods-besteffort-pod6edb3ee7_494b_4aba_822f_f4744cbd1a83.slice - libcontainer container kubepods-besteffort-pod6edb3ee7_494b_4aba_822f_f4744cbd1a83.slice. 
Jan 24 00:55:15.780989 kubelet[1765]: I0124 00:55:15.780749 1765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5g925\" (UniqueName: \"kubernetes.io/projected/6edb3ee7-494b-4aba-822f-f4744cbd1a83-kube-api-access-5g925\") pod \"test-pod-1\" (UID: \"6edb3ee7-494b-4aba-822f-f4744cbd1a83\") " pod="default/test-pod-1" Jan 24 00:55:15.780989 kubelet[1765]: I0124 00:55:15.780897 1765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-bc6cf98c-1edc-4a8a-a62b-03b336b596a1\" (UniqueName: \"kubernetes.io/nfs/6edb3ee7-494b-4aba-822f-f4744cbd1a83-pvc-bc6cf98c-1edc-4a8a-a62b-03b336b596a1\") pod \"test-pod-1\" (UID: \"6edb3ee7-494b-4aba-822f-f4744cbd1a83\") " pod="default/test-pod-1" Jan 24 00:55:15.922622 kernel: FS-Cache: Loaded Jan 24 00:55:16.009607 kernel: RPC: Registered named UNIX socket transport module. Jan 24 00:55:16.009756 kernel: RPC: Registered udp transport module. Jan 24 00:55:16.009800 kernel: RPC: Registered tcp transport module. Jan 24 00:55:16.011429 kernel: RPC: Registered tcp-with-tls transport module. Jan 24 00:55:16.013283 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. 
Jan 24 00:55:16.290887 kernel: NFS: Registering the id_resolver key type Jan 24 00:55:16.290998 kernel: Key type id_resolver registered Jan 24 00:55:16.291016 kernel: Key type id_legacy registered Jan 24 00:55:16.333231 nfsidmap[3090]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Jan 24 00:55:16.339836 nfsidmap[3093]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Jan 24 00:55:16.385824 kubelet[1765]: E0124 00:55:16.385758 1765 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:55:16.585718 containerd[1453]: time="2026-01-24T00:55:16.585462005Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:6edb3ee7-494b-4aba-822f-f4744cbd1a83,Namespace:default,Attempt:0,}" Jan 24 00:55:16.717356 systemd-networkd[1395]: cali5ec59c6bf6e: Link UP Jan 24 00:55:16.719472 systemd-networkd[1395]: cali5ec59c6bf6e: Gained carrier Jan 24 00:55:16.730239 containerd[1453]: 2026-01-24 00:55:16.633 [INFO][3096] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.107-k8s-test--pod--1-eth0 default 6edb3ee7-494b-4aba-822f-f4744cbd1a83 1454 0 2026-01-24 00:55:05 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.0.0.107 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] [] }} ContainerID="e181dbbaa70b7d7a5822e99ccd9aa6e4dec6abc6139568b7315f958f461de242" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.107-k8s-test--pod--1-" Jan 24 00:55:16.730239 containerd[1453]: 2026-01-24 00:55:16.633 [INFO][3096] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e181dbbaa70b7d7a5822e99ccd9aa6e4dec6abc6139568b7315f958f461de242" Namespace="default" 
Pod="test-pod-1" WorkloadEndpoint="10.0.0.107-k8s-test--pod--1-eth0" Jan 24 00:55:16.730239 containerd[1453]: 2026-01-24 00:55:16.668 [INFO][3110] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e181dbbaa70b7d7a5822e99ccd9aa6e4dec6abc6139568b7315f958f461de242" HandleID="k8s-pod-network.e181dbbaa70b7d7a5822e99ccd9aa6e4dec6abc6139568b7315f958f461de242" Workload="10.0.0.107-k8s-test--pod--1-eth0" Jan 24 00:55:16.730239 containerd[1453]: 2026-01-24 00:55:16.668 [INFO][3110] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e181dbbaa70b7d7a5822e99ccd9aa6e4dec6abc6139568b7315f958f461de242" HandleID="k8s-pod-network.e181dbbaa70b7d7a5822e99ccd9aa6e4dec6abc6139568b7315f958f461de242" Workload="10.0.0.107-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c6fd0), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.107", "pod":"test-pod-1", "timestamp":"2026-01-24 00:55:16.668280894 +0000 UTC"}, Hostname:"10.0.0.107", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:55:16.730239 containerd[1453]: 2026-01-24 00:55:16.668 [INFO][3110] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:55:16.730239 containerd[1453]: 2026-01-24 00:55:16.668 [INFO][3110] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:55:16.730239 containerd[1453]: 2026-01-24 00:55:16.668 [INFO][3110] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.107' Jan 24 00:55:16.730239 containerd[1453]: 2026-01-24 00:55:16.678 [INFO][3110] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e181dbbaa70b7d7a5822e99ccd9aa6e4dec6abc6139568b7315f958f461de242" host="10.0.0.107" Jan 24 00:55:16.730239 containerd[1453]: 2026-01-24 00:55:16.685 [INFO][3110] ipam/ipam.go 394: Looking up existing affinities for host host="10.0.0.107" Jan 24 00:55:16.730239 containerd[1453]: 2026-01-24 00:55:16.691 [INFO][3110] ipam/ipam.go 511: Trying affinity for 192.168.117.0/26 host="10.0.0.107" Jan 24 00:55:16.730239 containerd[1453]: 2026-01-24 00:55:16.693 [INFO][3110] ipam/ipam.go 158: Attempting to load block cidr=192.168.117.0/26 host="10.0.0.107" Jan 24 00:55:16.730239 containerd[1453]: 2026-01-24 00:55:16.696 [INFO][3110] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.117.0/26 host="10.0.0.107" Jan 24 00:55:16.730239 containerd[1453]: 2026-01-24 00:55:16.696 [INFO][3110] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.117.0/26 handle="k8s-pod-network.e181dbbaa70b7d7a5822e99ccd9aa6e4dec6abc6139568b7315f958f461de242" host="10.0.0.107" Jan 24 00:55:16.730239 containerd[1453]: 2026-01-24 00:55:16.700 [INFO][3110] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e181dbbaa70b7d7a5822e99ccd9aa6e4dec6abc6139568b7315f958f461de242 Jan 24 00:55:16.730239 containerd[1453]: 2026-01-24 00:55:16.706 [INFO][3110] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.117.0/26 handle="k8s-pod-network.e181dbbaa70b7d7a5822e99ccd9aa6e4dec6abc6139568b7315f958f461de242" host="10.0.0.107" Jan 24 00:55:16.730239 containerd[1453]: 2026-01-24 00:55:16.712 [INFO][3110] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.117.4/26] block=192.168.117.0/26 
handle="k8s-pod-network.e181dbbaa70b7d7a5822e99ccd9aa6e4dec6abc6139568b7315f958f461de242" host="10.0.0.107" Jan 24 00:55:16.730239 containerd[1453]: 2026-01-24 00:55:16.712 [INFO][3110] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.117.4/26] handle="k8s-pod-network.e181dbbaa70b7d7a5822e99ccd9aa6e4dec6abc6139568b7315f958f461de242" host="10.0.0.107" Jan 24 00:55:16.730239 containerd[1453]: 2026-01-24 00:55:16.712 [INFO][3110] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:55:16.730239 containerd[1453]: 2026-01-24 00:55:16.712 [INFO][3110] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.117.4/26] IPv6=[] ContainerID="e181dbbaa70b7d7a5822e99ccd9aa6e4dec6abc6139568b7315f958f461de242" HandleID="k8s-pod-network.e181dbbaa70b7d7a5822e99ccd9aa6e4dec6abc6139568b7315f958f461de242" Workload="10.0.0.107-k8s-test--pod--1-eth0" Jan 24 00:55:16.730239 containerd[1453]: 2026-01-24 00:55:16.714 [INFO][3096] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e181dbbaa70b7d7a5822e99ccd9aa6e4dec6abc6139568b7315f958f461de242" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.107-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.107-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"6edb3ee7-494b-4aba-822f-f4744cbd1a83", ResourceVersion:"1454", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 55, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", 
Node:"10.0.0.107", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.117.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:55:16.731306 containerd[1453]: 2026-01-24 00:55:16.715 [INFO][3096] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.117.4/32] ContainerID="e181dbbaa70b7d7a5822e99ccd9aa6e4dec6abc6139568b7315f958f461de242" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.107-k8s-test--pod--1-eth0" Jan 24 00:55:16.731306 containerd[1453]: 2026-01-24 00:55:16.715 [INFO][3096] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="e181dbbaa70b7d7a5822e99ccd9aa6e4dec6abc6139568b7315f958f461de242" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.107-k8s-test--pod--1-eth0" Jan 24 00:55:16.731306 containerd[1453]: 2026-01-24 00:55:16.718 [INFO][3096] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e181dbbaa70b7d7a5822e99ccd9aa6e4dec6abc6139568b7315f958f461de242" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.107-k8s-test--pod--1-eth0" Jan 24 00:55:16.731306 containerd[1453]: 2026-01-24 00:55:16.719 [INFO][3096] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e181dbbaa70b7d7a5822e99ccd9aa6e4dec6abc6139568b7315f958f461de242" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.107-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.107-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"6edb3ee7-494b-4aba-822f-f4744cbd1a83", ResourceVersion:"1454", 
Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 55, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.107", ContainerID:"e181dbbaa70b7d7a5822e99ccd9aa6e4dec6abc6139568b7315f958f461de242", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.117.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"d2:d6:ec:cf:ba:58", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:55:16.731306 containerd[1453]: 2026-01-24 00:55:16.726 [INFO][3096] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e181dbbaa70b7d7a5822e99ccd9aa6e4dec6abc6139568b7315f958f461de242" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.107-k8s-test--pod--1-eth0" Jan 24 00:55:16.762371 containerd[1453]: time="2026-01-24T00:55:16.762161855Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:55:16.762371 containerd[1453]: time="2026-01-24T00:55:16.762256881Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:55:16.762371 containerd[1453]: time="2026-01-24T00:55:16.762273492Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:55:16.762641 containerd[1453]: time="2026-01-24T00:55:16.762421507Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:55:16.784713 systemd[1]: Started cri-containerd-e181dbbaa70b7d7a5822e99ccd9aa6e4dec6abc6139568b7315f958f461de242.scope - libcontainer container e181dbbaa70b7d7a5822e99ccd9aa6e4dec6abc6139568b7315f958f461de242. Jan 24 00:55:16.799897 systemd-resolved[1333]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 24 00:55:16.827323 containerd[1453]: time="2026-01-24T00:55:16.827255561Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:6edb3ee7-494b-4aba-822f-f4744cbd1a83,Namespace:default,Attempt:0,} returns sandbox id \"e181dbbaa70b7d7a5822e99ccd9aa6e4dec6abc6139568b7315f958f461de242\"" Jan 24 00:55:16.829083 containerd[1453]: time="2026-01-24T00:55:16.828834854Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 24 00:55:16.923983 containerd[1453]: time="2026-01-24T00:55:16.923851239Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:55:16.925027 containerd[1453]: time="2026-01-24T00:55:16.924915000Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Jan 24 00:55:16.929848 containerd[1453]: time="2026-01-24T00:55:16.929785050Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:94a30ed18e45363486a55379bd1fbb8c479f36524ae816779f3553afe6b787ed\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:32c5137cb8c7cf61e75836f150e983b9be21fecc642ada89fd936c8cd6c0faa0\", size \"63836358\" in 100.882059ms" Jan 24 00:55:16.929948 containerd[1453]: time="2026-01-24T00:55:16.929845182Z" level=info msg="PullImage 
\"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:94a30ed18e45363486a55379bd1fbb8c479f36524ae816779f3553afe6b787ed\"" Jan 24 00:55:16.935342 containerd[1453]: time="2026-01-24T00:55:16.935305585Z" level=info msg="CreateContainer within sandbox \"e181dbbaa70b7d7a5822e99ccd9aa6e4dec6abc6139568b7315f958f461de242\" for container &ContainerMetadata{Name:test,Attempt:0,}" Jan 24 00:55:16.952464 containerd[1453]: time="2026-01-24T00:55:16.952418211Z" level=info msg="CreateContainer within sandbox \"e181dbbaa70b7d7a5822e99ccd9aa6e4dec6abc6139568b7315f958f461de242\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"f3f64abdf3b486b8fd848ab286bc03ce7d7ec374e7df3ffe904b9b798f0a0676\"" Jan 24 00:55:16.953461 containerd[1453]: time="2026-01-24T00:55:16.953339312Z" level=info msg="StartContainer for \"f3f64abdf3b486b8fd848ab286bc03ce7d7ec374e7df3ffe904b9b798f0a0676\"" Jan 24 00:55:16.989782 systemd[1]: Started cri-containerd-f3f64abdf3b486b8fd848ab286bc03ce7d7ec374e7df3ffe904b9b798f0a0676.scope - libcontainer container f3f64abdf3b486b8fd848ab286bc03ce7d7ec374e7df3ffe904b9b798f0a0676. 
Jan 24 00:55:17.019989 containerd[1453]: time="2026-01-24T00:55:17.019940809Z" level=info msg="StartContainer for \"f3f64abdf3b486b8fd848ab286bc03ce7d7ec374e7df3ffe904b9b798f0a0676\" returns successfully" Jan 24 00:55:17.386412 kubelet[1765]: E0124 00:55:17.386232 1765 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:55:17.708513 kubelet[1765]: I0124 00:55:17.708387 1765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=12.606198916 podStartE2EDuration="12.708366112s" podCreationTimestamp="2026-01-24 00:55:05 +0000 UTC" firstStartedPulling="2026-01-24 00:55:16.828583929 +0000 UTC m=+35.606505488" lastFinishedPulling="2026-01-24 00:55:16.930751124 +0000 UTC m=+35.708672684" observedRunningTime="2026-01-24 00:55:17.708156823 +0000 UTC m=+36.486078403" watchObservedRunningTime="2026-01-24 00:55:17.708366112 +0000 UTC m=+36.486287672" Jan 24 00:55:18.386758 kubelet[1765]: E0124 00:55:18.386639 1765 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:55:18.779769 systemd-networkd[1395]: cali5ec59c6bf6e: Gained IPv6LL Jan 24 00:55:19.387382 kubelet[1765]: E0124 00:55:19.387276 1765 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:55:20.388251 kubelet[1765]: E0124 00:55:20.388125 1765 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:55:21.388748 kubelet[1765]: E0124 00:55:21.388645 1765 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:55:22.284987 kubelet[1765]: E0124 00:55:22.284881 1765 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:55:22.389465 kubelet[1765]: 
E0124 00:55:22.389380 1765 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:55:23.390438 kubelet[1765]: E0124 00:55:23.390350 1765 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"