Jan 20 00:53:54.045869 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 19 22:42:14 -00 2026
Jan 20 00:53:54.045889 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8c5dc1cd4dcc734d9dabe08efcaa33dd0d0e79b2d8f11a958a4b004e775e3441
Jan 20 00:53:54.045899 kernel: BIOS-provided physical RAM map:
Jan 20 00:53:54.045905 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 20 00:53:54.045910 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 20 00:53:54.045916 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 20 00:53:54.045922 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Jan 20 00:53:54.045928 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Jan 20 00:53:54.045933 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 20 00:53:54.045941 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jan 20 00:53:54.045976 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 20 00:53:54.045982 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 20 00:53:54.045988 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 20 00:53:54.045993 kernel: NX (Execute Disable) protection: active
Jan 20 00:53:54.046000 kernel: APIC: Static calls initialized
Jan 20 00:53:54.046009 kernel: SMBIOS 2.8 present.
Jan 20 00:53:54.046015 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Jan 20 00:53:54.046020 kernel: Hypervisor detected: KVM
Jan 20 00:53:54.046026 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 20 00:53:54.046032 kernel: kvm-clock: using sched offset of 3843914190 cycles
Jan 20 00:53:54.046038 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 20 00:53:54.046044 kernel: tsc: Detected 2445.424 MHz processor
Jan 20 00:53:54.046082 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 20 00:53:54.046089 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 20 00:53:54.046095 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Jan 20 00:53:54.046104 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 20 00:53:54.046110 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 20 00:53:54.046116 kernel: Using GB pages for direct mapping
Jan 20 00:53:54.046121 kernel: ACPI: Early table checksum verification disabled
Jan 20 00:53:54.046127 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Jan 20 00:53:54.046133 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 00:53:54.046139 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 00:53:54.046145 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 00:53:54.046153 kernel: ACPI: FACS 0x000000009CFE0000 000040
Jan 20 00:53:54.046159 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 00:53:54.046165 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 00:53:54.046171 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 00:53:54.046177 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 00:53:54.046183 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Jan 20 00:53:54.046189 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Jan 20 00:53:54.046198 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Jan 20 00:53:54.046206 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Jan 20 00:53:54.046213 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Jan 20 00:53:54.046219 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Jan 20 00:53:54.046225 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Jan 20 00:53:54.046231 kernel: No NUMA configuration found
Jan 20 00:53:54.046237 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Jan 20 00:53:54.046243 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Jan 20 00:53:54.046252 kernel: Zone ranges:
Jan 20 00:53:54.046258 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 20 00:53:54.046264 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Jan 20 00:53:54.046270 kernel: Normal empty
Jan 20 00:53:54.046276 kernel: Movable zone start for each node
Jan 20 00:53:54.046282 kernel: Early memory node ranges
Jan 20 00:53:54.046288 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 20 00:53:54.046294 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Jan 20 00:53:54.046300 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Jan 20 00:53:54.046309 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 20 00:53:54.046315 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 20 00:53:54.046321 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Jan 20 00:53:54.046327 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 20 00:53:54.046333 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 20 00:53:54.046339 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 20 00:53:54.046346 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 20 00:53:54.046352 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 20 00:53:54.046358 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 20 00:53:54.046366 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 20 00:53:54.046372 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 20 00:53:54.046378 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 20 00:53:54.046384 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 20 00:53:54.046391 kernel: TSC deadline timer available
Jan 20 00:53:54.046397 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Jan 20 00:53:54.046403 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 20 00:53:54.046409 kernel: kvm-guest: KVM setup pv remote TLB flush
Jan 20 00:53:54.046415 kernel: kvm-guest: setup PV sched yield
Jan 20 00:53:54.046421 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jan 20 00:53:54.046429 kernel: Booting paravirtualized kernel on KVM
Jan 20 00:53:54.046436 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 20 00:53:54.046442 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jan 20 00:53:54.046448 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288
Jan 20 00:53:54.046454 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152
Jan 20 00:53:54.046460 kernel: pcpu-alloc: [0] 0 1 2 3
Jan 20 00:53:54.046466 kernel: kvm-guest: PV spinlocks enabled
Jan 20 00:53:54.046473 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 20 00:53:54.046480 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8c5dc1cd4dcc734d9dabe08efcaa33dd0d0e79b2d8f11a958a4b004e775e3441
Jan 20 00:53:54.046488 kernel: random: crng init done
Jan 20 00:53:54.046494 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 20 00:53:54.046501 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 20 00:53:54.046507 kernel: Fallback order for Node 0: 0
Jan 20 00:53:54.046513 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Jan 20 00:53:54.046519 kernel: Policy zone: DMA32
Jan 20 00:53:54.046525 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 20 00:53:54.046531 kernel: Memory: 2434604K/2571752K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42880K init, 2316K bss, 136888K reserved, 0K cma-reserved)
Jan 20 00:53:54.046540 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 20 00:53:54.046546 kernel: ftrace: allocating 37989 entries in 149 pages
Jan 20 00:53:54.046552 kernel: ftrace: allocated 149 pages with 4 groups
Jan 20 00:53:54.046558 kernel: Dynamic Preempt: voluntary
Jan 20 00:53:54.046564 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 20 00:53:54.046571 kernel: rcu: RCU event tracing is enabled.
Jan 20 00:53:54.046577 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 20 00:53:54.046584 kernel: Trampoline variant of Tasks RCU enabled.
Jan 20 00:53:54.046590 kernel: Rude variant of Tasks RCU enabled.
Jan 20 00:53:54.046598 kernel: Tracing variant of Tasks RCU enabled.
Jan 20 00:53:54.046605 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 20 00:53:54.046611 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 20 00:53:54.046617 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jan 20 00:53:54.046623 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 20 00:53:54.046629 kernel: Console: colour VGA+ 80x25
Jan 20 00:53:54.046635 kernel: printk: console [ttyS0] enabled
Jan 20 00:53:54.046641 kernel: ACPI: Core revision 20230628
Jan 20 00:53:54.046647 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 20 00:53:54.046656 kernel: APIC: Switch to symmetric I/O mode setup
Jan 20 00:53:54.046662 kernel: x2apic enabled
Jan 20 00:53:54.046668 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 20 00:53:54.046674 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jan 20 00:53:54.046680 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jan 20 00:53:54.046686 kernel: kvm-guest: setup PV IPIs
Jan 20 00:53:54.046693 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 20 00:53:54.046708 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 20 00:53:54.046715 kernel: Calibrating delay loop (skipped) preset value.. 4890.84 BogoMIPS (lpj=2445424)
Jan 20 00:53:54.046721 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 20 00:53:54.046728 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 20 00:53:54.046734 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 20 00:53:54.046743 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 20 00:53:54.046749 kernel: Spectre V2 : Mitigation: Retpolines
Jan 20 00:53:54.046756 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 20 00:53:54.046762 kernel: Speculative Store Bypass: Vulnerable
Jan 20 00:53:54.046769 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 20 00:53:54.046778 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 20 00:53:54.046784 kernel: active return thunk: srso_alias_return_thunk
Jan 20 00:53:54.046791 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 20 00:53:54.046797 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Jan 20 00:53:54.046803 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 20 00:53:54.046810 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 20 00:53:54.046816 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 20 00:53:54.046823 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 20 00:53:54.046831 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 20 00:53:54.046838 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 20 00:53:54.046844 kernel: Freeing SMP alternatives memory: 32K
Jan 20 00:53:54.046851 kernel: pid_max: default: 32768 minimum: 301
Jan 20 00:53:54.046857 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 20 00:53:54.046864 kernel: landlock: Up and running.
Jan 20 00:53:54.046870 kernel: SELinux: Initializing.
Jan 20 00:53:54.046877 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 20 00:53:54.046883 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 20 00:53:54.046892 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Jan 20 00:53:54.046898 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 20 00:53:54.046905 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 20 00:53:54.046911 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 20 00:53:54.046918 kernel: Performance Events: PMU not available due to virtualization, using software events only.
Jan 20 00:53:54.046924 kernel: signal: max sigframe size: 1776
Jan 20 00:53:54.046930 kernel: rcu: Hierarchical SRCU implementation.
Jan 20 00:53:54.046937 kernel: rcu: Max phase no-delay instances is 400.
Jan 20 00:53:54.046944 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 20 00:53:54.046976 kernel: smp: Bringing up secondary CPUs ...
Jan 20 00:53:54.046983 kernel: smpboot: x86: Booting SMP configuration:
Jan 20 00:53:54.046989 kernel: .... node #0, CPUs: #1 #2 #3
Jan 20 00:53:54.046995 kernel: smp: Brought up 1 node, 4 CPUs
Jan 20 00:53:54.047002 kernel: smpboot: Max logical packages: 1
Jan 20 00:53:54.047008 kernel: smpboot: Total of 4 processors activated (19563.39 BogoMIPS)
Jan 20 00:53:54.047015 kernel: devtmpfs: initialized
Jan 20 00:53:54.047021 kernel: x86/mm: Memory block size: 128MB
Jan 20 00:53:54.047028 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 20 00:53:54.047036 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 20 00:53:54.047043 kernel: pinctrl core: initialized pinctrl subsystem
Jan 20 00:53:54.047077 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 20 00:53:54.047084 kernel: audit: initializing netlink subsys (disabled)
Jan 20 00:53:54.047091 kernel: audit: type=2000 audit(1768870432.363:1): state=initialized audit_enabled=0 res=1
Jan 20 00:53:54.047097 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 20 00:53:54.047104 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 20 00:53:54.047111 kernel: cpuidle: using governor menu
Jan 20 00:53:54.047117 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 20 00:53:54.047127 kernel: dca service started, version 1.12.1
Jan 20 00:53:54.047133 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Jan 20 00:53:54.047140 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jan 20 00:53:54.047146 kernel: PCI: Using configuration type 1 for base access
Jan 20 00:53:54.047153 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 20 00:53:54.047159 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 20 00:53:54.047166 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 20 00:53:54.047172 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 20 00:53:54.047178 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 20 00:53:54.047187 kernel: ACPI: Added _OSI(Module Device)
Jan 20 00:53:54.047193 kernel: ACPI: Added _OSI(Processor Device)
Jan 20 00:53:54.047200 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 20 00:53:54.047206 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 20 00:53:54.047213 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 20 00:53:54.047219 kernel: ACPI: Interpreter enabled
Jan 20 00:53:54.047225 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 20 00:53:54.047232 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 20 00:53:54.047238 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 20 00:53:54.047247 kernel: PCI: Using E820 reservations for host bridge windows
Jan 20 00:53:54.047254 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 20 00:53:54.047260 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 20 00:53:54.047438 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 20 00:53:54.047570 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 20 00:53:54.047693 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 20 00:53:54.047702 kernel: PCI host bridge to bus 0000:00
Jan 20 00:53:54.047831 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 20 00:53:54.047943 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 20 00:53:54.048133 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 20 00:53:54.048248 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Jan 20 00:53:54.048357 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 20 00:53:54.048465 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Jan 20 00:53:54.048573 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 20 00:53:54.048718 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jan 20 00:53:54.048846 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Jan 20 00:53:54.049000 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Jan 20 00:53:54.049195 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Jan 20 00:53:54.049318 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Jan 20 00:53:54.049438 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 20 00:53:54.049571 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Jan 20 00:53:54.049693 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Jan 20 00:53:54.049813 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Jan 20 00:53:54.049932 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Jan 20 00:53:54.050134 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Jan 20 00:53:54.050259 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Jan 20 00:53:54.050378 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Jan 20 00:53:54.050503 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Jan 20 00:53:54.050630 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jan 20 00:53:54.050840 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Jan 20 00:53:54.050994 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Jan 20 00:53:54.051217 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Jan 20 00:53:54.051338 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Jan 20 00:53:54.051464 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jan 20 00:53:54.051592 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 20 00:53:54.051717 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jan 20 00:53:54.051835 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Jan 20 00:53:54.051988 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Jan 20 00:53:54.052168 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jan 20 00:53:54.052296 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Jan 20 00:53:54.052309 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 20 00:53:54.052316 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 20 00:53:54.052323 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 20 00:53:54.052329 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 20 00:53:54.052336 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 20 00:53:54.052342 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 20 00:53:54.052349 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 20 00:53:54.052355 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 20 00:53:54.052362 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 20 00:53:54.052371 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 20 00:53:54.052377 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 20 00:53:54.052383 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 20 00:53:54.052390 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 20 00:53:54.052396 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 20 00:53:54.052402 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 20 00:53:54.052409 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 20 00:53:54.052415 kernel: iommu: Default domain type: Translated
Jan 20 00:53:54.052422 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 20 00:53:54.052430 kernel: PCI: Using ACPI for IRQ routing
Jan 20 00:53:54.052437 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 20 00:53:54.052443 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 20 00:53:54.052450 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Jan 20 00:53:54.052568 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 20 00:53:54.052686 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 20 00:53:54.052804 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 20 00:53:54.052813 kernel: vgaarb: loaded
Jan 20 00:53:54.052820 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 20 00:53:54.052830 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 20 00:53:54.052836 kernel: clocksource: Switched to clocksource kvm-clock
Jan 20 00:53:54.052843 kernel: VFS: Disk quotas dquot_6.6.0
Jan 20 00:53:54.052849 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 20 00:53:54.052856 kernel: pnp: PnP ACPI init
Jan 20 00:53:54.053018 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 20 00:53:54.053030 kernel: pnp: PnP ACPI: found 6 devices
Jan 20 00:53:54.053036 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 20 00:53:54.053081 kernel: NET: Registered PF_INET protocol family
Jan 20 00:53:54.053089 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 20 00:53:54.053095 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 20 00:53:54.053102 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 20 00:53:54.053108 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 20 00:53:54.053115 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 20 00:53:54.053121 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 20 00:53:54.053128 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 20 00:53:54.053135 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 20 00:53:54.053144 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 20 00:53:54.053151 kernel: NET: Registered PF_XDP protocol family
Jan 20 00:53:54.053269 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 20 00:53:54.053378 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 20 00:53:54.053486 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 20 00:53:54.053594 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Jan 20 00:53:54.053701 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 20 00:53:54.053809 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Jan 20 00:53:54.053821 kernel: PCI: CLS 0 bytes, default 64
Jan 20 00:53:54.053828 kernel: Initialise system trusted keyrings
Jan 20 00:53:54.053835 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 20 00:53:54.053841 kernel: Key type asymmetric registered
Jan 20 00:53:54.053848 kernel: Asymmetric key parser 'x509' registered
Jan 20 00:53:54.053854 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 20 00:53:54.053861 kernel: io scheduler mq-deadline registered
Jan 20 00:53:54.053867 kernel: io scheduler kyber registered
Jan 20 00:53:54.053873 kernel: io scheduler bfq registered
Jan 20 00:53:54.053882 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 20 00:53:54.053889 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 20 00:53:54.053896 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 20 00:53:54.053902 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jan 20 00:53:54.053909 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 20 00:53:54.053915 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 20 00:53:54.053921 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 20 00:53:54.053928 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 20 00:53:54.053934 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 20 00:53:54.054134 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 20 00:53:54.054146 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 20 00:53:54.054486 kernel: rtc_cmos 00:04: registered as rtc0
Jan 20 00:53:54.054801 kernel: rtc_cmos 00:04: setting system clock to 2026-01-20T00:53:53 UTC (1768870433)
Jan 20 00:53:54.055234 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jan 20 00:53:54.055267 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 20 00:53:54.055286 kernel: NET: Registered PF_INET6 protocol family
Jan 20 00:53:54.055303 kernel: Segment Routing with IPv6
Jan 20 00:53:54.055345 kernel: In-situ OAM (IOAM) with IPv6
Jan 20 00:53:54.055363 kernel: NET: Registered PF_PACKET protocol family
Jan 20 00:53:54.055381 kernel: Key type dns_resolver registered
Jan 20 00:53:54.055399 kernel: IPI shorthand broadcast: enabled
Jan 20 00:53:54.055416 kernel: sched_clock: Marking stable (1098016176, 344601437)->(1876874421, -434256808)
Jan 20 00:53:54.055434 kernel: registered taskstats version 1
Jan 20 00:53:54.055463 kernel: Loading compiled-in X.509 certificates
Jan 20 00:53:54.055481 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: ea2d429b6f340e470c7de035feb011ab349763d1'
Jan 20 00:53:54.055506 kernel: Key type .fscrypt registered
Jan 20 00:53:54.055527 kernel: Key type fscrypt-provisioning registered
Jan 20 00:53:54.055545 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 20 00:53:54.055562 kernel: ima: Allocated hash algorithm: sha1
Jan 20 00:53:54.055580 kernel: ima: No architecture policies found
Jan 20 00:53:54.055616 kernel: clk: Disabling unused clocks
Jan 20 00:53:54.055634 kernel: Freeing unused kernel image (initmem) memory: 42880K
Jan 20 00:53:54.055640 kernel: Write protecting the kernel read-only data: 36864k
Jan 20 00:53:54.055658 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Jan 20 00:53:54.055686 kernel: Run /init as init process
Jan 20 00:53:54.055707 kernel: with arguments:
Jan 20 00:53:54.055724 kernel: /init
Jan 20 00:53:54.055742 kernel: with environment:
Jan 20 00:53:54.055769 kernel: HOME=/
Jan 20 00:53:54.055786 kernel: TERM=linux
Jan 20 00:53:54.055806 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 20 00:53:54.055836 systemd[1]: Detected virtualization kvm.
Jan 20 00:53:54.055854 systemd[1]: Detected architecture x86-64.
Jan 20 00:53:54.055893 systemd[1]: Running in initrd.
Jan 20 00:53:54.055900 systemd[1]: No hostname configured, using default hostname.
Jan 20 00:53:54.055929 systemd[1]: Hostname set to .
Jan 20 00:53:54.055967 systemd[1]: Initializing machine ID from VM UUID.
Jan 20 00:53:54.055987 systemd[1]: Queued start job for default target initrd.target.
Jan 20 00:53:54.056005 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 20 00:53:54.056023 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 20 00:53:54.056128 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 20 00:53:54.056170 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 20 00:53:54.056177 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 20 00:53:54.056206 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 20 00:53:54.056225 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 20 00:53:54.056243 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 20 00:53:54.056273 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 20 00:53:54.056304 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 20 00:53:54.056330 systemd[1]: Reached target paths.target - Path Units.
Jan 20 00:53:54.056348 systemd[1]: Reached target slices.target - Slice Units.
Jan 20 00:53:54.056366 systemd[1]: Reached target swap.target - Swaps.
Jan 20 00:53:54.056426 systemd[1]: Reached target timers.target - Timer Units.
Jan 20 00:53:54.056447 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 20 00:53:54.056466 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 20 00:53:54.056506 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 20 00:53:54.056524 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 20 00:53:54.056545 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 20 00:53:54.056563 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 20 00:53:54.056592 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 20 00:53:54.056610 systemd[1]: Reached target sockets.target - Socket Units.
Jan 20 00:53:54.056628 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 20 00:53:54.056646 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 20 00:53:54.056678 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 20 00:53:54.056715 systemd[1]: Starting systemd-fsck-usr.service...
Jan 20 00:53:54.056733 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 20 00:53:54.056751 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 20 00:53:54.056769 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 20 00:53:54.056787 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 20 00:53:54.056805 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 20 00:53:54.056834 systemd[1]: Finished systemd-fsck-usr.service.
Jan 20 00:53:54.056855 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 20 00:53:54.056863 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 20 00:53:54.056870 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 20 00:53:54.056878 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 20 00:53:54.056905 systemd-journald[194]: Collecting audit messages is disabled.
Jan 20 00:53:54.056923 systemd-journald[194]: Journal started
Jan 20 00:53:54.056938 systemd-journald[194]: Runtime Journal (/run/log/journal/64143243e2ec417e9c5ebeeb892002ca) is 6.0M, max 48.4M, 42.3M free.
Jan 20 00:53:54.033029 systemd-modules-load[195]: Inserted module 'overlay'
Jan 20 00:53:54.177126 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 20 00:53:54.177150 kernel: Bridge firewalling registered
Jan 20 00:53:54.177160 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 20 00:53:54.059213 systemd-modules-load[195]: Inserted module 'br_netfilter'
Jan 20 00:53:54.177303 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 20 00:53:54.180646 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 20 00:53:54.197233 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 20 00:53:54.200981 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 20 00:53:54.205557 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 20 00:53:54.219376 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 20 00:53:54.226220 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 20 00:53:54.231932 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 20 00:53:54.252366 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 20 00:53:54.257554 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 20 00:53:54.266670 dracut-cmdline[229]: dracut-dracut-053
Jan 20 00:53:54.271603 dracut-cmdline[229]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8c5dc1cd4dcc734d9dabe08efcaa33dd0d0e79b2d8f11a958a4b004e775e3441
Jan 20 00:53:54.293294 systemd-resolved[232]: Positive Trust Anchors:
Jan 20 00:53:54.293324 systemd-resolved[232]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 20 00:53:54.293350 systemd-resolved[232]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 20 00:53:54.295604 systemd-resolved[232]: Defaulting to hostname 'linux'.
Jan 20 00:53:54.296678 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 20 00:53:54.317571 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 20 00:53:54.397130 kernel: SCSI subsystem initialized
Jan 20 00:53:54.406134 kernel: Loading iSCSI transport class v2.0-870.
Jan 20 00:53:54.417128 kernel: iscsi: registered transport (tcp)
Jan 20 00:53:54.438480 kernel: iscsi: registered transport (qla4xxx)
Jan 20 00:53:54.438521 kernel: QLogic iSCSI HBA Driver
Jan 20 00:53:54.485306 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 20 00:53:54.499222 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 20 00:53:54.525732 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 20 00:53:54.525765 kernel: device-mapper: uevent: version 1.0.3
Jan 20 00:53:54.528366 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 20 00:53:54.570121 kernel: raid6: avx2x4 gen() 30861 MB/s
Jan 20 00:53:54.588108 kernel: raid6: avx2x2 gen() 27903 MB/s
Jan 20 00:53:54.607108 kernel: raid6: avx2x1 gen() 23885 MB/s
Jan 20 00:53:54.607137 kernel: raid6: using algorithm avx2x4 gen() 30861 MB/s
Jan 20 00:53:54.627140 kernel: raid6: .... xor() 4653 MB/s, rmw enabled
Jan 20 00:53:54.627165 kernel: raid6: using avx2x2 recovery algorithm
Jan 20 00:53:54.648115 kernel: xor: automatically using best checksumming function avx
Jan 20 00:53:54.790144 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 20 00:53:54.803031 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 20 00:53:54.814352 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 20 00:53:54.826617 systemd-udevd[415]: Using default interface naming scheme 'v255'.
Jan 20 00:53:54.831156 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 20 00:53:54.844243 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 20 00:53:54.859880 dracut-pre-trigger[423]: rd.md=0: removing MD RAID activation
Jan 20 00:53:54.893258 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 20 00:53:54.903305 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 20 00:53:54.970439 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 20 00:53:54.989240 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 20 00:53:55.000355 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 20 00:53:55.007476 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 20 00:53:55.010991 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 20 00:53:55.020306 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 20 00:53:55.030149 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Jan 20 00:53:55.030360 kernel: cryptd: max_cpu_qlen set to 1000
Jan 20 00:53:55.034350 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 20 00:53:55.042982 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jan 20 00:53:55.045654 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 20 00:53:55.061347 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 20 00:53:55.061367 kernel: GPT:9289727 != 19775487
Jan 20 00:53:55.061383 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 20 00:53:55.061393 kernel: GPT:9289727 != 19775487
Jan 20 00:53:55.061402 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 20 00:53:55.061411 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 20 00:53:55.051886 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 20 00:53:55.069575 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 20 00:53:55.069590 kernel: AES CTR mode by8 optimization enabled
Jan 20 00:53:55.062080 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 20 00:53:55.066846 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 20 00:53:55.067113 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 20 00:53:55.074572 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 20 00:53:55.091273 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 20 00:53:55.110127 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (476)
Jan 20 00:53:55.099733 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 20 00:53:55.122119 kernel: BTRFS: device fsid ea39c6ab-04c2-4917-8268-943d4ecb2b5c devid 1 transid 33 /dev/vda3 scanned by (udev-worker) (470)
Jan 20 00:53:55.125241 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 20 00:53:55.270558 kernel: libata version 3.00 loaded.
Jan 20 00:53:55.270582 kernel: ahci 0000:00:1f.2: version 3.0
Jan 20 00:53:55.270815 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jan 20 00:53:55.270835 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Jan 20 00:53:55.271014 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jan 20 00:53:55.271238 kernel: scsi host0: ahci
Jan 20 00:53:55.271402 kernel: scsi host1: ahci
Jan 20 00:53:55.271553 kernel: scsi host2: ahci
Jan 20 00:53:55.271694 kernel: scsi host3: ahci
Jan 20 00:53:55.271841 kernel: scsi host4: ahci
Jan 20 00:53:55.272023 kernel: scsi host5: ahci
Jan 20 00:53:55.272228 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Jan 20 00:53:55.272239 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Jan 20 00:53:55.272249 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Jan 20 00:53:55.272258 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Jan 20 00:53:55.272267 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Jan 20 00:53:55.272280 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Jan 20 00:53:55.277851 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 20 00:53:55.279373 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 20 00:53:55.294092 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 20 00:53:55.306324 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 20 00:53:55.307851 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 20 00:53:55.328229 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 20 00:53:55.330597 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 20 00:53:55.348514 disk-uuid[570]: Primary Header is updated.
Jan 20 00:53:55.348514 disk-uuid[570]: Secondary Entries is updated.
Jan 20 00:53:55.348514 disk-uuid[570]: Secondary Header is updated.
Jan 20 00:53:55.355778 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 20 00:53:55.358101 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 20 00:53:55.370572 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 20 00:53:55.466079 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jan 20 00:53:55.466130 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jan 20 00:53:55.466142 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jan 20 00:53:55.467098 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jan 20 00:53:55.469120 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jan 20 00:53:55.471096 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jan 20 00:53:55.473139 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 20 00:53:55.475878 kernel: ata3.00: applying bridge limits
Jan 20 00:53:55.478130 kernel: ata3.00: configured for UDMA/100
Jan 20 00:53:55.481141 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jan 20 00:53:55.522459 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 20 00:53:55.522723 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 20 00:53:55.535120 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Jan 20 00:53:56.361132 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 20 00:53:56.361535 disk-uuid[572]: The operation has completed successfully.
Jan 20 00:53:56.391705 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 20 00:53:56.391865 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 20 00:53:56.414296 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 20 00:53:56.420384 sh[594]: Success
Jan 20 00:53:56.436098 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Jan 20 00:53:56.474986 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 20 00:53:56.490661 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 20 00:53:56.497688 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 20 00:53:56.510400 kernel: BTRFS info (device dm-0): first mount of filesystem ea39c6ab-04c2-4917-8268-943d4ecb2b5c
Jan 20 00:53:56.510445 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 20 00:53:56.510462 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 20 00:53:56.513132 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 20 00:53:56.515089 kernel: BTRFS info (device dm-0): using free space tree
Jan 20 00:53:56.523493 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 20 00:53:56.525409 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 20 00:53:56.544200 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 20 00:53:56.546354 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 20 00:53:56.569878 kernel: BTRFS info (device vda6): first mount of filesystem 4d38e730-f67b-44a8-80fa-82ea4ebb2132
Jan 20 00:53:56.569911 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 20 00:53:56.569923 kernel: BTRFS info (device vda6): using free space tree
Jan 20 00:53:56.576204 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 20 00:53:56.587526 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 20 00:53:56.594123 kernel: BTRFS info (device vda6): last unmount of filesystem 4d38e730-f67b-44a8-80fa-82ea4ebb2132
Jan 20 00:53:56.603181 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 20 00:53:56.614278 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 20 00:53:56.669732 ignition[704]: Ignition 2.19.0
Jan 20 00:53:56.669774 ignition[704]: Stage: fetch-offline
Jan 20 00:53:56.669837 ignition[704]: no configs at "/usr/lib/ignition/base.d"
Jan 20 00:53:56.669854 ignition[704]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 20 00:53:56.670043 ignition[704]: parsed url from cmdline: ""
Jan 20 00:53:56.670122 ignition[704]: no config URL provided
Jan 20 00:53:56.670132 ignition[704]: reading system config file "/usr/lib/ignition/user.ign"
Jan 20 00:53:56.670149 ignition[704]: no config at "/usr/lib/ignition/user.ign"
Jan 20 00:53:56.670186 ignition[704]: op(1): [started] loading QEMU firmware config module
Jan 20 00:53:56.685260 unknown[704]: fetched base config from "system"
Jan 20 00:53:56.670195 ignition[704]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jan 20 00:53:56.685268 unknown[704]: fetched user config from "qemu"
Jan 20 00:53:56.682186 ignition[704]: op(1): [finished] loading QEMU firmware config module
Jan 20 00:53:56.688737 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 20 00:53:56.682947 ignition[704]: parsing config with SHA512: 5e14f97f5486b593abaa6d1775e938bc2235e6a113f607367ef950fdbedc8c0687e74bd26a363b9bb458e037de5342102d6cbee813a05feb26b17d3c256c885d
Jan 20 00:53:56.685491 ignition[704]: fetch-offline: fetch-offline passed
Jan 20 00:53:56.685560 ignition[704]: Ignition finished successfully
Jan 20 00:53:56.725179 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 20 00:53:56.737322 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 20 00:53:56.761923 systemd-networkd[784]: lo: Link UP
Jan 20 00:53:56.761979 systemd-networkd[784]: lo: Gained carrier
Jan 20 00:53:56.763671 systemd-networkd[784]: Enumeration completed
Jan 20 00:53:56.763813 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 20 00:53:56.764510 systemd-networkd[784]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 20 00:53:56.764514 systemd-networkd[784]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 20 00:53:56.765552 systemd-networkd[784]: eth0: Link UP
Jan 20 00:53:56.765556 systemd-networkd[784]: eth0: Gained carrier
Jan 20 00:53:56.765563 systemd-networkd[784]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 20 00:53:56.767326 systemd[1]: Reached target network.target - Network.
Jan 20 00:53:56.771686 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jan 20 00:53:56.792241 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 20 00:53:56.804125 systemd-networkd[784]: eth0: DHCPv4 address 10.0.0.161/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 20 00:53:56.810305 ignition[786]: Ignition 2.19.0
Jan 20 00:53:56.810336 ignition[786]: Stage: kargs
Jan 20 00:53:56.810494 ignition[786]: no configs at "/usr/lib/ignition/base.d"
Jan 20 00:53:56.813864 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 20 00:53:56.810506 ignition[786]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 20 00:53:56.811173 ignition[786]: kargs: kargs passed
Jan 20 00:53:56.811213 ignition[786]: Ignition finished successfully
Jan 20 00:53:56.826233 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 20 00:53:56.841281 ignition[795]: Ignition 2.19.0
Jan 20 00:53:56.841303 ignition[795]: Stage: disks
Jan 20 00:53:56.841455 ignition[795]: no configs at "/usr/lib/ignition/base.d"
Jan 20 00:53:56.841466 ignition[795]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 20 00:53:56.841985 ignition[795]: disks: disks passed
Jan 20 00:53:56.842033 ignition[795]: Ignition finished successfully
Jan 20 00:53:56.854130 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 20 00:53:56.859205 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 20 00:53:56.860621 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 20 00:53:56.865882 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 20 00:53:56.871924 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 20 00:53:56.876845 systemd[1]: Reached target basic.target - Basic System.
Jan 20 00:53:56.900458 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 20 00:53:56.921669 systemd-fsck[806]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 20 00:53:56.928438 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 20 00:53:56.954264 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 20 00:53:57.052112 kernel: EXT4-fs (vda9): mounted filesystem 3f4cac35-b37d-4410-a45a-1329edafa0f9 r/w with ordered data mode. Quota mode: none.
Jan 20 00:53:57.052459 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 20 00:53:57.054121 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 20 00:53:57.065230 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 20 00:53:57.070726 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 20 00:53:57.072148 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 20 00:53:57.072188 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 20 00:53:57.090204 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (814)
Jan 20 00:53:57.072210 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 20 00:53:57.099854 kernel: BTRFS info (device vda6): first mount of filesystem 4d38e730-f67b-44a8-80fa-82ea4ebb2132
Jan 20 00:53:57.099876 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 20 00:53:57.099886 kernel: BTRFS info (device vda6): using free space tree
Jan 20 00:53:57.105271 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 20 00:53:57.104670 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 20 00:53:57.106715 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 20 00:53:57.116101 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 20 00:53:57.155613 initrd-setup-root[838]: cut: /sysroot/etc/passwd: No such file or directory
Jan 20 00:53:57.162492 initrd-setup-root[845]: cut: /sysroot/etc/group: No such file or directory
Jan 20 00:53:57.169921 initrd-setup-root[852]: cut: /sysroot/etc/shadow: No such file or directory
Jan 20 00:53:57.176789 initrd-setup-root[859]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 20 00:53:57.279889 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 20 00:53:57.290273 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 20 00:53:57.295390 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 20 00:53:57.304614 kernel: BTRFS info (device vda6): last unmount of filesystem 4d38e730-f67b-44a8-80fa-82ea4ebb2132
Jan 20 00:53:57.320810 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 20 00:53:57.328667 ignition[928]: INFO : Ignition 2.19.0
Jan 20 00:53:57.328667 ignition[928]: INFO : Stage: mount
Jan 20 00:53:57.332323 ignition[928]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 20 00:53:57.332323 ignition[928]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 20 00:53:57.332323 ignition[928]: INFO : mount: mount passed
Jan 20 00:53:57.332323 ignition[928]: INFO : Ignition finished successfully
Jan 20 00:53:57.335427 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 20 00:53:57.348311 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 20 00:53:57.506694 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 20 00:53:57.520316 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 20 00:53:57.530113 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (940)
Jan 20 00:53:57.535623 kernel: BTRFS info (device vda6): first mount of filesystem 4d38e730-f67b-44a8-80fa-82ea4ebb2132
Jan 20 00:53:57.535648 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 20 00:53:57.535659 kernel: BTRFS info (device vda6): using free space tree
Jan 20 00:53:57.543121 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 20 00:53:57.544644 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 20 00:53:57.570987 ignition[957]: INFO : Ignition 2.19.0
Jan 20 00:53:57.570987 ignition[957]: INFO : Stage: files
Jan 20 00:53:57.575447 ignition[957]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 20 00:53:57.575447 ignition[957]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 20 00:53:57.575447 ignition[957]: DEBUG : files: compiled without relabeling support, skipping
Jan 20 00:53:57.575447 ignition[957]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 20 00:53:57.575447 ignition[957]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 20 00:53:57.591864 ignition[957]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 20 00:53:57.591864 ignition[957]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 20 00:53:57.591864 ignition[957]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 20 00:53:57.591864 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh"
Jan 20 00:53:57.591864 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh"
Jan 20 00:53:57.591864 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 20 00:53:57.591864 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 20 00:53:57.591864 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jan 20 00:53:57.591864 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jan 20 00:53:57.591864 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jan 20 00:53:57.591864 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Jan 20 00:53:57.579416 unknown[957]: wrote ssh authorized keys file for user: core
Jan 20 00:53:57.795142 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Jan 20 00:53:57.935286 systemd-networkd[784]: eth0: Gained IPv6LL
Jan 20 00:53:58.138483 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jan 20 00:53:58.138483 ignition[957]: INFO : files: op(7): [started] processing unit "coreos-metadata.service"
Jan 20 00:53:58.147981 ignition[957]: INFO : files: op(7): op(8): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 20 00:53:58.147981 ignition[957]: INFO : files: op(7): op(8): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 20 00:53:58.147981 ignition[957]: INFO : files: op(7): [finished] processing unit "coreos-metadata.service"
Jan 20 00:53:58.147981 ignition[957]: INFO : files: op(9): [started] setting preset to disabled for "coreos-metadata.service"
Jan 20 00:53:58.166493 ignition[957]: INFO : files: op(9): op(a): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jan 20 00:53:58.170413 ignition[957]: INFO : files: op(9): op(a): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jan 20 00:53:58.170413 ignition[957]: INFO : files: op(9): [finished] setting preset to disabled for "coreos-metadata.service"
Jan 20 00:53:58.170413 ignition[957]: INFO : files: createResultFile: createFiles: op(b): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 20 00:53:58.170413 ignition[957]: INFO : files: createResultFile: createFiles: op(b): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 20 00:53:58.170413 ignition[957]: INFO : files: files passed
Jan 20 00:53:58.170413 ignition[957]: INFO : Ignition finished successfully
Jan 20 00:53:58.171572 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 20 00:53:58.196215 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 20 00:53:58.199864 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 20 00:53:58.205408 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 20 00:53:58.205543 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 20 00:53:58.216792 initrd-setup-root-after-ignition[985]: grep: /sysroot/oem/oem-release: No such file or directory
Jan 20 00:53:58.220321 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 20 00:53:58.220321 initrd-setup-root-after-ignition[988]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 20 00:53:58.217404 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 20 00:53:58.237359 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 20 00:53:58.223887 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 20 00:53:58.248289 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 20 00:53:58.271343 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 20 00:53:58.271493 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 20 00:53:58.277179 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 20 00:53:58.282833 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 20 00:53:58.283841 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 20 00:53:58.296263 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 20 00:53:58.315297 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 20 00:53:58.333222 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 20 00:53:58.342621 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 20 00:53:58.345819 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 20 00:53:58.351628 systemd[1]: Stopped target timers.target - Timer Units.
Jan 20 00:53:58.356785 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 20 00:53:58.356906 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 20 00:53:58.362818 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 20 00:53:58.367407 systemd[1]: Stopped target basic.target - Basic System.
Jan 20 00:53:58.372639 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 20 00:53:58.378037 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 20 00:53:58.383389 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 20 00:53:58.389014 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 20 00:53:58.394648 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 20 00:53:58.400545 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 20 00:53:58.405765 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 20 00:53:58.411551 systemd[1]: Stopped target swap.target - Swaps.
Jan 20 00:53:58.416216 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 20 00:53:58.416315 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 20 00:53:58.421934 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 20 00:53:58.426205 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 20 00:53:58.431614 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 20 00:53:58.431867 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 20 00:53:58.437515 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 20 00:53:58.437636 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 20 00:53:58.443494 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 20 00:53:58.443617 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 20 00:53:58.449016 systemd[1]: Stopped target paths.target - Path Units.
Jan 20 00:53:58.453623 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 20 00:53:58.457259 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 20 00:53:58.462850 systemd[1]: Stopped target slices.target - Slice Units.
Jan 20 00:53:58.467769 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 20 00:53:58.472834 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 20 00:53:58.472950 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 20 00:53:58.478266 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 20 00:53:58.478370 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 20 00:53:58.483018 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 20 00:53:58.483187 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 20 00:53:58.488873 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 20 00:53:58.529011 ignition[1012]: INFO : Ignition 2.19.0
Jan 20 00:53:58.529011 ignition[1012]: INFO : Stage: umount
Jan 20 00:53:58.529011 ignition[1012]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 20 00:53:58.529011 ignition[1012]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 20 00:53:58.529011 ignition[1012]: INFO : umount: umount passed
Jan 20 00:53:58.529011 ignition[1012]: INFO : Ignition finished successfully
Jan 20 00:53:58.489017 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 20 00:53:58.509259 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 20 00:53:58.512357 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 20 00:53:58.516840 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 20 00:53:58.517343 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 20 00:53:58.523409 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 20 00:53:58.523508 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 20 00:53:58.530887 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 20 00:53:58.531043 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 20 00:53:58.535654 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 20 00:53:58.535780 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 20 00:53:58.543257 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 20 00:53:58.544031 systemd[1]: Stopped target network.target - Network.
Jan 20 00:53:58.548914 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 20 00:53:58.549012 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 20 00:53:58.554246 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 20 00:53:58.554299 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 20 00:53:58.560308 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 20 00:53:58.560383 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 20 00:53:58.565329 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 20 00:53:58.565408 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 20 00:53:58.568640 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 20 00:53:58.573614 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 20 00:53:58.582473 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 20 00:53:58.582697 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 20 00:53:58.584149 systemd-networkd[784]: eth0: DHCPv6 lease lost
Jan 20 00:53:58.588867 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 20 00:53:58.589105 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 20 00:53:58.594696 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 20 00:53:58.594763 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 20 00:53:58.615187 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 20 00:53:58.619261 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 20 00:53:58.619320 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 20 00:53:58.625110 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 20 00:53:58.625163 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 20 00:53:58.627883 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 20 00:53:58.627930 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 20 00:53:58.632949 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 20 00:53:58.633035 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 20 00:53:58.636379 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 20 00:53:58.642179 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 20 00:53:58.642282 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 20 00:53:58.661364 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 20 00:53:58.661448 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 20 00:53:58.666006 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 20 00:53:58.666230 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 20 00:53:58.671824 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 20 00:53:58.671984 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 20 00:53:58.676695 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 20 00:53:58.676759 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 20 00:53:58.681422 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 20 00:53:58.681464 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 20 00:53:58.686887 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 20 00:53:58.686946 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 20 00:53:58.692181 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 20 00:53:58.692231 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 20 00:53:58.697565 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 20 00:53:58.697615 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 20 00:53:58.717333 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 20 00:53:58.722436 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 20 00:53:58.722497 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 20 00:53:58.728242 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 20 00:53:58.728295 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 20 00:53:58.822244 systemd-journald[194]: Received SIGTERM from PID 1 (systemd).
Jan 20 00:53:58.734539 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 20 00:53:58.734592 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 20 00:53:58.737899 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 20 00:53:58.737947 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 20 00:53:58.744543 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 20 00:53:58.744668 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 20 00:53:58.749941 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 20 00:53:58.774223 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 20 00:53:58.781309 systemd[1]: Switching root.
Jan 20 00:53:58.849299 systemd-journald[194]: Journal stopped
Jan 20 00:53:59.946898 kernel: SELinux: policy capability network_peer_controls=1
Jan 20 00:53:59.947022 kernel: SELinux: policy capability open_perms=1
Jan 20 00:53:59.947040 kernel: SELinux: policy capability extended_socket_class=1
Jan 20 00:53:59.947091 kernel: SELinux: policy capability always_check_network=0
Jan 20 00:53:59.947103 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 20 00:53:59.947113 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 20 00:53:59.947123 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 20 00:53:59.947133 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 20 00:53:59.947149 kernel: audit: type=1403 audit(1768870438.973:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 20 00:53:59.947165 systemd[1]: Successfully loaded SELinux policy in 44.504ms.
Jan 20 00:53:59.947182 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.315ms.
Jan 20 00:53:59.947197 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 20 00:53:59.947209 systemd[1]: Detected virtualization kvm.
Jan 20 00:53:59.947219 systemd[1]: Detected architecture x86-64.
Jan 20 00:53:59.947230 systemd[1]: Detected first boot.
Jan 20 00:53:59.947241 systemd[1]: Initializing machine ID from VM UUID.
Jan 20 00:53:59.947259 zram_generator::config[1059]: No configuration found.
Jan 20 00:53:59.947283 systemd[1]: Populated /etc with preset unit settings.
Jan 20 00:53:59.947308 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 20 00:53:59.947327 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 20 00:53:59.947346 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 20 00:53:59.947365 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 20 00:53:59.947388 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 20 00:53:59.947407 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 20 00:53:59.947418 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 20 00:53:59.947433 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 20 00:53:59.947443 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 20 00:53:59.947454 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 20 00:53:59.947465 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 20 00:53:59.947475 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 20 00:53:59.947486 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 20 00:53:59.947497 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 20 00:53:59.947507 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 20 00:53:59.947521 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 20 00:53:59.947531 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 20 00:53:59.947542 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 20 00:53:59.947553 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 20 00:53:59.947564 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 20 00:53:59.947574 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 20 00:53:59.947585 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 20 00:53:59.947595 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 20 00:53:59.947608 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 20 00:53:59.947619 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 20 00:53:59.947631 systemd[1]: Reached target slices.target - Slice Units.
Jan 20 00:53:59.947642 systemd[1]: Reached target swap.target - Swaps.
Jan 20 00:53:59.947653 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 20 00:53:59.947663 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 20 00:53:59.947674 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 20 00:53:59.947685 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 20 00:53:59.947695 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 20 00:53:59.947708 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 20 00:53:59.947719 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 20 00:53:59.947730 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 20 00:53:59.947740 systemd[1]: Mounting media.mount - External Media Directory...
Jan 20 00:53:59.947751 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 20 00:53:59.947762 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 20 00:53:59.947772 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 20 00:53:59.947788 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 20 00:53:59.947799 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 20 00:53:59.947812 systemd[1]: Reached target machines.target - Containers.
Jan 20 00:53:59.947823 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 20 00:53:59.947834 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 20 00:53:59.947844 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 20 00:53:59.947855 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 20 00:53:59.947867 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 20 00:53:59.947877 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 20 00:53:59.947888 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 20 00:53:59.947901 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 20 00:53:59.947911 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 20 00:53:59.947922 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 20 00:53:59.947933 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 20 00:53:59.947944 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 20 00:53:59.947987 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 20 00:53:59.948007 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 20 00:53:59.948023 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 20 00:53:59.948124 kernel: ACPI: bus type drm_connector registered
Jan 20 00:53:59.948142 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 20 00:53:59.948154 kernel: fuse: init (API version 7.39)
Jan 20 00:53:59.948164 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 20 00:53:59.948175 kernel: loop: module loaded
Jan 20 00:53:59.948206 systemd-journald[1143]: Collecting audit messages is disabled.
Jan 20 00:53:59.948228 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 20 00:53:59.948242 systemd-journald[1143]: Journal started
Jan 20 00:53:59.948260 systemd-journald[1143]: Runtime Journal (/run/log/journal/64143243e2ec417e9c5ebeeb892002ca) is 6.0M, max 48.4M, 42.3M free.
Jan 20 00:53:59.536329 systemd[1]: Queued start job for default target multi-user.target.
Jan 20 00:53:59.556785 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 20 00:53:59.557413 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 20 00:53:59.557738 systemd[1]: systemd-journald.service: Consumed 1.298s CPU time.
Jan 20 00:53:59.956630 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 20 00:53:59.962424 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 20 00:53:59.962490 systemd[1]: Stopped verity-setup.service.
Jan 20 00:53:59.970100 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 20 00:53:59.975199 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 20 00:53:59.978331 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 20 00:53:59.981152 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 20 00:53:59.984003 systemd[1]: Mounted media.mount - External Media Directory.
Jan 20 00:53:59.986581 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 20 00:53:59.989438 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 20 00:53:59.992313 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 20 00:53:59.995127 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 20 00:53:59.998465 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 20 00:54:00.001995 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 20 00:54:00.002236 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 20 00:54:00.005755 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 20 00:54:00.005980 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 20 00:54:00.009289 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 20 00:54:00.009487 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 20 00:54:00.012558 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 20 00:54:00.012782 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 20 00:54:00.016506 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 20 00:54:00.016803 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 20 00:54:00.019836 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 20 00:54:00.020099 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 20 00:54:00.023230 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 20 00:54:00.026526 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 20 00:54:00.030128 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 20 00:54:00.047374 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 20 00:54:00.057221 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 20 00:54:00.061322 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 20 00:54:00.064137 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 20 00:54:00.064186 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 20 00:54:00.068433 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 20 00:54:00.072777 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 20 00:54:00.076891 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 20 00:54:00.080128 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 20 00:54:00.081766 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 20 00:54:00.086095 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 20 00:54:00.089690 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 20 00:54:00.094990 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 20 00:54:00.098300 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 20 00:54:00.099944 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 20 00:54:00.108175 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 20 00:54:00.114601 systemd-journald[1143]: Time spent on flushing to /var/log/journal/64143243e2ec417e9c5ebeeb892002ca is 13.622ms for 926 entries.
Jan 20 00:54:00.114601 systemd-journald[1143]: System Journal (/var/log/journal/64143243e2ec417e9c5ebeeb892002ca) is 8.0M, max 195.6M, 187.6M free.
Jan 20 00:54:00.164681 systemd-journald[1143]: Received client request to flush runtime journal.
Jan 20 00:54:00.164736 kernel: loop0: detected capacity change from 0 to 140768
Jan 20 00:54:00.115218 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 20 00:54:00.124894 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 20 00:54:00.128547 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 20 00:54:00.134777 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 20 00:54:00.138949 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 20 00:54:00.143442 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 20 00:54:00.147729 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 20 00:54:00.155433 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 20 00:54:00.165356 systemd-tmpfiles[1175]: ACLs are not supported, ignoring.
Jan 20 00:54:00.172916 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 20 00:54:00.165368 systemd-tmpfiles[1175]: ACLs are not supported, ignoring.
Jan 20 00:54:00.172373 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 20 00:54:00.177996 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 20 00:54:00.183560 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 20 00:54:00.189168 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 20 00:54:00.199702 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 20 00:54:00.206141 kernel: loop1: detected capacity change from 0 to 142488
Jan 20 00:54:00.207454 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 20 00:54:00.211330 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 20 00:54:00.221440 udevadm[1188]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jan 20 00:54:00.246831 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 20 00:54:00.262505 kernel: loop2: detected capacity change from 0 to 229808
Jan 20 00:54:00.257279 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 20 00:54:00.278617 systemd-tmpfiles[1196]: ACLs are not supported, ignoring.
Jan 20 00:54:00.278635 systemd-tmpfiles[1196]: ACLs are not supported, ignoring.
Jan 20 00:54:00.284855 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 20 00:54:00.309118 kernel: loop3: detected capacity change from 0 to 140768
Jan 20 00:54:00.324097 kernel: loop4: detected capacity change from 0 to 142488
Jan 20 00:54:00.343098 kernel: loop5: detected capacity change from 0 to 229808
Jan 20 00:54:00.350844 (sd-merge)[1200]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jan 20 00:54:00.353086 (sd-merge)[1200]: Merged extensions into '/usr'.
Jan 20 00:54:00.357008 systemd[1]: Reloading requested from client PID 1173 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 20 00:54:00.357154 systemd[1]: Reloading...
Jan 20 00:54:00.409171 zram_generator::config[1225]: No configuration found.
Jan 20 00:54:00.460773 ldconfig[1168]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 20 00:54:00.520861 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 20 00:54:00.562373 systemd[1]: Reloading finished in 204 ms.
Jan 20 00:54:00.592947 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 20 00:54:00.596314 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 20 00:54:00.599637 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 20 00:54:00.621338 systemd[1]: Starting ensure-sysext.service...
Jan 20 00:54:00.624486 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 20 00:54:00.628745 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 20 00:54:00.646257 systemd[1]: Reloading requested from client PID 1264 ('systemctl') (unit ensure-sysext.service)...
Jan 20 00:54:00.646270 systemd[1]: Reloading...
Jan 20 00:54:00.647032 systemd-tmpfiles[1265]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 20 00:54:00.647430 systemd-tmpfiles[1265]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 20 00:54:00.648401 systemd-tmpfiles[1265]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 20 00:54:00.648676 systemd-tmpfiles[1265]: ACLs are not supported, ignoring.
Jan 20 00:54:00.648774 systemd-tmpfiles[1265]: ACLs are not supported, ignoring.
Jan 20 00:54:00.652306 systemd-tmpfiles[1265]: Detected autofs mount point /boot during canonicalization of boot.
Jan 20 00:54:00.652329 systemd-tmpfiles[1265]: Skipping /boot
Jan 20 00:54:00.663520 systemd-tmpfiles[1265]: Detected autofs mount point /boot during canonicalization of boot.
Jan 20 00:54:00.663550 systemd-tmpfiles[1265]: Skipping /boot
Jan 20 00:54:00.670661 systemd-udevd[1266]: Using default interface naming scheme 'v255'.
Jan 20 00:54:00.703098 zram_generator::config[1292]: No configuration found.
Jan 20 00:54:00.763122 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1316)
Jan 20 00:54:00.810100 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Jan 20 00:54:00.816999 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 20 00:54:00.818134 kernel: ACPI: button: Power Button [PWRF]
Jan 20 00:54:00.836096 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Jan 20 00:54:00.885840 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Jan 20 00:54:00.894743 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Jan 20 00:54:00.894952 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jan 20 00:54:00.935277 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 20 00:54:00.940420 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jan 20 00:54:00.941558 systemd[1]: Reloading finished in 294 ms.
Jan 20 00:54:00.970098 kernel: mousedev: PS/2 mouse device common for all mice
Jan 20 00:54:00.989887 kernel: kvm_amd: TSC scaling supported
Jan 20 00:54:00.989938 kernel: kvm_amd: Nested Virtualization enabled
Jan 20 00:54:00.989982 kernel: kvm_amd: Nested Paging enabled
Jan 20 00:54:00.992649 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Jan 20 00:54:00.992673 kernel: kvm_amd: PMU virtualization is disabled
Jan 20 00:54:00.994568 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 20 00:54:01.030839 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 20 00:54:01.040125 kernel: EDAC MC: Ver: 3.0.0
Jan 20 00:54:01.053023 systemd[1]: Finished ensure-sysext.service.
Jan 20 00:54:01.069137 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 20 00:54:01.078229 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 20 00:54:01.089238 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 20 00:54:01.093935 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 20 00:54:01.097175 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 20 00:54:01.098294 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 20 00:54:01.104526 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 20 00:54:01.108207 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 20 00:54:01.115857 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 20 00:54:01.117769 lvm[1367]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 20 00:54:01.122267 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 20 00:54:01.127843 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 20 00:54:01.130341 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 20 00:54:01.137036 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 20 00:54:01.147390 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 20 00:54:01.153304 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 20 00:54:01.158267 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 20 00:54:01.162264 augenrules[1388]: No rules
Jan 20 00:54:01.163283 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 20 00:54:01.168315 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 20 00:54:01.171199 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 20 00:54:01.172494 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 20 00:54:01.175668 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 20 00:54:01.179578 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 20 00:54:01.179741 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 20 00:54:01.183415 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 20 00:54:01.183660 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 20 00:54:01.186730 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 20 00:54:01.186936 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 20 00:54:01.190561 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 20 00:54:01.190799 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 20 00:54:01.192151 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 20 00:54:01.199859 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 20 00:54:01.208360 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 20 00:54:01.209818 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 20 00:54:01.209935 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 20 00:54:01.211814 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 20 00:54:01.214592 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 20 00:54:01.217400 lvm[1407]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 20 00:54:01.220450 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 20 00:54:01.229350 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 20 00:54:01.231229 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 20 00:54:01.232112 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 20 00:54:01.246941 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 20 00:54:01.249324 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 20 00:54:01.262456 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 20 00:54:01.325664 systemd-networkd[1386]: lo: Link UP
Jan 20 00:54:01.325695 systemd-networkd[1386]: lo: Gained carrier
Jan 20 00:54:01.327481 systemd-networkd[1386]: Enumeration completed
Jan 20 00:54:01.328285 systemd-networkd[1386]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 20 00:54:01.328315 systemd-networkd[1386]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 20 00:54:01.329234 systemd-networkd[1386]: eth0: Link UP
Jan 20 00:54:01.329259 systemd-networkd[1386]: eth0: Gained carrier
Jan 20 00:54:01.329270 systemd-networkd[1386]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 20 00:54:01.329763 systemd-resolved[1390]: Positive Trust Anchors:
Jan 20 00:54:01.329801 systemd-resolved[1390]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 20 00:54:01.329829 systemd-resolved[1390]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 20 00:54:01.333647 systemd-resolved[1390]: Defaulting to hostname 'linux'.
Jan 20 00:54:01.344110 systemd-networkd[1386]: eth0: DHCPv4 address 10.0.0.161/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 20 00:54:01.940037 systemd-resolved[1390]: Clock change detected. Flushing caches.
Jan 20 00:54:01.940076 systemd-timesyncd[1391]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jan 20 00:54:01.940130 systemd-timesyncd[1391]: Initial clock synchronization to Tue 2026-01-20 00:54:01.939982 UTC.
Jan 20 00:54:01.966023 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 20 00:54:01.966620 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 20 00:54:01.967455 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 20 00:54:01.968414 systemd[1]: Reached target network.target - Network.
Jan 20 00:54:01.968716 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 20 00:54:01.969109 systemd[1]: Reached target time-set.target - System Time Set.
Jan 20 00:54:01.992861 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 20 00:54:01.996220 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 20 00:54:01.999967 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 20 00:54:02.002883 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 20 00:54:02.006035 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 20 00:54:02.009477 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 20 00:54:02.012312 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 20 00:54:02.015823 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 20 00:54:02.019064 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 20 00:54:02.019111 systemd[1]: Reached target paths.target - Path Units.
Jan 20 00:54:02.021393 systemd[1]: Reached target timers.target - Timer Units.
Jan 20 00:54:02.024335 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 20 00:54:02.029043 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 20 00:54:02.036045 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 20 00:54:02.039497 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 20 00:54:02.042462 systemd[1]: Reached target sockets.target - Socket Units.
Jan 20 00:54:02.045049 systemd[1]: Reached target basic.target - Basic System.
Jan 20 00:54:02.047524 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 20 00:54:02.047578 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 20 00:54:02.048944 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 20 00:54:02.052881 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 20 00:54:02.056551 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 20 00:54:02.060735 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 20 00:54:02.063501 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 20 00:54:02.065871 jq[1432]: false
Jan 20 00:54:02.066301 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 20 00:54:02.072490 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 20 00:54:02.078837 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 20 00:54:02.080459 dbus-daemon[1431]: [system] SELinux support is enabled
Jan 20 00:54:02.082509 extend-filesystems[1433]: Found loop3
Jan 20 00:54:02.084664 extend-filesystems[1433]: Found loop4
Jan 20 00:54:02.086729 extend-filesystems[1433]: Found loop5
Jan 20 00:54:02.086729 extend-filesystems[1433]: Found sr0
Jan 20 00:54:02.086729 extend-filesystems[1433]: Found vda
Jan 20 00:54:02.086729 extend-filesystems[1433]: Found vda1
Jan 20 00:54:02.086729 extend-filesystems[1433]: Found vda2
Jan 20 00:54:02.086729 extend-filesystems[1433]: Found vda3
Jan 20 00:54:02.086729 extend-filesystems[1433]: Found usr
Jan 20 00:54:02.086729 extend-filesystems[1433]: Found vda4
Jan 20 00:54:02.086729 extend-filesystems[1433]: Found vda6
Jan 20 00:54:02.086729 extend-filesystems[1433]: Found vda7
Jan 20 00:54:02.086729 extend-filesystems[1433]: Found vda9
Jan 20 00:54:02.086729 extend-filesystems[1433]: Checking size of /dev/vda9
Jan 20 00:54:02.140323 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1324)
Jan 20 00:54:02.140350 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Jan 20 00:54:02.140364 extend-filesystems[1433]: Resized partition /dev/vda9
Jan 20 00:54:02.100904 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 20 00:54:02.143572 extend-filesystems[1452]: resize2fs 1.47.1 (20-May-2024)
Jan 20 00:54:02.105984 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 20 00:54:02.153995 update_engine[1451]: I20260120 00:54:02.150869 1451 main.cc:92] Flatcar Update Engine starting
Jan 20 00:54:02.153995 update_engine[1451]: I20260120 00:54:02.153162 1451 update_check_scheduler.cc:74] Next update check in 10m51s
Jan 20 00:54:02.106444 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 20 00:54:02.108451 systemd[1]: Starting update-engine.service - Update Engine...
Jan 20 00:54:02.154440 jq[1453]: true
Jan 20 00:54:02.121801 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 20 00:54:02.128300 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 20 00:54:02.146096 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 20 00:54:02.146290 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 20 00:54:02.146660 systemd[1]: motdgen.service: Deactivated successfully.
Jan 20 00:54:02.146879 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 20 00:54:02.152399 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 20 00:54:02.152622 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 20 00:54:02.170722 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Jan 20 00:54:02.179073 (ntainerd)[1457]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 20 00:54:02.191559 jq[1456]: true
Jan 20 00:54:02.191787 systemd-logind[1448]: Watching system buttons on /dev/input/event1 (Power Button)
Jan 20 00:54:02.191810 systemd-logind[1448]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jan 20 00:54:02.192642 systemd[1]: Started update-engine.service - Update Engine.
Jan 20 00:54:02.200186 extend-filesystems[1452]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jan 20 00:54:02.200186 extend-filesystems[1452]: old_desc_blocks = 1, new_desc_blocks = 1
Jan 20 00:54:02.200186 extend-filesystems[1452]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Jan 20 00:54:02.196763 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 20 00:54:02.221655 extend-filesystems[1433]: Resized filesystem in /dev/vda9
Jan 20 00:54:02.196789 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 20 00:54:02.224383 bash[1479]: Updated "/home/core/.ssh/authorized_keys"
Jan 20 00:54:02.200091 systemd-logind[1448]: New seat seat0.
Jan 20 00:54:02.205237 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 20 00:54:02.205257 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 20 00:54:02.225897 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 20 00:54:02.229249 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 20 00:54:02.232650 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 20 00:54:02.232995 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 20 00:54:02.236860 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 20 00:54:02.243045 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jan 20 00:54:02.248372 locksmithd[1480]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 20 00:54:02.352860 containerd[1457]: time="2026-01-20T00:54:02.352616388Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Jan 20 00:54:02.371096 containerd[1457]: time="2026-01-20T00:54:02.371006369Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jan 20 00:54:02.373320 containerd[1457]: time="2026-01-20T00:54:02.373270146Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.119-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jan 20 00:54:02.373320 containerd[1457]: time="2026-01-20T00:54:02.373308899Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jan 20 00:54:02.373389 containerd[1457]: time="2026-01-20T00:54:02.373323205Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jan 20 00:54:02.373521 containerd[1457]: time="2026-01-20T00:54:02.373474818Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jan 20 00:54:02.373521 containerd[1457]: time="2026-01-20T00:54:02.373511948Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jan 20 00:54:02.373639 containerd[1457]: time="2026-01-20T00:54:02.373581478Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jan 20 00:54:02.373639 containerd[1457]: time="2026-01-20T00:54:02.373636701Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jan 20 00:54:02.373903 containerd[1457]: time="2026-01-20T00:54:02.373857333Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 20 00:54:02.373903 containerd[1457]: time="2026-01-20T00:54:02.373889493Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jan 20 00:54:02.373946 containerd[1457]: time="2026-01-20T00:54:02.373905903Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jan 20 00:54:02.373946 containerd[1457]: time="2026-01-20T00:54:02.373914780Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jan 20 00:54:02.374038 containerd[1457]: time="2026-01-20T00:54:02.374008786Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jan 20 00:54:02.374298 containerd[1457]: time="2026-01-20T00:54:02.374250798Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jan 20 00:54:02.374411 containerd[1457]: time="2026-01-20T00:54:02.374382353Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 20 00:54:02.374433 containerd[1457]: time="2026-01-20T00:54:02.374409804Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jan 20 00:54:02.374549 containerd[1457]: time="2026-01-20T00:54:02.374520862Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jan 20 00:54:02.374644 containerd[1457]: time="2026-01-20T00:54:02.374618474Z" level=info msg="metadata content store policy set" policy=shared
Jan 20 00:54:02.380666 containerd[1457]: time="2026-01-20T00:54:02.380575979Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jan 20 00:54:02.380746 containerd[1457]: time="2026-01-20T00:54:02.380664283Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jan 20 00:54:02.380746 containerd[1457]: time="2026-01-20T00:54:02.380716872Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jan 20 00:54:02.380746 containerd[1457]: time="2026-01-20T00:54:02.380731910Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jan 20 00:54:02.380746 containerd[1457]: time="2026-01-20T00:54:02.380743251Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jan 20 00:54:02.380933 containerd[1457]: time="2026-01-20T00:54:02.380855391Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jan 20 00:54:02.381237 containerd[1457]: time="2026-01-20T00:54:02.381149433Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jan 20 00:54:02.382041 containerd[1457]: time="2026-01-20T00:54:02.381504416Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jan 20 00:54:02.382041 containerd[1457]: time="2026-01-20T00:54:02.381536646Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jan 20 00:54:02.382041 containerd[1457]: time="2026-01-20T00:54:02.381561783Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jan 20 00:54:02.382041 containerd[1457]: time="2026-01-20T00:54:02.382028791Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jan 20 00:54:02.382277 containerd[1457]: time="2026-01-20T00:54:02.382244425Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jan 20 00:54:02.382365 containerd[1457]: time="2026-01-20T00:54:02.382279330Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jan 20 00:54:02.382543 containerd[1457]: time="2026-01-20T00:54:02.382522214Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jan 20 00:54:02.382737 containerd[1457]: time="2026-01-20T00:54:02.382640816Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jan 20 00:54:02.382737 containerd[1457]: time="2026-01-20T00:54:02.382715205Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jan 20 00:54:02.382737 containerd[1457]: time="2026-01-20T00:54:02.382733950Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jan 20 00:54:02.382802 containerd[1457]: time="2026-01-20T00:54:02.382745190Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jan 20 00:54:02.382802 containerd[1457]: time="2026-01-20T00:54:02.382765589Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jan 20 00:54:02.382802 containerd[1457]: time="2026-01-20T00:54:02.382778543Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jan 20 00:54:02.382802 containerd[1457]: time="2026-01-20T00:54:02.382790505Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jan 20 00:54:02.382802 containerd[1457]: time="2026-01-20T00:54:02.382802247Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jan 20 00:54:02.382893 containerd[1457]: time="2026-01-20T00:54:02.382815482Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jan 20 00:54:02.382893 containerd[1457]: time="2026-01-20T00:54:02.382828126Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jan 20 00:54:02.382893 containerd[1457]: time="2026-01-20T00:54:02.382838735Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jan 20 00:54:02.382893 containerd[1457]: time="2026-01-20T00:54:02.382849265Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jan 20 00:54:02.382893 containerd[1457]: time="2026-01-20T00:54:02.382860165Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jan 20 00:54:02.382893 containerd[1457]: time="2026-01-20T00:54:02.382873931Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jan 20 00:54:02.382893 containerd[1457]: time="2026-01-20T00:54:02.382884912Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jan 20 00:54:02.382893 containerd[1457]: time="2026-01-20T00:54:02.382894990Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jan 20 00:54:02.383011 containerd[1457]: time="2026-01-20T00:54:02.382906902Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jan 20 00:54:02.383011 containerd[1457]: time="2026-01-20T00:54:02.382919766Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jan 20 00:54:02.383011 containerd[1457]: time="2026-01-20T00:54:02.382941377Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jan 20 00:54:02.383011 containerd[1457]: time="2026-01-20T00:54:02.382951986Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jan 20 00:54:02.383011 containerd[1457]: time="2026-01-20T00:54:02.382961976Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jan 20 00:54:02.383092 containerd[1457]: time="2026-01-20T00:54:02.383023871Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jan 20 00:54:02.383092 containerd[1457]: time="2026-01-20T00:54:02.383039620Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jan 20 00:54:02.383092 containerd[1457]: time="2026-01-20T00:54:02.383048998Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jan 20 00:54:02.383092 containerd[1457]: time="2026-01-20T00:54:02.383059467Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jan 20 00:54:02.383092 containerd[1457]: time="2026-01-20T00:54:02.383071991Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jan 20 00:54:02.383092 containerd[1457]: time="2026-01-20T00:54:02.383082641Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jan 20 00:54:02.383092 containerd[1457]: time="2026-01-20T00:54:02.383091287Z" level=info msg="NRI interface is disabled by configuration."
Jan 20 00:54:02.383201 containerd[1457]: time="2026-01-20T00:54:02.383100715Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jan 20 00:54:02.383364 containerd[1457]: time="2026-01-20T00:54:02.383300036Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jan 20 00:54:02.383364 containerd[1457]: time="2026-01-20T00:54:02.383364447Z" level=info msg="Connect containerd service"
Jan 20 00:54:02.383525 containerd[1457]: time="2026-01-20T00:54:02.383393892Z" level=info msg="using legacy CRI server"
Jan 20 00:54:02.383525 containerd[1457]: time="2026-01-20T00:54:02.383400825Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jan 20 00:54:02.383525 containerd[1457]: time="2026-01-20T00:54:02.383470705Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jan 20 00:54:02.384168 containerd[1457]: time="2026-01-20T00:54:02.384107154Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 20 00:54:02.384373 containerd[1457]: time="2026-01-20T00:54:02.384323346Z" level=info msg="Start subscribing containerd event"
Jan 20 00:54:02.384400 containerd[1457]: time="2026-01-20T00:54:02.384382407Z" level=info msg="Start recovering state"
Jan 20 00:54:02.384471 containerd[1457]: time="2026-01-20T00:54:02.384434566Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jan 20 00:54:02.384526 containerd[1457]: time="2026-01-20T00:54:02.384438451Z" level=info msg="Start event monitor"
Jan 20 00:54:02.384546 containerd[1457]: time="2026-01-20T00:54:02.384529181Z" level=info msg="Start snapshots syncer"
Jan 20 00:54:02.384546 containerd[1457]: time="2026-01-20T00:54:02.384539941Z" level=info msg="Start cni network conf syncer for default"
Jan 20 00:54:02.384577 containerd[1457]: time="2026-01-20T00:54:02.384547395Z" level=info msg="Start streaming server"
Jan 20 00:54:02.384668 containerd[1457]: time="2026-01-20T00:54:02.384530224Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jan 20 00:54:02.387735 systemd[1]: Started containerd.service - containerd container runtime.
Jan 20 00:54:02.389322 containerd[1457]: time="2026-01-20T00:54:02.389242553Z" level=info msg="containerd successfully booted in 0.038376s"
Jan 20 00:54:02.398336 sshd_keygen[1445]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 20 00:54:02.421527 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 20 00:54:02.433111 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 20 00:54:02.445803 systemd[1]: issuegen.service: Deactivated successfully.
Jan 20 00:54:02.446053 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 20 00:54:02.459164 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 20 00:54:02.475067 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 20 00:54:02.488016 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jan 20 00:54:02.491752 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jan 20 00:54:02.494617 systemd[1]: Reached target getty.target - Login Prompts.
Jan 20 00:54:03.834978 systemd-networkd[1386]: eth0: Gained IPv6LL
Jan 20 00:54:03.838310 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 20 00:54:03.842190 systemd[1]: Reached target network-online.target - Network is Online.
Jan 20 00:54:03.854914 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Jan 20 00:54:03.858950 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 20 00:54:03.863035 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 20 00:54:03.884062 systemd[1]: coreos-metadata.service: Deactivated successfully.
Jan 20 00:54:03.884308 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Jan 20 00:54:03.887737 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jan 20 00:54:03.891044 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 20 00:54:04.584816 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 20 00:54:04.588352 systemd[1]: Reached target multi-user.target - Multi-User System.
Jan 20 00:54:04.591866 systemd[1]: Startup finished in 1.236s (kernel) + 5.255s (initrd) + 5.074s (userspace) = 11.567s.
Jan 20 00:54:04.591943 (kubelet)[1535]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 00:54:05.026871 kubelet[1535]: E0120 00:54:05.026747 1535 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 00:54:05.030258 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 00:54:05.030478 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 00:54:07.642960 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 20 00:54:07.644392 systemd[1]: Started sshd@0-10.0.0.161:22-10.0.0.1:39180.service - OpenSSH per-connection server daemon (10.0.0.1:39180). Jan 20 00:54:07.693720 sshd[1549]: Accepted publickey for core from 10.0.0.1 port 39180 ssh2: RSA SHA256:QDu6mA/7nBrCWQCf0iTeSucNTVZb4RccwmaEJSjwzPc Jan 20 00:54:07.696158 sshd[1549]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:54:07.707139 systemd-logind[1448]: New session 1 of user core. Jan 20 00:54:07.708913 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 20 00:54:07.723081 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 20 00:54:07.740422 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 20 00:54:07.755093 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 20 00:54:07.759587 (systemd)[1553]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 20 00:54:07.854915 systemd[1553]: Queued start job for default target default.target. 
Jan 20 00:54:07.867060 systemd[1553]: Created slice app.slice - User Application Slice. Jan 20 00:54:07.867105 systemd[1553]: Reached target paths.target - Paths. Jan 20 00:54:07.867118 systemd[1553]: Reached target timers.target - Timers. Jan 20 00:54:07.868794 systemd[1553]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 20 00:54:07.880344 systemd[1553]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 20 00:54:07.880481 systemd[1553]: Reached target sockets.target - Sockets. Jan 20 00:54:07.880517 systemd[1553]: Reached target basic.target - Basic System. Jan 20 00:54:07.880555 systemd[1553]: Reached target default.target - Main User Target. Jan 20 00:54:07.880591 systemd[1553]: Startup finished in 112ms. Jan 20 00:54:07.880929 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 20 00:54:07.882665 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 20 00:54:07.947550 systemd[1]: Started sshd@1-10.0.0.161:22-10.0.0.1:39182.service - OpenSSH per-connection server daemon (10.0.0.1:39182). Jan 20 00:54:07.983097 sshd[1564]: Accepted publickey for core from 10.0.0.1 port 39182 ssh2: RSA SHA256:QDu6mA/7nBrCWQCf0iTeSucNTVZb4RccwmaEJSjwzPc Jan 20 00:54:07.984591 sshd[1564]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:54:07.989128 systemd-logind[1448]: New session 2 of user core. Jan 20 00:54:07.998854 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 20 00:54:08.053936 sshd[1564]: pam_unix(sshd:session): session closed for user core Jan 20 00:54:08.065040 systemd[1]: sshd@1-10.0.0.161:22-10.0.0.1:39182.service: Deactivated successfully. Jan 20 00:54:08.066510 systemd[1]: session-2.scope: Deactivated successfully. Jan 20 00:54:08.068113 systemd-logind[1448]: Session 2 logged out. Waiting for processes to exit. Jan 20 00:54:08.077944 systemd[1]: Started sshd@2-10.0.0.161:22-10.0.0.1:39186.service - OpenSSH per-connection server daemon (10.0.0.1:39186). 
Jan 20 00:54:08.078917 systemd-logind[1448]: Removed session 2. Jan 20 00:54:08.109506 sshd[1571]: Accepted publickey for core from 10.0.0.1 port 39186 ssh2: RSA SHA256:QDu6mA/7nBrCWQCf0iTeSucNTVZb4RccwmaEJSjwzPc Jan 20 00:54:08.111134 sshd[1571]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:54:08.115426 systemd-logind[1448]: New session 3 of user core. Jan 20 00:54:08.124834 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 20 00:54:08.175426 sshd[1571]: pam_unix(sshd:session): session closed for user core Jan 20 00:54:08.182297 systemd[1]: sshd@2-10.0.0.161:22-10.0.0.1:39186.service: Deactivated successfully. Jan 20 00:54:08.183979 systemd[1]: session-3.scope: Deactivated successfully. Jan 20 00:54:08.185628 systemd-logind[1448]: Session 3 logged out. Waiting for processes to exit. Jan 20 00:54:08.186958 systemd[1]: Started sshd@3-10.0.0.161:22-10.0.0.1:39190.service - OpenSSH per-connection server daemon (10.0.0.1:39190). Jan 20 00:54:08.187772 systemd-logind[1448]: Removed session 3. Jan 20 00:54:08.222517 sshd[1578]: Accepted publickey for core from 10.0.0.1 port 39190 ssh2: RSA SHA256:QDu6mA/7nBrCWQCf0iTeSucNTVZb4RccwmaEJSjwzPc Jan 20 00:54:08.224062 sshd[1578]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:54:08.228002 systemd-logind[1448]: New session 4 of user core. Jan 20 00:54:08.241852 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 20 00:54:08.296894 sshd[1578]: pam_unix(sshd:session): session closed for user core Jan 20 00:54:08.312301 systemd[1]: sshd@3-10.0.0.161:22-10.0.0.1:39190.service: Deactivated successfully. Jan 20 00:54:08.314529 systemd[1]: session-4.scope: Deactivated successfully. Jan 20 00:54:08.316408 systemd-logind[1448]: Session 4 logged out. Waiting for processes to exit. Jan 20 00:54:08.323985 systemd[1]: Started sshd@4-10.0.0.161:22-10.0.0.1:39200.service - OpenSSH per-connection server daemon (10.0.0.1:39200). 
Jan 20 00:54:08.325017 systemd-logind[1448]: Removed session 4. Jan 20 00:54:08.365037 sshd[1585]: Accepted publickey for core from 10.0.0.1 port 39200 ssh2: RSA SHA256:QDu6mA/7nBrCWQCf0iTeSucNTVZb4RccwmaEJSjwzPc Jan 20 00:54:08.361570 systemd-logind[1448]: New session 5 of user core. Jan 20 00:54:08.356925 sshd[1585]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:54:08.377889 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 20 00:54:08.438215 sudo[1588]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 20 00:54:08.438573 sudo[1588]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 20 00:54:08.457295 sudo[1588]: pam_unix(sudo:session): session closed for user root Jan 20 00:54:08.459479 sshd[1585]: pam_unix(sshd:session): session closed for user core Jan 20 00:54:08.473298 systemd[1]: sshd@4-10.0.0.161:22-10.0.0.1:39200.service: Deactivated successfully. Jan 20 00:54:08.475192 systemd[1]: session-5.scope: Deactivated successfully. Jan 20 00:54:08.476742 systemd-logind[1448]: Session 5 logged out. Waiting for processes to exit. Jan 20 00:54:08.478533 systemd[1]: Started sshd@5-10.0.0.161:22-10.0.0.1:39208.service - OpenSSH per-connection server daemon (10.0.0.1:39208). Jan 20 00:54:08.479459 systemd-logind[1448]: Removed session 5. Jan 20 00:54:08.514161 sshd[1593]: Accepted publickey for core from 10.0.0.1 port 39208 ssh2: RSA SHA256:QDu6mA/7nBrCWQCf0iTeSucNTVZb4RccwmaEJSjwzPc Jan 20 00:54:08.515549 sshd[1593]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:54:08.520370 systemd-logind[1448]: New session 6 of user core. Jan 20 00:54:08.529841 systemd[1]: Started session-6.scope - Session 6 of User core. 
Jan 20 00:54:08.584332 sudo[1597]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 20 00:54:08.584762 sudo[1597]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 20 00:54:08.588755 sudo[1597]: pam_unix(sudo:session): session closed for user root Jan 20 00:54:08.595290 sudo[1596]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 20 00:54:08.595722 sudo[1596]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 20 00:54:08.615928 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 20 00:54:08.618057 auditctl[1600]: No rules Jan 20 00:54:08.618423 systemd[1]: audit-rules.service: Deactivated successfully. Jan 20 00:54:08.618709 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 20 00:54:08.621154 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 20 00:54:08.653523 augenrules[1618]: No rules Jan 20 00:54:08.655061 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 20 00:54:08.656048 sudo[1596]: pam_unix(sudo:session): session closed for user root Jan 20 00:54:08.658076 sshd[1593]: pam_unix(sshd:session): session closed for user core Jan 20 00:54:08.670082 systemd[1]: sshd@5-10.0.0.161:22-10.0.0.1:39208.service: Deactivated successfully. Jan 20 00:54:08.671576 systemd[1]: session-6.scope: Deactivated successfully. Jan 20 00:54:08.673117 systemd-logind[1448]: Session 6 logged out. Waiting for processes to exit. Jan 20 00:54:08.684944 systemd[1]: Started sshd@6-10.0.0.161:22-10.0.0.1:39222.service - OpenSSH per-connection server daemon (10.0.0.1:39222). Jan 20 00:54:08.685874 systemd-logind[1448]: Removed session 6. 
Jan 20 00:54:08.717986 sshd[1626]: Accepted publickey for core from 10.0.0.1 port 39222 ssh2: RSA SHA256:QDu6mA/7nBrCWQCf0iTeSucNTVZb4RccwmaEJSjwzPc Jan 20 00:54:08.719463 sshd[1626]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:54:08.723906 systemd-logind[1448]: New session 7 of user core. Jan 20 00:54:08.739843 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 20 00:54:08.796024 sudo[1629]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 20 00:54:08.796378 sudo[1629]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 20 00:54:08.818014 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 20 00:54:08.841049 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 20 00:54:08.841305 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 20 00:54:09.328633 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 00:54:09.335892 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 00:54:09.358526 systemd[1]: Reloading requested from client PID 1674 ('systemctl') (unit session-7.scope)... Jan 20 00:54:09.358558 systemd[1]: Reloading... Jan 20 00:54:09.434086 zram_generator::config[1712]: No configuration found. Jan 20 00:54:09.583545 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 20 00:54:09.651159 systemd[1]: Reloading finished in 292 ms. Jan 20 00:54:09.701355 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 00:54:09.705249 systemd[1]: kubelet.service: Deactivated successfully. Jan 20 00:54:09.705497 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 20 00:54:09.707288 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 00:54:09.857273 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 00:54:09.862054 (kubelet)[1762]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 20 00:54:09.905446 kubelet[1762]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 20 00:54:09.905446 kubelet[1762]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 20 00:54:09.905446 kubelet[1762]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 20 00:54:09.905869 kubelet[1762]: I0120 00:54:09.905448 1762 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 20 00:54:10.261960 kubelet[1762]: I0120 00:54:10.261822 1762 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 20 00:54:10.261960 kubelet[1762]: I0120 00:54:10.261867 1762 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 20 00:54:10.262091 kubelet[1762]: I0120 00:54:10.262062 1762 server.go:956] "Client rotation is on, will bootstrap in background" Jan 20 00:54:10.282732 kubelet[1762]: I0120 00:54:10.282553 1762 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 20 00:54:10.293532 kubelet[1762]: E0120 00:54:10.293458 1762 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 20 00:54:10.293532 kubelet[1762]: I0120 00:54:10.293513 1762 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 20 00:54:10.300528 kubelet[1762]: I0120 00:54:10.300448 1762 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 20 00:54:10.300859 kubelet[1762]: I0120 00:54:10.300788 1762 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 20 00:54:10.301017 kubelet[1762]: I0120 00:54:10.300822 1762 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.0.0.161","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 20 00:54:10.301017 kubelet[1762]: I0120 00:54:10.300981 1762 topology_manager.go:138] "Creating topology manager with none policy" Jan 20 00:54:10.301017 
kubelet[1762]: I0120 00:54:10.300989 1762 container_manager_linux.go:303] "Creating device plugin manager" Jan 20 00:54:10.301214 kubelet[1762]: I0120 00:54:10.301136 1762 state_mem.go:36] "Initialized new in-memory state store" Jan 20 00:54:10.304196 kubelet[1762]: I0120 00:54:10.304129 1762 kubelet.go:480] "Attempting to sync node with API server" Jan 20 00:54:10.304269 kubelet[1762]: I0120 00:54:10.304218 1762 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 20 00:54:10.304269 kubelet[1762]: I0120 00:54:10.304246 1762 kubelet.go:386] "Adding apiserver pod source" Jan 20 00:54:10.306058 kubelet[1762]: I0120 00:54:10.305749 1762 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 20 00:54:10.306058 kubelet[1762]: E0120 00:54:10.305785 1762 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:54:10.306058 kubelet[1762]: E0120 00:54:10.305825 1762 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:54:10.309296 kubelet[1762]: I0120 00:54:10.309270 1762 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 20 00:54:10.310014 kubelet[1762]: E0120 00:54:10.309923 1762 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 20 00:54:10.310404 kubelet[1762]: I0120 00:54:10.310362 1762 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 20 00:54:10.311082 kubelet[1762]: W0120 00:54:10.311049 1762 probe.go:272] Flexvolume plugin directory at 
/opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 20 00:54:10.311546 kubelet[1762]: E0120 00:54:10.311505 1762 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"10.0.0.161\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 20 00:54:10.314896 kubelet[1762]: I0120 00:54:10.314826 1762 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 20 00:54:10.314949 kubelet[1762]: I0120 00:54:10.314900 1762 server.go:1289] "Started kubelet" Jan 20 00:54:10.318789 kubelet[1762]: I0120 00:54:10.317211 1762 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 20 00:54:10.318789 kubelet[1762]: I0120 00:54:10.317323 1762 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 20 00:54:10.318789 kubelet[1762]: I0120 00:54:10.318467 1762 server.go:317] "Adding debug handlers to kubelet server" Jan 20 00:54:10.319144 kubelet[1762]: I0120 00:54:10.317234 1762 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 20 00:54:10.319586 kubelet[1762]: I0120 00:54:10.319524 1762 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 20 00:54:10.322535 kubelet[1762]: I0120 00:54:10.321454 1762 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 20 00:54:10.322535 kubelet[1762]: I0120 00:54:10.321566 1762 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 20 00:54:10.322535 kubelet[1762]: I0120 00:54:10.321736 1762 reconciler.go:26] "Reconciler: start to sync state" Jan 20 00:54:10.322860 kubelet[1762]: I0120 00:54:10.322825 1762 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 20 
00:54:10.323406 kubelet[1762]: I0120 00:54:10.323337 1762 factory.go:223] Registration of the systemd container factory successfully Jan 20 00:54:10.323472 kubelet[1762]: I0120 00:54:10.323437 1762 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 20 00:54:10.324762 kubelet[1762]: E0120 00:54:10.323836 1762 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 20 00:54:10.324762 kubelet[1762]: E0120 00:54:10.322342 1762 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.161.188c4a4d7cb3fa83 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.161,UID:10.0.0.161,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:10.0.0.161,},FirstTimestamp:2026-01-20 00:54:10.314869379 +0000 UTC m=+0.448377459,LastTimestamp:2026-01-20 00:54:10.314869379 +0000 UTC m=+0.448377459,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.161,}" Jan 20 00:54:10.325000 kubelet[1762]: E0120 00:54:10.324797 1762 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.161\" not found" Jan 20 00:54:10.328301 kubelet[1762]: E0120 00:54:10.328192 1762 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 20 00:54:10.328369 kubelet[1762]: I0120 00:54:10.328322 1762 factory.go:223] Registration of the containerd container factory successfully Jan 20 00:54:10.330036 kubelet[1762]: E0120 00:54:10.328638 1762 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.161\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Jan 20 00:54:10.335745 kubelet[1762]: E0120 00:54:10.334633 1762 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.161.188c4a4d7d3ca2ba default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.161,UID:10.0.0.161,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:10.0.0.161,},FirstTimestamp:2026-01-20 00:54:10.323825338 +0000 UTC m=+0.457333398,LastTimestamp:2026-01-20 00:54:10.323825338 +0000 UTC m=+0.457333398,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.161,}" Jan 20 00:54:10.344092 kubelet[1762]: I0120 00:54:10.344033 1762 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 20 00:54:10.344092 kubelet[1762]: I0120 00:54:10.344068 1762 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 20 00:54:10.344092 kubelet[1762]: I0120 00:54:10.344084 1762 state_mem.go:36] "Initialized new in-memory state store" Jan 20 00:54:10.344460 kubelet[1762]: E0120 00:54:10.344363 1762 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User 
\"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.161.188c4a4d7e6337ee default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.161,UID:10.0.0.161,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node 10.0.0.161 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:10.0.0.161,},FirstTimestamp:2026-01-20 00:54:10.343131118 +0000 UTC m=+0.476639178,LastTimestamp:2026-01-20 00:54:10.343131118 +0000 UTC m=+0.476639178,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.161,}" Jan 20 00:54:10.425989 kubelet[1762]: E0120 00:54:10.425927 1762 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.161\" not found" Jan 20 00:54:10.429325 kubelet[1762]: I0120 00:54:10.429250 1762 policy_none.go:49] "None policy: Start" Jan 20 00:54:10.429325 kubelet[1762]: I0120 00:54:10.429315 1762 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 20 00:54:10.429392 kubelet[1762]: I0120 00:54:10.429332 1762 state_mem.go:35] "Initializing new in-memory state store" Jan 20 00:54:10.436227 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 20 00:54:10.452502 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 20 00:54:10.457127 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jan 20 00:54:10.469922 kubelet[1762]: E0120 00:54:10.469874 1762 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 20 00:54:10.470176 kubelet[1762]: I0120 00:54:10.470114 1762 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 20 00:54:10.470176 kubelet[1762]: I0120 00:54:10.470144 1762 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 20 00:54:10.470783 kubelet[1762]: I0120 00:54:10.470339 1762 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 20 00:54:10.472528 kubelet[1762]: E0120 00:54:10.472502 1762 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 20 00:54:10.472974 kubelet[1762]: E0120 00:54:10.472908 1762 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.161\" not found" Jan 20 00:54:10.473636 kubelet[1762]: I0120 00:54:10.473541 1762 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 20 00:54:10.475992 kubelet[1762]: I0120 00:54:10.475949 1762 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 20 00:54:10.476029 kubelet[1762]: I0120 00:54:10.476014 1762 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 20 00:54:10.476051 kubelet[1762]: I0120 00:54:10.476041 1762 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 20 00:54:10.476069 kubelet[1762]: I0120 00:54:10.476053 1762 kubelet.go:2436] "Starting kubelet main sync loop" Jan 20 00:54:10.476304 kubelet[1762]: E0120 00:54:10.476182 1762 kubelet.go:2460] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Jan 20 00:54:10.533143 kubelet[1762]: E0120 00:54:10.532969 1762 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.161\" not found" node="10.0.0.161" Jan 20 00:54:10.571394 kubelet[1762]: I0120 00:54:10.571346 1762 kubelet_node_status.go:75] "Attempting to register node" node="10.0.0.161" Jan 20 00:54:10.575276 kubelet[1762]: I0120 00:54:10.575233 1762 kubelet_node_status.go:78] "Successfully registered node" node="10.0.0.161" Jan 20 00:54:10.575276 kubelet[1762]: E0120 00:54:10.575269 1762 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"10.0.0.161\": node \"10.0.0.161\" not found" Jan 20 00:54:10.588091 kubelet[1762]: E0120 00:54:10.588005 1762 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.161\" not found" Jan 20 00:54:10.689161 kubelet[1762]: E0120 00:54:10.689074 1762 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.161\" not found" Jan 20 00:54:10.789939 kubelet[1762]: E0120 00:54:10.789790 1762 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.161\" not found" Jan 20 00:54:10.890509 kubelet[1762]: E0120 00:54:10.890373 1762 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.161\" not found" Jan 20 00:54:10.945264 sudo[1629]: pam_unix(sudo:session): session closed for user root Jan 20 00:54:10.947126 sshd[1626]: pam_unix(sshd:session): session closed for user core Jan 20 00:54:10.951190 systemd[1]: sshd@6-10.0.0.161:22-10.0.0.1:39222.service: Deactivated successfully. 
Jan 20 00:54:10.953076 systemd[1]: session-7.scope: Deactivated successfully. Jan 20 00:54:10.953942 systemd-logind[1448]: Session 7 logged out. Waiting for processes to exit. Jan 20 00:54:10.955177 systemd-logind[1448]: Removed session 7. Jan 20 00:54:10.991424 kubelet[1762]: E0120 00:54:10.991357 1762 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.161\" not found" Jan 20 00:54:11.092793 kubelet[1762]: E0120 00:54:11.092512 1762 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.161\" not found" Jan 20 00:54:11.193913 kubelet[1762]: I0120 00:54:11.193880 1762 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Jan 20 00:54:11.194233 containerd[1457]: time="2026-01-20T00:54:11.194158028Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 20 00:54:11.194771 kubelet[1762]: I0120 00:54:11.194311 1762 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Jan 20 00:54:11.264559 kubelet[1762]: I0120 00:54:11.264437 1762 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 20 00:54:11.264783 kubelet[1762]: I0120 00:54:11.264753 1762 reflector.go:556] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" err="very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received" Jan 20 00:54:11.264819 kubelet[1762]: I0120 00:54:11.264793 1762 reflector.go:556] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" err="very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received" Jan 20 00:54:11.306028 kubelet[1762]: E0120 
00:54:11.305977 1762 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:54:11.306156 kubelet[1762]: I0120 00:54:11.306036 1762 apiserver.go:52] "Watching apiserver" Jan 20 00:54:11.322284 kubelet[1762]: E0120 00:54:11.322009 1762 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dtc4t" podUID="cabaf7fc-f028-4962-a01c-5241ddd73130" Jan 20 00:54:11.322284 kubelet[1762]: I0120 00:54:11.322269 1762 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 20 00:54:11.328092 kubelet[1762]: I0120 00:54:11.328062 1762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2dd3624e-1b07-414b-a9bd-60cfa9166e47-kube-proxy\") pod \"kube-proxy-wvxwq\" (UID: \"2dd3624e-1b07-414b-a9bd-60cfa9166e47\") " pod="kube-system/kube-proxy-wvxwq" Jan 20 00:54:11.328147 kubelet[1762]: I0120 00:54:11.328099 1762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94c9b\" (UniqueName: \"kubernetes.io/projected/2dd3624e-1b07-414b-a9bd-60cfa9166e47-kube-api-access-94c9b\") pod \"kube-proxy-wvxwq\" (UID: \"2dd3624e-1b07-414b-a9bd-60cfa9166e47\") " pod="kube-system/kube-proxy-wvxwq" Jan 20 00:54:11.328147 kubelet[1762]: I0120 00:54:11.328117 1762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/d72e8a74-e159-43ff-93b6-7b0f49444e5c-cni-net-dir\") pod \"calico-node-5rd8d\" (UID: \"d72e8a74-e159-43ff-93b6-7b0f49444e5c\") " pod="calico-system/calico-node-5rd8d" Jan 20 00:54:11.328147 kubelet[1762]: I0120 00:54:11.328132 1762 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/d72e8a74-e159-43ff-93b6-7b0f49444e5c-node-certs\") pod \"calico-node-5rd8d\" (UID: \"d72e8a74-e159-43ff-93b6-7b0f49444e5c\") " pod="calico-system/calico-node-5rd8d" Jan 20 00:54:11.328147 kubelet[1762]: I0120 00:54:11.328146 1762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d72e8a74-e159-43ff-93b6-7b0f49444e5c-var-lib-calico\") pod \"calico-node-5rd8d\" (UID: \"d72e8a74-e159-43ff-93b6-7b0f49444e5c\") " pod="calico-system/calico-node-5rd8d" Jan 20 00:54:11.328237 kubelet[1762]: I0120 00:54:11.328159 1762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/d72e8a74-e159-43ff-93b6-7b0f49444e5c-var-run-calico\") pod \"calico-node-5rd8d\" (UID: \"d72e8a74-e159-43ff-93b6-7b0f49444e5c\") " pod="calico-system/calico-node-5rd8d" Jan 20 00:54:11.328237 kubelet[1762]: I0120 00:54:11.328172 1762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6mnh\" (UniqueName: \"kubernetes.io/projected/cabaf7fc-f028-4962-a01c-5241ddd73130-kube-api-access-h6mnh\") pod \"csi-node-driver-dtc4t\" (UID: \"cabaf7fc-f028-4962-a01c-5241ddd73130\") " pod="calico-system/csi-node-driver-dtc4t" Jan 20 00:54:11.328237 kubelet[1762]: I0120 00:54:11.328186 1762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2dd3624e-1b07-414b-a9bd-60cfa9166e47-xtables-lock\") pod \"kube-proxy-wvxwq\" (UID: \"2dd3624e-1b07-414b-a9bd-60cfa9166e47\") " pod="kube-system/kube-proxy-wvxwq" Jan 20 00:54:11.328295 kubelet[1762]: I0120 00:54:11.328236 1762 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2dd3624e-1b07-414b-a9bd-60cfa9166e47-lib-modules\") pod \"kube-proxy-wvxwq\" (UID: \"2dd3624e-1b07-414b-a9bd-60cfa9166e47\") " pod="kube-system/kube-proxy-wvxwq" Jan 20 00:54:11.328295 kubelet[1762]: I0120 00:54:11.328250 1762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d72e8a74-e159-43ff-93b6-7b0f49444e5c-tigera-ca-bundle\") pod \"calico-node-5rd8d\" (UID: \"d72e8a74-e159-43ff-93b6-7b0f49444e5c\") " pod="calico-system/calico-node-5rd8d" Jan 20 00:54:11.328337 kubelet[1762]: I0120 00:54:11.328311 1762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cabaf7fc-f028-4962-a01c-5241ddd73130-kubelet-dir\") pod \"csi-node-driver-dtc4t\" (UID: \"cabaf7fc-f028-4962-a01c-5241ddd73130\") " pod="calico-system/csi-node-driver-dtc4t" Jan 20 00:54:11.328362 kubelet[1762]: I0120 00:54:11.328346 1762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/cabaf7fc-f028-4962-a01c-5241ddd73130-registration-dir\") pod \"csi-node-driver-dtc4t\" (UID: \"cabaf7fc-f028-4962-a01c-5241ddd73130\") " pod="calico-system/csi-node-driver-dtc4t" Jan 20 00:54:11.328387 kubelet[1762]: I0120 00:54:11.328376 1762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/cabaf7fc-f028-4962-a01c-5241ddd73130-varrun\") pod \"csi-node-driver-dtc4t\" (UID: \"cabaf7fc-f028-4962-a01c-5241ddd73130\") " pod="calico-system/csi-node-driver-dtc4t" Jan 20 00:54:11.328411 kubelet[1762]: I0120 00:54:11.328400 1762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/d72e8a74-e159-43ff-93b6-7b0f49444e5c-cni-log-dir\") pod \"calico-node-5rd8d\" (UID: \"d72e8a74-e159-43ff-93b6-7b0f49444e5c\") " pod="calico-system/calico-node-5rd8d" Jan 20 00:54:11.328433 kubelet[1762]: I0120 00:54:11.328421 1762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d72e8a74-e159-43ff-93b6-7b0f49444e5c-lib-modules\") pod \"calico-node-5rd8d\" (UID: \"d72e8a74-e159-43ff-93b6-7b0f49444e5c\") " pod="calico-system/calico-node-5rd8d" Jan 20 00:54:11.328537 kubelet[1762]: I0120 00:54:11.328443 1762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grz7g\" (UniqueName: \"kubernetes.io/projected/d72e8a74-e159-43ff-93b6-7b0f49444e5c-kube-api-access-grz7g\") pod \"calico-node-5rd8d\" (UID: \"d72e8a74-e159-43ff-93b6-7b0f49444e5c\") " pod="calico-system/calico-node-5rd8d" Jan 20 00:54:11.328537 kubelet[1762]: I0120 00:54:11.328464 1762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/cabaf7fc-f028-4962-a01c-5241ddd73130-socket-dir\") pod \"csi-node-driver-dtc4t\" (UID: \"cabaf7fc-f028-4962-a01c-5241ddd73130\") " pod="calico-system/csi-node-driver-dtc4t" Jan 20 00:54:11.328537 kubelet[1762]: I0120 00:54:11.328484 1762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/d72e8a74-e159-43ff-93b6-7b0f49444e5c-cni-bin-dir\") pod \"calico-node-5rd8d\" (UID: \"d72e8a74-e159-43ff-93b6-7b0f49444e5c\") " pod="calico-system/calico-node-5rd8d" Jan 20 00:54:11.328537 kubelet[1762]: I0120 00:54:11.328504 1762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: 
\"kubernetes.io/host-path/d72e8a74-e159-43ff-93b6-7b0f49444e5c-flexvol-driver-host\") pod \"calico-node-5rd8d\" (UID: \"d72e8a74-e159-43ff-93b6-7b0f49444e5c\") " pod="calico-system/calico-node-5rd8d" Jan 20 00:54:11.328537 kubelet[1762]: I0120 00:54:11.328524 1762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/d72e8a74-e159-43ff-93b6-7b0f49444e5c-policysync\") pod \"calico-node-5rd8d\" (UID: \"d72e8a74-e159-43ff-93b6-7b0f49444e5c\") " pod="calico-system/calico-node-5rd8d" Jan 20 00:54:11.329135 kubelet[1762]: I0120 00:54:11.328544 1762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d72e8a74-e159-43ff-93b6-7b0f49444e5c-xtables-lock\") pod \"calico-node-5rd8d\" (UID: \"d72e8a74-e159-43ff-93b6-7b0f49444e5c\") " pod="calico-system/calico-node-5rd8d" Jan 20 00:54:11.328596 systemd[1]: Created slice kubepods-besteffort-podd72e8a74_e159_43ff_93b6_7b0f49444e5c.slice - libcontainer container kubepods-besteffort-podd72e8a74_e159_43ff_93b6_7b0f49444e5c.slice. Jan 20 00:54:11.346320 systemd[1]: Created slice kubepods-besteffort-pod2dd3624e_1b07_414b_a9bd_60cfa9166e47.slice - libcontainer container kubepods-besteffort-pod2dd3624e_1b07_414b_a9bd_60cfa9166e47.slice. 
Jan 20 00:54:11.430201 kubelet[1762]: E0120 00:54:11.430162 1762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:54:11.430201 kubelet[1762]: W0120 00:54:11.430192 1762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:54:11.430430 kubelet[1762]: E0120 00:54:11.430225 1762 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:54:11.430513 kubelet[1762]: E0120 00:54:11.430495 1762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:54:11.430513 kubelet[1762]: W0120 00:54:11.430503 1762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:54:11.430513 kubelet[1762]: E0120 00:54:11.430511 1762 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:54:11.430854 kubelet[1762]: E0120 00:54:11.430825 1762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:54:11.430854 kubelet[1762]: W0120 00:54:11.430853 1762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:54:11.430965 kubelet[1762]: E0120 00:54:11.430872 1762 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 00:54:11.431380 kubelet[1762]: E0120 00:54:11.431340 1762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:54:11.431380 kubelet[1762]: W0120 00:54:11.431365 1762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:54:11.431380 kubelet[1762]: E0120 00:54:11.431374 1762 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:54:11.431949 kubelet[1762]: E0120 00:54:11.431909 1762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:54:11.431949 kubelet[1762]: W0120 00:54:11.431931 1762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:54:11.431949 kubelet[1762]: E0120 00:54:11.431939 1762 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 00:54:11.432258 kubelet[1762]: E0120 00:54:11.432235 1762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:54:11.432258 kubelet[1762]: W0120 00:54:11.432256 1762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:54:11.432315 kubelet[1762]: E0120 00:54:11.432267 1762 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:54:11.432737 kubelet[1762]: E0120 00:54:11.432643 1762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:54:11.432783 kubelet[1762]: W0120 00:54:11.432742 1762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:54:11.432783 kubelet[1762]: E0120 00:54:11.432763 1762 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 00:54:11.433091 kubelet[1762]: E0120 00:54:11.433068 1762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:54:11.433091 kubelet[1762]: W0120 00:54:11.433088 1762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:54:11.433150 kubelet[1762]: E0120 00:54:11.433096 1762 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:54:11.433528 kubelet[1762]: E0120 00:54:11.433497 1762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:54:11.433561 kubelet[1762]: W0120 00:54:11.433531 1762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:54:11.433561 kubelet[1762]: E0120 00:54:11.433548 1762 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 00:54:11.434051 kubelet[1762]: E0120 00:54:11.434014 1762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:54:11.434051 kubelet[1762]: W0120 00:54:11.434035 1762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:54:11.434051 kubelet[1762]: E0120 00:54:11.434044 1762 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:54:11.434466 kubelet[1762]: E0120 00:54:11.434400 1762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:54:11.434466 kubelet[1762]: W0120 00:54:11.434438 1762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:54:11.434466 kubelet[1762]: E0120 00:54:11.434452 1762 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 00:54:11.434941 kubelet[1762]: E0120 00:54:11.434909 1762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:54:11.434941 kubelet[1762]: W0120 00:54:11.434937 1762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:54:11.435033 kubelet[1762]: E0120 00:54:11.434947 1762 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:54:11.435304 kubelet[1762]: E0120 00:54:11.435268 1762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:54:11.435304 kubelet[1762]: W0120 00:54:11.435282 1762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:54:11.435304 kubelet[1762]: E0120 00:54:11.435290 1762 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 00:54:11.435637 kubelet[1762]: E0120 00:54:11.435590 1762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:54:11.435637 kubelet[1762]: W0120 00:54:11.435631 1762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:54:11.435802 kubelet[1762]: E0120 00:54:11.435640 1762 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:54:11.442931 kubelet[1762]: E0120 00:54:11.442888 1762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:54:11.442931 kubelet[1762]: W0120 00:54:11.442920 1762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:54:11.442931 kubelet[1762]: E0120 00:54:11.442933 1762 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 00:54:11.443266 kubelet[1762]: E0120 00:54:11.443223 1762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:54:11.443266 kubelet[1762]: W0120 00:54:11.443254 1762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:54:11.443266 kubelet[1762]: E0120 00:54:11.443263 1762 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:54:11.443583 kubelet[1762]: E0120 00:54:11.443537 1762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:54:11.443583 kubelet[1762]: W0120 00:54:11.443567 1762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:54:11.443583 kubelet[1762]: E0120 00:54:11.443575 1762 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 00:54:11.444354 kubelet[1762]: E0120 00:54:11.444304 1762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:54:11.444354 kubelet[1762]: W0120 00:54:11.444338 1762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:54:11.444354 kubelet[1762]: E0120 00:54:11.444347 1762 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:54:11.646226 kubelet[1762]: E0120 00:54:11.646064 1762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:54:11.647311 containerd[1457]: time="2026-01-20T00:54:11.647253032Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-5rd8d,Uid:d72e8a74-e159-43ff-93b6-7b0f49444e5c,Namespace:calico-system,Attempt:0,}" Jan 20 00:54:11.649857 kubelet[1762]: E0120 00:54:11.649794 1762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:54:11.650272 containerd[1457]: time="2026-01-20T00:54:11.650205765Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wvxwq,Uid:2dd3624e-1b07-414b-a9bd-60cfa9166e47,Namespace:kube-system,Attempt:0,}" Jan 20 00:54:12.306227 kubelet[1762]: E0120 00:54:12.306128 1762 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:54:12.337378 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3514176083.mount: Deactivated successfully. 
Jan 20 00:54:12.346493 containerd[1457]: time="2026-01-20T00:54:12.346403400Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 00:54:12.348743 containerd[1457]: time="2026-01-20T00:54:12.348654982Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 20 00:54:12.352450 containerd[1457]: time="2026-01-20T00:54:12.350331946Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 00:54:12.352513 containerd[1457]: time="2026-01-20T00:54:12.352468706Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 00:54:12.353839 containerd[1457]: time="2026-01-20T00:54:12.353779657Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 20 00:54:12.355555 containerd[1457]: time="2026-01-20T00:54:12.355520395Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 00:54:12.358269 containerd[1457]: time="2026-01-20T00:54:12.358199226Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 707.902802ms" Jan 20 00:54:12.358920 containerd[1457]: 
time="2026-01-20T00:54:12.358849604Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 711.459987ms" Jan 20 00:54:12.451129 containerd[1457]: time="2026-01-20T00:54:12.450929940Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 00:54:12.451129 containerd[1457]: time="2026-01-20T00:54:12.451014858Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 00:54:12.451129 containerd[1457]: time="2026-01-20T00:54:12.451048902Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:54:12.451471 containerd[1457]: time="2026-01-20T00:54:12.451161166Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 00:54:12.451471 containerd[1457]: time="2026-01-20T00:54:12.451259380Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 00:54:12.451471 containerd[1457]: time="2026-01-20T00:54:12.451270010Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:54:12.451471 containerd[1457]: time="2026-01-20T00:54:12.451289141Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:54:12.451819 containerd[1457]: time="2026-01-20T00:54:12.451761608Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:54:12.509883 systemd[1]: Started cri-containerd-68790331e103e122cdda33f4fa36bc4b4cede03f45ffc63382491e8ffd59da9d.scope - libcontainer container 68790331e103e122cdda33f4fa36bc4b4cede03f45ffc63382491e8ffd59da9d. Jan 20 00:54:12.511406 systemd[1]: Started cri-containerd-b3021740098948f0a998472f6af3f286a34f7065b81a574f560c83dba9707ca1.scope - libcontainer container b3021740098948f0a998472f6af3f286a34f7065b81a574f560c83dba9707ca1. Jan 20 00:54:12.538490 containerd[1457]: time="2026-01-20T00:54:12.538431623Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-5rd8d,Uid:d72e8a74-e159-43ff-93b6-7b0f49444e5c,Namespace:calico-system,Attempt:0,} returns sandbox id \"68790331e103e122cdda33f4fa36bc4b4cede03f45ffc63382491e8ffd59da9d\"" Jan 20 00:54:12.539868 kubelet[1762]: E0120 00:54:12.539803 1762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:54:12.544979 containerd[1457]: time="2026-01-20T00:54:12.544934784Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 20 00:54:12.547376 containerd[1457]: time="2026-01-20T00:54:12.547290651Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wvxwq,Uid:2dd3624e-1b07-414b-a9bd-60cfa9166e47,Namespace:kube-system,Attempt:0,} returns sandbox id \"b3021740098948f0a998472f6af3f286a34f7065b81a574f560c83dba9707ca1\"" Jan 20 00:54:12.548167 kubelet[1762]: E0120 00:54:12.548101 1762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:54:13.028387 containerd[1457]: time="2026-01-20T00:54:13.028308245Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Jan 20 00:54:13.029319 containerd[1457]: time="2026-01-20T00:54:13.029267987Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=5941492" Jan 20 00:54:13.030265 containerd[1457]: time="2026-01-20T00:54:13.030234262Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:54:13.032563 containerd[1457]: time="2026-01-20T00:54:13.032481828Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:54:13.033121 containerd[1457]: time="2026-01-20T00:54:13.033063485Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 488.076643ms" Jan 20 00:54:13.033121 containerd[1457]: time="2026-01-20T00:54:13.033104141Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Jan 20 00:54:13.034448 containerd[1457]: time="2026-01-20T00:54:13.034350167Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\"" Jan 20 00:54:13.037180 containerd[1457]: time="2026-01-20T00:54:13.037146407Z" level=info msg="CreateContainer within sandbox \"68790331e103e122cdda33f4fa36bc4b4cede03f45ffc63382491e8ffd59da9d\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 20 00:54:13.052098 containerd[1457]: 
time="2026-01-20T00:54:13.052064536Z" level=info msg="CreateContainer within sandbox \"68790331e103e122cdda33f4fa36bc4b4cede03f45ffc63382491e8ffd59da9d\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"56d2ede982753ea6aa68d6e7c3d8437d18ac8af6b5a1fa6113cc0fb138432460\"" Jan 20 00:54:13.052817 containerd[1457]: time="2026-01-20T00:54:13.052675340Z" level=info msg="StartContainer for \"56d2ede982753ea6aa68d6e7c3d8437d18ac8af6b5a1fa6113cc0fb138432460\"" Jan 20 00:54:13.080852 systemd[1]: Started cri-containerd-56d2ede982753ea6aa68d6e7c3d8437d18ac8af6b5a1fa6113cc0fb138432460.scope - libcontainer container 56d2ede982753ea6aa68d6e7c3d8437d18ac8af6b5a1fa6113cc0fb138432460. Jan 20 00:54:13.110565 containerd[1457]: time="2026-01-20T00:54:13.110522576Z" level=info msg="StartContainer for \"56d2ede982753ea6aa68d6e7c3d8437d18ac8af6b5a1fa6113cc0fb138432460\" returns successfully" Jan 20 00:54:13.118178 systemd[1]: cri-containerd-56d2ede982753ea6aa68d6e7c3d8437d18ac8af6b5a1fa6113cc0fb138432460.scope: Deactivated successfully. Jan 20 00:54:13.161990 containerd[1457]: time="2026-01-20T00:54:13.161879246Z" level=info msg="shim disconnected" id=56d2ede982753ea6aa68d6e7c3d8437d18ac8af6b5a1fa6113cc0fb138432460 namespace=k8s.io Jan 20 00:54:13.161990 containerd[1457]: time="2026-01-20T00:54:13.161948045Z" level=warning msg="cleaning up after shim disconnected" id=56d2ede982753ea6aa68d6e7c3d8437d18ac8af6b5a1fa6113cc0fb138432460 namespace=k8s.io Jan 20 00:54:13.161990 containerd[1457]: time="2026-01-20T00:54:13.161967040Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 20 00:54:13.307517 kubelet[1762]: E0120 00:54:13.307307 1762 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:54:13.439020 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount516377431.mount: Deactivated successfully. 
Jan 20 00:54:13.477243 kubelet[1762]: E0120 00:54:13.477196 1762 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dtc4t" podUID="cabaf7fc-f028-4962-a01c-5241ddd73130" Jan 20 00:54:13.490752 kubelet[1762]: E0120 00:54:13.490552 1762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:54:13.875271 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1295732205.mount: Deactivated successfully. Jan 20 00:54:14.192949 containerd[1457]: time="2026-01-20T00:54:14.192807401Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:54:14.193553 containerd[1457]: time="2026-01-20T00:54:14.193500120Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=31930096" Jan 20 00:54:14.194650 containerd[1457]: time="2026-01-20T00:54:14.194592770Z" level=info msg="ImageCreate event name:\"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:54:14.197026 containerd[1457]: time="2026-01-20T00:54:14.196980855Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:54:14.197757 containerd[1457]: time="2026-01-20T00:54:14.197669792Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest 
\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"31929115\" in 1.1632474s" Jan 20 00:54:14.197757 containerd[1457]: time="2026-01-20T00:54:14.197747758Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\"" Jan 20 00:54:14.199150 containerd[1457]: time="2026-01-20T00:54:14.199108500Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 20 00:54:14.201856 containerd[1457]: time="2026-01-20T00:54:14.201787470Z" level=info msg="CreateContainer within sandbox \"b3021740098948f0a998472f6af3f286a34f7065b81a574f560c83dba9707ca1\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 20 00:54:14.220094 containerd[1457]: time="2026-01-20T00:54:14.220022905Z" level=info msg="CreateContainer within sandbox \"b3021740098948f0a998472f6af3f286a34f7065b81a574f560c83dba9707ca1\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e4af56043dcf8dab242164b339252e3fd7f4a34f280cd746ce9264edf8124ddc\"" Jan 20 00:54:14.220729 containerd[1457]: time="2026-01-20T00:54:14.220667210Z" level=info msg="StartContainer for \"e4af56043dcf8dab242164b339252e3fd7f4a34f280cd746ce9264edf8124ddc\"" Jan 20 00:54:14.258995 systemd[1]: Started cri-containerd-e4af56043dcf8dab242164b339252e3fd7f4a34f280cd746ce9264edf8124ddc.scope - libcontainer container e4af56043dcf8dab242164b339252e3fd7f4a34f280cd746ce9264edf8124ddc. 
Jan 20 00:54:14.288443 containerd[1457]: time="2026-01-20T00:54:14.288359092Z" level=info msg="StartContainer for \"e4af56043dcf8dab242164b339252e3fd7f4a34f280cd746ce9264edf8124ddc\" returns successfully" Jan 20 00:54:14.308223 kubelet[1762]: E0120 00:54:14.308170 1762 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:54:14.495925 kubelet[1762]: E0120 00:54:14.495826 1762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:54:14.505163 kubelet[1762]: I0120 00:54:14.505051 1762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-wvxwq" podStartSLOduration=2.855231798 podStartE2EDuration="4.505039503s" podCreationTimestamp="2026-01-20 00:54:10 +0000 UTC" firstStartedPulling="2026-01-20 00:54:12.548661183 +0000 UTC m=+2.682169243" lastFinishedPulling="2026-01-20 00:54:14.198468888 +0000 UTC m=+4.331976948" observedRunningTime="2026-01-20 00:54:14.504831605 +0000 UTC m=+4.638339675" watchObservedRunningTime="2026-01-20 00:54:14.505039503 +0000 UTC m=+4.638547563" Jan 20 00:54:15.308913 kubelet[1762]: E0120 00:54:15.308829 1762 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:54:15.476861 kubelet[1762]: E0120 00:54:15.476811 1762 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dtc4t" podUID="cabaf7fc-f028-4962-a01c-5241ddd73130" Jan 20 00:54:15.497441 kubelet[1762]: E0120 00:54:15.497369 1762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:54:15.649313 containerd[1457]: time="2026-01-20T00:54:15.649158171Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:54:15.650166 containerd[1457]: time="2026-01-20T00:54:15.650123664Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Jan 20 00:54:15.651372 containerd[1457]: time="2026-01-20T00:54:15.651321250Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:54:15.653794 containerd[1457]: time="2026-01-20T00:54:15.653760320Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:54:15.654592 containerd[1457]: time="2026-01-20T00:54:15.654536941Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 1.455387232s" Jan 20 00:54:15.654592 containerd[1457]: time="2026-01-20T00:54:15.654578989Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Jan 20 00:54:15.658914 containerd[1457]: time="2026-01-20T00:54:15.658873122Z" level=info msg="CreateContainer within sandbox \"68790331e103e122cdda33f4fa36bc4b4cede03f45ffc63382491e8ffd59da9d\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 20 00:54:15.673968 containerd[1457]: time="2026-01-20T00:54:15.673902786Z" level=info 
msg="CreateContainer within sandbox \"68790331e103e122cdda33f4fa36bc4b4cede03f45ffc63382491e8ffd59da9d\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"39af81dbf7958c30abbc247b027f9640de983f312f41588d080d6d6b28d769e7\"" Jan 20 00:54:15.674336 containerd[1457]: time="2026-01-20T00:54:15.674300649Z" level=info msg="StartContainer for \"39af81dbf7958c30abbc247b027f9640de983f312f41588d080d6d6b28d769e7\"" Jan 20 00:54:15.709845 systemd[1]: Started cri-containerd-39af81dbf7958c30abbc247b027f9640de983f312f41588d080d6d6b28d769e7.scope - libcontainer container 39af81dbf7958c30abbc247b027f9640de983f312f41588d080d6d6b28d769e7. Jan 20 00:54:15.737949 containerd[1457]: time="2026-01-20T00:54:15.737890606Z" level=info msg="StartContainer for \"39af81dbf7958c30abbc247b027f9640de983f312f41588d080d6d6b28d769e7\" returns successfully" Jan 20 00:54:16.305425 containerd[1457]: time="2026-01-20T00:54:16.305369234Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 20 00:54:16.308093 systemd[1]: cri-containerd-39af81dbf7958c30abbc247b027f9640de983f312f41588d080d6d6b28d769e7.scope: Deactivated successfully. 
Jan 20 00:54:16.309464 kubelet[1762]: E0120 00:54:16.309402 1762 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:54:16.406549 kubelet[1762]: I0120 00:54:16.406455 1762 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 20 00:54:16.461732 containerd[1457]: time="2026-01-20T00:54:16.461599373Z" level=info msg="shim disconnected" id=39af81dbf7958c30abbc247b027f9640de983f312f41588d080d6d6b28d769e7 namespace=k8s.io Jan 20 00:54:16.461732 containerd[1457]: time="2026-01-20T00:54:16.461725048Z" level=warning msg="cleaning up after shim disconnected" id=39af81dbf7958c30abbc247b027f9640de983f312f41588d080d6d6b28d769e7 namespace=k8s.io Jan 20 00:54:16.461732 containerd[1457]: time="2026-01-20T00:54:16.461736109Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 20 00:54:16.500047 kubelet[1762]: E0120 00:54:16.500013 1762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:54:16.501045 containerd[1457]: time="2026-01-20T00:54:16.500936910Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 20 00:54:16.669839 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-39af81dbf7958c30abbc247b027f9640de983f312f41588d080d6d6b28d769e7-rootfs.mount: Deactivated successfully. Jan 20 00:54:16.740547 systemd[1]: Created slice kubepods-besteffort-pod602e1611_59a6_40f3_9181_fcec1768be11.slice - libcontainer container kubepods-besteffort-pod602e1611_59a6_40f3_9181_fcec1768be11.slice. 
Jan 20 00:54:16.766020 kubelet[1762]: I0120 00:54:16.765954 1762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sp6tf\" (UniqueName: \"kubernetes.io/projected/602e1611-59a6-40f3-9181-fcec1768be11-kube-api-access-sp6tf\") pod \"nginx-deployment-7fcdb87857-mz7sh\" (UID: \"602e1611-59a6-40f3-9181-fcec1768be11\") " pod="default/nginx-deployment-7fcdb87857-mz7sh" Jan 20 00:54:17.046745 containerd[1457]: time="2026-01-20T00:54:17.046543067Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-mz7sh,Uid:602e1611-59a6-40f3-9181-fcec1768be11,Namespace:default,Attempt:0,}" Jan 20 00:54:17.136168 containerd[1457]: time="2026-01-20T00:54:17.136069019Z" level=error msg="Failed to destroy network for sandbox \"34e3f151dfcc07295df82acbca5b3661fa51194f07c8ea911fa196bcdb975443\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:54:17.138315 containerd[1457]: time="2026-01-20T00:54:17.136660224Z" level=error msg="encountered an error cleaning up failed sandbox \"34e3f151dfcc07295df82acbca5b3661fa51194f07c8ea911fa196bcdb975443\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:54:17.138315 containerd[1457]: time="2026-01-20T00:54:17.136753448Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-mz7sh,Uid:602e1611-59a6-40f3-9181-fcec1768be11,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"34e3f151dfcc07295df82acbca5b3661fa51194f07c8ea911fa196bcdb975443\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Jan 20 00:54:17.137588 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-34e3f151dfcc07295df82acbca5b3661fa51194f07c8ea911fa196bcdb975443-shm.mount: Deactivated successfully. Jan 20 00:54:17.138505 kubelet[1762]: E0120 00:54:17.137013 1762 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"34e3f151dfcc07295df82acbca5b3661fa51194f07c8ea911fa196bcdb975443\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:54:17.138505 kubelet[1762]: E0120 00:54:17.137070 1762 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"34e3f151dfcc07295df82acbca5b3661fa51194f07c8ea911fa196bcdb975443\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-mz7sh" Jan 20 00:54:17.138505 kubelet[1762]: E0120 00:54:17.137090 1762 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"34e3f151dfcc07295df82acbca5b3661fa51194f07c8ea911fa196bcdb975443\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-mz7sh" Jan 20 00:54:17.138584 kubelet[1762]: E0120 00:54:17.137133 1762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-7fcdb87857-mz7sh_default(602e1611-59a6-40f3-9181-fcec1768be11)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"nginx-deployment-7fcdb87857-mz7sh_default(602e1611-59a6-40f3-9181-fcec1768be11)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"34e3f151dfcc07295df82acbca5b3661fa51194f07c8ea911fa196bcdb975443\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-7fcdb87857-mz7sh" podUID="602e1611-59a6-40f3-9181-fcec1768be11" Jan 20 00:54:17.310415 kubelet[1762]: E0120 00:54:17.310261 1762 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:54:17.482889 systemd[1]: Created slice kubepods-besteffort-podcabaf7fc_f028_4962_a01c_5241ddd73130.slice - libcontainer container kubepods-besteffort-podcabaf7fc_f028_4962_a01c_5241ddd73130.slice. Jan 20 00:54:17.485647 containerd[1457]: time="2026-01-20T00:54:17.485433944Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dtc4t,Uid:cabaf7fc-f028-4962-a01c-5241ddd73130,Namespace:calico-system,Attempt:0,}" Jan 20 00:54:17.502003 kubelet[1762]: I0120 00:54:17.501609 1762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="34e3f151dfcc07295df82acbca5b3661fa51194f07c8ea911fa196bcdb975443" Jan 20 00:54:17.502282 containerd[1457]: time="2026-01-20T00:54:17.502232744Z" level=info msg="StopPodSandbox for \"34e3f151dfcc07295df82acbca5b3661fa51194f07c8ea911fa196bcdb975443\"" Jan 20 00:54:17.502429 containerd[1457]: time="2026-01-20T00:54:17.502392893Z" level=info msg="Ensure that sandbox 34e3f151dfcc07295df82acbca5b3661fa51194f07c8ea911fa196bcdb975443 in task-service has been cleanup successfully" Jan 20 00:54:17.531827 containerd[1457]: time="2026-01-20T00:54:17.531762057Z" level=error msg="StopPodSandbox for \"34e3f151dfcc07295df82acbca5b3661fa51194f07c8ea911fa196bcdb975443\" failed" error="failed to destroy network for sandbox 
\"34e3f151dfcc07295df82acbca5b3661fa51194f07c8ea911fa196bcdb975443\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:54:17.532040 kubelet[1762]: E0120 00:54:17.531971 1762 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"34e3f151dfcc07295df82acbca5b3661fa51194f07c8ea911fa196bcdb975443\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="34e3f151dfcc07295df82acbca5b3661fa51194f07c8ea911fa196bcdb975443" Jan 20 00:54:17.532148 kubelet[1762]: E0120 00:54:17.532023 1762 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"34e3f151dfcc07295df82acbca5b3661fa51194f07c8ea911fa196bcdb975443"} Jan 20 00:54:17.532148 kubelet[1762]: E0120 00:54:17.532074 1762 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"602e1611-59a6-40f3-9181-fcec1768be11\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"34e3f151dfcc07295df82acbca5b3661fa51194f07c8ea911fa196bcdb975443\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 20 00:54:17.532148 kubelet[1762]: E0120 00:54:17.532097 1762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"602e1611-59a6-40f3-9181-fcec1768be11\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"34e3f151dfcc07295df82acbca5b3661fa51194f07c8ea911fa196bcdb975443\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-7fcdb87857-mz7sh" podUID="602e1611-59a6-40f3-9181-fcec1768be11" Jan 20 00:54:17.587306 containerd[1457]: time="2026-01-20T00:54:17.586914845Z" level=error msg="Failed to destroy network for sandbox \"cfd79e21e63d32b628d9ff36d1ca55addcd6e1005c578ff100ea80d36b937610\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:54:17.587398 containerd[1457]: time="2026-01-20T00:54:17.587372029Z" level=error msg="encountered an error cleaning up failed sandbox \"cfd79e21e63d32b628d9ff36d1ca55addcd6e1005c578ff100ea80d36b937610\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:54:17.587427 containerd[1457]: time="2026-01-20T00:54:17.587410491Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dtc4t,Uid:cabaf7fc-f028-4962-a01c-5241ddd73130,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cfd79e21e63d32b628d9ff36d1ca55addcd6e1005c578ff100ea80d36b937610\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:54:17.588136 kubelet[1762]: E0120 00:54:17.587763 1762 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cfd79e21e63d32b628d9ff36d1ca55addcd6e1005c578ff100ea80d36b937610\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Jan 20 00:54:17.588136 kubelet[1762]: E0120 00:54:17.587810 1762 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cfd79e21e63d32b628d9ff36d1ca55addcd6e1005c578ff100ea80d36b937610\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dtc4t" Jan 20 00:54:17.588136 kubelet[1762]: E0120 00:54:17.587830 1762 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cfd79e21e63d32b628d9ff36d1ca55addcd6e1005c578ff100ea80d36b937610\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dtc4t" Jan 20 00:54:17.588230 kubelet[1762]: E0120 00:54:17.587920 1762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-dtc4t_calico-system(cabaf7fc-f028-4962-a01c-5241ddd73130)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-dtc4t_calico-system(cabaf7fc-f028-4962-a01c-5241ddd73130)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cfd79e21e63d32b628d9ff36d1ca55addcd6e1005c578ff100ea80d36b937610\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-dtc4t" podUID="cabaf7fc-f028-4962-a01c-5241ddd73130" Jan 20 00:54:18.311318 kubelet[1762]: E0120 00:54:18.311240 1762 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Jan 20 00:54:18.504601 kubelet[1762]: I0120 00:54:18.504570 1762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cfd79e21e63d32b628d9ff36d1ca55addcd6e1005c578ff100ea80d36b937610" Jan 20 00:54:18.505607 containerd[1457]: time="2026-01-20T00:54:18.505466710Z" level=info msg="StopPodSandbox for \"cfd79e21e63d32b628d9ff36d1ca55addcd6e1005c578ff100ea80d36b937610\"" Jan 20 00:54:18.506144 containerd[1457]: time="2026-01-20T00:54:18.506110873Z" level=info msg="Ensure that sandbox cfd79e21e63d32b628d9ff36d1ca55addcd6e1005c578ff100ea80d36b937610 in task-service has been cleanup successfully" Jan 20 00:54:18.532171 containerd[1457]: time="2026-01-20T00:54:18.532076970Z" level=error msg="StopPodSandbox for \"cfd79e21e63d32b628d9ff36d1ca55addcd6e1005c578ff100ea80d36b937610\" failed" error="failed to destroy network for sandbox \"cfd79e21e63d32b628d9ff36d1ca55addcd6e1005c578ff100ea80d36b937610\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:54:18.532438 kubelet[1762]: E0120 00:54:18.532391 1762 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"cfd79e21e63d32b628d9ff36d1ca55addcd6e1005c578ff100ea80d36b937610\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="cfd79e21e63d32b628d9ff36d1ca55addcd6e1005c578ff100ea80d36b937610" Jan 20 00:54:18.532487 kubelet[1762]: E0120 00:54:18.532445 1762 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"cfd79e21e63d32b628d9ff36d1ca55addcd6e1005c578ff100ea80d36b937610"} Jan 20 00:54:18.532487 kubelet[1762]: E0120 00:54:18.532474 1762 kuberuntime_manager.go:1161] 
"killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"cabaf7fc-f028-4962-a01c-5241ddd73130\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cfd79e21e63d32b628d9ff36d1ca55addcd6e1005c578ff100ea80d36b937610\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 20 00:54:18.532576 kubelet[1762]: E0120 00:54:18.532494 1762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"cabaf7fc-f028-4962-a01c-5241ddd73130\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cfd79e21e63d32b628d9ff36d1ca55addcd6e1005c578ff100ea80d36b937610\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-dtc4t" podUID="cabaf7fc-f028-4962-a01c-5241ddd73130" Jan 20 00:54:19.312465 kubelet[1762]: E0120 00:54:19.312412 1762 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:54:19.400160 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3354082476.mount: Deactivated successfully. 
Jan 20 00:54:19.548988 containerd[1457]: time="2026-01-20T00:54:19.548889424Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:54:19.549954 containerd[1457]: time="2026-01-20T00:54:19.549895634Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Jan 20 00:54:19.550887 containerd[1457]: time="2026-01-20T00:54:19.550837603Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:54:19.553161 containerd[1457]: time="2026-01-20T00:54:19.553108468Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:54:19.553605 containerd[1457]: time="2026-01-20T00:54:19.553549978Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 3.052566059s" Jan 20 00:54:19.553605 containerd[1457]: time="2026-01-20T00:54:19.553594120Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 20 00:54:19.566207 containerd[1457]: time="2026-01-20T00:54:19.566028854Z" level=info msg="CreateContainer within sandbox \"68790331e103e122cdda33f4fa36bc4b4cede03f45ffc63382491e8ffd59da9d\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 20 00:54:19.581986 containerd[1457]: time="2026-01-20T00:54:19.581925880Z" level=info 
msg="CreateContainer within sandbox \"68790331e103e122cdda33f4fa36bc4b4cede03f45ffc63382491e8ffd59da9d\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"a75892196964eb9845598a3857dd1f5c56562ad7d8bc00d4dff9fdcd902f185b\"" Jan 20 00:54:19.582490 containerd[1457]: time="2026-01-20T00:54:19.582456921Z" level=info msg="StartContainer for \"a75892196964eb9845598a3857dd1f5c56562ad7d8bc00d4dff9fdcd902f185b\"" Jan 20 00:54:19.619871 systemd[1]: Started cri-containerd-a75892196964eb9845598a3857dd1f5c56562ad7d8bc00d4dff9fdcd902f185b.scope - libcontainer container a75892196964eb9845598a3857dd1f5c56562ad7d8bc00d4dff9fdcd902f185b. Jan 20 00:54:19.650762 containerd[1457]: time="2026-01-20T00:54:19.650716656Z" level=info msg="StartContainer for \"a75892196964eb9845598a3857dd1f5c56562ad7d8bc00d4dff9fdcd902f185b\" returns successfully" Jan 20 00:54:19.743047 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 20 00:54:19.743164 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jan 20 00:54:20.313052 kubelet[1762]: E0120 00:54:20.312964 1762 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:54:20.510381 kubelet[1762]: E0120 00:54:20.510320 1762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:54:21.137774 kernel: bpftool[2542]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 20 00:54:21.314177 kubelet[1762]: E0120 00:54:21.314106 1762 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:54:21.354081 systemd-networkd[1386]: vxlan.calico: Link UP Jan 20 00:54:21.354103 systemd-networkd[1386]: vxlan.calico: Gained carrier Jan 20 00:54:22.314700 kubelet[1762]: E0120 00:54:22.314529 1762 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:54:22.971008 systemd-networkd[1386]: vxlan.calico: Gained IPv6LL Jan 20 00:54:23.315557 kubelet[1762]: E0120 00:54:23.315339 1762 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:54:24.316517 kubelet[1762]: E0120 00:54:24.316418 1762 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:54:25.316918 kubelet[1762]: E0120 00:54:25.316833 1762 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:54:26.318007 kubelet[1762]: E0120 00:54:26.317892 1762 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:54:27.318829 kubelet[1762]: E0120 00:54:27.318747 1762 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Jan 20 00:54:28.319619 kubelet[1762]: E0120 00:54:28.319544 1762 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:54:29.319833 kubelet[1762]: E0120 00:54:29.319715 1762 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:54:30.304908 kubelet[1762]: E0120 00:54:30.304772 1762 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:54:30.320872 kubelet[1762]: E0120 00:54:30.320767 1762 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:54:30.479476 containerd[1457]: time="2026-01-20T00:54:30.479329115Z" level=info msg="StopPodSandbox for \"cfd79e21e63d32b628d9ff36d1ca55addcd6e1005c578ff100ea80d36b937610\"" Jan 20 00:54:30.525759 kubelet[1762]: I0120 00:54:30.525341 1762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-5rd8d" podStartSLOduration=13.512875038 podStartE2EDuration="20.525323504s" podCreationTimestamp="2026-01-20 00:54:10 +0000 UTC" firstStartedPulling="2026-01-20 00:54:12.541879439 +0000 UTC m=+2.675387499" lastFinishedPulling="2026-01-20 00:54:19.554327905 +0000 UTC m=+9.687835965" observedRunningTime="2026-01-20 00:54:20.52628983 +0000 UTC m=+10.659797900" watchObservedRunningTime="2026-01-20 00:54:30.525323504 +0000 UTC m=+20.658831564" Jan 20 00:54:30.567140 containerd[1457]: 2026-01-20 00:54:30.525 [INFO][2634] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cfd79e21e63d32b628d9ff36d1ca55addcd6e1005c578ff100ea80d36b937610" Jan 20 00:54:30.567140 containerd[1457]: 2026-01-20 00:54:30.525 [INFO][2634] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="cfd79e21e63d32b628d9ff36d1ca55addcd6e1005c578ff100ea80d36b937610" iface="eth0" netns="/var/run/netns/cni-d60cd2d1-fe2d-6bb7-878c-75f9e860dfe0" Jan 20 00:54:30.567140 containerd[1457]: 2026-01-20 00:54:30.525 [INFO][2634] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="cfd79e21e63d32b628d9ff36d1ca55addcd6e1005c578ff100ea80d36b937610" iface="eth0" netns="/var/run/netns/cni-d60cd2d1-fe2d-6bb7-878c-75f9e860dfe0" Jan 20 00:54:30.567140 containerd[1457]: 2026-01-20 00:54:30.526 [INFO][2634] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="cfd79e21e63d32b628d9ff36d1ca55addcd6e1005c578ff100ea80d36b937610" iface="eth0" netns="/var/run/netns/cni-d60cd2d1-fe2d-6bb7-878c-75f9e860dfe0" Jan 20 00:54:30.567140 containerd[1457]: 2026-01-20 00:54:30.526 [INFO][2634] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cfd79e21e63d32b628d9ff36d1ca55addcd6e1005c578ff100ea80d36b937610" Jan 20 00:54:30.567140 containerd[1457]: 2026-01-20 00:54:30.526 [INFO][2634] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cfd79e21e63d32b628d9ff36d1ca55addcd6e1005c578ff100ea80d36b937610" Jan 20 00:54:30.567140 containerd[1457]: 2026-01-20 00:54:30.548 [INFO][2643] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="cfd79e21e63d32b628d9ff36d1ca55addcd6e1005c578ff100ea80d36b937610" HandleID="k8s-pod-network.cfd79e21e63d32b628d9ff36d1ca55addcd6e1005c578ff100ea80d36b937610" Workload="10.0.0.161-k8s-csi--node--driver--dtc4t-eth0" Jan 20 00:54:30.567140 containerd[1457]: 2026-01-20 00:54:30.549 [INFO][2643] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:54:30.567140 containerd[1457]: 2026-01-20 00:54:30.549 [INFO][2643] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 00:54:30.567140 containerd[1457]: 2026-01-20 00:54:30.558 [WARNING][2643] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cfd79e21e63d32b628d9ff36d1ca55addcd6e1005c578ff100ea80d36b937610" HandleID="k8s-pod-network.cfd79e21e63d32b628d9ff36d1ca55addcd6e1005c578ff100ea80d36b937610" Workload="10.0.0.161-k8s-csi--node--driver--dtc4t-eth0" Jan 20 00:54:30.567140 containerd[1457]: 2026-01-20 00:54:30.558 [INFO][2643] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="cfd79e21e63d32b628d9ff36d1ca55addcd6e1005c578ff100ea80d36b937610" HandleID="k8s-pod-network.cfd79e21e63d32b628d9ff36d1ca55addcd6e1005c578ff100ea80d36b937610" Workload="10.0.0.161-k8s-csi--node--driver--dtc4t-eth0" Jan 20 00:54:30.567140 containerd[1457]: 2026-01-20 00:54:30.560 [INFO][2643] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 00:54:30.567140 containerd[1457]: 2026-01-20 00:54:30.564 [INFO][2634] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cfd79e21e63d32b628d9ff36d1ca55addcd6e1005c578ff100ea80d36b937610" Jan 20 00:54:30.568823 systemd[1]: run-netns-cni\x2dd60cd2d1\x2dfe2d\x2d6bb7\x2d878c\x2d75f9e860dfe0.mount: Deactivated successfully. 
Jan 20 00:54:30.569479 containerd[1457]: time="2026-01-20T00:54:30.569441076Z" level=info msg="TearDown network for sandbox \"cfd79e21e63d32b628d9ff36d1ca55addcd6e1005c578ff100ea80d36b937610\" successfully" Jan 20 00:54:30.569555 containerd[1457]: time="2026-01-20T00:54:30.569479158Z" level=info msg="StopPodSandbox for \"cfd79e21e63d32b628d9ff36d1ca55addcd6e1005c578ff100ea80d36b937610\" returns successfully" Jan 20 00:54:30.570203 containerd[1457]: time="2026-01-20T00:54:30.570167953Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dtc4t,Uid:cabaf7fc-f028-4962-a01c-5241ddd73130,Namespace:calico-system,Attempt:1,}" Jan 20 00:54:30.680538 systemd-networkd[1386]: cali3e76a0bee1a: Link UP Jan 20 00:54:30.680876 systemd-networkd[1386]: cali3e76a0bee1a: Gained carrier Jan 20 00:54:30.693922 containerd[1457]: 2026-01-20 00:54:30.615 [INFO][2652] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.161-k8s-csi--node--driver--dtc4t-eth0 csi-node-driver- calico-system cabaf7fc-f028-4962-a01c-5241ddd73130 1239 0 2026-01-20 00:54:10 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 10.0.0.161 csi-node-driver-dtc4t eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali3e76a0bee1a [] [] }} ContainerID="11d92d000dcd52d3a9e8f16a75582d7d1824d75c5374a788385b5b68513ee20a" Namespace="calico-system" Pod="csi-node-driver-dtc4t" WorkloadEndpoint="10.0.0.161-k8s-csi--node--driver--dtc4t-" Jan 20 00:54:30.693922 containerd[1457]: 2026-01-20 00:54:30.615 [INFO][2652] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="11d92d000dcd52d3a9e8f16a75582d7d1824d75c5374a788385b5b68513ee20a" 
Namespace="calico-system" Pod="csi-node-driver-dtc4t" WorkloadEndpoint="10.0.0.161-k8s-csi--node--driver--dtc4t-eth0" Jan 20 00:54:30.693922 containerd[1457]: 2026-01-20 00:54:30.642 [INFO][2666] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="11d92d000dcd52d3a9e8f16a75582d7d1824d75c5374a788385b5b68513ee20a" HandleID="k8s-pod-network.11d92d000dcd52d3a9e8f16a75582d7d1824d75c5374a788385b5b68513ee20a" Workload="10.0.0.161-k8s-csi--node--driver--dtc4t-eth0" Jan 20 00:54:30.693922 containerd[1457]: 2026-01-20 00:54:30.642 [INFO][2666] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="11d92d000dcd52d3a9e8f16a75582d7d1824d75c5374a788385b5b68513ee20a" HandleID="k8s-pod-network.11d92d000dcd52d3a9e8f16a75582d7d1824d75c5374a788385b5b68513ee20a" Workload="10.0.0.161-k8s-csi--node--driver--dtc4t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004edf0), Attrs:map[string]string{"namespace":"calico-system", "node":"10.0.0.161", "pod":"csi-node-driver-dtc4t", "timestamp":"2026-01-20 00:54:30.642087797 +0000 UTC"}, Hostname:"10.0.0.161", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 00:54:30.693922 containerd[1457]: 2026-01-20 00:54:30.642 [INFO][2666] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:54:30.693922 containerd[1457]: 2026-01-20 00:54:30.642 [INFO][2666] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 20 00:54:30.693922 containerd[1457]: 2026-01-20 00:54:30.642 [INFO][2666] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.161' Jan 20 00:54:30.693922 containerd[1457]: 2026-01-20 00:54:30.650 [INFO][2666] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.11d92d000dcd52d3a9e8f16a75582d7d1824d75c5374a788385b5b68513ee20a" host="10.0.0.161" Jan 20 00:54:30.693922 containerd[1457]: 2026-01-20 00:54:30.655 [INFO][2666] ipam/ipam.go 394: Looking up existing affinities for host host="10.0.0.161" Jan 20 00:54:30.693922 containerd[1457]: 2026-01-20 00:54:30.660 [INFO][2666] ipam/ipam.go 511: Trying affinity for 192.168.64.0/26 host="10.0.0.161" Jan 20 00:54:30.693922 containerd[1457]: 2026-01-20 00:54:30.662 [INFO][2666] ipam/ipam.go 158: Attempting to load block cidr=192.168.64.0/26 host="10.0.0.161" Jan 20 00:54:30.693922 containerd[1457]: 2026-01-20 00:54:30.665 [INFO][2666] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.64.0/26 host="10.0.0.161" Jan 20 00:54:30.693922 containerd[1457]: 2026-01-20 00:54:30.665 [INFO][2666] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.64.0/26 handle="k8s-pod-network.11d92d000dcd52d3a9e8f16a75582d7d1824d75c5374a788385b5b68513ee20a" host="10.0.0.161" Jan 20 00:54:30.693922 containerd[1457]: 2026-01-20 00:54:30.666 [INFO][2666] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.11d92d000dcd52d3a9e8f16a75582d7d1824d75c5374a788385b5b68513ee20a Jan 20 00:54:30.693922 containerd[1457]: 2026-01-20 00:54:30.670 [INFO][2666] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.64.0/26 handle="k8s-pod-network.11d92d000dcd52d3a9e8f16a75582d7d1824d75c5374a788385b5b68513ee20a" host="10.0.0.161" Jan 20 00:54:30.693922 containerd[1457]: 2026-01-20 00:54:30.674 [INFO][2666] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.64.1/26] block=192.168.64.0/26 
handle="k8s-pod-network.11d92d000dcd52d3a9e8f16a75582d7d1824d75c5374a788385b5b68513ee20a" host="10.0.0.161" Jan 20 00:54:30.693922 containerd[1457]: 2026-01-20 00:54:30.674 [INFO][2666] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.64.1/26] handle="k8s-pod-network.11d92d000dcd52d3a9e8f16a75582d7d1824d75c5374a788385b5b68513ee20a" host="10.0.0.161" Jan 20 00:54:30.693922 containerd[1457]: 2026-01-20 00:54:30.674 [INFO][2666] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 00:54:30.693922 containerd[1457]: 2026-01-20 00:54:30.674 [INFO][2666] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.64.1/26] IPv6=[] ContainerID="11d92d000dcd52d3a9e8f16a75582d7d1824d75c5374a788385b5b68513ee20a" HandleID="k8s-pod-network.11d92d000dcd52d3a9e8f16a75582d7d1824d75c5374a788385b5b68513ee20a" Workload="10.0.0.161-k8s-csi--node--driver--dtc4t-eth0" Jan 20 00:54:30.694457 containerd[1457]: 2026-01-20 00:54:30.677 [INFO][2652] cni-plugin/k8s.go 418: Populated endpoint ContainerID="11d92d000dcd52d3a9e8f16a75582d7d1824d75c5374a788385b5b68513ee20a" Namespace="calico-system" Pod="csi-node-driver-dtc4t" WorkloadEndpoint="10.0.0.161-k8s-csi--node--driver--dtc4t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.161-k8s-csi--node--driver--dtc4t-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"cabaf7fc-f028-4962-a01c-5241ddd73130", ResourceVersion:"1239", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 54, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.161", ContainerID:"", Pod:"csi-node-driver-dtc4t", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.64.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3e76a0bee1a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:54:30.694457 containerd[1457]: 2026-01-20 00:54:30.677 [INFO][2652] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.64.1/32] ContainerID="11d92d000dcd52d3a9e8f16a75582d7d1824d75c5374a788385b5b68513ee20a" Namespace="calico-system" Pod="csi-node-driver-dtc4t" WorkloadEndpoint="10.0.0.161-k8s-csi--node--driver--dtc4t-eth0" Jan 20 00:54:30.694457 containerd[1457]: 2026-01-20 00:54:30.677 [INFO][2652] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3e76a0bee1a ContainerID="11d92d000dcd52d3a9e8f16a75582d7d1824d75c5374a788385b5b68513ee20a" Namespace="calico-system" Pod="csi-node-driver-dtc4t" WorkloadEndpoint="10.0.0.161-k8s-csi--node--driver--dtc4t-eth0" Jan 20 00:54:30.694457 containerd[1457]: 2026-01-20 00:54:30.680 [INFO][2652] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="11d92d000dcd52d3a9e8f16a75582d7d1824d75c5374a788385b5b68513ee20a" Namespace="calico-system" Pod="csi-node-driver-dtc4t" WorkloadEndpoint="10.0.0.161-k8s-csi--node--driver--dtc4t-eth0" Jan 20 00:54:30.694457 containerd[1457]: 2026-01-20 00:54:30.681 [INFO][2652] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="11d92d000dcd52d3a9e8f16a75582d7d1824d75c5374a788385b5b68513ee20a" 
Namespace="calico-system" Pod="csi-node-driver-dtc4t" WorkloadEndpoint="10.0.0.161-k8s-csi--node--driver--dtc4t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.161-k8s-csi--node--driver--dtc4t-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"cabaf7fc-f028-4962-a01c-5241ddd73130", ResourceVersion:"1239", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 54, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.161", ContainerID:"11d92d000dcd52d3a9e8f16a75582d7d1824d75c5374a788385b5b68513ee20a", Pod:"csi-node-driver-dtc4t", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.64.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3e76a0bee1a", MAC:"5e:2f:ad:f1:f8:44", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:54:30.694457 containerd[1457]: 2026-01-20 00:54:30.690 [INFO][2652] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="11d92d000dcd52d3a9e8f16a75582d7d1824d75c5374a788385b5b68513ee20a" Namespace="calico-system" Pod="csi-node-driver-dtc4t" 
WorkloadEndpoint="10.0.0.161-k8s-csi--node--driver--dtc4t-eth0" Jan 20 00:54:30.715001 containerd[1457]: time="2026-01-20T00:54:30.714918409Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 00:54:30.715113 containerd[1457]: time="2026-01-20T00:54:30.714976467Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 00:54:30.715113 containerd[1457]: time="2026-01-20T00:54:30.714988570Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:54:30.715113 containerd[1457]: time="2026-01-20T00:54:30.715055295Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:54:30.737877 systemd[1]: Started cri-containerd-11d92d000dcd52d3a9e8f16a75582d7d1824d75c5374a788385b5b68513ee20a.scope - libcontainer container 11d92d000dcd52d3a9e8f16a75582d7d1824d75c5374a788385b5b68513ee20a. 
Jan 20 00:54:30.748494 systemd-resolved[1390]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 20 00:54:30.760613 containerd[1457]: time="2026-01-20T00:54:30.760544457Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dtc4t,Uid:cabaf7fc-f028-4962-a01c-5241ddd73130,Namespace:calico-system,Attempt:1,} returns sandbox id \"11d92d000dcd52d3a9e8f16a75582d7d1824d75c5374a788385b5b68513ee20a\"" Jan 20 00:54:30.762341 containerd[1457]: time="2026-01-20T00:54:30.762309994Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 20 00:54:30.827317 containerd[1457]: time="2026-01-20T00:54:30.827175801Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 00:54:30.828813 containerd[1457]: time="2026-01-20T00:54:30.828575740Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 20 00:54:30.828813 containerd[1457]: time="2026-01-20T00:54:30.828784663Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 20 00:54:30.829121 kubelet[1762]: E0120 00:54:30.829048 1762 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 00:54:30.829121 kubelet[1762]: E0120 00:54:30.829116 1762 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 00:54:30.829335 kubelet[1762]: E0120 00:54:30.829256 1762 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h6mnh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupPr
obe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-dtc4t_calico-system(cabaf7fc-f028-4962-a01c-5241ddd73130): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 20 00:54:30.831486 containerd[1457]: time="2026-01-20T00:54:30.831379065Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 20 00:54:30.888483 containerd[1457]: time="2026-01-20T00:54:30.888357536Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 00:54:30.889915 containerd[1457]: time="2026-01-20T00:54:30.889840334Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 20 00:54:30.889972 containerd[1457]: time="2026-01-20T00:54:30.889900666Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 20 00:54:30.890080 kubelet[1762]: E0120 00:54:30.890020 1762 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 00:54:30.890080 kubelet[1762]: E0120 00:54:30.890063 1762 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound 
desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 00:54:30.890224 kubelet[1762]: E0120 00:54:30.890158 1762 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h6mnh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompPro
file:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-dtc4t_calico-system(cabaf7fc-f028-4962-a01c-5241ddd73130): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 20 00:54:30.892379 kubelet[1762]: E0120 00:54:30.892292 1762 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dtc4t" podUID="cabaf7fc-f028-4962-a01c-5241ddd73130" Jan 20 00:54:31.321803 kubelet[1762]: E0120 00:54:31.321570 1762 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:54:31.477788 containerd[1457]: time="2026-01-20T00:54:31.477713887Z" level=info msg="StopPodSandbox for \"34e3f151dfcc07295df82acbca5b3661fa51194f07c8ea911fa196bcdb975443\"" Jan 20 00:54:31.532457 kubelet[1762]: E0120 00:54:31.532389 
1762 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dtc4t" podUID="cabaf7fc-f028-4962-a01c-5241ddd73130" Jan 20 00:54:31.554932 containerd[1457]: 2026-01-20 00:54:31.518 [INFO][2741] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="34e3f151dfcc07295df82acbca5b3661fa51194f07c8ea911fa196bcdb975443" Jan 20 00:54:31.554932 containerd[1457]: 2026-01-20 00:54:31.518 [INFO][2741] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="34e3f151dfcc07295df82acbca5b3661fa51194f07c8ea911fa196bcdb975443" iface="eth0" netns="/var/run/netns/cni-3ea89506-1791-c6a2-930d-f8c8ce085839" Jan 20 00:54:31.554932 containerd[1457]: 2026-01-20 00:54:31.519 [INFO][2741] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="34e3f151dfcc07295df82acbca5b3661fa51194f07c8ea911fa196bcdb975443" iface="eth0" netns="/var/run/netns/cni-3ea89506-1791-c6a2-930d-f8c8ce085839" Jan 20 00:54:31.554932 containerd[1457]: 2026-01-20 00:54:31.519 [INFO][2741] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="34e3f151dfcc07295df82acbca5b3661fa51194f07c8ea911fa196bcdb975443" iface="eth0" netns="/var/run/netns/cni-3ea89506-1791-c6a2-930d-f8c8ce085839" Jan 20 00:54:31.554932 containerd[1457]: 2026-01-20 00:54:31.519 [INFO][2741] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="34e3f151dfcc07295df82acbca5b3661fa51194f07c8ea911fa196bcdb975443" Jan 20 00:54:31.554932 containerd[1457]: 2026-01-20 00:54:31.519 [INFO][2741] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="34e3f151dfcc07295df82acbca5b3661fa51194f07c8ea911fa196bcdb975443" Jan 20 00:54:31.554932 containerd[1457]: 2026-01-20 00:54:31.542 [INFO][2749] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="34e3f151dfcc07295df82acbca5b3661fa51194f07c8ea911fa196bcdb975443" HandleID="k8s-pod-network.34e3f151dfcc07295df82acbca5b3661fa51194f07c8ea911fa196bcdb975443" Workload="10.0.0.161-k8s-nginx--deployment--7fcdb87857--mz7sh-eth0" Jan 20 00:54:31.554932 containerd[1457]: 2026-01-20 00:54:31.542 [INFO][2749] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:54:31.554932 containerd[1457]: 2026-01-20 00:54:31.542 [INFO][2749] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 00:54:31.554932 containerd[1457]: 2026-01-20 00:54:31.548 [WARNING][2749] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="34e3f151dfcc07295df82acbca5b3661fa51194f07c8ea911fa196bcdb975443" HandleID="k8s-pod-network.34e3f151dfcc07295df82acbca5b3661fa51194f07c8ea911fa196bcdb975443" Workload="10.0.0.161-k8s-nginx--deployment--7fcdb87857--mz7sh-eth0" Jan 20 00:54:31.554932 containerd[1457]: 2026-01-20 00:54:31.548 [INFO][2749] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="34e3f151dfcc07295df82acbca5b3661fa51194f07c8ea911fa196bcdb975443" HandleID="k8s-pod-network.34e3f151dfcc07295df82acbca5b3661fa51194f07c8ea911fa196bcdb975443" Workload="10.0.0.161-k8s-nginx--deployment--7fcdb87857--mz7sh-eth0" Jan 20 00:54:31.554932 containerd[1457]: 2026-01-20 00:54:31.550 [INFO][2749] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 00:54:31.554932 containerd[1457]: 2026-01-20 00:54:31.552 [INFO][2741] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="34e3f151dfcc07295df82acbca5b3661fa51194f07c8ea911fa196bcdb975443" Jan 20 00:54:31.555557 containerd[1457]: time="2026-01-20T00:54:31.555174524Z" level=info msg="TearDown network for sandbox \"34e3f151dfcc07295df82acbca5b3661fa51194f07c8ea911fa196bcdb975443\" successfully" Jan 20 00:54:31.555557 containerd[1457]: time="2026-01-20T00:54:31.555202116Z" level=info msg="StopPodSandbox for \"34e3f151dfcc07295df82acbca5b3661fa51194f07c8ea911fa196bcdb975443\" returns successfully" Jan 20 00:54:31.555873 containerd[1457]: time="2026-01-20T00:54:31.555822266Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-mz7sh,Uid:602e1611-59a6-40f3-9181-fcec1768be11,Namespace:default,Attempt:1,}" Jan 20 00:54:31.570044 systemd[1]: run-netns-cni\x2d3ea89506\x2d1791\x2dc6a2\x2d930d\x2df8c8ce085839.mount: Deactivated successfully. 
Jan 20 00:54:31.672934 systemd-networkd[1386]: cali3cdc56cb06e: Link UP Jan 20 00:54:31.673398 systemd-networkd[1386]: cali3cdc56cb06e: Gained carrier Jan 20 00:54:31.685225 containerd[1457]: 2026-01-20 00:54:31.603 [INFO][2758] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.161-k8s-nginx--deployment--7fcdb87857--mz7sh-eth0 nginx-deployment-7fcdb87857- default 602e1611-59a6-40f3-9181-fcec1768be11 1253 0 2026-01-20 00:54:16 +0000 UTC map[app:nginx pod-template-hash:7fcdb87857 projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.0.0.161 nginx-deployment-7fcdb87857-mz7sh eth0 default [] [] [kns.default ksa.default.default] cali3cdc56cb06e [] [] }} ContainerID="445e466346e09ed9fecf4aa7777d93a51e647305a11af200015e44188787a856" Namespace="default" Pod="nginx-deployment-7fcdb87857-mz7sh" WorkloadEndpoint="10.0.0.161-k8s-nginx--deployment--7fcdb87857--mz7sh-" Jan 20 00:54:31.685225 containerd[1457]: 2026-01-20 00:54:31.603 [INFO][2758] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="445e466346e09ed9fecf4aa7777d93a51e647305a11af200015e44188787a856" Namespace="default" Pod="nginx-deployment-7fcdb87857-mz7sh" WorkloadEndpoint="10.0.0.161-k8s-nginx--deployment--7fcdb87857--mz7sh-eth0" Jan 20 00:54:31.685225 containerd[1457]: 2026-01-20 00:54:31.633 [INFO][2772] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="445e466346e09ed9fecf4aa7777d93a51e647305a11af200015e44188787a856" HandleID="k8s-pod-network.445e466346e09ed9fecf4aa7777d93a51e647305a11af200015e44188787a856" Workload="10.0.0.161-k8s-nginx--deployment--7fcdb87857--mz7sh-eth0" Jan 20 00:54:31.685225 containerd[1457]: 2026-01-20 00:54:31.633 [INFO][2772] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="445e466346e09ed9fecf4aa7777d93a51e647305a11af200015e44188787a856" 
HandleID="k8s-pod-network.445e466346e09ed9fecf4aa7777d93a51e647305a11af200015e44188787a856" Workload="10.0.0.161-k8s-nginx--deployment--7fcdb87857--mz7sh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4520), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.161", "pod":"nginx-deployment-7fcdb87857-mz7sh", "timestamp":"2026-01-20 00:54:31.633597325 +0000 UTC"}, Hostname:"10.0.0.161", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 00:54:31.685225 containerd[1457]: 2026-01-20 00:54:31.633 [INFO][2772] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:54:31.685225 containerd[1457]: 2026-01-20 00:54:31.633 [INFO][2772] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 00:54:31.685225 containerd[1457]: 2026-01-20 00:54:31.634 [INFO][2772] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.161' Jan 20 00:54:31.685225 containerd[1457]: 2026-01-20 00:54:31.641 [INFO][2772] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.445e466346e09ed9fecf4aa7777d93a51e647305a11af200015e44188787a856" host="10.0.0.161" Jan 20 00:54:31.685225 containerd[1457]: 2026-01-20 00:54:31.647 [INFO][2772] ipam/ipam.go 394: Looking up existing affinities for host host="10.0.0.161" Jan 20 00:54:31.685225 containerd[1457]: 2026-01-20 00:54:31.652 [INFO][2772] ipam/ipam.go 511: Trying affinity for 192.168.64.0/26 host="10.0.0.161" Jan 20 00:54:31.685225 containerd[1457]: 2026-01-20 00:54:31.654 [INFO][2772] ipam/ipam.go 158: Attempting to load block cidr=192.168.64.0/26 host="10.0.0.161" Jan 20 00:54:31.685225 containerd[1457]: 2026-01-20 00:54:31.656 [INFO][2772] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.64.0/26 host="10.0.0.161" Jan 20 00:54:31.685225 containerd[1457]: 2026-01-20 
00:54:31.656 [INFO][2772] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.64.0/26 handle="k8s-pod-network.445e466346e09ed9fecf4aa7777d93a51e647305a11af200015e44188787a856" host="10.0.0.161" Jan 20 00:54:31.685225 containerd[1457]: 2026-01-20 00:54:31.658 [INFO][2772] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.445e466346e09ed9fecf4aa7777d93a51e647305a11af200015e44188787a856 Jan 20 00:54:31.685225 containerd[1457]: 2026-01-20 00:54:31.662 [INFO][2772] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.64.0/26 handle="k8s-pod-network.445e466346e09ed9fecf4aa7777d93a51e647305a11af200015e44188787a856" host="10.0.0.161" Jan 20 00:54:31.685225 containerd[1457]: 2026-01-20 00:54:31.668 [INFO][2772] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.64.2/26] block=192.168.64.0/26 handle="k8s-pod-network.445e466346e09ed9fecf4aa7777d93a51e647305a11af200015e44188787a856" host="10.0.0.161" Jan 20 00:54:31.685225 containerd[1457]: 2026-01-20 00:54:31.668 [INFO][2772] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.64.2/26] handle="k8s-pod-network.445e466346e09ed9fecf4aa7777d93a51e647305a11af200015e44188787a856" host="10.0.0.161" Jan 20 00:54:31.685225 containerd[1457]: 2026-01-20 00:54:31.668 [INFO][2772] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 20 00:54:31.685225 containerd[1457]: 2026-01-20 00:54:31.668 [INFO][2772] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.64.2/26] IPv6=[] ContainerID="445e466346e09ed9fecf4aa7777d93a51e647305a11af200015e44188787a856" HandleID="k8s-pod-network.445e466346e09ed9fecf4aa7777d93a51e647305a11af200015e44188787a856" Workload="10.0.0.161-k8s-nginx--deployment--7fcdb87857--mz7sh-eth0" Jan 20 00:54:31.685900 containerd[1457]: 2026-01-20 00:54:31.670 [INFO][2758] cni-plugin/k8s.go 418: Populated endpoint ContainerID="445e466346e09ed9fecf4aa7777d93a51e647305a11af200015e44188787a856" Namespace="default" Pod="nginx-deployment-7fcdb87857-mz7sh" WorkloadEndpoint="10.0.0.161-k8s-nginx--deployment--7fcdb87857--mz7sh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.161-k8s-nginx--deployment--7fcdb87857--mz7sh-eth0", GenerateName:"nginx-deployment-7fcdb87857-", Namespace:"default", SelfLink:"", UID:"602e1611-59a6-40f3-9181-fcec1768be11", ResourceVersion:"1253", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 54, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"7fcdb87857", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.161", ContainerID:"", Pod:"nginx-deployment-7fcdb87857-mz7sh", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.64.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali3cdc56cb06e", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:54:31.685900 containerd[1457]: 2026-01-20 00:54:31.670 [INFO][2758] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.64.2/32] ContainerID="445e466346e09ed9fecf4aa7777d93a51e647305a11af200015e44188787a856" Namespace="default" Pod="nginx-deployment-7fcdb87857-mz7sh" WorkloadEndpoint="10.0.0.161-k8s-nginx--deployment--7fcdb87857--mz7sh-eth0" Jan 20 00:54:31.685900 containerd[1457]: 2026-01-20 00:54:31.670 [INFO][2758] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3cdc56cb06e ContainerID="445e466346e09ed9fecf4aa7777d93a51e647305a11af200015e44188787a856" Namespace="default" Pod="nginx-deployment-7fcdb87857-mz7sh" WorkloadEndpoint="10.0.0.161-k8s-nginx--deployment--7fcdb87857--mz7sh-eth0" Jan 20 00:54:31.685900 containerd[1457]: 2026-01-20 00:54:31.673 [INFO][2758] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="445e466346e09ed9fecf4aa7777d93a51e647305a11af200015e44188787a856" Namespace="default" Pod="nginx-deployment-7fcdb87857-mz7sh" WorkloadEndpoint="10.0.0.161-k8s-nginx--deployment--7fcdb87857--mz7sh-eth0" Jan 20 00:54:31.685900 containerd[1457]: 2026-01-20 00:54:31.673 [INFO][2758] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="445e466346e09ed9fecf4aa7777d93a51e647305a11af200015e44188787a856" Namespace="default" Pod="nginx-deployment-7fcdb87857-mz7sh" WorkloadEndpoint="10.0.0.161-k8s-nginx--deployment--7fcdb87857--mz7sh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.161-k8s-nginx--deployment--7fcdb87857--mz7sh-eth0", GenerateName:"nginx-deployment-7fcdb87857-", Namespace:"default", SelfLink:"", UID:"602e1611-59a6-40f3-9181-fcec1768be11", ResourceVersion:"1253", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 
0, 54, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"7fcdb87857", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.161", ContainerID:"445e466346e09ed9fecf4aa7777d93a51e647305a11af200015e44188787a856", Pod:"nginx-deployment-7fcdb87857-mz7sh", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.64.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali3cdc56cb06e", MAC:"e2:0c:9c:85:5f:df", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:54:31.685900 containerd[1457]: 2026-01-20 00:54:31.682 [INFO][2758] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="445e466346e09ed9fecf4aa7777d93a51e647305a11af200015e44188787a856" Namespace="default" Pod="nginx-deployment-7fcdb87857-mz7sh" WorkloadEndpoint="10.0.0.161-k8s-nginx--deployment--7fcdb87857--mz7sh-eth0" Jan 20 00:54:31.705015 containerd[1457]: time="2026-01-20T00:54:31.704828051Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 00:54:31.705015 containerd[1457]: time="2026-01-20T00:54:31.704899054Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 00:54:31.705015 containerd[1457]: time="2026-01-20T00:54:31.704913491Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:54:31.705015 containerd[1457]: time="2026-01-20T00:54:31.705011414Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:54:31.733865 systemd[1]: Started cri-containerd-445e466346e09ed9fecf4aa7777d93a51e647305a11af200015e44188787a856.scope - libcontainer container 445e466346e09ed9fecf4aa7777d93a51e647305a11af200015e44188787a856. Jan 20 00:54:31.745324 systemd-resolved[1390]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 20 00:54:31.769737 containerd[1457]: time="2026-01-20T00:54:31.769595695Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-mz7sh,Uid:602e1611-59a6-40f3-9181-fcec1768be11,Namespace:default,Attempt:1,} returns sandbox id \"445e466346e09ed9fecf4aa7777d93a51e647305a11af200015e44188787a856\"" Jan 20 00:54:31.771824 containerd[1457]: time="2026-01-20T00:54:31.771530506Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 20 00:54:32.322533 kubelet[1762]: E0120 00:54:32.322220 1762 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:54:32.334950 kubelet[1762]: I0120 00:54:32.334919 1762 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 20 00:54:32.335837 kubelet[1762]: E0120 00:54:32.335312 1762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:54:32.506931 systemd-networkd[1386]: cali3e76a0bee1a: Gained IPv6LL Jan 20 00:54:32.535069 kubelet[1762]: E0120 00:54:32.534528 1762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:54:32.539768 
kubelet[1762]: E0120 00:54:32.539711 1762 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dtc4t" podUID="cabaf7fc-f028-4962-a01c-5241ddd73130" Jan 20 00:54:33.176010 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount223510225.mount: Deactivated successfully. 
Jan 20 00:54:33.323252 kubelet[1762]: E0120 00:54:33.323205 1762 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:54:33.595857 systemd-networkd[1386]: cali3cdc56cb06e: Gained IPv6LL Jan 20 00:54:33.950628 containerd[1457]: time="2026-01-20T00:54:33.950489475Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:54:33.951495 containerd[1457]: time="2026-01-20T00:54:33.951442159Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=63836480" Jan 20 00:54:33.952773 containerd[1457]: time="2026-01-20T00:54:33.952721929Z" level=info msg="ImageCreate event name:\"sha256:94a30ed18e45363486a55379bd1fbb8c479f36524ae816779f3553afe6b787ed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:54:33.955412 containerd[1457]: time="2026-01-20T00:54:33.955352702Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:32c5137cb8c7cf61e75836f150e983b9be21fecc642ada89fd936c8cd6c0faa0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:54:33.956469 containerd[1457]: time="2026-01-20T00:54:33.956414417Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:94a30ed18e45363486a55379bd1fbb8c479f36524ae816779f3553afe6b787ed\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:32c5137cb8c7cf61e75836f150e983b9be21fecc642ada89fd936c8cd6c0faa0\", size \"63836358\" in 2.184855297s" Jan 20 00:54:33.956469 containerd[1457]: time="2026-01-20T00:54:33.956459000Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:94a30ed18e45363486a55379bd1fbb8c479f36524ae816779f3553afe6b787ed\"" Jan 20 00:54:33.961181 containerd[1457]: time="2026-01-20T00:54:33.961128337Z" level=info msg="CreateContainer within sandbox 
\"445e466346e09ed9fecf4aa7777d93a51e647305a11af200015e44188787a856\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Jan 20 00:54:33.977190 containerd[1457]: time="2026-01-20T00:54:33.977125003Z" level=info msg="CreateContainer within sandbox \"445e466346e09ed9fecf4aa7777d93a51e647305a11af200015e44188787a856\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"592b9f782a2328a4db3ce751b36cb8fe571dd4dfd86c2c9ed28934ab4d2fcdd5\"" Jan 20 00:54:33.977761 containerd[1457]: time="2026-01-20T00:54:33.977732437Z" level=info msg="StartContainer for \"592b9f782a2328a4db3ce751b36cb8fe571dd4dfd86c2c9ed28934ab4d2fcdd5\"" Jan 20 00:54:34.051866 systemd[1]: Started cri-containerd-592b9f782a2328a4db3ce751b36cb8fe571dd4dfd86c2c9ed28934ab4d2fcdd5.scope - libcontainer container 592b9f782a2328a4db3ce751b36cb8fe571dd4dfd86c2c9ed28934ab4d2fcdd5. Jan 20 00:54:34.078873 containerd[1457]: time="2026-01-20T00:54:34.078808036Z" level=info msg="StartContainer for \"592b9f782a2328a4db3ce751b36cb8fe571dd4dfd86c2c9ed28934ab4d2fcdd5\" returns successfully" Jan 20 00:54:34.324327 kubelet[1762]: E0120 00:54:34.324129 1762 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:54:34.552945 kubelet[1762]: I0120 00:54:34.552806 1762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-7fcdb87857-mz7sh" podStartSLOduration=16.366581458 podStartE2EDuration="18.552789263s" podCreationTimestamp="2026-01-20 00:54:16 +0000 UTC" firstStartedPulling="2026-01-20 00:54:31.771269819 +0000 UTC m=+21.904777879" lastFinishedPulling="2026-01-20 00:54:33.957477624 +0000 UTC m=+24.090985684" observedRunningTime="2026-01-20 00:54:34.552285472 +0000 UTC m=+24.685793542" watchObservedRunningTime="2026-01-20 00:54:34.552789263 +0000 UTC m=+24.686297333" Jan 20 00:54:35.325350 kubelet[1762]: E0120 00:54:35.325205 1762 file_linux.go:61] "Unable to read config path" err="path does not 
exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:54:36.325886 kubelet[1762]: E0120 00:54:36.325812 1762 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:54:37.326146 kubelet[1762]: E0120 00:54:37.326000 1762 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:54:38.327203 kubelet[1762]: E0120 00:54:38.327094 1762 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:54:38.628297 systemd[1]: Created slice kubepods-besteffort-podb9b610d6_f883_4a0e_ae3b_cb192fd23999.slice - libcontainer container kubepods-besteffort-podb9b610d6_f883_4a0e_ae3b_cb192fd23999.slice. Jan 20 00:54:38.722814 kubelet[1762]: I0120 00:54:38.722732 1762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mk22g\" (UniqueName: \"kubernetes.io/projected/b9b610d6-f883-4a0e-ae3b-cb192fd23999-kube-api-access-mk22g\") pod \"nfs-server-provisioner-0\" (UID: \"b9b610d6-f883-4a0e-ae3b-cb192fd23999\") " pod="default/nfs-server-provisioner-0" Jan 20 00:54:38.722814 kubelet[1762]: I0120 00:54:38.722813 1762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/b9b610d6-f883-4a0e-ae3b-cb192fd23999-data\") pod \"nfs-server-provisioner-0\" (UID: \"b9b610d6-f883-4a0e-ae3b-cb192fd23999\") " pod="default/nfs-server-provisioner-0" Jan 20 00:54:38.932182 containerd[1457]: time="2026-01-20T00:54:38.932125153Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:b9b610d6-f883-4a0e-ae3b-cb192fd23999,Namespace:default,Attempt:0,}" Jan 20 00:54:39.050110 systemd-networkd[1386]: cali60e51b789ff: Link UP Jan 20 00:54:39.051979 systemd-networkd[1386]: cali60e51b789ff: Gained carrier Jan 20 00:54:39.063881 
containerd[1457]: 2026-01-20 00:54:38.974 [INFO][2979] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.161-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default b9b610d6-f883-4a0e-ae3b-cb192fd23999 1324 0 2026-01-20 00:54:38 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 10.0.0.161 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] [] }} ContainerID="59575cee6577ea9809a50b52c9d9d289041fbc474b0d817ba784d20f91285798" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.161-k8s-nfs--server--provisioner--0-" Jan 20 00:54:39.063881 containerd[1457]: 2026-01-20 00:54:38.974 [INFO][2979] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="59575cee6577ea9809a50b52c9d9d289041fbc474b0d817ba784d20f91285798" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.161-k8s-nfs--server--provisioner--0-eth0" Jan 20 00:54:39.063881 containerd[1457]: 2026-01-20 00:54:39.001 [INFO][2994] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="59575cee6577ea9809a50b52c9d9d289041fbc474b0d817ba784d20f91285798" HandleID="k8s-pod-network.59575cee6577ea9809a50b52c9d9d289041fbc474b0d817ba784d20f91285798" 
Workload="10.0.0.161-k8s-nfs--server--provisioner--0-eth0" Jan 20 00:54:39.063881 containerd[1457]: 2026-01-20 00:54:39.002 [INFO][2994] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="59575cee6577ea9809a50b52c9d9d289041fbc474b0d817ba784d20f91285798" HandleID="k8s-pod-network.59575cee6577ea9809a50b52c9d9d289041fbc474b0d817ba784d20f91285798" Workload="10.0.0.161-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004a0bb0), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.161", "pod":"nfs-server-provisioner-0", "timestamp":"2026-01-20 00:54:39.00132285 +0000 UTC"}, Hostname:"10.0.0.161", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 00:54:39.063881 containerd[1457]: 2026-01-20 00:54:39.003 [INFO][2994] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:54:39.063881 containerd[1457]: 2026-01-20 00:54:39.003 [INFO][2994] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 20 00:54:39.063881 containerd[1457]: 2026-01-20 00:54:39.003 [INFO][2994] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.161' Jan 20 00:54:39.063881 containerd[1457]: 2026-01-20 00:54:39.014 [INFO][2994] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.59575cee6577ea9809a50b52c9d9d289041fbc474b0d817ba784d20f91285798" host="10.0.0.161" Jan 20 00:54:39.063881 containerd[1457]: 2026-01-20 00:54:39.021 [INFO][2994] ipam/ipam.go 394: Looking up existing affinities for host host="10.0.0.161" Jan 20 00:54:39.063881 containerd[1457]: 2026-01-20 00:54:39.028 [INFO][2994] ipam/ipam.go 511: Trying affinity for 192.168.64.0/26 host="10.0.0.161" Jan 20 00:54:39.063881 containerd[1457]: 2026-01-20 00:54:39.031 [INFO][2994] ipam/ipam.go 158: Attempting to load block cidr=192.168.64.0/26 host="10.0.0.161" Jan 20 00:54:39.063881 containerd[1457]: 2026-01-20 00:54:39.033 [INFO][2994] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.64.0/26 host="10.0.0.161" Jan 20 00:54:39.063881 containerd[1457]: 2026-01-20 00:54:39.033 [INFO][2994] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.64.0/26 handle="k8s-pod-network.59575cee6577ea9809a50b52c9d9d289041fbc474b0d817ba784d20f91285798" host="10.0.0.161" Jan 20 00:54:39.063881 containerd[1457]: 2026-01-20 00:54:39.035 [INFO][2994] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.59575cee6577ea9809a50b52c9d9d289041fbc474b0d817ba784d20f91285798 Jan 20 00:54:39.063881 containerd[1457]: 2026-01-20 00:54:39.039 [INFO][2994] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.64.0/26 handle="k8s-pod-network.59575cee6577ea9809a50b52c9d9d289041fbc474b0d817ba784d20f91285798" host="10.0.0.161" Jan 20 00:54:39.063881 containerd[1457]: 2026-01-20 00:54:39.045 [INFO][2994] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.64.3/26] block=192.168.64.0/26 
handle="k8s-pod-network.59575cee6577ea9809a50b52c9d9d289041fbc474b0d817ba784d20f91285798" host="10.0.0.161" Jan 20 00:54:39.063881 containerd[1457]: 2026-01-20 00:54:39.045 [INFO][2994] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.64.3/26] handle="k8s-pod-network.59575cee6577ea9809a50b52c9d9d289041fbc474b0d817ba784d20f91285798" host="10.0.0.161" Jan 20 00:54:39.063881 containerd[1457]: 2026-01-20 00:54:39.045 [INFO][2994] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 00:54:39.063881 containerd[1457]: 2026-01-20 00:54:39.045 [INFO][2994] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.64.3/26] IPv6=[] ContainerID="59575cee6577ea9809a50b52c9d9d289041fbc474b0d817ba784d20f91285798" HandleID="k8s-pod-network.59575cee6577ea9809a50b52c9d9d289041fbc474b0d817ba784d20f91285798" Workload="10.0.0.161-k8s-nfs--server--provisioner--0-eth0" Jan 20 00:54:39.064430 containerd[1457]: 2026-01-20 00:54:39.047 [INFO][2979] cni-plugin/k8s.go 418: Populated endpoint ContainerID="59575cee6577ea9809a50b52c9d9d289041fbc474b0d817ba784d20f91285798" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.161-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.161-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"b9b610d6-f883-4a0e-ae3b-cb192fd23999", ResourceVersion:"1324", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 54, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.161", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.64.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:54:39.064430 containerd[1457]: 2026-01-20 00:54:39.048 [INFO][2979] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.64.3/32] ContainerID="59575cee6577ea9809a50b52c9d9d289041fbc474b0d817ba784d20f91285798" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.161-k8s-nfs--server--provisioner--0-eth0" Jan 20 00:54:39.064430 containerd[1457]: 2026-01-20 00:54:39.048 [INFO][2979] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="59575cee6577ea9809a50b52c9d9d289041fbc474b0d817ba784d20f91285798" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.161-k8s-nfs--server--provisioner--0-eth0" Jan 20 00:54:39.064430 containerd[1457]: 2026-01-20 00:54:39.050 [INFO][2979] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="59575cee6577ea9809a50b52c9d9d289041fbc474b0d817ba784d20f91285798" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.161-k8s-nfs--server--provisioner--0-eth0" Jan 20 00:54:39.064574 containerd[1457]: 2026-01-20 00:54:39.052 [INFO][2979] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="59575cee6577ea9809a50b52c9d9d289041fbc474b0d817ba784d20f91285798" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.161-k8s-nfs--server--provisioner--0-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.161-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"b9b610d6-f883-4a0e-ae3b-cb192fd23999", ResourceVersion:"1324", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 54, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.161", ContainerID:"59575cee6577ea9809a50b52c9d9d289041fbc474b0d817ba784d20f91285798", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.64.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"52:35:5c:a6:95:fb", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, 
Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:54:39.064574 containerd[1457]: 2026-01-20 00:54:39.060 [INFO][2979] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="59575cee6577ea9809a50b52c9d9d289041fbc474b0d817ba784d20f91285798" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.161-k8s-nfs--server--provisioner--0-eth0" Jan 20 00:54:39.111351 containerd[1457]: time="2026-01-20T00:54:39.111239594Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 00:54:39.112358 containerd[1457]: time="2026-01-20T00:54:39.112074183Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 00:54:39.112358 containerd[1457]: time="2026-01-20T00:54:39.112124668Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:54:39.112358 containerd[1457]: time="2026-01-20T00:54:39.112258157Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:54:39.135861 systemd[1]: Started cri-containerd-59575cee6577ea9809a50b52c9d9d289041fbc474b0d817ba784d20f91285798.scope - libcontainer container 59575cee6577ea9809a50b52c9d9d289041fbc474b0d817ba784d20f91285798. Jan 20 00:54:39.147813 systemd-resolved[1390]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 20 00:54:39.172927 containerd[1457]: time="2026-01-20T00:54:39.172846237Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:b9b610d6-f883-4a0e-ae3b-cb192fd23999,Namespace:default,Attempt:0,} returns sandbox id \"59575cee6577ea9809a50b52c9d9d289041fbc474b0d817ba784d20f91285798\"" Jan 20 00:54:39.174554 containerd[1457]: time="2026-01-20T00:54:39.174522621Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Jan 20 00:54:39.327481 kubelet[1762]: E0120 00:54:39.327339 1762 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:54:40.328461 kubelet[1762]: E0120 00:54:40.328406 1762 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:54:40.954927 systemd-networkd[1386]: cali60e51b789ff: Gained IPv6LL Jan 20 00:54:40.966306 
systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3203181448.mount: Deactivated successfully. Jan 20 00:54:41.329267 kubelet[1762]: E0120 00:54:41.329035 1762 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:54:42.330149 kubelet[1762]: E0120 00:54:42.330074 1762 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:54:42.602597 containerd[1457]: time="2026-01-20T00:54:42.602447738Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:54:42.603561 containerd[1457]: time="2026-01-20T00:54:42.603500181Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039406" Jan 20 00:54:42.604985 containerd[1457]: time="2026-01-20T00:54:42.604918806Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:54:42.607920 containerd[1457]: time="2026-01-20T00:54:42.607847160Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:54:42.608846 containerd[1457]: time="2026-01-20T00:54:42.608811298Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 3.434249525s" Jan 20 00:54:42.608896 containerd[1457]: 
time="2026-01-20T00:54:42.608847807Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Jan 20 00:54:42.613376 containerd[1457]: time="2026-01-20T00:54:42.613335320Z" level=info msg="CreateContainer within sandbox \"59575cee6577ea9809a50b52c9d9d289041fbc474b0d817ba784d20f91285798\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Jan 20 00:54:42.629049 containerd[1457]: time="2026-01-20T00:54:42.629002091Z" level=info msg="CreateContainer within sandbox \"59575cee6577ea9809a50b52c9d9d289041fbc474b0d817ba784d20f91285798\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"b5ae8469c7d7f6d536e61448bef309930af1f311ad31759280f0c9dd00745910\"" Jan 20 00:54:42.629549 containerd[1457]: time="2026-01-20T00:54:42.629502488Z" level=info msg="StartContainer for \"b5ae8469c7d7f6d536e61448bef309930af1f311ad31759280f0c9dd00745910\"" Jan 20 00:54:42.659846 systemd[1]: Started cri-containerd-b5ae8469c7d7f6d536e61448bef309930af1f311ad31759280f0c9dd00745910.scope - libcontainer container b5ae8469c7d7f6d536e61448bef309930af1f311ad31759280f0c9dd00745910. 
Jan 20 00:54:42.685985 containerd[1457]: time="2026-01-20T00:54:42.685944407Z" level=info msg="StartContainer for \"b5ae8469c7d7f6d536e61448bef309930af1f311ad31759280f0c9dd00745910\" returns successfully" Jan 20 00:54:43.330307 kubelet[1762]: E0120 00:54:43.330197 1762 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:54:43.571570 kubelet[1762]: I0120 00:54:43.571515 1762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=2.1359508 podStartE2EDuration="5.571502729s" podCreationTimestamp="2026-01-20 00:54:38 +0000 UTC" firstStartedPulling="2026-01-20 00:54:39.17417652 +0000 UTC m=+29.307684580" lastFinishedPulling="2026-01-20 00:54:42.60972845 +0000 UTC m=+32.743236509" observedRunningTime="2026-01-20 00:54:43.571278834 +0000 UTC m=+33.704786894" watchObservedRunningTime="2026-01-20 00:54:43.571502729 +0000 UTC m=+33.705010788" Jan 20 00:54:44.330439 kubelet[1762]: E0120 00:54:44.330306 1762 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:54:45.331236 kubelet[1762]: E0120 00:54:45.331154 1762 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:54:46.331855 kubelet[1762]: E0120 00:54:46.331773 1762 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:54:47.029888 update_engine[1451]: I20260120 00:54:47.029739 1451 update_attempter.cc:509] Updating boot flags... 
Jan 20 00:54:47.067731 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (3167) Jan 20 00:54:47.108776 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (3170) Jan 20 00:54:47.333130 kubelet[1762]: E0120 00:54:47.332985 1762 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:54:47.478562 containerd[1457]: time="2026-01-20T00:54:47.478522803Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 20 00:54:47.583932 containerd[1457]: time="2026-01-20T00:54:47.583758562Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 00:54:47.585513 containerd[1457]: time="2026-01-20T00:54:47.585457818Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 20 00:54:47.585599 containerd[1457]: time="2026-01-20T00:54:47.585487810Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 20 00:54:47.585876 kubelet[1762]: E0120 00:54:47.585826 1762 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 00:54:47.585936 kubelet[1762]: E0120 00:54:47.585890 1762 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 00:54:47.586142 kubelet[1762]: E0120 00:54:47.586072 1762 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h6mnh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePo
licy{},RestartPolicy:nil,} start failed in pod csi-node-driver-dtc4t_calico-system(cabaf7fc-f028-4962-a01c-5241ddd73130): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 20 00:54:47.587999 containerd[1457]: time="2026-01-20T00:54:47.587968904Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 20 00:54:47.646895 containerd[1457]: time="2026-01-20T00:54:47.646841293Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 00:54:47.648348 containerd[1457]: time="2026-01-20T00:54:47.648282363Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 20 00:54:47.648400 containerd[1457]: time="2026-01-20T00:54:47.648323684Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 20 00:54:47.648579 kubelet[1762]: E0120 00:54:47.648540 1762 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 00:54:47.648626 kubelet[1762]: E0120 00:54:47.648588 1762 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 00:54:47.648843 kubelet[1762]: E0120 00:54:47.648774 1762 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h6mnh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefaul
t,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-dtc4t_calico-system(cabaf7fc-f028-4962-a01c-5241ddd73130): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 20 00:54:47.650342 kubelet[1762]: E0120 00:54:47.650268 1762 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dtc4t" podUID="cabaf7fc-f028-4962-a01c-5241ddd73130" Jan 20 00:54:47.919319 systemd[1]: Created slice kubepods-besteffort-podfaa2c571_7864_4c14_b198_24c9799e7968.slice - libcontainer container kubepods-besteffort-podfaa2c571_7864_4c14_b198_24c9799e7968.slice. 
Jan 20 00:54:47.987958 kubelet[1762]: I0120 00:54:47.987883 1762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-0d63b0d9-321d-4572-8f7e-ecdf1d97cc20\" (UniqueName: \"kubernetes.io/nfs/faa2c571-7864-4c14-b198-24c9799e7968-pvc-0d63b0d9-321d-4572-8f7e-ecdf1d97cc20\") pod \"test-pod-1\" (UID: \"faa2c571-7864-4c14-b198-24c9799e7968\") " pod="default/test-pod-1" Jan 20 00:54:47.988072 kubelet[1762]: I0120 00:54:47.987931 1762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdhfw\" (UniqueName: \"kubernetes.io/projected/faa2c571-7864-4c14-b198-24c9799e7968-kube-api-access-pdhfw\") pod \"test-pod-1\" (UID: \"faa2c571-7864-4c14-b198-24c9799e7968\") " pod="default/test-pod-1" Jan 20 00:54:48.121785 kernel: FS-Cache: Loaded Jan 20 00:54:48.190105 kernel: RPC: Registered named UNIX socket transport module. Jan 20 00:54:48.190202 kernel: RPC: Registered udp transport module. Jan 20 00:54:48.190220 kernel: RPC: Registered tcp transport module. Jan 20 00:54:48.193023 kernel: RPC: Registered tcp-with-tls transport module. Jan 20 00:54:48.193112 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. 
Jan 20 00:54:48.333709 kubelet[1762]: E0120 00:54:48.333608 1762 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:54:48.435899 kernel: NFS: Registering the id_resolver key type Jan 20 00:54:48.435977 kernel: Key type id_resolver registered Jan 20 00:54:48.436009 kernel: Key type id_legacy registered Jan 20 00:54:48.473250 nfsidmap[3191]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Jan 20 00:54:48.480373 nfsidmap[3194]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Jan 20 00:54:48.522186 containerd[1457]: time="2026-01-20T00:54:48.522122598Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:faa2c571-7864-4c14-b198-24c9799e7968,Namespace:default,Attempt:0,}" Jan 20 00:54:48.661390 systemd-networkd[1386]: cali5ec59c6bf6e: Link UP Jan 20 00:54:48.662422 systemd-networkd[1386]: cali5ec59c6bf6e: Gained carrier Jan 20 00:54:48.673718 containerd[1457]: 2026-01-20 00:54:48.594 [INFO][3197] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.161-k8s-test--pod--1-eth0 default faa2c571-7864-4c14-b198-24c9799e7968 1395 0 2026-01-20 00:54:38 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.0.0.161 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] [] }} ContainerID="bd28ee1664fd674a7e1c6583fa0901b6971799671797a820b899ba4025a8c3be" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.161-k8s-test--pod--1-" Jan 20 00:54:48.673718 containerd[1457]: 2026-01-20 00:54:48.594 [INFO][3197] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bd28ee1664fd674a7e1c6583fa0901b6971799671797a820b899ba4025a8c3be" Namespace="default" 
Pod="test-pod-1" WorkloadEndpoint="10.0.0.161-k8s-test--pod--1-eth0" Jan 20 00:54:48.673718 containerd[1457]: 2026-01-20 00:54:48.619 [INFO][3211] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bd28ee1664fd674a7e1c6583fa0901b6971799671797a820b899ba4025a8c3be" HandleID="k8s-pod-network.bd28ee1664fd674a7e1c6583fa0901b6971799671797a820b899ba4025a8c3be" Workload="10.0.0.161-k8s-test--pod--1-eth0" Jan 20 00:54:48.673718 containerd[1457]: 2026-01-20 00:54:48.619 [INFO][3211] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="bd28ee1664fd674a7e1c6583fa0901b6971799671797a820b899ba4025a8c3be" HandleID="k8s-pod-network.bd28ee1664fd674a7e1c6583fa0901b6971799671797a820b899ba4025a8c3be" Workload="10.0.0.161-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004ead0), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.161", "pod":"test-pod-1", "timestamp":"2026-01-20 00:54:48.619725214 +0000 UTC"}, Hostname:"10.0.0.161", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 00:54:48.673718 containerd[1457]: 2026-01-20 00:54:48.619 [INFO][3211] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:54:48.673718 containerd[1457]: 2026-01-20 00:54:48.619 [INFO][3211] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 20 00:54:48.673718 containerd[1457]: 2026-01-20 00:54:48.620 [INFO][3211] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.161' Jan 20 00:54:48.673718 containerd[1457]: 2026-01-20 00:54:48.627 [INFO][3211] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.bd28ee1664fd674a7e1c6583fa0901b6971799671797a820b899ba4025a8c3be" host="10.0.0.161" Jan 20 00:54:48.673718 containerd[1457]: 2026-01-20 00:54:48.632 [INFO][3211] ipam/ipam.go 394: Looking up existing affinities for host host="10.0.0.161" Jan 20 00:54:48.673718 containerd[1457]: 2026-01-20 00:54:48.637 [INFO][3211] ipam/ipam.go 511: Trying affinity for 192.168.64.0/26 host="10.0.0.161" Jan 20 00:54:48.673718 containerd[1457]: 2026-01-20 00:54:48.639 [INFO][3211] ipam/ipam.go 158: Attempting to load block cidr=192.168.64.0/26 host="10.0.0.161" Jan 20 00:54:48.673718 containerd[1457]: 2026-01-20 00:54:48.642 [INFO][3211] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.64.0/26 host="10.0.0.161" Jan 20 00:54:48.673718 containerd[1457]: 2026-01-20 00:54:48.642 [INFO][3211] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.64.0/26 handle="k8s-pod-network.bd28ee1664fd674a7e1c6583fa0901b6971799671797a820b899ba4025a8c3be" host="10.0.0.161" Jan 20 00:54:48.673718 containerd[1457]: 2026-01-20 00:54:48.643 [INFO][3211] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.bd28ee1664fd674a7e1c6583fa0901b6971799671797a820b899ba4025a8c3be Jan 20 00:54:48.673718 containerd[1457]: 2026-01-20 00:54:48.650 [INFO][3211] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.64.0/26 handle="k8s-pod-network.bd28ee1664fd674a7e1c6583fa0901b6971799671797a820b899ba4025a8c3be" host="10.0.0.161" Jan 20 00:54:48.673718 containerd[1457]: 2026-01-20 00:54:48.655 [INFO][3211] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.64.4/26] block=192.168.64.0/26 
handle="k8s-pod-network.bd28ee1664fd674a7e1c6583fa0901b6971799671797a820b899ba4025a8c3be" host="10.0.0.161" Jan 20 00:54:48.673718 containerd[1457]: 2026-01-20 00:54:48.655 [INFO][3211] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.64.4/26] handle="k8s-pod-network.bd28ee1664fd674a7e1c6583fa0901b6971799671797a820b899ba4025a8c3be" host="10.0.0.161" Jan 20 00:54:48.673718 containerd[1457]: 2026-01-20 00:54:48.655 [INFO][3211] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 00:54:48.673718 containerd[1457]: 2026-01-20 00:54:48.655 [INFO][3211] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.64.4/26] IPv6=[] ContainerID="bd28ee1664fd674a7e1c6583fa0901b6971799671797a820b899ba4025a8c3be" HandleID="k8s-pod-network.bd28ee1664fd674a7e1c6583fa0901b6971799671797a820b899ba4025a8c3be" Workload="10.0.0.161-k8s-test--pod--1-eth0" Jan 20 00:54:48.673718 containerd[1457]: 2026-01-20 00:54:48.658 [INFO][3197] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bd28ee1664fd674a7e1c6583fa0901b6971799671797a820b899ba4025a8c3be" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.161-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.161-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"faa2c571-7864-4c14-b198-24c9799e7968", ResourceVersion:"1395", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 54, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", 
Node:"10.0.0.161", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.64.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:54:48.674268 containerd[1457]: 2026-01-20 00:54:48.659 [INFO][3197] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.64.4/32] ContainerID="bd28ee1664fd674a7e1c6583fa0901b6971799671797a820b899ba4025a8c3be" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.161-k8s-test--pod--1-eth0" Jan 20 00:54:48.674268 containerd[1457]: 2026-01-20 00:54:48.659 [INFO][3197] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="bd28ee1664fd674a7e1c6583fa0901b6971799671797a820b899ba4025a8c3be" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.161-k8s-test--pod--1-eth0" Jan 20 00:54:48.674268 containerd[1457]: 2026-01-20 00:54:48.661 [INFO][3197] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bd28ee1664fd674a7e1c6583fa0901b6971799671797a820b899ba4025a8c3be" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.161-k8s-test--pod--1-eth0" Jan 20 00:54:48.674268 containerd[1457]: 2026-01-20 00:54:48.662 [INFO][3197] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="bd28ee1664fd674a7e1c6583fa0901b6971799671797a820b899ba4025a8c3be" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.161-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.161-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"faa2c571-7864-4c14-b198-24c9799e7968", ResourceVersion:"1395", 
Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 54, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.161", ContainerID:"bd28ee1664fd674a7e1c6583fa0901b6971799671797a820b899ba4025a8c3be", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.64.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"6e:38:ae:d0:80:ac", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:54:48.674268 containerd[1457]: 2026-01-20 00:54:48.669 [INFO][3197] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bd28ee1664fd674a7e1c6583fa0901b6971799671797a820b899ba4025a8c3be" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.161-k8s-test--pod--1-eth0" Jan 20 00:54:48.693900 containerd[1457]: time="2026-01-20T00:54:48.693803213Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 00:54:48.693900 containerd[1457]: time="2026-01-20T00:54:48.693883362Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 00:54:48.694005 containerd[1457]: time="2026-01-20T00:54:48.693897488Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:54:48.694030 containerd[1457]: time="2026-01-20T00:54:48.693980103Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:54:48.719863 systemd[1]: Started cri-containerd-bd28ee1664fd674a7e1c6583fa0901b6971799671797a820b899ba4025a8c3be.scope - libcontainer container bd28ee1664fd674a7e1c6583fa0901b6971799671797a820b899ba4025a8c3be. Jan 20 00:54:48.732210 systemd-resolved[1390]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 20 00:54:48.758281 containerd[1457]: time="2026-01-20T00:54:48.758214649Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:faa2c571-7864-4c14-b198-24c9799e7968,Namespace:default,Attempt:0,} returns sandbox id \"bd28ee1664fd674a7e1c6583fa0901b6971799671797a820b899ba4025a8c3be\"" Jan 20 00:54:48.759540 containerd[1457]: time="2026-01-20T00:54:48.759476134Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 20 00:54:48.854858 containerd[1457]: time="2026-01-20T00:54:48.854804513Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:54:48.855977 containerd[1457]: time="2026-01-20T00:54:48.855899359Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Jan 20 00:54:48.858994 containerd[1457]: time="2026-01-20T00:54:48.858951261Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:94a30ed18e45363486a55379bd1fbb8c479f36524ae816779f3553afe6b787ed\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:32c5137cb8c7cf61e75836f150e983b9be21fecc642ada89fd936c8cd6c0faa0\", size \"63836358\" in 99.442556ms" Jan 20 00:54:48.858994 containerd[1457]: time="2026-01-20T00:54:48.858989402Z" level=info msg="PullImage 
\"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:94a30ed18e45363486a55379bd1fbb8c479f36524ae816779f3553afe6b787ed\"" Jan 20 00:54:48.866221 containerd[1457]: time="2026-01-20T00:54:48.866092306Z" level=info msg="CreateContainer within sandbox \"bd28ee1664fd674a7e1c6583fa0901b6971799671797a820b899ba4025a8c3be\" for container &ContainerMetadata{Name:test,Attempt:0,}" Jan 20 00:54:48.881486 containerd[1457]: time="2026-01-20T00:54:48.881441830Z" level=info msg="CreateContainer within sandbox \"bd28ee1664fd674a7e1c6583fa0901b6971799671797a820b899ba4025a8c3be\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"1fa89a839a67fc8d1a52a57d84ae949aa863eb3e6a4f7ad15d963dbdcbb1c24d\"" Jan 20 00:54:48.882133 containerd[1457]: time="2026-01-20T00:54:48.882064874Z" level=info msg="StartContainer for \"1fa89a839a67fc8d1a52a57d84ae949aa863eb3e6a4f7ad15d963dbdcbb1c24d\"" Jan 20 00:54:48.911856 systemd[1]: Started cri-containerd-1fa89a839a67fc8d1a52a57d84ae949aa863eb3e6a4f7ad15d963dbdcbb1c24d.scope - libcontainer container 1fa89a839a67fc8d1a52a57d84ae949aa863eb3e6a4f7ad15d963dbdcbb1c24d. 
Jan 20 00:54:48.942121 containerd[1457]: time="2026-01-20T00:54:48.942052682Z" level=info msg="StartContainer for \"1fa89a839a67fc8d1a52a57d84ae949aa863eb3e6a4f7ad15d963dbdcbb1c24d\" returns successfully" Jan 20 00:54:49.334085 kubelet[1762]: E0120 00:54:49.334007 1762 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:54:49.580073 kubelet[1762]: I0120 00:54:49.580007 1762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=11.476847511999999 podStartE2EDuration="11.579992519s" podCreationTimestamp="2026-01-20 00:54:38 +0000 UTC" firstStartedPulling="2026-01-20 00:54:48.759003652 +0000 UTC m=+38.892511713" lastFinishedPulling="2026-01-20 00:54:48.86214866 +0000 UTC m=+38.995656720" observedRunningTime="2026-01-20 00:54:49.579757414 +0000 UTC m=+39.713265474" watchObservedRunningTime="2026-01-20 00:54:49.579992519 +0000 UTC m=+39.713500578" Jan 20 00:54:50.305078 kubelet[1762]: E0120 00:54:50.304996 1762 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:54:50.335009 kubelet[1762]: E0120 00:54:50.334928 1762 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:54:50.554947 systemd-networkd[1386]: cali5ec59c6bf6e: Gained IPv6LL Jan 20 00:54:51.336057 kubelet[1762]: E0120 00:54:51.336002 1762 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:54:52.336915 kubelet[1762]: E0120 00:54:52.336805 1762 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:54:53.337168 kubelet[1762]: E0120 00:54:53.336964 1762 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:54:54.337223 
kubelet[1762]: E0120 00:54:54.337138 1762 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:54:55.338340 kubelet[1762]: E0120 00:54:55.338305 1762 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:54:56.339018 kubelet[1762]: E0120 00:54:56.338938 1762 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"