Jan 20 00:48:35.297919 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 19 22:42:14 -00 2026 Jan 20 00:48:35.298008 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8c5dc1cd4dcc734d9dabe08efcaa33dd0d0e79b2d8f11a958a4b004e775e3441 Jan 20 00:48:35.298024 kernel: BIOS-provided physical RAM map: Jan 20 00:48:35.298033 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Jan 20 00:48:35.298090 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Jan 20 00:48:35.298099 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Jan 20 00:48:35.298109 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Jan 20 00:48:35.298118 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Jan 20 00:48:35.298127 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable Jan 20 00:48:35.298135 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS Jan 20 00:48:35.298149 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable Jan 20 00:48:35.298158 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved Jan 20 00:48:35.298186 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20 Jan 20 00:48:35.298196 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved Jan 20 00:48:35.298223 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data Jan 20 00:48:35.298233 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Jan 20 00:48:35.298247 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable Jan 
20 00:48:35.298256 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved Jan 20 00:48:35.298265 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Jan 20 00:48:35.298275 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Jan 20 00:48:35.298284 kernel: NX (Execute Disable) protection: active Jan 20 00:48:35.298293 kernel: APIC: Static calls initialized Jan 20 00:48:35.298303 kernel: efi: EFI v2.7 by EDK II Jan 20 00:48:35.298312 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b675198 Jan 20 00:48:35.298321 kernel: SMBIOS 2.8 present. Jan 20 00:48:35.298330 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015 Jan 20 00:48:35.298340 kernel: Hypervisor detected: KVM Jan 20 00:48:35.298352 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 20 00:48:35.298362 kernel: kvm-clock: using sched offset of 15928325811 cycles Jan 20 00:48:35.298372 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 20 00:48:35.298382 kernel: tsc: Detected 2445.426 MHz processor Jan 20 00:48:35.298391 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 20 00:48:35.298402 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 20 00:48:35.298411 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000 Jan 20 00:48:35.298421 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Jan 20 00:48:35.298431 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 20 00:48:35.298444 kernel: Using GB pages for direct mapping Jan 20 00:48:35.298454 kernel: Secure boot disabled Jan 20 00:48:35.298464 kernel: ACPI: Early table checksum verification disabled Jan 20 00:48:35.298474 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Jan 20 00:48:35.298489 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Jan 20 00:48:35.298499 kernel: 
ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 20 00:48:35.303487 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 20 00:48:35.303555 kernel: ACPI: FACS 0x000000009CBDD000 000040 Jan 20 00:48:35.303566 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 20 00:48:35.303603 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 20 00:48:35.303613 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 20 00:48:35.303624 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 20 00:48:35.303634 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) Jan 20 00:48:35.303644 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] Jan 20 00:48:35.303660 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9] Jan 20 00:48:35.303670 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Jan 20 00:48:35.303680 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] Jan 20 00:48:35.303690 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] Jan 20 00:48:35.303701 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] Jan 20 00:48:35.303711 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] Jan 20 00:48:35.303721 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] Jan 20 00:48:35.303731 kernel: No NUMA configuration found Jan 20 00:48:35.303758 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff] Jan 20 00:48:35.303772 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff] Jan 20 00:48:35.303783 kernel: Zone ranges: Jan 20 00:48:35.303793 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 20 00:48:35.303804 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff] Jan 20 
00:48:35.303814 kernel: Normal empty Jan 20 00:48:35.303824 kernel: Movable zone start for each node Jan 20 00:48:35.303834 kernel: Early memory node ranges Jan 20 00:48:35.303844 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Jan 20 00:48:35.303854 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Jan 20 00:48:35.303868 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Jan 20 00:48:35.303878 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff] Jan 20 00:48:35.303888 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff] Jan 20 00:48:35.303898 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff] Jan 20 00:48:35.303926 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff] Jan 20 00:48:35.303936 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 20 00:48:35.303946 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Jan 20 00:48:35.303956 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Jan 20 00:48:35.303966 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 20 00:48:35.303976 kernel: On node 0, zone DMA: 240 pages in unavailable ranges Jan 20 00:48:35.303990 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Jan 20 00:48:35.304000 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges Jan 20 00:48:35.304010 kernel: ACPI: PM-Timer IO Port: 0x608 Jan 20 00:48:35.304020 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 20 00:48:35.304030 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 20 00:48:35.304087 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 20 00:48:35.304099 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 20 00:48:35.304109 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 20 00:48:35.304119 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 20 00:48:35.304134 kernel: ACPI: INT_SRC_OVR (bus 0 
bus_irq 11 global_irq 11 high level) Jan 20 00:48:35.304144 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 20 00:48:35.304154 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jan 20 00:48:35.304164 kernel: TSC deadline timer available Jan 20 00:48:35.304174 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Jan 20 00:48:35.304184 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 20 00:48:35.304194 kernel: kvm-guest: KVM setup pv remote TLB flush Jan 20 00:48:35.304204 kernel: kvm-guest: setup PV sched yield Jan 20 00:48:35.304214 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Jan 20 00:48:35.304228 kernel: Booting paravirtualized kernel on KVM Jan 20 00:48:35.304238 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 20 00:48:35.304249 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Jan 20 00:48:35.304261 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288 Jan 20 00:48:35.304271 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152 Jan 20 00:48:35.304282 kernel: pcpu-alloc: [0] 0 1 2 3 Jan 20 00:48:35.304291 kernel: kvm-guest: PV spinlocks enabled Jan 20 00:48:35.304302 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 20 00:48:35.304314 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8c5dc1cd4dcc734d9dabe08efcaa33dd0d0e79b2d8f11a958a4b004e775e3441 Jan 20 00:48:35.304357 kernel: random: crng init done Jan 20 00:48:35.304371 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 20 00:48:35.304385 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) 
Jan 20 00:48:35.304396 kernel: Fallback order for Node 0: 0 Jan 20 00:48:35.304406 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759 Jan 20 00:48:35.304417 kernel: Policy zone: DMA32 Jan 20 00:48:35.304428 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 20 00:48:35.304439 kernel: Memory: 2400616K/2567000K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42880K init, 2316K bss, 166124K reserved, 0K cma-reserved) Jan 20 00:48:35.304455 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jan 20 00:48:35.304465 kernel: ftrace: allocating 37989 entries in 149 pages Jan 20 00:48:35.304475 kernel: ftrace: allocated 149 pages with 4 groups Jan 20 00:48:35.304485 kernel: Dynamic Preempt: voluntary Jan 20 00:48:35.304496 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 20 00:48:35.305622 kernel: rcu: RCU event tracing is enabled. Jan 20 00:48:35.305642 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jan 20 00:48:35.305653 kernel: Trampoline variant of Tasks RCU enabled. Jan 20 00:48:35.305664 kernel: Rude variant of Tasks RCU enabled. Jan 20 00:48:35.305676 kernel: Tracing variant of Tasks RCU enabled. Jan 20 00:48:35.305686 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 20 00:48:35.305697 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jan 20 00:48:35.305713 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Jan 20 00:48:35.305731 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Jan 20 00:48:35.305742 kernel: Console: colour dummy device 80x25 Jan 20 00:48:35.305755 kernel: printk: console [ttyS0] enabled Jan 20 00:48:35.305801 kernel: ACPI: Core revision 20230628 Jan 20 00:48:35.305815 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jan 20 00:48:35.305827 kernel: APIC: Switch to symmetric I/O mode setup Jan 20 00:48:35.305840 kernel: x2apic enabled Jan 20 00:48:35.305855 kernel: APIC: Switched APIC routing to: physical x2apic Jan 20 00:48:35.305869 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Jan 20 00:48:35.305883 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Jan 20 00:48:35.305897 kernel: kvm-guest: setup PV IPIs Jan 20 00:48:35.305911 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jan 20 00:48:35.305926 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Jan 20 00:48:35.305946 kernel: Calibrating delay loop (skipped) preset value.. 4890.85 BogoMIPS (lpj=2445426) Jan 20 00:48:35.305961 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jan 20 00:48:35.305975 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Jan 20 00:48:35.305989 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Jan 20 00:48:35.306004 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 20 00:48:35.306018 kernel: Spectre V2 : Mitigation: Retpolines Jan 20 00:48:35.306032 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jan 20 00:48:35.306093 kernel: Speculative Store Bypass: Vulnerable Jan 20 00:48:35.306114 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Jan 20 00:48:35.306130 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. 
Jan 20 00:48:35.306144 kernel: active return thunk: srso_alias_return_thunk Jan 20 00:48:35.306159 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Jan 20 00:48:35.306172 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM Jan 20 00:48:35.306210 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode Jan 20 00:48:35.306224 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 20 00:48:35.306239 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 20 00:48:35.306252 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 20 00:48:35.306268 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 20 00:48:35.306279 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Jan 20 00:48:35.306290 kernel: Freeing SMP alternatives memory: 32K Jan 20 00:48:35.306302 kernel: pid_max: default: 32768 minimum: 301 Jan 20 00:48:35.306313 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 20 00:48:35.306325 kernel: landlock: Up and running. Jan 20 00:48:35.306337 kernel: SELinux: Initializing. Jan 20 00:48:35.306348 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 20 00:48:35.306360 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 20 00:48:35.306377 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1) Jan 20 00:48:35.306389 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 20 00:48:35.306400 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 20 00:48:35.306413 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. 
Jan 20 00:48:35.306424 kernel: Performance Events: PMU not available due to virtualization, using software events only. Jan 20 00:48:35.306436 kernel: signal: max sigframe size: 1776 Jan 20 00:48:35.306447 kernel: rcu: Hierarchical SRCU implementation. Jan 20 00:48:35.306459 kernel: rcu: Max phase no-delay instances is 400. Jan 20 00:48:35.306475 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 20 00:48:35.306487 kernel: smp: Bringing up secondary CPUs ... Jan 20 00:48:35.306498 kernel: smpboot: x86: Booting SMP configuration: Jan 20 00:48:35.306552 kernel: .... node #0, CPUs: #1 #2 #3 Jan 20 00:48:35.306565 kernel: smp: Brought up 1 node, 4 CPUs Jan 20 00:48:35.306577 kernel: smpboot: Max logical packages: 1 Jan 20 00:48:35.306589 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS) Jan 20 00:48:35.306600 kernel: devtmpfs: initialized Jan 20 00:48:35.306611 kernel: x86/mm: Memory block size: 128MB Jan 20 00:48:35.306623 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Jan 20 00:48:35.306640 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Jan 20 00:48:35.306652 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes) Jan 20 00:48:35.306664 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Jan 20 00:48:35.306675 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Jan 20 00:48:35.306687 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 20 00:48:35.306699 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jan 20 00:48:35.306711 kernel: pinctrl core: initialized pinctrl subsystem Jan 20 00:48:35.306723 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 20 00:48:35.306739 kernel: audit: initializing netlink subsys (disabled) Jan 20 00:48:35.306751 kernel: audit: 
type=2000 audit(1768870107.583:1): state=initialized audit_enabled=0 res=1 Jan 20 00:48:35.306762 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 20 00:48:35.306774 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 20 00:48:35.306785 kernel: cpuidle: using governor menu Jan 20 00:48:35.306797 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 20 00:48:35.306808 kernel: dca service started, version 1.12.1 Jan 20 00:48:35.306820 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Jan 20 00:48:35.306832 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Jan 20 00:48:35.306849 kernel: PCI: Using configuration type 1 for base access Jan 20 00:48:35.306860 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Jan 20 00:48:35.306872 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 20 00:48:35.306884 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 20 00:48:35.306896 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 20 00:48:35.306908 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 20 00:48:35.306920 kernel: ACPI: Added _OSI(Module Device) Jan 20 00:48:35.306931 kernel: ACPI: Added _OSI(Processor Device) Jan 20 00:48:35.306942 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 20 00:48:35.306958 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 20 00:48:35.306970 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 20 00:48:35.306981 kernel: ACPI: Interpreter enabled Jan 20 00:48:35.306993 kernel: ACPI: PM: (supports S0 S3 S5) Jan 20 00:48:35.307005 kernel: ACPI: Using IOAPIC for interrupt routing Jan 20 00:48:35.307016 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 20 00:48:35.307028 kernel: PCI: Using E820 reservations for host 
bridge windows Jan 20 00:48:35.307098 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jan 20 00:48:35.307112 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 20 00:48:35.310939 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 20 00:48:35.311299 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Jan 20 00:48:35.316727 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Jan 20 00:48:35.316769 kernel: PCI host bridge to bus 0000:00 Jan 20 00:48:35.317158 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 20 00:48:35.317398 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 20 00:48:35.319849 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 20 00:48:35.320101 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Jan 20 00:48:35.320300 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jan 20 00:48:35.320485 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window] Jan 20 00:48:35.320709 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 20 00:48:35.320986 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Jan 20 00:48:35.321281 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Jan 20 00:48:35.321498 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] Jan 20 00:48:35.323869 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff] Jan 20 00:48:35.324158 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Jan 20 00:48:35.324369 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb Jan 20 00:48:35.324621 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 20 00:48:35.324927 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Jan 20 00:48:35.325194 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f] 
Jan 20 00:48:35.325407 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff] Jan 20 00:48:35.327805 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref] Jan 20 00:48:35.328168 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Jan 20 00:48:35.328413 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f] Jan 20 00:48:35.333804 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff] Jan 20 00:48:35.334103 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref] Jan 20 00:48:35.334417 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Jan 20 00:48:35.334716 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff] Jan 20 00:48:35.334967 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff] Jan 20 00:48:35.335268 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref] Jan 20 00:48:35.335474 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref] Jan 20 00:48:35.336792 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Jan 20 00:48:35.337011 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jan 20 00:48:35.337397 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Jan 20 00:48:35.340170 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df] Jan 20 00:48:35.340396 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff] Jan 20 00:48:35.340676 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Jan 20 00:48:35.340894 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf] Jan 20 00:48:35.340914 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 20 00:48:35.340926 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 20 00:48:35.340938 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 20 00:48:35.340959 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 20 00:48:35.340970 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 
Jan 20 00:48:35.340982 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jan 20 00:48:35.340994 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jan 20 00:48:35.341005 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jan 20 00:48:35.341017 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jan 20 00:48:35.341028 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Jan 20 00:48:35.341100 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jan 20 00:48:35.341113 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jan 20 00:48:35.341131 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jan 20 00:48:35.341143 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jan 20 00:48:35.341155 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jan 20 00:48:35.341166 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jan 20 00:48:35.341178 kernel: iommu: Default domain type: Translated Jan 20 00:48:35.341189 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 20 00:48:35.341201 kernel: efivars: Registered efivars operations Jan 20 00:48:35.341213 kernel: PCI: Using ACPI for IRQ routing Jan 20 00:48:35.341225 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 20 00:48:35.341241 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Jan 20 00:48:35.341253 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff] Jan 20 00:48:35.341264 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff] Jan 20 00:48:35.341276 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff] Jan 20 00:48:35.341487 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Jan 20 00:48:35.341742 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Jan 20 00:48:35.341949 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 20 00:48:35.341967 kernel: vgaarb: loaded Jan 20 00:48:35.341986 kernel: hpet0: at MMIO 
0xfed00000, IRQs 2, 8, 0 Jan 20 00:48:35.341998 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jan 20 00:48:35.342010 kernel: clocksource: Switched to clocksource kvm-clock Jan 20 00:48:35.342021 kernel: VFS: Disk quotas dquot_6.6.0 Jan 20 00:48:35.342033 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 20 00:48:35.342105 kernel: pnp: PnP ACPI init Jan 20 00:48:35.342446 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Jan 20 00:48:35.342467 kernel: pnp: PnP ACPI: found 6 devices Jan 20 00:48:35.342480 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 20 00:48:35.342497 kernel: NET: Registered PF_INET protocol family Jan 20 00:48:35.347626 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 20 00:48:35.347653 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 20 00:48:35.347666 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 20 00:48:35.347678 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 20 00:48:35.347690 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 20 00:48:35.347702 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 20 00:48:35.347713 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 20 00:48:35.347733 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 20 00:48:35.347745 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 20 00:48:35.347757 kernel: NET: Registered PF_XDP protocol family Jan 20 00:48:35.348031 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window Jan 20 00:48:35.348342 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref] Jan 20 00:48:35.348581 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 
window] Jan 20 00:48:35.348774 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 20 00:48:35.348965 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 20 00:48:35.349236 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Jan 20 00:48:35.349425 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Jan 20 00:48:35.351766 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window] Jan 20 00:48:35.351791 kernel: PCI: CLS 0 bytes, default 64 Jan 20 00:48:35.351804 kernel: Initialise system trusted keyrings Jan 20 00:48:35.351816 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 20 00:48:35.351828 kernel: Key type asymmetric registered Jan 20 00:48:35.351839 kernel: Asymmetric key parser 'x509' registered Jan 20 00:48:35.351850 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 20 00:48:35.351870 kernel: io scheduler mq-deadline registered Jan 20 00:48:35.351882 kernel: io scheduler kyber registered Jan 20 00:48:35.351895 kernel: io scheduler bfq registered Jan 20 00:48:35.351908 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 20 00:48:35.351922 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jan 20 00:48:35.351936 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jan 20 00:48:35.351949 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jan 20 00:48:35.351961 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 20 00:48:35.351974 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 20 00:48:35.351993 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 20 00:48:35.352007 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 20 00:48:35.352020 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 20 00:48:35.352459 kernel: rtc_cmos 00:04: RTC can wake from S4 Jan 20 00:48:35.352485 kernel: input: AT Translated Set 2 keyboard as 
/devices/platform/i8042/serio0/input/input0 Jan 20 00:48:35.353830 kernel: rtc_cmos 00:04: registered as rtc0 Jan 20 00:48:35.354117 kernel: rtc_cmos 00:04: setting system clock to 2026-01-20T00:48:33 UTC (1768870113) Jan 20 00:48:35.354341 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Jan 20 00:48:35.354371 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Jan 20 00:48:35.354386 kernel: efifb: probing for efifb Jan 20 00:48:35.354399 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k Jan 20 00:48:35.354412 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1 Jan 20 00:48:35.354424 kernel: efifb: scrolling: redraw Jan 20 00:48:35.354436 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0 Jan 20 00:48:35.354448 kernel: Console: switching to colour frame buffer device 100x37 Jan 20 00:48:35.354460 kernel: fb0: EFI VGA frame buffer device Jan 20 00:48:35.354471 kernel: pstore: Using crash dump compression: deflate Jan 20 00:48:35.354490 kernel: pstore: Registered efi_pstore as persistent store backend Jan 20 00:48:35.354503 kernel: NET: Registered PF_INET6 protocol family Jan 20 00:48:35.354561 kernel: Segment Routing with IPv6 Jan 20 00:48:35.354574 kernel: In-situ OAM (IOAM) with IPv6 Jan 20 00:48:35.354587 kernel: NET: Registered PF_PACKET protocol family Jan 20 00:48:35.354599 kernel: Key type dns_resolver registered Jan 20 00:48:35.354612 kernel: IPI shorthand broadcast: enabled Jan 20 00:48:35.354653 kernel: sched_clock: Marking stable (5329036823, 1054302525)->(7557168761, -1173829413) Jan 20 00:48:35.354670 kernel: registered taskstats version 1 Jan 20 00:48:35.354685 kernel: Loading compiled-in X.509 certificates Jan 20 00:48:35.354697 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: ea2d429b6f340e470c7de035feb011ab349763d1' Jan 20 00:48:35.354709 kernel: Key type .fscrypt registered Jan 20 00:48:35.354721 kernel: Key type fscrypt-provisioning 
registered Jan 20 00:48:35.354734 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 20 00:48:35.354747 kernel: ima: Allocated hash algorithm: sha1 Jan 20 00:48:35.354759 kernel: ima: No architecture policies found Jan 20 00:48:35.354772 kernel: clk: Disabling unused clocks Jan 20 00:48:35.354785 kernel: Freeing unused kernel image (initmem) memory: 42880K Jan 20 00:48:35.354804 kernel: Write protecting the kernel read-only data: 36864k Jan 20 00:48:35.354817 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K Jan 20 00:48:35.354830 kernel: Run /init as init process Jan 20 00:48:35.354842 kernel: with arguments: Jan 20 00:48:35.354855 kernel: /init Jan 20 00:48:35.354869 kernel: with environment: Jan 20 00:48:35.354881 kernel: HOME=/ Jan 20 00:48:35.354894 kernel: TERM=linux Jan 20 00:48:35.354911 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 20 00:48:35.354934 systemd[1]: Detected virtualization kvm. Jan 20 00:48:35.354948 systemd[1]: Detected architecture x86-64. Jan 20 00:48:35.354962 systemd[1]: Running in initrd. Jan 20 00:48:35.354974 systemd[1]: No hostname configured, using default hostname. Jan 20 00:48:35.354985 systemd[1]: Hostname set to . Jan 20 00:48:35.354999 systemd[1]: Initializing machine ID from VM UUID. Jan 20 00:48:35.355019 systemd[1]: Queued start job for default target initrd.target. Jan 20 00:48:35.355031 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 20 00:48:35.355109 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Jan 20 00:48:35.355128 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 20 00:48:35.355143 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 20 00:48:35.355156 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 20 00:48:35.355174 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 20 00:48:35.355186 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 20 00:48:35.355197 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 20 00:48:35.355208 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 20 00:48:35.355219 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 20 00:48:35.355229 systemd[1]: Reached target paths.target - Path Units.
Jan 20 00:48:35.355245 systemd[1]: Reached target slices.target - Slice Units.
Jan 20 00:48:35.355256 systemd[1]: Reached target swap.target - Swaps.
Jan 20 00:48:35.355268 systemd[1]: Reached target timers.target - Timer Units.
Jan 20 00:48:35.355280 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 20 00:48:35.355294 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 20 00:48:35.355307 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 20 00:48:35.355319 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 20 00:48:35.355330 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 20 00:48:35.355341 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 20 00:48:35.355358 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 20 00:48:35.355369 systemd[1]: Reached target sockets.target - Socket Units.
Jan 20 00:48:35.355380 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 20 00:48:35.355392 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 20 00:48:35.355405 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 20 00:48:35.355419 systemd[1]: Starting systemd-fsck-usr.service...
Jan 20 00:48:35.355432 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 20 00:48:35.355443 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 20 00:48:35.355460 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 20 00:48:35.355473 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 20 00:48:35.355572 systemd-journald[195]: Collecting audit messages is disabled.
Jan 20 00:48:35.355607 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 20 00:48:35.355635 systemd[1]: Finished systemd-fsck-usr.service.
Jan 20 00:48:35.355650 systemd-journald[195]: Journal started
Jan 20 00:48:35.355676 systemd-journald[195]: Runtime Journal (/run/log/journal/1d56db0f13c148c494dcc398801b6026) is 6.0M, max 48.3M, 42.2M free.
Jan 20 00:48:35.370691 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 20 00:48:35.360423 systemd-modules-load[196]: Inserted module 'overlay'
Jan 20 00:48:35.401128 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 20 00:48:35.406890 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 20 00:48:35.420488 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 20 00:48:35.513194 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 20 00:48:35.525816 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 20 00:48:35.526131 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 20 00:48:35.530358 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 20 00:48:35.591129 kernel: Bridge firewalling registered
Jan 20 00:48:35.605741 systemd-modules-load[196]: Inserted module 'br_netfilter'
Jan 20 00:48:35.611183 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 20 00:48:35.611724 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 20 00:48:35.621238 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 20 00:48:35.660659 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 20 00:48:35.668604 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 20 00:48:35.670082 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 20 00:48:35.717005 dracut-cmdline[224]: dracut-dracut-053
Jan 20 00:48:35.724978 dracut-cmdline[224]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8c5dc1cd4dcc734d9dabe08efcaa33dd0d0e79b2d8f11a958a4b004e775e3441
Jan 20 00:48:35.790164 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 20 00:48:35.811849 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 20 00:48:35.894883 systemd-resolved[263]: Positive Trust Anchors:
Jan 20 00:48:35.894900 systemd-resolved[263]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 20 00:48:35.894948 systemd-resolved[263]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 20 00:48:35.903162 systemd-resolved[263]: Defaulting to hostname 'linux'.
Jan 20 00:48:35.906626 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 20 00:48:35.962116 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 20 00:48:36.045628 kernel: SCSI subsystem initialized
Jan 20 00:48:36.061313 kernel: Loading iSCSI transport class v2.0-870.
Jan 20 00:48:36.108186 kernel: iscsi: registered transport (tcp)
Jan 20 00:48:36.197733 kernel: iscsi: registered transport (qla4xxx)
Jan 20 00:48:36.197843 kernel: QLogic iSCSI HBA Driver
Jan 20 00:48:36.326463 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 20 00:48:36.372375 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 20 00:48:36.457597 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 20 00:48:36.457691 kernel: device-mapper: uevent: version 1.0.3
Jan 20 00:48:36.462727 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 20 00:48:36.549205 kernel: raid6: avx2x4 gen() 15499 MB/s
Jan 20 00:48:36.575661 kernel: raid6: avx2x2 gen() 13498 MB/s
Jan 20 00:48:36.592435 kernel: raid6: avx2x1 gen() 9845 MB/s
Jan 20 00:48:36.592553 kernel: raid6: using algorithm avx2x4 gen() 15499 MB/s
Jan 20 00:48:36.615317 kernel: raid6: .... xor() 4624 MB/s, rmw enabled
Jan 20 00:48:36.615420 kernel: raid6: using avx2x2 recovery algorithm
Jan 20 00:48:36.675618 kernel: xor: automatically using best checksumming function avx
Jan 20 00:48:37.091705 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 20 00:48:37.121696 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 20 00:48:37.157874 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 20 00:48:37.200362 systemd-udevd[414]: Using default interface naming scheme 'v255'.
Jan 20 00:48:37.213844 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 20 00:48:37.239608 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 20 00:48:37.303436 dracut-pre-trigger[416]: rd.md=0: removing MD RAID activation
Jan 20 00:48:37.406859 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 20 00:48:37.434342 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 20 00:48:37.657163 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 20 00:48:37.689788 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 20 00:48:37.736360 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 20 00:48:37.758756 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 20 00:48:37.766664 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 20 00:48:37.782007 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 20 00:48:37.832713 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 20 00:48:37.872314 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 20 00:48:37.872671 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 20 00:48:37.873016 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 20 00:48:37.873140 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 20 00:48:37.873354 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 20 00:48:37.875147 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 20 00:48:37.881620 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 20 00:48:37.934390 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 20 00:48:38.043658 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Jan 20 00:48:38.068893 kernel: cryptd: max_cpu_qlen set to 1000
Jan 20 00:48:38.077746 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jan 20 00:48:38.069684 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 20 00:48:38.111122 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 20 00:48:38.111153 kernel: GPT:9289727 != 19775487
Jan 20 00:48:38.111169 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 20 00:48:38.111184 kernel: GPT:9289727 != 19775487
Jan 20 00:48:38.111197 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 20 00:48:38.111212 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 20 00:48:38.251982 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 20 00:48:38.302761 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 20 00:48:38.338130 kernel: libata version 3.00 loaded.
Jan 20 00:48:38.383203 kernel: ahci 0000:00:1f.2: version 3.0
Jan 20 00:48:38.385683 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jan 20 00:48:38.430782 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 20 00:48:38.446667 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Jan 20 00:48:38.448405 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jan 20 00:48:38.470224 kernel: scsi host0: ahci
Jan 20 00:48:38.471795 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (469)
Jan 20 00:48:38.494126 kernel: BTRFS: device fsid ea39c6ab-04c2-4917-8268-943d4ecb2b5c devid 1 transid 33 /dev/vda3 scanned by (udev-worker) (463)
Jan 20 00:48:38.501783 kernel: AES CTR mode by8 optimization enabled
Jan 20 00:48:38.501844 kernel: scsi host1: ahci
Jan 20 00:48:38.511670 kernel: scsi host2: ahci
Jan 20 00:48:38.515707 kernel: scsi host3: ahci
Jan 20 00:48:38.531897 kernel: scsi host4: ahci
Jan 20 00:48:38.532288 kernel: scsi host5: ahci
Jan 20 00:48:38.540103 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34
Jan 20 00:48:38.540164 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34
Jan 20 00:48:38.548672 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 20 00:48:38.617144 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34
Jan 20 00:48:38.617190 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34
Jan 20 00:48:38.617210 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34
Jan 20 00:48:38.617226 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34
Jan 20 00:48:38.581674 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 20 00:48:38.595163 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 20 00:48:38.607230 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 20 00:48:38.629866 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 20 00:48:38.678571 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 20 00:48:38.708320 disk-uuid[566]: Primary Header is updated.
Jan 20 00:48:38.708320 disk-uuid[566]: Secondary Entries is updated.
Jan 20 00:48:38.708320 disk-uuid[566]: Secondary Header is updated.
Jan 20 00:48:38.731132 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 20 00:48:38.738157 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 20 00:48:38.760386 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 20 00:48:38.868125 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jan 20 00:48:38.878760 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jan 20 00:48:38.892115 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jan 20 00:48:38.892184 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jan 20 00:48:38.910193 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jan 20 00:48:38.944726 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jan 20 00:48:38.944875 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 20 00:48:38.944896 kernel: ata3.00: applying bridge limits
Jan 20 00:48:38.963138 kernel: ata3.00: configured for UDMA/100
Jan 20 00:48:38.971169 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jan 20 00:48:39.153493 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 20 00:48:39.154388 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 20 00:48:39.173228 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Jan 20 00:48:39.773921 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 20 00:48:39.781702 disk-uuid[567]: The operation has completed successfully.
Jan 20 00:48:39.941741 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 20 00:48:39.946951 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 20 00:48:39.988724 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 20 00:48:40.021725 sh[597]: Success
Jan 20 00:48:40.107671 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Jan 20 00:48:40.249307 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 20 00:48:40.289854 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 20 00:48:40.333281 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 20 00:48:40.376153 kernel: BTRFS info (device dm-0): first mount of filesystem ea39c6ab-04c2-4917-8268-943d4ecb2b5c
Jan 20 00:48:40.376243 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 20 00:48:40.376271 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 20 00:48:40.379963 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 20 00:48:40.385525 kernel: BTRFS info (device dm-0): using free space tree
Jan 20 00:48:40.436352 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 20 00:48:40.439247 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 20 00:48:40.491401 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 20 00:48:40.506785 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 20 00:48:40.553133 kernel: BTRFS info (device vda6): first mount of filesystem 4d38e730-f67b-44a8-80fa-82ea4ebb2132
Jan 20 00:48:40.553208 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 20 00:48:40.553227 kernel: BTRFS info (device vda6): using free space tree
Jan 20 00:48:40.584484 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 20 00:48:40.632410 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 20 00:48:40.652846 kernel: BTRFS info (device vda6): last unmount of filesystem 4d38e730-f67b-44a8-80fa-82ea4ebb2132
Jan 20 00:48:40.687672 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 20 00:48:40.713440 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 20 00:48:40.955898 ignition[693]: Ignition 2.19.0
Jan 20 00:48:40.955954 ignition[693]: Stage: fetch-offline
Jan 20 00:48:40.956030 ignition[693]: no configs at "/usr/lib/ignition/base.d"
Jan 20 00:48:40.956104 ignition[693]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 20 00:48:40.956296 ignition[693]: parsed url from cmdline: ""
Jan 20 00:48:40.956304 ignition[693]: no config URL provided
Jan 20 00:48:40.956317 ignition[693]: reading system config file "/usr/lib/ignition/user.ign"
Jan 20 00:48:40.956335 ignition[693]: no config at "/usr/lib/ignition/user.ign"
Jan 20 00:48:40.956375 ignition[693]: op(1): [started] loading QEMU firmware config module
Jan 20 00:48:40.956384 ignition[693]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jan 20 00:48:41.010960 ignition[693]: op(1): [finished] loading QEMU firmware config module
Jan 20 00:48:41.012874 ignition[693]: parsing config with SHA512: 439376736908b7ac734f439bb2eb03f2f901bc756fe93db2c61bd7927e9a3ff05f30d038adf6b123df4634666b76d7dcd54f561d11f0be6bf984876bcda5a0fc
Jan 20 00:48:41.028865 unknown[693]: fetched base config from "system"
Jan 20 00:48:41.028885 unknown[693]: fetched user config from "qemu"
Jan 20 00:48:41.029458 ignition[693]: fetch-offline: fetch-offline passed
Jan 20 00:48:41.035146 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 20 00:48:41.029615 ignition[693]: Ignition finished successfully
Jan 20 00:48:41.044983 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 20 00:48:41.090506 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 20 00:48:41.186534 systemd-networkd[786]: lo: Link UP
Jan 20 00:48:41.189642 systemd-networkd[786]: lo: Gained carrier
Jan 20 00:48:41.194772 systemd-networkd[786]: Enumeration completed
Jan 20 00:48:41.195510 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 20 00:48:41.195987 systemd-networkd[786]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 20 00:48:41.195994 systemd-networkd[786]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 20 00:48:41.213993 systemd-networkd[786]: eth0: Link UP
Jan 20 00:48:41.214007 systemd-networkd[786]: eth0: Gained carrier
Jan 20 00:48:41.214025 systemd-networkd[786]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 20 00:48:41.241126 systemd[1]: Reached target network.target - Network.
Jan 20 00:48:41.256873 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jan 20 00:48:41.320312 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 20 00:48:41.392581 systemd-networkd[786]: eth0: DHCPv4 address 10.0.0.99/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 20 00:48:41.628028 ignition[789]: Ignition 2.19.0
Jan 20 00:48:41.628106 ignition[789]: Stage: kargs
Jan 20 00:48:41.649492 ignition[789]: no configs at "/usr/lib/ignition/base.d"
Jan 20 00:48:41.649603 ignition[789]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 20 00:48:41.687197 ignition[789]: kargs: kargs passed
Jan 20 00:48:41.687625 ignition[789]: Ignition finished successfully
Jan 20 00:48:41.699860 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 20 00:48:41.733528 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 20 00:48:41.784306 ignition[797]: Ignition 2.19.0
Jan 20 00:48:41.784325 ignition[797]: Stage: disks
Jan 20 00:48:41.784650 ignition[797]: no configs at "/usr/lib/ignition/base.d"
Jan 20 00:48:41.795459 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 20 00:48:41.784673 ignition[797]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 20 00:48:41.787908 ignition[797]: disks: disks passed
Jan 20 00:48:41.787999 ignition[797]: Ignition finished successfully
Jan 20 00:48:41.833015 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 20 00:48:41.839949 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 20 00:48:41.850007 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 20 00:48:41.860143 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 20 00:48:41.874830 systemd[1]: Reached target basic.target - Basic System.
Jan 20 00:48:41.903733 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 20 00:48:41.977017 systemd-fsck[807]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 20 00:48:41.995083 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 20 00:48:42.019309 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 20 00:48:42.367956 kernel: EXT4-fs (vda9): mounted filesystem 3f4cac35-b37d-4410-a45a-1329edafa0f9 r/w with ordered data mode. Quota mode: none.
Jan 20 00:48:42.370908 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 20 00:48:42.376951 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 20 00:48:42.409474 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 20 00:48:42.420672 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 20 00:48:42.442613 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 20 00:48:42.506759 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (815)
Jan 20 00:48:42.506803 kernel: BTRFS info (device vda6): first mount of filesystem 4d38e730-f67b-44a8-80fa-82ea4ebb2132
Jan 20 00:48:42.506839 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 20 00:48:42.506859 kernel: BTRFS info (device vda6): using free space tree
Jan 20 00:48:42.442738 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 20 00:48:42.543015 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 20 00:48:42.442795 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 20 00:48:42.569449 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 20 00:48:42.586506 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 20 00:48:42.622291 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 20 00:48:42.748840 systemd-networkd[786]: eth0: Gained IPv6LL
Jan 20 00:48:43.386220 initrd-setup-root[839]: cut: /sysroot/etc/passwd: No such file or directory
Jan 20 00:48:43.414914 initrd-setup-root[846]: cut: /sysroot/etc/group: No such file or directory
Jan 20 00:48:43.444689 initrd-setup-root[853]: cut: /sysroot/etc/shadow: No such file or directory
Jan 20 00:48:43.464453 initrd-setup-root[860]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 20 00:48:43.927457 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 20 00:48:44.013313 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 20 00:48:44.105945 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 20 00:48:44.308358 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 20 00:48:44.321310 kernel: BTRFS info (device vda6): last unmount of filesystem 4d38e730-f67b-44a8-80fa-82ea4ebb2132
Jan 20 00:48:44.397904 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 20 00:48:44.416843 ignition[929]: INFO : Ignition 2.19.0
Jan 20 00:48:44.416843 ignition[929]: INFO : Stage: mount
Jan 20 00:48:44.433423 ignition[929]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 20 00:48:44.433423 ignition[929]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 20 00:48:44.433423 ignition[929]: INFO : mount: mount passed
Jan 20 00:48:44.433423 ignition[929]: INFO : Ignition finished successfully
Jan 20 00:48:44.444888 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 20 00:48:44.568292 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 20 00:48:44.653618 kernel: hrtimer: interrupt took 2788867 ns
Jan 20 00:48:44.732807 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 20 00:48:44.765642 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (942)
Jan 20 00:48:44.776737 kernel: BTRFS info (device vda6): first mount of filesystem 4d38e730-f67b-44a8-80fa-82ea4ebb2132
Jan 20 00:48:44.776822 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 20 00:48:44.776841 kernel: BTRFS info (device vda6): using free space tree
Jan 20 00:48:44.979138 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 20 00:48:45.134397 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 20 00:48:45.710457 ignition[959]: INFO : Ignition 2.19.0
Jan 20 00:48:45.710457 ignition[959]: INFO : Stage: files
Jan 20 00:48:45.710457 ignition[959]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 20 00:48:45.710457 ignition[959]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 20 00:48:45.710457 ignition[959]: DEBUG : files: compiled without relabeling support, skipping
Jan 20 00:48:45.763543 ignition[959]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 20 00:48:45.763543 ignition[959]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 20 00:48:45.791946 ignition[959]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 20 00:48:45.807442 ignition[959]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 20 00:48:45.807442 ignition[959]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 20 00:48:45.797749 unknown[959]: wrote ssh authorized keys file for user: core
Jan 20 00:48:45.845160 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Jan 20 00:48:45.845160 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Jan 20 00:48:45.845160 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 20 00:48:45.845160 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 20 00:48:45.845160 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 20 00:48:45.845160 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 20 00:48:45.845160 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jan 20 00:48:45.845160 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jan 20 00:48:45.845160 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jan 20 00:48:45.845160 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Jan 20 00:48:46.382787 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK
Jan 20 00:48:50.383414 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jan 20 00:48:50.383414 ignition[959]: INFO : files: op(8): [started] processing unit "containerd.service"
Jan 20 00:48:50.418115 ignition[959]: INFO : files: op(8): op(9): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jan 20 00:48:50.458813 ignition[959]: INFO : files: op(8): op(9): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jan 20 00:48:50.458813 ignition[959]: INFO : files: op(8): [finished] processing unit "containerd.service"
Jan 20 00:48:50.458813 ignition[959]: INFO : files: op(a): [started] processing unit "coreos-metadata.service"
Jan 20 00:48:50.458813 ignition[959]: INFO : files: op(a): op(b): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 20 00:48:50.458813 ignition[959]: INFO : files: op(a): op(b): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 20 00:48:50.458813 ignition[959]: INFO : files: op(a): [finished] processing unit "coreos-metadata.service"
Jan 20 00:48:50.458813 ignition[959]: INFO : files: op(c): [started] setting preset to disabled for "coreos-metadata.service"
Jan 20 00:48:50.578235 ignition[959]: INFO : files: op(c): op(d): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jan 20 00:48:50.604353 ignition[959]: INFO : files: op(c): op(d): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jan 20 00:48:50.604353 ignition[959]: INFO : files: op(c): [finished] setting preset to disabled for "coreos-metadata.service"
Jan 20 00:48:50.651901 ignition[959]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 20 00:48:50.651901 ignition[959]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 20 00:48:50.651901 ignition[959]: INFO : files: files passed
Jan 20 00:48:50.651901 ignition[959]: INFO : Ignition finished successfully
Jan 20 00:48:50.686771 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 20 00:48:50.724520 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 20 00:48:50.754461 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 20 00:48:50.789572 initrd-setup-root-after-ignition[987]: grep: /sysroot/oem/oem-release: No such file or directory
Jan 20 00:48:50.792985 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 20 00:48:50.793355 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 20 00:48:50.835707 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 20 00:48:50.835707 initrd-setup-root-after-ignition[990]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 20 00:48:50.867666 initrd-setup-root-after-ignition[994]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 20 00:48:50.844259 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 20 00:48:50.847916 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 20 00:48:50.928834 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 20 00:48:51.014288 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 20 00:48:51.018844 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 20 00:48:51.026434 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 20 00:48:51.033230 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 20 00:48:51.049931 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 20 00:48:51.070938 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 20 00:48:51.128722 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 20 00:48:51.157443 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 20 00:48:51.208101 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 20 00:48:51.217210 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 20 00:48:51.235147 systemd[1]: Stopped target timers.target - Timer Units.
Jan 20 00:48:51.251322 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 20 00:48:51.251574 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 20 00:48:51.272719 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 20 00:48:51.296208 systemd[1]: Stopped target basic.target - Basic System.
Jan 20 00:48:51.304214 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 20 00:48:51.315552 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 20 00:48:51.324291 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 20 00:48:51.338693 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 20 00:48:51.354431 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 20 00:48:51.365826 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 20 00:48:51.387191 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 20 00:48:51.406714 systemd[1]: Stopped target swap.target - Swaps.
Jan 20 00:48:51.410722 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 20 00:48:51.411016 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 20 00:48:51.531289 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 20 00:48:51.614489 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 20 00:48:51.634328 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 20 00:48:51.643211 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 20 00:48:51.659681 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 20 00:48:51.659995 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 20 00:48:51.693318 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 20 00:48:51.693696 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 20 00:48:51.706850 systemd[1]: Stopped target paths.target - Path Units.
Jan 20 00:48:51.734699 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 20 00:48:51.743787 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 20 00:48:51.761670 systemd[1]: Stopped target slices.target - Slice Units.
Jan 20 00:48:51.801528 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 20 00:48:51.821287 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 20 00:48:51.824577 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 20 00:48:51.860917 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 20 00:48:51.864573 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 20 00:48:51.908671 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 20 00:48:51.909228 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 20 00:48:51.926005 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 20 00:48:51.928646 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 20 00:48:51.967858 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 20 00:48:51.984148 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 20 00:48:51.984792 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 20 00:48:52.055692 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 20 00:48:52.068725 ignition[1014]: INFO : Ignition 2.19.0
Jan 20 00:48:52.068725 ignition[1014]: INFO : Stage: umount
Jan 20 00:48:52.068725 ignition[1014]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 20 00:48:52.068725 ignition[1014]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 20 00:48:52.108803 ignition[1014]: INFO : umount: umount passed
Jan 20 00:48:52.108803 ignition[1014]: INFO : Ignition finished successfully
Jan 20 00:48:52.072870 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 20 00:48:52.073430 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 20 00:48:52.073807 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 20 00:48:52.074029 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 20 00:48:52.104275 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 20 00:48:52.104949 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 20 00:48:52.111175 systemd[1]: Stopped target network.target - Network.
Jan 20 00:48:52.139581 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 20 00:48:52.139783 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 20 00:48:52.171750 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 20 00:48:52.171862 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 20 00:48:52.184701 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 20 00:48:52.184824 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 20 00:48:52.192842 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 20 00:48:52.196552 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 20 00:48:52.213498 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 20 00:48:52.233232 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 20 00:48:52.262366 systemd-networkd[786]: eth0: DHCPv6 lease lost
Jan 20 00:48:52.319178 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 20 00:48:52.321404 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 20 00:48:52.321630 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 20 00:48:52.378335 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 20 00:48:52.378651 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 20 00:48:52.416340 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 20 00:48:52.416655 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 20 00:48:52.439261 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 20 00:48:52.439478 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 20 00:48:52.465765 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 20 00:48:52.466523 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 20 00:48:52.472482 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 20 00:48:52.472652 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 20 00:48:52.525363 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 20 00:48:52.542844 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 20 00:48:52.561584 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 20 00:48:52.620403 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 20 00:48:52.620785 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 20 00:48:52.650857 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 20 00:48:52.651648 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 20 00:48:52.663376 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 20 00:48:52.663526 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 20 00:48:52.703977 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 20 00:48:52.771394 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 20 00:48:52.774209 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 20 00:48:52.797117 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 20 00:48:52.797207 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 20 00:48:52.797323 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 20 00:48:52.797374 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 20 00:48:52.797494 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 20 00:48:52.797564 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 20 00:48:52.797818 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 20 00:48:52.797893 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 20 00:48:52.798116 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 20 00:48:52.798250 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 20 00:48:52.800934 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 20 00:48:52.854725 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 20 00:48:52.854961 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 20 00:48:52.866290 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 20 00:48:52.866540 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 20 00:48:52.892276 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 20 00:48:52.892481 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 20 00:48:52.915690 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 20 00:48:52.915892 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 20 00:48:52.944013 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 20 00:48:53.005872 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 20 00:48:53.224287 systemd[1]: Switching root.
Jan 20 00:48:53.301778 systemd-journald[195]: Journal stopped
Jan 20 00:48:57.421667 systemd-journald[195]: Received SIGTERM from PID 1 (systemd).
Jan 20 00:48:57.421763 kernel: SELinux: policy capability network_peer_controls=1
Jan 20 00:48:57.421782 kernel: SELinux: policy capability open_perms=1
Jan 20 00:48:57.421798 kernel: SELinux: policy capability extended_socket_class=1
Jan 20 00:48:57.421820 kernel: SELinux: policy capability always_check_network=0
Jan 20 00:48:57.421836 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 20 00:48:57.421858 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 20 00:48:57.421874 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 20 00:48:57.421889 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 20 00:48:57.421914 kernel: audit: type=1403 audit(1768870133.853:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 20 00:48:57.421936 systemd[1]: Successfully loaded SELinux policy in 126.283ms.
Jan 20 00:48:57.421954 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 37.034ms.
Jan 20 00:48:57.421972 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 20 00:48:57.421993 systemd[1]: Detected virtualization kvm.
Jan 20 00:48:57.422009 systemd[1]: Detected architecture x86-64.
Jan 20 00:48:57.422029 systemd[1]: Detected first boot.
Jan 20 00:48:57.422103 systemd[1]: Initializing machine ID from VM UUID.
Jan 20 00:48:57.422122 zram_generator::config[1075]: No configuration found.
Jan 20 00:48:57.422141 systemd[1]: Populated /etc with preset unit settings.
Jan 20 00:48:57.422157 systemd[1]: Queued start job for default target multi-user.target.
Jan 20 00:48:57.422174 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 20 00:48:57.422192 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 20 00:48:57.422211 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 20 00:48:57.422232 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 20 00:48:57.422249 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 20 00:48:57.422266 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 20 00:48:57.429790 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 20 00:48:57.429831 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 20 00:48:57.429850 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 20 00:48:57.429867 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 20 00:48:57.429884 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 20 00:48:57.429902 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 20 00:48:57.429926 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 20 00:48:57.429944 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 20 00:48:57.429961 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 20 00:48:57.429978 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 20 00:48:57.429995 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 20 00:48:57.430012 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 20 00:48:57.430084 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 20 00:48:57.430107 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 20 00:48:57.430129 systemd[1]: Reached target slices.target - Slice Units.
Jan 20 00:48:57.430147 systemd[1]: Reached target swap.target - Swaps.
Jan 20 00:48:57.430164 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 20 00:48:57.430184 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 20 00:48:57.430201 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 20 00:48:57.430218 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 20 00:48:57.430235 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 20 00:48:57.430251 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 20 00:48:57.430268 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 20 00:48:57.430288 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 20 00:48:57.430305 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 20 00:48:57.430321 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 20 00:48:57.430338 systemd[1]: Mounting media.mount - External Media Directory...
Jan 20 00:48:57.430356 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 20 00:48:57.430372 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 20 00:48:57.430389 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 20 00:48:57.430406 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 20 00:48:57.430451 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 20 00:48:57.430469 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 20 00:48:57.430486 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 20 00:48:57.430502 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 20 00:48:57.430519 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 20 00:48:57.430554 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 20 00:48:57.430570 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 20 00:48:57.430605 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 20 00:48:57.430653 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 20 00:48:57.430696 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 20 00:48:57.430714 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Jan 20 00:48:57.430731 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Jan 20 00:48:57.430748 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 20 00:48:57.430765 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 20 00:48:57.430781 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 20 00:48:57.430798 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 20 00:48:57.430857 systemd-journald[1175]: Collecting audit messages is disabled.
Jan 20 00:48:57.430912 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 20 00:48:57.430930 systemd-journald[1175]: Journal started
Jan 20 00:48:57.430959 systemd-journald[1175]: Runtime Journal (/run/log/journal/1d56db0f13c148c494dcc398801b6026) is 6.0M, max 48.3M, 42.2M free.
Jan 20 00:48:57.449108 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 20 00:48:57.452111 kernel: fuse: init (API version 7.39)
Jan 20 00:48:57.465748 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 20 00:48:57.482743 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 20 00:48:57.487271 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 20 00:48:57.491829 systemd[1]: Mounted media.mount - External Media Directory.
Jan 20 00:48:57.505496 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 20 00:48:57.518183 kernel: loop: module loaded
Jan 20 00:48:57.519400 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 20 00:48:57.531033 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 20 00:48:57.578951 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 20 00:48:57.626680 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 20 00:48:57.645709 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 20 00:48:57.646105 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 20 00:48:57.659780 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 20 00:48:57.660152 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 20 00:48:57.670447 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 20 00:48:57.670841 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 20 00:48:57.681104 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 20 00:48:57.682905 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 20 00:48:57.693692 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 20 00:48:57.701889 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 20 00:48:57.714490 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 20 00:48:57.760210 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 20 00:48:57.807179 kernel: ACPI: bus type drm_connector registered
Jan 20 00:48:57.799425 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 20 00:48:57.821256 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 20 00:48:57.843956 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 20 00:48:57.851824 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 20 00:48:57.886347 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 20 00:48:57.895685 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 20 00:48:57.902348 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 20 00:48:57.921524 systemd-journald[1175]: Time spent on flushing to /var/log/journal/1d56db0f13c148c494dcc398801b6026 is 114.692ms for 948 entries.
Jan 20 00:48:57.921524 systemd-journald[1175]: System Journal (/var/log/journal/1d56db0f13c148c494dcc398801b6026) is 8.0M, max 195.6M, 187.6M free.
Jan 20 00:48:58.131327 systemd-journald[1175]: Received client request to flush runtime journal.
Jan 20 00:48:57.922463 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 20 00:48:57.944344 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 20 00:48:57.979197 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 20 00:48:57.979492 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 20 00:48:57.997596 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 20 00:48:57.998017 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 20 00:48:58.007940 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 20 00:48:58.036959 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 20 00:48:58.045993 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 20 00:48:58.055615 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 20 00:48:58.081999 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 20 00:48:58.098466 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 20 00:48:58.116428 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 20 00:48:58.136960 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 20 00:48:58.169583 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 20 00:48:58.171693 systemd-tmpfiles[1210]: ACLs are not supported, ignoring.
Jan 20 00:48:58.171739 systemd-tmpfiles[1210]: ACLs are not supported, ignoring.
Jan 20 00:48:58.183403 udevadm[1222]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jan 20 00:48:58.194841 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 20 00:48:58.224425 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 20 00:48:58.353430 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 20 00:48:58.386317 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 20 00:48:58.940484 systemd-tmpfiles[1234]: ACLs are not supported, ignoring.
Jan 20 00:48:58.940511 systemd-tmpfiles[1234]: ACLs are not supported, ignoring.
Jan 20 00:48:58.969485 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 20 00:49:00.260914 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 20 00:49:00.286390 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 20 00:49:00.382333 systemd-udevd[1240]: Using default interface naming scheme 'v255'.
Jan 20 00:49:00.461691 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 20 00:49:00.523621 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 20 00:49:00.604397 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 20 00:49:00.670880 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
Jan 20 00:49:00.976358 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 20 00:49:01.011763 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1256)
Jan 20 00:49:01.189363 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Jan 20 00:49:01.234589 kernel: ACPI: button: Power Button [PWRF]
Jan 20 00:49:01.673126 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Jan 20 00:49:01.673558 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Jan 20 00:49:01.673913 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Jan 20 00:49:01.687577 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jan 20 00:49:01.701521 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Jan 20 00:49:01.727299 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 20 00:49:01.740698 systemd-networkd[1246]: lo: Link UP
Jan 20 00:49:01.740711 systemd-networkd[1246]: lo: Gained carrier
Jan 20 00:49:01.747482 systemd-networkd[1246]: Enumeration completed
Jan 20 00:49:01.752769 systemd-networkd[1246]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 20 00:49:01.752777 systemd-networkd[1246]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 20 00:49:01.755631 systemd-networkd[1246]: eth0: Link UP
Jan 20 00:49:01.755639 systemd-networkd[1246]: eth0: Gained carrier
Jan 20 00:49:01.755701 systemd-networkd[1246]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 20 00:49:01.762790 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 20 00:49:01.771306 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 20 00:49:01.896489 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 20 00:49:01.932124 kernel: mousedev: PS/2 mouse device common for all mice
Jan 20 00:49:02.016413 systemd-networkd[1246]: eth0: DHCPv4 address 10.0.0.99/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 20 00:49:02.016553 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 20 00:49:02.017170 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 20 00:49:02.050479 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 20 00:49:02.273764 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 20 00:49:02.620954 kernel: kvm_amd: TSC scaling supported
Jan 20 00:49:02.621189 kernel: kvm_amd: Nested Virtualization enabled
Jan 20 00:49:02.621232 kernel: kvm_amd: Nested Paging enabled
Jan 20 00:49:02.625204 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Jan 20 00:49:02.632760 kernel: kvm_amd: PMU virtualization is disabled
Jan 20 00:49:02.910463 kernel: EDAC MC: Ver: 3.0.0
Jan 20 00:49:02.979886 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 20 00:49:03.019361 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 20 00:49:03.047875 lvm[1292]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 20 00:49:03.137106 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 20 00:49:03.145318 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 20 00:49:03.170645 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 20 00:49:03.184937 lvm[1295]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 20 00:49:03.318949 systemd-networkd[1246]: eth0: Gained IPv6LL
Jan 20 00:49:03.360267 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 20 00:49:03.382475 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 20 00:49:03.430195 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 20 00:49:03.456940 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 20 00:49:03.457172 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 20 00:49:03.481896 systemd[1]: Reached target machines.target - Containers.
Jan 20 00:49:03.490898 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 20 00:49:03.523442 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 20 00:49:03.600944 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 20 00:49:03.612423 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 20 00:49:03.622932 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 20 00:49:03.645316 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 20 00:49:03.664879 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 20 00:49:03.678975 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 20 00:49:03.726841 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 20 00:49:03.824838 kernel: loop0: detected capacity change from 0 to 140768 Jan 20 00:49:03.852614 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 20 00:49:03.864355 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 20 00:49:03.968722 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 20 00:49:04.039417 kernel: loop1: detected capacity change from 0 to 142488 Jan 20 00:49:04.292810 kernel: loop2: detected capacity change from 0 to 224512 Jan 20 00:49:04.515695 kernel: loop3: detected capacity change from 0 to 140768 Jan 20 00:49:04.621150 kernel: loop4: detected capacity change from 0 to 142488 Jan 20 00:49:04.699972 kernel: loop5: detected capacity change from 0 to 224512 Jan 20 00:49:04.777763 (sd-merge)[1317]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jan 20 00:49:04.779963 (sd-merge)[1317]: Merged extensions into '/usr'. Jan 20 00:49:04.809203 systemd[1]: Reloading requested from client PID 1305 ('systemd-sysext') (unit systemd-sysext.service)... Jan 20 00:49:04.809256 systemd[1]: Reloading... Jan 20 00:49:05.031538 zram_generator::config[1341]: No configuration found. Jan 20 00:49:05.579509 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 20 00:49:05.748259 systemd[1]: Reloading finished in 938 ms. Jan 20 00:49:06.079769 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 20 00:49:06.109430 systemd[1]: Starting ensure-sysext.service... Jan 20 00:49:06.122025 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 20 00:49:06.127803 ldconfig[1301]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. 
Jan 20 00:49:06.141565 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 20 00:49:06.182476 systemd[1]: Reloading requested from client PID 1386 ('systemctl') (unit ensure-sysext.service)... Jan 20 00:49:06.182532 systemd[1]: Reloading... Jan 20 00:49:06.343807 zram_generator::config[1413]: No configuration found. Jan 20 00:49:06.389868 systemd-tmpfiles[1387]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 20 00:49:06.392016 systemd-tmpfiles[1387]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 20 00:49:06.397274 systemd-tmpfiles[1387]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 20 00:49:06.397764 systemd-tmpfiles[1387]: ACLs are not supported, ignoring. Jan 20 00:49:06.398236 systemd-tmpfiles[1387]: ACLs are not supported, ignoring. Jan 20 00:49:06.408650 systemd-tmpfiles[1387]: Detected autofs mount point /boot during canonicalization of boot. Jan 20 00:49:06.408717 systemd-tmpfiles[1387]: Skipping /boot Jan 20 00:49:06.449538 systemd-tmpfiles[1387]: Detected autofs mount point /boot during canonicalization of boot. Jan 20 00:49:06.449587 systemd-tmpfiles[1387]: Skipping /boot Jan 20 00:49:06.739849 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 20 00:49:06.912753 systemd[1]: Reloading finished in 724 ms. Jan 20 00:49:06.952757 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 20 00:49:06.996515 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 20 00:49:07.040348 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... 
Jan 20 00:49:07.059524 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 20 00:49:07.080311 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 20 00:49:07.101520 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 20 00:49:07.169464 augenrules[1483]: No rules Jan 20 00:49:07.182670 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 20 00:49:07.199472 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 20 00:49:07.234398 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 20 00:49:07.261843 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 00:49:07.262564 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 20 00:49:07.288780 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 20 00:49:07.298786 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 20 00:49:07.333545 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 20 00:49:07.344554 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 20 00:49:07.349451 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 20 00:49:07.354170 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 00:49:07.356755 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 20 00:49:07.357316 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 20 00:49:07.369245 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Jan 20 00:49:07.369592 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 20 00:49:07.393229 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 20 00:49:07.411348 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 20 00:49:07.411980 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 20 00:49:07.429896 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 00:49:07.430928 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 20 00:49:07.438530 systemd-resolved[1474]: Positive Trust Anchors: Jan 20 00:49:07.438594 systemd-resolved[1474]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 20 00:49:07.438649 systemd-resolved[1474]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 20 00:49:07.441345 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 20 00:49:07.452563 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 20 00:49:07.454350 systemd-resolved[1474]: Defaulting to hostname 'linux'. Jan 20 00:49:07.465193 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 20 00:49:07.493136 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Jan 20 00:49:07.505392 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 20 00:49:07.505637 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 20 00:49:07.505824 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 00:49:07.508148 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 20 00:49:07.522764 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 20 00:49:07.534525 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 20 00:49:07.535432 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 20 00:49:07.544448 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 20 00:49:07.544834 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 20 00:49:07.569006 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 20 00:49:07.569460 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 20 00:49:07.586288 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 20 00:49:07.586914 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 20 00:49:07.608290 systemd[1]: Finished ensure-sysext.service. Jan 20 00:49:07.630882 systemd[1]: Reached target network.target - Network. Jan 20 00:49:07.638966 systemd[1]: Reached target network-online.target - Network is Online. Jan 20 00:49:07.648610 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Jan 20 00:49:07.658923 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 20 00:49:07.659170 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 20 00:49:07.676031 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 20 00:49:07.928735 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 20 00:49:07.945016 systemd[1]: Reached target sysinit.target - System Initialization. Jan 20 00:49:08.712944 systemd-timesyncd[1524]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 20 00:49:08.713164 systemd-timesyncd[1524]: Initial clock synchronization to Tue 2026-01-20 00:49:08.712695 UTC. Jan 20 00:49:08.713438 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 20 00:49:08.713753 systemd-resolved[1474]: Clock change detected. Flushing caches. Jan 20 00:49:08.731782 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 20 00:49:08.749490 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 20 00:49:08.763593 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 20 00:49:08.763665 systemd[1]: Reached target paths.target - Path Units. Jan 20 00:49:08.775063 systemd[1]: Reached target time-set.target - System Time Set. Jan 20 00:49:08.788344 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 20 00:49:08.804519 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 20 00:49:08.816431 systemd[1]: Reached target timers.target - Timer Units. 
Jan 20 00:49:08.834509 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 20 00:49:08.853430 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 20 00:49:08.864593 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 20 00:49:08.882522 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 20 00:49:08.901697 systemd[1]: Reached target sockets.target - Socket Units. Jan 20 00:49:08.909273 systemd[1]: Reached target basic.target - Basic System. Jan 20 00:49:08.914985 systemd[1]: System is tainted: cgroupsv1 Jan 20 00:49:08.915168 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 20 00:49:08.915219 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 20 00:49:08.918025 systemd[1]: Starting containerd.service - containerd container runtime... Jan 20 00:49:08.944393 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 20 00:49:08.957685 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 20 00:49:08.990287 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 20 00:49:09.007526 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 20 00:49:09.019583 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 20 00:49:09.030030 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 00:49:09.033561 jq[1533]: false Jan 20 00:49:09.052394 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 20 00:49:09.088304 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... 
Jan 20 00:49:09.111264 dbus-daemon[1530]: [system] SELinux support is enabled Jan 20 00:49:09.127456 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 20 00:49:09.129638 extend-filesystems[1534]: Found loop3 Jan 20 00:49:09.129638 extend-filesystems[1534]: Found loop4 Jan 20 00:49:09.129638 extend-filesystems[1534]: Found loop5 Jan 20 00:49:09.129638 extend-filesystems[1534]: Found sr0 Jan 20 00:49:09.129638 extend-filesystems[1534]: Found vda Jan 20 00:49:09.129638 extend-filesystems[1534]: Found vda1 Jan 20 00:49:09.129638 extend-filesystems[1534]: Found vda2 Jan 20 00:49:09.129638 extend-filesystems[1534]: Found vda3 Jan 20 00:49:09.129638 extend-filesystems[1534]: Found usr Jan 20 00:49:09.129638 extend-filesystems[1534]: Found vda4 Jan 20 00:49:09.129638 extend-filesystems[1534]: Found vda6 Jan 20 00:49:09.129638 extend-filesystems[1534]: Found vda7 Jan 20 00:49:09.129638 extend-filesystems[1534]: Found vda9 Jan 20 00:49:09.129638 extend-filesystems[1534]: Checking size of /dev/vda9 Jan 20 00:49:09.366503 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 20 00:49:09.366552 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1567) Jan 20 00:49:09.371366 extend-filesystems[1534]: Resized partition /dev/vda9 Jan 20 00:49:09.166658 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 20 00:49:09.375615 extend-filesystems[1559]: resize2fs 1.47.1 (20-May-2024) Jan 20 00:49:09.251272 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 20 00:49:09.273552 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 20 00:49:09.284312 systemd[1]: Starting update-engine.service - Update Engine... Jan 20 00:49:09.344563 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... 
Jan 20 00:49:09.364164 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 20 00:49:09.409235 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 20 00:49:09.409711 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 20 00:49:09.446907 systemd[1]: motdgen.service: Deactivated successfully. Jan 20 00:49:09.447482 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 20 00:49:09.478608 jq[1572]: true Jan 20 00:49:09.540218 update_engine[1563]: I20260120 00:49:09.483610 1563 main.cc:92] Flatcar Update Engine starting Jan 20 00:49:09.540218 update_engine[1563]: I20260120 00:49:09.491910 1563 update_check_scheduler.cc:74] Next update check in 8m52s Jan 20 00:49:09.485266 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 20 00:49:09.496610 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 20 00:49:09.497309 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 20 00:49:09.554149 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 20 00:49:09.595587 jq[1581]: true Jan 20 00:49:09.651390 (ntainerd)[1582]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 20 00:49:09.664229 extend-filesystems[1559]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 20 00:49:09.664229 extend-filesystems[1559]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 20 00:49:09.664229 extend-filesystems[1559]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jan 20 00:49:09.718435 extend-filesystems[1534]: Resized filesystem in /dev/vda9 Jan 20 00:49:09.673625 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 20 00:49:09.684016 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
Jan 20 00:49:09.707047 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 20 00:49:09.710400 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 20 00:49:09.779676 systemd-logind[1558]: Watching system buttons on /dev/input/event1 (Power Button) Jan 20 00:49:09.788008 systemd-logind[1558]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 20 00:49:09.805014 systemd-logind[1558]: New seat seat0. Jan 20 00:49:09.811297 systemd[1]: Started systemd-logind.service - User Login Management. Jan 20 00:49:09.867325 systemd[1]: Started update-engine.service - Update Engine. Jan 20 00:49:09.885445 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 20 00:49:09.885882 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 20 00:49:09.886180 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 20 00:49:09.896379 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 20 00:49:09.896572 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 20 00:49:09.914152 sshd_keygen[1573]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 20 00:49:09.916863 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 20 00:49:09.944572 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Jan 20 00:49:09.960219 bash[1616]: Updated "/home/core/.ssh/authorized_keys" Jan 20 00:49:10.005356 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 20 00:49:10.038688 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 20 00:49:10.106422 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 20 00:49:10.170919 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 20 00:49:10.205335 locksmithd[1620]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 20 00:49:10.269942 systemd[1]: issuegen.service: Deactivated successfully. Jan 20 00:49:10.270588 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 20 00:49:10.292471 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 20 00:49:10.522798 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 20 00:49:10.567448 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 20 00:49:10.593174 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 20 00:49:10.614330 systemd[1]: Reached target getty.target - Login Prompts. Jan 20 00:49:11.369960 containerd[1582]: time="2026-01-20T00:49:11.367430683Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 20 00:49:11.529275 containerd[1582]: time="2026-01-20T00:49:11.529196257Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 20 00:49:11.546873 containerd[1582]: time="2026-01-20T00:49:11.543366354Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.119-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 20 00:49:11.546873 containerd[1582]: time="2026-01-20T00:49:11.544895259Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 20 00:49:11.546873 containerd[1582]: time="2026-01-20T00:49:11.545005915Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 20 00:49:11.546873 containerd[1582]: time="2026-01-20T00:49:11.545535364Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 20 00:49:11.546873 containerd[1582]: time="2026-01-20T00:49:11.545575749Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 20 00:49:11.547929 containerd[1582]: time="2026-01-20T00:49:11.547864312Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 20 00:49:11.547929 containerd[1582]: time="2026-01-20T00:49:11.547906610Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 20 00:49:11.549422 containerd[1582]: time="2026-01-20T00:49:11.548566773Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 20 00:49:11.549422 containerd[1582]: time="2026-01-20T00:49:11.548599845Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Jan 20 00:49:11.549422 containerd[1582]: time="2026-01-20T00:49:11.548623449Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 20 00:49:11.549422 containerd[1582]: time="2026-01-20T00:49:11.548641162Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 20 00:49:11.554864 containerd[1582]: time="2026-01-20T00:49:11.553013835Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 20 00:49:11.554864 containerd[1582]: time="2026-01-20T00:49:11.553676643Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 20 00:49:11.554864 containerd[1582]: time="2026-01-20T00:49:11.554146440Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 20 00:49:11.554864 containerd[1582]: time="2026-01-20T00:49:11.554177317Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 20 00:49:11.554864 containerd[1582]: time="2026-01-20T00:49:11.554421974Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 20 00:49:11.554864 containerd[1582]: time="2026-01-20T00:49:11.554561925Z" level=info msg="metadata content store policy set" policy=shared Jan 20 00:49:11.586441 containerd[1582]: time="2026-01-20T00:49:11.586217395Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 20 00:49:11.586441 containerd[1582]: time="2026-01-20T00:49:11.586420203Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." 
type=io.containerd.differ.v1 Jan 20 00:49:11.586594 containerd[1582]: time="2026-01-20T00:49:11.586458935Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 20 00:49:11.586594 containerd[1582]: time="2026-01-20T00:49:11.586485786Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 20 00:49:11.586594 containerd[1582]: time="2026-01-20T00:49:11.586510271Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 20 00:49:11.588635 containerd[1582]: time="2026-01-20T00:49:11.588593561Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 20 00:49:11.589550 containerd[1582]: time="2026-01-20T00:49:11.589431124Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 20 00:49:11.594115 containerd[1582]: time="2026-01-20T00:49:11.593150218Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 20 00:49:11.594115 containerd[1582]: time="2026-01-20T00:49:11.593793849Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 20 00:49:11.594325 containerd[1582]: time="2026-01-20T00:49:11.594221928Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 20 00:49:11.594325 containerd[1582]: time="2026-01-20T00:49:11.594289785Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 20 00:49:11.594325 containerd[1582]: time="2026-01-20T00:49:11.594316195Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 Jan 20 00:49:11.594449 containerd[1582]: time="2026-01-20T00:49:11.594337334Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 20 00:49:11.594449 containerd[1582]: time="2026-01-20T00:49:11.594363493Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 20 00:49:11.594449 containerd[1582]: time="2026-01-20T00:49:11.594389381Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 20 00:49:11.594449 containerd[1582]: time="2026-01-20T00:49:11.594414247Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 20 00:49:11.594449 containerd[1582]: time="2026-01-20T00:49:11.594436068Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 20 00:49:11.594595 containerd[1582]: time="2026-01-20T00:49:11.594456897Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 20 00:49:11.594595 containerd[1582]: time="2026-01-20T00:49:11.594491782Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 20 00:49:11.594595 containerd[1582]: time="2026-01-20T00:49:11.594515136Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 20 00:49:11.594595 containerd[1582]: time="2026-01-20T00:49:11.594535855Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 20 00:49:11.594595 containerd[1582]: time="2026-01-20T00:49:11.594556022Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 Jan 20 00:49:11.594595 containerd[1582]: time="2026-01-20T00:49:11.594576240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 20 00:49:11.595520 containerd[1582]: time="2026-01-20T00:49:11.594599804Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 20 00:49:11.595520 containerd[1582]: time="2026-01-20T00:49:11.594619811Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 20 00:49:11.595520 containerd[1582]: time="2026-01-20T00:49:11.594642343Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 20 00:49:11.595520 containerd[1582]: time="2026-01-20T00:49:11.594759412Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 20 00:49:11.595520 containerd[1582]: time="2026-01-20T00:49:11.594796320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 20 00:49:11.595520 containerd[1582]: time="2026-01-20T00:49:11.594817661Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 20 00:49:11.595520 containerd[1582]: time="2026-01-20T00:49:11.594839131Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 20 00:49:11.595520 containerd[1582]: time="2026-01-20T00:49:11.594858848Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 20 00:49:11.595520 containerd[1582]: time="2026-01-20T00:49:11.594884245Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 20 00:49:11.595520 containerd[1582]: time="2026-01-20T00:49:11.594926534Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." 
type=io.containerd.grpc.v1 Jan 20 00:49:11.595520 containerd[1582]: time="2026-01-20T00:49:11.594950419Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 20 00:49:11.595520 containerd[1582]: time="2026-01-20T00:49:11.594970576Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 20 00:49:11.595520 containerd[1582]: time="2026-01-20T00:49:11.595041408Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 20 00:49:11.595520 containerd[1582]: time="2026-01-20T00:49:11.595144110Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 20 00:49:11.597046 containerd[1582]: time="2026-01-20T00:49:11.595169157Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 20 00:49:11.597046 containerd[1582]: time="2026-01-20T00:49:11.595199263Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 20 00:49:11.597046 containerd[1582]: time="2026-01-20T00:49:11.595219531Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 20 00:49:11.597046 containerd[1582]: time="2026-01-20T00:49:11.595240189Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 20 00:49:11.597046 containerd[1582]: time="2026-01-20T00:49:11.595256179Z" level=info msg="NRI interface is disabled by configuration." Jan 20 00:49:11.597046 containerd[1582]: time="2026-01-20T00:49:11.595270947Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 20 00:49:11.600924 containerd[1582]: time="2026-01-20T00:49:11.597555212Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 
DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 20 00:49:11.600924 containerd[1582]: time="2026-01-20T00:49:11.598179858Z" level=info msg="Connect containerd service" Jan 20 00:49:11.600924 containerd[1582]: time="2026-01-20T00:49:11.598300493Z" level=info msg="using legacy CRI server" Jan 20 00:49:11.600924 containerd[1582]: time="2026-01-20T00:49:11.598318196Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 20 00:49:11.600924 containerd[1582]: time="2026-01-20T00:49:11.598554016Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 20 00:49:11.602143 containerd[1582]: time="2026-01-20T00:49:11.602053391Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 20 00:49:11.602568 containerd[1582]: time="2026-01-20T00:49:11.602483853Z" level=info msg="Start subscribing containerd event" Jan 20 00:49:11.602623 containerd[1582]: time="2026-01-20T00:49:11.602585052Z" level=info msg="Start recovering state" Jan 20 00:49:11.602704 containerd[1582]: time="2026-01-20T00:49:11.602658762Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Jan 20 00:49:11.605053 containerd[1582]: time="2026-01-20T00:49:11.604980416Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 20 00:49:11.609303 containerd[1582]: time="2026-01-20T00:49:11.606844524Z" level=info msg="Start event monitor" Jan 20 00:49:11.609303 containerd[1582]: time="2026-01-20T00:49:11.606902923Z" level=info msg="Start snapshots syncer" Jan 20 00:49:11.609303 containerd[1582]: time="2026-01-20T00:49:11.606922319Z" level=info msg="Start cni network conf syncer for default" Jan 20 00:49:11.609303 containerd[1582]: time="2026-01-20T00:49:11.606934732Z" level=info msg="Start streaming server" Jan 20 00:49:11.609303 containerd[1582]: time="2026-01-20T00:49:11.607057762Z" level=info msg="containerd successfully booted in 0.253090s" Jan 20 00:49:11.609069 systemd[1]: Started containerd.service - containerd container runtime. Jan 20 00:49:13.668842 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 20 00:49:13.786983 systemd[1]: Started sshd@0-10.0.0.99:22-10.0.0.1:42902.service - OpenSSH per-connection server daemon (10.0.0.1:42902). Jan 20 00:49:14.879945 sshd[1656]: Accepted publickey for core from 10.0.0.1 port 42902 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA Jan 20 00:49:14.928663 sshd[1656]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:49:14.969184 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 20 00:49:15.003021 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 20 00:49:15.034935 systemd-logind[1558]: New session 1 of user core. Jan 20 00:49:15.166338 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 20 00:49:15.223707 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Jan 20 00:49:15.251442 (systemd)[1662]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 20 00:49:15.818588 systemd[1662]: Queued start job for default target default.target. Jan 20 00:49:15.820901 systemd[1662]: Created slice app.slice - User Application Slice. Jan 20 00:49:15.820968 systemd[1662]: Reached target paths.target - Paths. Jan 20 00:49:15.820992 systemd[1662]: Reached target timers.target - Timers. Jan 20 00:49:15.846499 systemd[1662]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 20 00:49:15.866266 systemd[1662]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 20 00:49:15.866377 systemd[1662]: Reached target sockets.target - Sockets. Jan 20 00:49:15.866403 systemd[1662]: Reached target basic.target - Basic System. Jan 20 00:49:15.866478 systemd[1662]: Reached target default.target - Main User Target. Jan 20 00:49:15.866541 systemd[1662]: Startup finished in 568ms. Jan 20 00:49:15.868581 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 20 00:49:15.905591 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 20 00:49:16.237201 systemd[1]: Started sshd@1-10.0.0.99:22-10.0.0.1:42910.service - OpenSSH per-connection server daemon (10.0.0.1:42910). Jan 20 00:49:16.387824 sshd[1678]: Accepted publickey for core from 10.0.0.1 port 42910 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA Jan 20 00:49:16.393316 sshd[1678]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:49:16.415141 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 00:49:16.424195 systemd-logind[1558]: New session 2 of user core. Jan 20 00:49:16.429677 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 20 00:49:16.433899 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 20 00:49:16.434705 systemd[1]: Startup finished in 25.428s (kernel) + 21.950s (userspace) = 47.378s. 
Jan 20 00:49:16.434722 (kubelet)[1688]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 00:49:16.565177 sshd[1678]: pam_unix(sshd:session): session closed for user core Jan 20 00:49:16.584432 systemd[1]: sshd@1-10.0.0.99:22-10.0.0.1:42910.service: Deactivated successfully. Jan 20 00:49:16.591849 systemd[1]: session-2.scope: Deactivated successfully. Jan 20 00:49:16.601633 systemd-logind[1558]: Session 2 logged out. Waiting for processes to exit. Jan 20 00:49:16.624879 systemd[1]: Started sshd@2-10.0.0.99:22-10.0.0.1:42916.service - OpenSSH per-connection server daemon (10.0.0.1:42916). Jan 20 00:49:16.630961 systemd-logind[1558]: Removed session 2. Jan 20 00:49:17.083915 sshd[1695]: Accepted publickey for core from 10.0.0.1 port 42916 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA Jan 20 00:49:17.095259 sshd[1695]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:49:17.320021 systemd-logind[1558]: New session 3 of user core. Jan 20 00:49:17.339913 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 20 00:49:17.427418 sshd[1695]: pam_unix(sshd:session): session closed for user core Jan 20 00:49:17.454644 systemd[1]: Started sshd@3-10.0.0.99:22-10.0.0.1:42918.service - OpenSSH per-connection server daemon (10.0.0.1:42918). Jan 20 00:49:17.455584 systemd[1]: sshd@2-10.0.0.99:22-10.0.0.1:42916.service: Deactivated successfully. Jan 20 00:49:17.475016 systemd[1]: session-3.scope: Deactivated successfully. Jan 20 00:49:17.475510 systemd-logind[1558]: Session 3 logged out. Waiting for processes to exit. Jan 20 00:49:17.489367 systemd-logind[1558]: Removed session 3. 
Jan 20 00:49:17.531292 sshd[1705]: Accepted publickey for core from 10.0.0.1 port 42918 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA Jan 20 00:49:17.535695 sshd[1705]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:49:17.570153 systemd-logind[1558]: New session 4 of user core. Jan 20 00:49:17.585907 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 20 00:49:17.741788 sshd[1705]: pam_unix(sshd:session): session closed for user core Jan 20 00:49:17.749685 systemd[1]: sshd@3-10.0.0.99:22-10.0.0.1:42918.service: Deactivated successfully. Jan 20 00:49:17.758373 systemd-logind[1558]: Session 4 logged out. Waiting for processes to exit. Jan 20 00:49:17.798574 systemd[1]: Started sshd@4-10.0.0.99:22-10.0.0.1:42930.service - OpenSSH per-connection server daemon (10.0.0.1:42930). Jan 20 00:49:17.799219 systemd[1]: session-4.scope: Deactivated successfully. Jan 20 00:49:17.808377 systemd-logind[1558]: Removed session 4. Jan 20 00:49:17.913448 sshd[1716]: Accepted publickey for core from 10.0.0.1 port 42930 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA Jan 20 00:49:17.921048 sshd[1716]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:49:17.954918 systemd-logind[1558]: New session 5 of user core. Jan 20 00:49:17.967884 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 20 00:49:18.119660 sudo[1720]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 20 00:49:18.120295 sudo[1720]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 20 00:49:18.164504 sudo[1720]: pam_unix(sudo:session): session closed for user root Jan 20 00:49:18.172650 sshd[1716]: pam_unix(sshd:session): session closed for user core Jan 20 00:49:18.200559 systemd[1]: Started sshd@5-10.0.0.99:22-10.0.0.1:42932.service - OpenSSH per-connection server daemon (10.0.0.1:42932). 
Jan 20 00:49:18.201793 systemd[1]: sshd@4-10.0.0.99:22-10.0.0.1:42930.service: Deactivated successfully. Jan 20 00:49:18.207900 systemd[1]: session-5.scope: Deactivated successfully. Jan 20 00:49:18.219015 systemd-logind[1558]: Session 5 logged out. Waiting for processes to exit. Jan 20 00:49:18.225941 systemd-logind[1558]: Removed session 5. Jan 20 00:49:18.341789 sshd[1722]: Accepted publickey for core from 10.0.0.1 port 42932 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA Jan 20 00:49:18.348165 sshd[1722]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:49:18.367701 systemd-logind[1558]: New session 6 of user core. Jan 20 00:49:18.387675 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 20 00:49:18.519852 sudo[1732]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 20 00:49:18.524340 sudo[1732]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 20 00:49:18.551665 sudo[1732]: pam_unix(sudo:session): session closed for user root Jan 20 00:49:18.575720 sudo[1731]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 20 00:49:18.577516 sudo[1731]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 20 00:49:18.814511 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 20 00:49:18.886821 auditctl[1735]: No rules Jan 20 00:49:18.888823 systemd[1]: audit-rules.service: Deactivated successfully. Jan 20 00:49:18.889396 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 20 00:49:18.924636 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 20 00:49:19.191328 augenrules[1754]: No rules Jan 20 00:49:19.195601 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. 
Jan 20 00:49:19.202820 sudo[1731]: pam_unix(sudo:session): session closed for user root Jan 20 00:49:19.212311 sshd[1722]: pam_unix(sshd:session): session closed for user core Jan 20 00:49:19.226152 systemd[1]: sshd@5-10.0.0.99:22-10.0.0.1:42932.service: Deactivated successfully. Jan 20 00:49:19.240257 systemd[1]: session-6.scope: Deactivated successfully. Jan 20 00:49:19.241619 systemd-logind[1558]: Session 6 logged out. Waiting for processes to exit. Jan 20 00:49:19.261500 systemd[1]: Started sshd@6-10.0.0.99:22-10.0.0.1:42938.service - OpenSSH per-connection server daemon (10.0.0.1:42938). Jan 20 00:49:19.263217 systemd-logind[1558]: Removed session 6. Jan 20 00:49:19.346471 sshd[1763]: Accepted publickey for core from 10.0.0.1 port 42938 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA Jan 20 00:49:19.349703 sshd[1763]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:49:19.368908 systemd-logind[1558]: New session 7 of user core. Jan 20 00:49:19.396703 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 20 00:49:19.502702 sudo[1767]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 20 00:49:19.505431 sudo[1767]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 20 00:49:19.514516 kubelet[1688]: E0120 00:49:19.511678 1688 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 00:49:19.529713 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 00:49:19.530510 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 00:49:19.578661 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... 
Jan 20 00:49:19.705677 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 20 00:49:19.706560 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 20 00:49:23.375761 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 00:49:23.393554 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 00:49:23.476666 systemd[1]: Reloading requested from client PID 1816 ('systemctl') (unit session-7.scope)... Jan 20 00:49:23.476689 systemd[1]: Reloading... Jan 20 00:49:23.798038 zram_generator::config[1854]: No configuration found. Jan 20 00:49:24.199714 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 20 00:49:24.515644 systemd[1]: Reloading finished in 1038 ms. Jan 20 00:49:24.683292 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 00:49:24.694050 systemd[1]: kubelet.service: Deactivated successfully. Jan 20 00:49:24.694710 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 00:49:24.704699 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 00:49:25.150612 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 00:49:25.187926 (kubelet)[1917]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 20 00:49:26.048943 kubelet[1917]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 20 00:49:26.048943 kubelet[1917]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. 
Image garbage collector will get sandbox image information from CRI. Jan 20 00:49:26.048943 kubelet[1917]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 20 00:49:26.048943 kubelet[1917]: I0120 00:49:26.048838 1917 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 20 00:49:28.343409 kubelet[1917]: I0120 00:49:28.341728 1917 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 20 00:49:28.349923 kubelet[1917]: I0120 00:49:28.346483 1917 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 20 00:49:28.351875 kubelet[1917]: I0120 00:49:28.350310 1917 server.go:954] "Client rotation is on, will bootstrap in background" Jan 20 00:49:28.464749 kubelet[1917]: I0120 00:49:28.464381 1917 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 20 00:49:28.498884 kubelet[1917]: E0120 00:49:28.498841 1917 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 20 00:49:28.498884 kubelet[1917]: I0120 00:49:28.498899 1917 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 20 00:49:28.525141 kubelet[1917]: I0120 00:49:28.524948 1917 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 20 00:49:28.530882 kubelet[1917]: I0120 00:49:28.529307 1917 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 20 00:49:28.530882 kubelet[1917]: I0120 00:49:28.529576 1917 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.0.0.99","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Jan 20 00:49:28.530882 kubelet[1917]: I0120 00:49:28.530037 1917 topology_manager.go:138] "Creating topology manager with none policy" 
Jan 20 00:49:28.530882 kubelet[1917]: I0120 00:49:28.530053 1917 container_manager_linux.go:304] "Creating device plugin manager" Jan 20 00:49:28.531488 kubelet[1917]: I0120 00:49:28.530481 1917 state_mem.go:36] "Initialized new in-memory state store" Jan 20 00:49:28.544615 kubelet[1917]: I0120 00:49:28.543595 1917 kubelet.go:446] "Attempting to sync node with API server" Jan 20 00:49:28.544615 kubelet[1917]: I0120 00:49:28.543762 1917 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 20 00:49:28.544615 kubelet[1917]: I0120 00:49:28.543896 1917 kubelet.go:352] "Adding apiserver pod source" Jan 20 00:49:28.544615 kubelet[1917]: I0120 00:49:28.543954 1917 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 20 00:49:28.547710 kubelet[1917]: E0120 00:49:28.546056 1917 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:49:28.547877 kubelet[1917]: E0120 00:49:28.547744 1917 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:49:28.561143 kubelet[1917]: I0120 00:49:28.560994 1917 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 20 00:49:28.562036 kubelet[1917]: I0120 00:49:28.561929 1917 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 20 00:49:28.562773 kubelet[1917]: W0120 00:49:28.562167 1917 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jan 20 00:49:28.569591 kubelet[1917]: I0120 00:49:28.569356 1917 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 20 00:49:28.569652 kubelet[1917]: I0120 00:49:28.569621 1917 server.go:1287] "Started kubelet" Jan 20 00:49:28.570526 kubelet[1917]: I0120 00:49:28.570393 1917 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 20 00:49:28.573114 kubelet[1917]: I0120 00:49:28.571865 1917 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 20 00:49:28.573114 kubelet[1917]: I0120 00:49:28.572592 1917 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 20 00:49:28.579541 kubelet[1917]: I0120 00:49:28.576326 1917 server.go:479] "Adding debug handlers to kubelet server" Jan 20 00:49:28.579541 kubelet[1917]: I0120 00:49:28.578721 1917 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 20 00:49:28.585394 kubelet[1917]: I0120 00:49:28.583868 1917 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 20 00:49:28.589045 kubelet[1917]: W0120 00:49:28.587695 1917 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Jan 20 00:49:28.589045 kubelet[1917]: E0120 00:49:28.587762 1917 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Jan 20 00:49:28.589045 kubelet[1917]: W0120 00:49:28.588066 1917 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "10.0.0.99" is forbidden: User "system:anonymous" cannot 
list resource "nodes" in API group "" at the cluster scope Jan 20 00:49:28.589045 kubelet[1917]: E0120 00:49:28.588183 1917 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"10.0.0.99\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Jan 20 00:49:28.591365 kubelet[1917]: E0120 00:49:28.590422 1917 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.99\" not found" Jan 20 00:49:28.591365 kubelet[1917]: I0120 00:49:28.590520 1917 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 20 00:49:28.591365 kubelet[1917]: I0120 00:49:28.590838 1917 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 20 00:49:28.591365 kubelet[1917]: I0120 00:49:28.590913 1917 reconciler.go:26] "Reconciler: start to sync state" Jan 20 00:49:28.594250 kubelet[1917]: I0120 00:49:28.593384 1917 factory.go:221] Registration of the systemd container factory successfully Jan 20 00:49:28.594250 kubelet[1917]: I0120 00:49:28.593534 1917 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 20 00:49:28.600167 kubelet[1917]: E0120 00:49:28.599711 1917 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 20 00:49:28.600274 kubelet[1917]: I0120 00:49:28.600227 1917 factory.go:221] Registration of the containerd container factory successfully Jan 20 00:49:28.601580 kubelet[1917]: W0120 00:49:28.601555 1917 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Jan 20 00:49:28.608913 kubelet[1917]: E0120 00:49:28.603992 1917 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" Jan 20 00:49:28.631306 kubelet[1917]: E0120 00:49:28.631254 1917 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.99\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Jan 20 00:49:28.759392 kubelet[1917]: E0120 00:49:28.758605 1917 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.99\" not found" Jan 20 00:49:28.781437 kubelet[1917]: E0120 00:49:28.630687 1917 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.99.188c4a0be35ddcfc default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.99,UID:10.0.0.99,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:10.0.0.99,},FirstTimestamp:2026-01-20 00:49:28.569437436 +0000 UTC m=+3.298830155,LastTimestamp:2026-01-20 00:49:28.569437436 +0000 UTC m=+3.298830155,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.99,}" Jan 20 00:49:28.853455 kubelet[1917]: I0120 00:49:28.851617 1917 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 20 00:49:28.853455 kubelet[1917]: I0120 00:49:28.851644 1917 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 20 00:49:28.853455 kubelet[1917]: I0120 00:49:28.851671 1917 state_mem.go:36] "Initialized new in-memory state store" Jan 20 00:49:28.859414 kubelet[1917]: E0120 00:49:28.859217 1917 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.99\" not found" Jan 20 00:49:28.862446 kubelet[1917]: I0120 00:49:28.862410 1917 policy_none.go:49] "None policy: Start" Jan 20 00:49:28.863273 kubelet[1917]: I0120 00:49:28.862653 1917 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 20 00:49:28.863273 kubelet[1917]: I0120 00:49:28.862718 1917 state_mem.go:35] "Initializing new in-memory state store" Jan 20 00:49:28.869468 kubelet[1917]: E0120 00:49:28.869373 1917 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.99\" not found" node="10.0.0.99" Jan 20 00:49:28.892925 kubelet[1917]: I0120 00:49:28.888605 1917 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 20 00:49:28.892925 kubelet[1917]: I0120 00:49:28.888973 1917 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 20 00:49:28.892925 kubelet[1917]: I0120 00:49:28.888993 1917 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 20 00:49:28.892925 kubelet[1917]: I0120 00:49:28.892585 
1917 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 20 00:49:28.895590 kubelet[1917]: E0120 00:49:28.895564 1917 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 20 00:49:28.896116 kubelet[1917]: E0120 00:49:28.896050 1917 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.99\" not found" Jan 20 00:49:28.991378 kubelet[1917]: I0120 00:49:28.991254 1917 kubelet_node_status.go:75] "Attempting to register node" node="10.0.0.99" Jan 20 00:49:29.016717 kubelet[1917]: I0120 00:49:29.016585 1917 kubelet_node_status.go:78] "Successfully registered node" node="10.0.0.99" Jan 20 00:49:29.016717 kubelet[1917]: E0120 00:49:29.016657 1917 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"10.0.0.99\": node \"10.0.0.99\" not found" Jan 20 00:49:29.051545 kubelet[1917]: I0120 00:49:29.050943 1917 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 20 00:49:29.058922 kubelet[1917]: I0120 00:49:29.056896 1917 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 20 00:49:29.059294 kubelet[1917]: I0120 00:49:29.059015 1917 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 20 00:49:29.059294 kubelet[1917]: I0120 00:49:29.059055 1917 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 20 00:49:29.059294 kubelet[1917]: I0120 00:49:29.059067 1917 kubelet.go:2382] "Starting kubelet main sync loop"
Jan 20 00:49:29.059294 kubelet[1917]: E0120 00:49:29.059275 1917 kubelet.go:2406] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Jan 20 00:49:29.077357 kubelet[1917]: E0120 00:49:29.077330 1917 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.99\" not found"
Jan 20 00:49:29.180356 kubelet[1917]: E0120 00:49:29.179981 1917 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.99\" not found"
Jan 20 00:49:29.280517 kubelet[1917]: E0120 00:49:29.280197 1917 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.99\" not found"
Jan 20 00:49:29.371753 kubelet[1917]: I0120 00:49:29.370987 1917 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Jan 20 00:49:29.371753 kubelet[1917]: W0120 00:49:29.371400 1917 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
Jan 20 00:49:29.381588 kubelet[1917]: E0120 00:49:29.380863 1917 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.99\" not found"
Jan 20 00:49:29.482905 kubelet[1917]: E0120 00:49:29.482547 1917 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.99\" not found"
Jan 20 00:49:29.588710 kubelet[1917]: E0120 00:49:29.579645 1917 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 20 00:49:29.588710 kubelet[1917]: E0120 00:49:29.587199 1917 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.99\" not found"
Jan 20 00:49:29.692135 kubelet[1917]: E0120 00:49:29.691762 1917 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.99\" not found"
Jan 20 00:49:29.796031 kubelet[1917]: E0120 00:49:29.791976 1917 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.99\" not found"
Jan 20 00:49:29.874753 sudo[1767]: pam_unix(sudo:session): session closed for user root
Jan 20 00:49:29.894231 sshd[1763]: pam_unix(sshd:session): session closed for user core
Jan 20 00:49:29.896379 kubelet[1917]: E0120 00:49:29.896151 1917 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.99\" not found"
Jan 20 00:49:29.942704 systemd[1]: sshd@6-10.0.0.99:22-10.0.0.1:42938.service: Deactivated successfully.
Jan 20 00:49:29.959515 systemd[1]: session-7.scope: Deactivated successfully.
Jan 20 00:49:29.963378 systemd-logind[1558]: Session 7 logged out. Waiting for processes to exit.
Jan 20 00:49:29.969349 systemd-logind[1558]: Removed session 7.
Jan 20 00:49:30.017970 kubelet[1917]: E0120 00:49:30.017217 1917 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.99\" not found"
Jan 20 00:49:30.133702 kubelet[1917]: I0120 00:49:30.131319 1917 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24"
Jan 20 00:49:30.134324 containerd[1582]: time="2026-01-20T00:49:30.132723271Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jan 20 00:49:30.137528 kubelet[1917]: I0120 00:49:30.135721 1917 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24"
Jan 20 00:49:30.561023 kubelet[1917]: I0120 00:49:30.560781 1917 apiserver.go:52] "Watching apiserver"
Jan 20 00:49:30.582033 kubelet[1917]: E0120 00:49:30.579521 1917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zmzrd" podUID="9c9c61fc-bda8-4f9e-a70d-5bcfbaefa16e"
Jan 20 00:49:30.589469 kubelet[1917]: E0120 00:49:30.588266 1917 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 20 00:49:30.593899 kubelet[1917]: I0120 00:49:30.593752 1917 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Jan 20 00:49:30.634669 kubelet[1917]: I0120 00:49:30.629241 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/55f4be88-76ba-43d0-8343-cbf374a5a1ba-tigera-ca-bundle\") pod \"calico-node-lr5cg\" (UID: \"55f4be88-76ba-43d0-8343-cbf374a5a1ba\") " pod="calico-system/calico-node-lr5cg"
Jan 20 00:49:30.634669 kubelet[1917]: I0120 00:49:30.629309 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/55f4be88-76ba-43d0-8343-cbf374a5a1ba-policysync\") pod \"calico-node-lr5cg\" (UID: \"55f4be88-76ba-43d0-8343-cbf374a5a1ba\") " pod="calico-system/calico-node-lr5cg"
Jan 20 00:49:30.634669 kubelet[1917]: I0120 00:49:30.629347 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9c9c61fc-bda8-4f9e-a70d-5bcfbaefa16e-kubelet-dir\") pod \"csi-node-driver-zmzrd\" (UID: \"9c9c61fc-bda8-4f9e-a70d-5bcfbaefa16e\") " pod="calico-system/csi-node-driver-zmzrd"
Jan 20 00:49:30.634669 kubelet[1917]: I0120 00:49:30.629402 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/9c9c61fc-bda8-4f9e-a70d-5bcfbaefa16e-varrun\") pod \"csi-node-driver-zmzrd\" (UID: \"9c9c61fc-bda8-4f9e-a70d-5bcfbaefa16e\") " pod="calico-system/csi-node-driver-zmzrd"
Jan 20 00:49:30.634669 kubelet[1917]: I0120 00:49:30.629444 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/36cd25b4-5681-462c-8301-d3033c2d1ded-xtables-lock\") pod \"kube-proxy-pn7vk\" (UID: \"36cd25b4-5681-462c-8301-d3033c2d1ded\") " pod="kube-system/kube-proxy-pn7vk"
Jan 20 00:49:30.635029 kubelet[1917]: I0120 00:49:30.629470 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/55f4be88-76ba-43d0-8343-cbf374a5a1ba-cni-log-dir\") pod \"calico-node-lr5cg\" (UID: \"55f4be88-76ba-43d0-8343-cbf374a5a1ba\") " pod="calico-system/calico-node-lr5cg"
Jan 20 00:49:30.635029 kubelet[1917]: I0120 00:49:30.629494 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/55f4be88-76ba-43d0-8343-cbf374a5a1ba-flexvol-driver-host\") pod \"calico-node-lr5cg\" (UID: \"55f4be88-76ba-43d0-8343-cbf374a5a1ba\") " pod="calico-system/calico-node-lr5cg"
Jan 20 00:49:30.635029 kubelet[1917]: I0120 00:49:30.629518 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/55f4be88-76ba-43d0-8343-cbf374a5a1ba-lib-modules\") pod \"calico-node-lr5cg\" (UID: \"55f4be88-76ba-43d0-8343-cbf374a5a1ba\") " pod="calico-system/calico-node-lr5cg"
Jan 20 00:49:30.635029 kubelet[1917]: I0120 00:49:30.629546 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/55f4be88-76ba-43d0-8343-cbf374a5a1ba-node-certs\") pod \"calico-node-lr5cg\" (UID: \"55f4be88-76ba-43d0-8343-cbf374a5a1ba\") " pod="calico-system/calico-node-lr5cg"
Jan 20 00:49:30.635029 kubelet[1917]: I0120 00:49:30.629576 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/55f4be88-76ba-43d0-8343-cbf374a5a1ba-var-lib-calico\") pod \"calico-node-lr5cg\" (UID: \"55f4be88-76ba-43d0-8343-cbf374a5a1ba\") " pod="calico-system/calico-node-lr5cg"
Jan 20 00:49:30.635305 kubelet[1917]: I0120 00:49:30.629604 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/55f4be88-76ba-43d0-8343-cbf374a5a1ba-var-run-calico\") pod \"calico-node-lr5cg\" (UID: \"55f4be88-76ba-43d0-8343-cbf374a5a1ba\") " pod="calico-system/calico-node-lr5cg"
Jan 20 00:49:30.635305 kubelet[1917]: I0120 00:49:30.629628 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/55f4be88-76ba-43d0-8343-cbf374a5a1ba-xtables-lock\") pod \"calico-node-lr5cg\" (UID: \"55f4be88-76ba-43d0-8343-cbf374a5a1ba\") " pod="calico-system/calico-node-lr5cg"
Jan 20 00:49:30.635305 kubelet[1917]: I0120 00:49:30.629666 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/9c9c61fc-bda8-4f9e-a70d-5bcfbaefa16e-registration-dir\") pod \"csi-node-driver-zmzrd\" (UID: \"9c9c61fc-bda8-4f9e-a70d-5bcfbaefa16e\") " pod="calico-system/csi-node-driver-zmzrd"
Jan 20 00:49:30.635305 kubelet[1917]: I0120 00:49:30.629692 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqpzv\" (UniqueName: \"kubernetes.io/projected/9c9c61fc-bda8-4f9e-a70d-5bcfbaefa16e-kube-api-access-pqpzv\") pod \"csi-node-driver-zmzrd\" (UID: \"9c9c61fc-bda8-4f9e-a70d-5bcfbaefa16e\") " pod="calico-system/csi-node-driver-zmzrd"
Jan 20 00:49:30.635305 kubelet[1917]: I0120 00:49:30.629715 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/36cd25b4-5681-462c-8301-d3033c2d1ded-lib-modules\") pod \"kube-proxy-pn7vk\" (UID: \"36cd25b4-5681-462c-8301-d3033c2d1ded\") " pod="kube-system/kube-proxy-pn7vk"
Jan 20 00:49:30.635459 kubelet[1917]: I0120 00:49:30.629741 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fqwc\" (UniqueName: \"kubernetes.io/projected/36cd25b4-5681-462c-8301-d3033c2d1ded-kube-api-access-8fqwc\") pod \"kube-proxy-pn7vk\" (UID: \"36cd25b4-5681-462c-8301-d3033c2d1ded\") " pod="kube-system/kube-proxy-pn7vk"
Jan 20 00:49:30.635459 kubelet[1917]: I0120 00:49:30.629768 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/55f4be88-76ba-43d0-8343-cbf374a5a1ba-cni-bin-dir\") pod \"calico-node-lr5cg\" (UID: \"55f4be88-76ba-43d0-8343-cbf374a5a1ba\") " pod="calico-system/calico-node-lr5cg"
Jan 20 00:49:30.635459 kubelet[1917]: I0120 00:49:30.629800 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: 
\"kubernetes.io/host-path/55f4be88-76ba-43d0-8343-cbf374a5a1ba-cni-net-dir\") pod \"calico-node-lr5cg\" (UID: \"55f4be88-76ba-43d0-8343-cbf374a5a1ba\") " pod="calico-system/calico-node-lr5cg"
Jan 20 00:49:30.635459 kubelet[1917]: I0120 00:49:30.629878 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/9c9c61fc-bda8-4f9e-a70d-5bcfbaefa16e-socket-dir\") pod \"csi-node-driver-zmzrd\" (UID: \"9c9c61fc-bda8-4f9e-a70d-5bcfbaefa16e\") " pod="calico-system/csi-node-driver-zmzrd"
Jan 20 00:49:30.635459 kubelet[1917]: I0120 00:49:30.629913 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/36cd25b4-5681-462c-8301-d3033c2d1ded-kube-proxy\") pod \"kube-proxy-pn7vk\" (UID: \"36cd25b4-5681-462c-8301-d3033c2d1ded\") " pod="kube-system/kube-proxy-pn7vk"
Jan 20 00:49:30.635617 kubelet[1917]: I0120 00:49:30.629942 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5x9qn\" (UniqueName: \"kubernetes.io/projected/55f4be88-76ba-43d0-8343-cbf374a5a1ba-kube-api-access-5x9qn\") pod \"calico-node-lr5cg\" (UID: \"55f4be88-76ba-43d0-8343-cbf374a5a1ba\") " pod="calico-system/calico-node-lr5cg"
Jan 20 00:49:30.738611 kubelet[1917]: E0120 00:49:30.738455 1917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 20 00:49:30.738611 kubelet[1917]: W0120 00:49:30.738511 1917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 20 00:49:30.738888 kubelet[1917]: E0120 00:49:30.738634 1917 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 20 00:49:30.739379 kubelet[1917]: E0120 00:49:30.739145 1917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 20 00:49:30.739379 kubelet[1917]: W0120 00:49:30.739168 1917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 20 00:49:30.739379 kubelet[1917]: E0120 00:49:30.739292 1917 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 20 00:49:30.739636 kubelet[1917]: E0120 00:49:30.739525 1917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 20 00:49:30.739636 kubelet[1917]: W0120 00:49:30.739541 1917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 20 00:49:30.740068 kubelet[1917]: E0120 00:49:30.739796 1917 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 20 00:49:30.741942 kubelet[1917]: E0120 00:49:30.741330 1917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 20 00:49:30.741942 kubelet[1917]: W0120 00:49:30.741366 1917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 20 00:49:30.741942 kubelet[1917]: E0120 00:49:30.741696 1917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 20 00:49:30.741942 kubelet[1917]: W0120 00:49:30.741709 1917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 20 00:49:30.743367 kubelet[1917]: E0120 00:49:30.743333 1917 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 20 00:49:30.743460 kubelet[1917]: E0120 00:49:30.743374 1917 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 20 00:49:30.748939 kubelet[1917]: E0120 00:49:30.748021 1917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 20 00:49:30.748939 kubelet[1917]: W0120 00:49:30.748114 1917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 20 00:49:30.748939 kubelet[1917]: E0120 00:49:30.748518 1917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 20 00:49:30.748939 kubelet[1917]: W0120 00:49:30.748533 1917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 20 00:49:30.752140 kubelet[1917]: E0120 00:49:30.749443 1917 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 20 00:49:30.752140 kubelet[1917]: E0120 00:49:30.749476 1917 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Jan 20 00:49:30.762041 kubelet[1917]: E0120 00:49:30.761691 1917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 20 00:49:30.762041 kubelet[1917]: W0120 00:49:30.761732 1917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 20 00:49:30.764542 kubelet[1917]: E0120 00:49:30.762163 1917 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 20 00:49:30.767138 kubelet[1917]: E0120 00:49:30.766508 1917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 20 00:49:30.767138 kubelet[1917]: W0120 00:49:30.766568 1917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 20 00:49:30.767138 kubelet[1917]: E0120 00:49:30.766703 1917 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 20 00:49:30.768909 kubelet[1917]: E0120 00:49:30.767536 1917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 20 00:49:30.768909 kubelet[1917]: W0120 00:49:30.767588 1917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 20 00:49:30.768909 kubelet[1917]: E0120 00:49:30.768050 1917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 20 00:49:30.768909 kubelet[1917]: W0120 00:49:30.768065 1917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 20 00:49:30.768909 kubelet[1917]: E0120 00:49:30.768486 1917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 20 00:49:30.768909 kubelet[1917]: W0120 00:49:30.768500 1917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 20 00:49:30.768909 kubelet[1917]: E0120 00:49:30.768642 1917 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 20 00:49:30.768909 kubelet[1917]: E0120 00:49:30.768674 1917 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 20 00:49:30.768909 kubelet[1917]: E0120 00:49:30.768690 1917 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 20 00:49:30.772668 kubelet[1917]: E0120 00:49:30.770735 1917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 20 00:49:30.772668 kubelet[1917]: W0120 00:49:30.771540 1917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 20 00:49:30.774277 kubelet[1917]: E0120 00:49:30.774027 1917 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 20 00:49:30.774844 kubelet[1917]: E0120 00:49:30.774547 1917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 20 00:49:30.774844 kubelet[1917]: W0120 00:49:30.774580 1917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 20 00:49:30.774844 kubelet[1917]: E0120 00:49:30.774668 1917 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 20 00:49:30.775782 kubelet[1917]: E0120 00:49:30.775255 1917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 20 00:49:30.775782 kubelet[1917]: W0120 00:49:30.775478 1917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 20 00:49:30.776512 kubelet[1917]: E0120 00:49:30.776346 1917 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 20 00:49:30.777523 kubelet[1917]: E0120 00:49:30.776739 1917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 20 00:49:30.777523 kubelet[1917]: W0120 00:49:30.776955 1917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 20 00:49:30.778829 kubelet[1917]: E0120 00:49:30.778640 1917 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Jan 20 00:49:30.779492 kubelet[1917]: E0120 00:49:30.779210 1917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 20 00:49:30.779492 kubelet[1917]: W0120 00:49:30.779225 1917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 20 00:49:30.779492 kubelet[1917]: E0120 00:49:30.779318 1917 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 20 00:49:30.783317 kubelet[1917]: E0120 00:49:30.783175 1917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 20 00:49:30.783317 kubelet[1917]: W0120 00:49:30.783225 1917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 20 00:49:30.783533 kubelet[1917]: E0120 00:49:30.783511 1917 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 20 00:49:30.783991 kubelet[1917]: E0120 00:49:30.783838 1917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 20 00:49:30.784153 kubelet[1917]: W0120 00:49:30.783998 1917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 20 00:49:30.784902 kubelet[1917]: E0120 00:49:30.784797 1917 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 20 00:49:30.786326 kubelet[1917]: E0120 00:49:30.786231 1917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 20 00:49:30.786326 kubelet[1917]: W0120 00:49:30.786299 1917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 20 00:49:30.786721 kubelet[1917]: E0120 00:49:30.786553 1917 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 20 00:49:30.790160 kubelet[1917]: E0120 00:49:30.788185 1917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 20 00:49:30.790160 kubelet[1917]: W0120 00:49:30.788212 1917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 20 00:49:30.790708 kubelet[1917]: E0120 00:49:30.790636 1917 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 20 00:49:30.790708 kubelet[1917]: E0120 00:49:30.790681 1917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 20 00:49:30.790708 kubelet[1917]: W0120 00:49:30.790698 1917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 20 00:49:30.792956 kubelet[1917]: E0120 00:49:30.790724 1917 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 20 00:49:30.801528 kubelet[1917]: E0120 00:49:30.801445 1917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 20 00:49:30.801528 kubelet[1917]: W0120 00:49:30.801502 1917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 20 00:49:30.801528 kubelet[1917]: E0120 00:49:30.801537 1917 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 20 00:49:30.807863 kubelet[1917]: E0120 00:49:30.807801 1917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 20 00:49:30.807863 kubelet[1917]: W0120 00:49:30.807859 1917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 20 00:49:30.808208 kubelet[1917]: E0120 00:49:30.807890 1917 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Jan 20 00:49:30.813855 kubelet[1917]: E0120 00:49:30.811869 1917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 20 00:49:30.813855 kubelet[1917]: W0120 00:49:30.812003 1917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 20 00:49:30.813855 kubelet[1917]: E0120 00:49:30.812032 1917 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 20 00:49:30.893603 kubelet[1917]: E0120 00:49:30.891454 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 00:49:30.900437 containerd[1582]: time="2026-01-20T00:49:30.900287701Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-lr5cg,Uid:55f4be88-76ba-43d0-8343-cbf374a5a1ba,Namespace:calico-system,Attempt:0,}"
Jan 20 00:49:30.925194 kubelet[1917]: E0120 00:49:30.923150 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 00:49:30.925371 containerd[1582]: time="2026-01-20T00:49:30.924353340Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pn7vk,Uid:36cd25b4-5681-462c-8301-d3033c2d1ded,Namespace:kube-system,Attempt:0,}"
Jan 20 00:49:31.588898 kubelet[1917]: E0120 00:49:31.588740 1917 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 20 00:49:32.064950 kubelet[1917]: E0120 00:49:32.062045 1917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zmzrd" podUID="9c9c61fc-bda8-4f9e-a70d-5bcfbaefa16e"
Jan 20 00:49:32.097459 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1606130253.mount: Deactivated successfully.
Jan 20 00:49:32.131909 containerd[1582]: time="2026-01-20T00:49:32.131661770Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 20 00:49:32.141272 containerd[1582]: time="2026-01-20T00:49:32.141218783Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 20 00:49:32.146718 containerd[1582]: time="2026-01-20T00:49:32.146590582Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Jan 20 00:49:32.153008 containerd[1582]: time="2026-01-20T00:49:32.151214391Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 20 00:49:32.156884 containerd[1582]: time="2026-01-20T00:49:32.156689155Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 20 00:49:32.173676 containerd[1582]: time="2026-01-20T00:49:32.173565889Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 20 00:49:32.175356 containerd[1582]: time="2026-01-20T00:49:32.175233393Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.250780767s"
Jan 20 00:49:32.185992 containerd[1582]: time="2026-01-20T00:49:32.185344724Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.284674609s"
Jan 20 00:49:32.591123 kubelet[1917]: E0120 00:49:32.590713 1917 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 20 00:49:33.201313 containerd[1582]: time="2026-01-20T00:49:33.197634272Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 20 00:49:33.201313 containerd[1582]: time="2026-01-20T00:49:33.197725603Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 20 00:49:33.201313 containerd[1582]: time="2026-01-20T00:49:33.197771269Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 20 00:49:33.201313 containerd[1582]: time="2026-01-20T00:49:33.201954067Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 20 00:49:33.346783 containerd[1582]: time="2026-01-20T00:49:33.346488103Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 20 00:49:33.346783 containerd[1582]: time="2026-01-20T00:49:33.346641340Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 20 00:49:33.346783 containerd[1582]: time="2026-01-20T00:49:33.346668320Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 20 00:49:33.346783 containerd[1582]: time="2026-01-20T00:49:33.350340341Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 20 00:49:33.619658 kubelet[1917]: E0120 00:49:33.615774 1917 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 20 00:49:34.035946 containerd[1582]: time="2026-01-20T00:49:34.035699742Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pn7vk,Uid:36cd25b4-5681-462c-8301-d3033c2d1ded,Namespace:kube-system,Attempt:0,} returns sandbox id \"0cb5110aaa3a9be8b40fd4a59800ab2170ed4fc52e3d87e8834e3c13e3c1d603\""
Jan 20 00:49:34.060043 containerd[1582]: time="2026-01-20T00:49:34.045314399Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-lr5cg,Uid:55f4be88-76ba-43d0-8343-cbf374a5a1ba,Namespace:calico-system,Attempt:0,} returns sandbox id \"288727555102af1efe86d47347f169d8a465e963e8a83c64310fd42c8e7bc9d3\""
Jan 20 00:49:34.061871 kubelet[1917]: E0120 00:49:34.058540 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 00:49:34.061871 kubelet[1917]: E0120 00:49:34.059207 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 00:49:34.063262 
kubelet[1917]: E0120 00:49:34.062243 1917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zmzrd" podUID="9c9c61fc-bda8-4f9e-a70d-5bcfbaefa16e" Jan 20 00:49:34.066130 containerd[1582]: time="2026-01-20T00:49:34.066037953Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\"" Jan 20 00:49:34.619431 kubelet[1917]: E0120 00:49:34.618168 1917 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:49:35.624442 kubelet[1917]: E0120 00:49:35.623797 1917 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:49:36.060543 kubelet[1917]: E0120 00:49:36.059740 1917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zmzrd" podUID="9c9c61fc-bda8-4f9e-a70d-5bcfbaefa16e" Jan 20 00:49:36.627365 kubelet[1917]: E0120 00:49:36.626630 1917 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:49:37.628528 kubelet[1917]: E0120 00:49:37.628239 1917 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:49:38.060838 kubelet[1917]: E0120 00:49:38.060346 1917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zmzrd" podUID="9c9c61fc-bda8-4f9e-a70d-5bcfbaefa16e" Jan 20 
00:49:38.630394 kubelet[1917]: E0120 00:49:38.629760 1917 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:49:38.877763 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount62749927.mount: Deactivated successfully. Jan 20 00:49:39.631798 kubelet[1917]: E0120 00:49:39.630973 1917 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:49:40.061059 kubelet[1917]: E0120 00:49:40.060986 1917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zmzrd" podUID="9c9c61fc-bda8-4f9e-a70d-5bcfbaefa16e" Jan 20 00:49:40.632500 kubelet[1917]: E0120 00:49:40.632138 1917 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:49:41.635445 kubelet[1917]: E0120 00:49:41.635143 1917 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:49:42.060921 kubelet[1917]: E0120 00:49:42.060382 1917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zmzrd" podUID="9c9c61fc-bda8-4f9e-a70d-5bcfbaefa16e" Jan 20 00:49:42.344497 containerd[1582]: time="2026-01-20T00:49:42.343606387Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:49:42.349204 containerd[1582]: time="2026-01-20T00:49:42.348792839Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes 
read=31161899" Jan 20 00:49:42.356217 containerd[1582]: time="2026-01-20T00:49:42.353797080Z" level=info msg="ImageCreate event name:\"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:49:42.363605 containerd[1582]: time="2026-01-20T00:49:42.363485889Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:49:42.366980 containerd[1582]: time="2026-01-20T00:49:42.366895111Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"31160918\" in 8.300477351s" Jan 20 00:49:42.367155 containerd[1582]: time="2026-01-20T00:49:42.366976975Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\"" Jan 20 00:49:42.376580 containerd[1582]: time="2026-01-20T00:49:42.376177034Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 20 00:49:42.380915 containerd[1582]: time="2026-01-20T00:49:42.378149707Z" level=info msg="CreateContainer within sandbox \"0cb5110aaa3a9be8b40fd4a59800ab2170ed4fc52e3d87e8834e3c13e3c1d603\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 20 00:49:42.459242 containerd[1582]: time="2026-01-20T00:49:42.459039450Z" level=info msg="CreateContainer within sandbox \"0cb5110aaa3a9be8b40fd4a59800ab2170ed4fc52e3d87e8834e3c13e3c1d603\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"37ee00ba47075e873d544c5cdefc28be73a04a030d554200f109e788150497fe\"" Jan 20 
00:49:42.467144 containerd[1582]: time="2026-01-20T00:49:42.464146572Z" level=info msg="StartContainer for \"37ee00ba47075e873d544c5cdefc28be73a04a030d554200f109e788150497fe\"" Jan 20 00:49:42.635940 kubelet[1917]: E0120 00:49:42.635680 1917 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:49:43.642130 kubelet[1917]: E0120 00:49:43.641884 1917 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:49:43.906625 containerd[1582]: time="2026-01-20T00:49:43.903913562Z" level=info msg="StartContainer for \"37ee00ba47075e873d544c5cdefc28be73a04a030d554200f109e788150497fe\" returns successfully" Jan 20 00:49:44.060757 kubelet[1917]: E0120 00:49:44.060002 1917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zmzrd" podUID="9c9c61fc-bda8-4f9e-a70d-5bcfbaefa16e" Jan 20 00:49:44.071267 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1286787591.mount: Deactivated successfully. 
Jan 20 00:49:44.207428 kubelet[1917]: E0120 00:49:44.205385 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:49:44.293915 kubelet[1917]: E0120 00:49:44.293610 1917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:49:44.293915 kubelet[1917]: W0120 00:49:44.293652 1917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:49:44.293915 kubelet[1917]: E0120 00:49:44.293687 1917 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:49:44.295924 kubelet[1917]: E0120 00:49:44.294936 1917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:49:44.295924 kubelet[1917]: W0120 00:49:44.294957 1917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:49:44.295924 kubelet[1917]: E0120 00:49:44.294982 1917 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 00:49:44.296639 kubelet[1917]: E0120 00:49:44.296619 1917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:49:44.296857 kubelet[1917]: W0120 00:49:44.296753 1917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:49:44.297146 kubelet[1917]: E0120 00:49:44.296781 1917 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:49:44.301281 kubelet[1917]: E0120 00:49:44.301251 1917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:49:44.301579 kubelet[1917]: W0120 00:49:44.301447 1917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:49:44.302124 kubelet[1917]: E0120 00:49:44.301832 1917 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 00:49:44.304460 kubelet[1917]: E0120 00:49:44.304404 1917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:49:44.304704 kubelet[1917]: W0120 00:49:44.304612 1917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:49:44.304913 kubelet[1917]: E0120 00:49:44.304849 1917 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:49:44.305972 kubelet[1917]: E0120 00:49:44.305801 1917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:49:44.305972 kubelet[1917]: W0120 00:49:44.305888 1917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:49:44.305972 kubelet[1917]: E0120 00:49:44.305910 1917 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 00:49:44.308374 kubelet[1917]: E0120 00:49:44.308196 1917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:49:44.308374 kubelet[1917]: W0120 00:49:44.308215 1917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:49:44.308374 kubelet[1917]: E0120 00:49:44.308240 1917 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:49:44.310661 kubelet[1917]: E0120 00:49:44.310641 1917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:49:44.310661 kubelet[1917]: W0120 00:49:44.310750 1917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:49:44.310661 kubelet[1917]: E0120 00:49:44.310773 1917 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 00:49:44.314555 kubelet[1917]: E0120 00:49:44.314204 1917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:49:44.314555 kubelet[1917]: W0120 00:49:44.314298 1917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:49:44.314555 kubelet[1917]: E0120 00:49:44.314322 1917 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:49:44.317346 kubelet[1917]: E0120 00:49:44.317270 1917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:49:44.317684 kubelet[1917]: W0120 00:49:44.317287 1917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:49:44.317684 kubelet[1917]: E0120 00:49:44.317595 1917 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 00:49:44.323176 kubelet[1917]: E0120 00:49:44.322326 1917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:49:44.323176 kubelet[1917]: W0120 00:49:44.323029 1917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:49:44.323176 kubelet[1917]: E0120 00:49:44.323063 1917 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:49:44.326352 kubelet[1917]: E0120 00:49:44.326198 1917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:49:44.326352 kubelet[1917]: W0120 00:49:44.326316 1917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:49:44.326352 kubelet[1917]: E0120 00:49:44.326342 1917 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 00:49:44.329316 kubelet[1917]: E0120 00:49:44.329171 1917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:49:44.329316 kubelet[1917]: W0120 00:49:44.329244 1917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:49:44.329316 kubelet[1917]: E0120 00:49:44.329310 1917 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:49:44.331560 kubelet[1917]: E0120 00:49:44.330767 1917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:49:44.331560 kubelet[1917]: W0120 00:49:44.330790 1917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:49:44.331560 kubelet[1917]: E0120 00:49:44.330813 1917 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 00:49:44.334134 kubelet[1917]: E0120 00:49:44.332378 1917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:49:44.334134 kubelet[1917]: W0120 00:49:44.332418 1917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:49:44.334134 kubelet[1917]: E0120 00:49:44.332443 1917 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:49:44.335567 kubelet[1917]: E0120 00:49:44.334875 1917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:49:44.335567 kubelet[1917]: W0120 00:49:44.335335 1917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:49:44.336753 kubelet[1917]: E0120 00:49:44.336049 1917 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 00:49:44.337962 kubelet[1917]: E0120 00:49:44.337622 1917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:49:44.337962 kubelet[1917]: W0120 00:49:44.337713 1917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:49:44.337962 kubelet[1917]: E0120 00:49:44.337734 1917 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:49:44.340290 kubelet[1917]: E0120 00:49:44.339935 1917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:49:44.340290 kubelet[1917]: W0120 00:49:44.340134 1917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:49:44.341626 kubelet[1917]: E0120 00:49:44.341135 1917 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 00:49:44.342556 kubelet[1917]: E0120 00:49:44.342498 1917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:49:44.342791 kubelet[1917]: W0120 00:49:44.342630 1917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:49:44.342791 kubelet[1917]: E0120 00:49:44.342654 1917 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:49:44.343806 kubelet[1917]: E0120 00:49:44.343787 1917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:49:44.344233 kubelet[1917]: W0120 00:49:44.343928 1917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:49:44.344233 kubelet[1917]: E0120 00:49:44.343954 1917 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 00:49:44.344572 kubelet[1917]: E0120 00:49:44.344513 1917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:49:44.344656 kubelet[1917]: W0120 00:49:44.344640 1917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:49:44.344730 kubelet[1917]: E0120 00:49:44.344715 1917 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:49:44.345266 kubelet[1917]: E0120 00:49:44.345248 1917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:49:44.345719 kubelet[1917]: W0120 00:49:44.345568 1917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:49:44.345719 kubelet[1917]: E0120 00:49:44.345593 1917 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 00:49:44.345994 kubelet[1917]: E0120 00:49:44.345980 1917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:49:44.346240 kubelet[1917]: W0120 00:49:44.346053 1917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:49:44.346240 kubelet[1917]: E0120 00:49:44.346131 1917 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:49:44.346565 kubelet[1917]: E0120 00:49:44.346509 1917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:49:44.346777 kubelet[1917]: W0120 00:49:44.346641 1917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:49:44.346876 kubelet[1917]: E0120 00:49:44.346859 1917 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 00:49:44.347188 kubelet[1917]: E0120 00:49:44.347154 1917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:49:44.347188 kubelet[1917]: W0120 00:49:44.347169 1917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:49:44.347504 kubelet[1917]: E0120 00:49:44.347323 1917 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:49:44.347933 kubelet[1917]: E0120 00:49:44.347821 1917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:49:44.347933 kubelet[1917]: W0120 00:49:44.347854 1917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:49:44.347933 kubelet[1917]: E0120 00:49:44.347871 1917 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 00:49:44.350795 kubelet[1917]: E0120 00:49:44.348658 1917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:49:44.350795 kubelet[1917]: W0120 00:49:44.348690 1917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:49:44.350795 kubelet[1917]: E0120 00:49:44.349146 1917 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:49:44.350795 kubelet[1917]: E0120 00:49:44.349942 1917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:49:44.350795 kubelet[1917]: W0120 00:49:44.349954 1917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:49:44.350795 kubelet[1917]: E0120 00:49:44.350011 1917 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 00:49:44.351779 kubelet[1917]: E0120 00:49:44.351475 1917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:49:44.351779 kubelet[1917]: W0120 00:49:44.351564 1917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:49:44.352439 kubelet[1917]: E0120 00:49:44.352011 1917 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:49:44.352854 kubelet[1917]: E0120 00:49:44.352820 1917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:49:44.352854 kubelet[1917]: W0120 00:49:44.352834 1917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:49:44.353474 kubelet[1917]: E0120 00:49:44.352983 1917 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 00:49:44.354437 kubelet[1917]: E0120 00:49:44.354418 1917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:49:44.354567 kubelet[1917]: W0120 00:49:44.354510 1917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:49:44.354748 kubelet[1917]: E0120 00:49:44.354728 1917 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:49:44.355190 kubelet[1917]: E0120 00:49:44.355170 1917 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:49:44.355276 kubelet[1917]: W0120 00:49:44.355257 1917 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:49:44.355385 kubelet[1917]: E0120 00:49:44.355363 1917 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 00:49:44.653749 kubelet[1917]: E0120 00:49:44.642355 1917 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:49:44.685705 containerd[1582]: time="2026-01-20T00:49:44.683715621Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:49:44.690720 containerd[1582]: time="2026-01-20T00:49:44.690604328Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=5941492" Jan 20 00:49:44.690857 containerd[1582]: time="2026-01-20T00:49:44.690769301Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:49:44.703673 containerd[1582]: time="2026-01-20T00:49:44.702939985Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:49:44.705564 containerd[1582]: time="2026-01-20T00:49:44.704291697Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 2.328032579s" Jan 20 00:49:44.705564 containerd[1582]: time="2026-01-20T00:49:44.704352706Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Jan 20 00:49:44.716915 containerd[1582]: 
time="2026-01-20T00:49:44.716383477Z" level=info msg="CreateContainer within sandbox \"288727555102af1efe86d47347f169d8a465e963e8a83c64310fd42c8e7bc9d3\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 20 00:49:44.775227 containerd[1582]: time="2026-01-20T00:49:44.774921741Z" level=info msg="CreateContainer within sandbox \"288727555102af1efe86d47347f169d8a465e963e8a83c64310fd42c8e7bc9d3\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"37e0963a2d4ff9306026ff60ccd6caf70ef9b13d13bdc1fb9f5704668bf9a955\"" Jan 20 00:49:44.776139 containerd[1582]: time="2026-01-20T00:49:44.775992650Z" level=info msg="StartContainer for \"37e0963a2d4ff9306026ff60ccd6caf70ef9b13d13bdc1fb9f5704668bf9a955\"" Jan 20 00:49:45.027428 containerd[1582]: time="2026-01-20T00:49:45.027374589Z" level=info msg="StartContainer for \"37e0963a2d4ff9306026ff60ccd6caf70ef9b13d13bdc1fb9f5704668bf9a955\" returns successfully" Jan 20 00:49:45.143365 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-37e0963a2d4ff9306026ff60ccd6caf70ef9b13d13bdc1fb9f5704668bf9a955-rootfs.mount: Deactivated successfully. 
Jan 20 00:49:45.222573 kubelet[1917]: E0120 00:49:45.220921 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:49:45.222573 kubelet[1917]: E0120 00:49:45.221853 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:49:45.305940 kubelet[1917]: I0120 00:49:45.305729 1917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-pn7vk" podStartSLOduration=7.998814834 podStartE2EDuration="16.305683774s" podCreationTimestamp="2026-01-20 00:49:29 +0000 UTC" firstStartedPulling="2026-01-20 00:49:34.065311897 +0000 UTC m=+8.794704616" lastFinishedPulling="2026-01-20 00:49:42.372180838 +0000 UTC m=+17.101573556" observedRunningTime="2026-01-20 00:49:44.309864029 +0000 UTC m=+19.039256758" watchObservedRunningTime="2026-01-20 00:49:45.305683774 +0000 UTC m=+20.035076494" Jan 20 00:49:45.497879 containerd[1582]: time="2026-01-20T00:49:45.497757978Z" level=info msg="shim disconnected" id=37e0963a2d4ff9306026ff60ccd6caf70ef9b13d13bdc1fb9f5704668bf9a955 namespace=k8s.io Jan 20 00:49:45.497879 containerd[1582]: time="2026-01-20T00:49:45.497872531Z" level=warning msg="cleaning up after shim disconnected" id=37e0963a2d4ff9306026ff60ccd6caf70ef9b13d13bdc1fb9f5704668bf9a955 namespace=k8s.io Jan 20 00:49:45.501927 containerd[1582]: time="2026-01-20T00:49:45.497891786Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 20 00:49:45.647996 kubelet[1917]: E0120 00:49:45.647735 1917 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:49:46.059905 kubelet[1917]: E0120 00:49:46.059767 1917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zmzrd" podUID="9c9c61fc-bda8-4f9e-a70d-5bcfbaefa16e" Jan 20 00:49:46.247443 kubelet[1917]: E0120 00:49:46.247058 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:49:46.250553 containerd[1582]: time="2026-01-20T00:49:46.250329690Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 20 00:49:46.653473 kubelet[1917]: E0120 00:49:46.652740 1917 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:49:47.658655 kubelet[1917]: E0120 00:49:47.657516 1917 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:49:48.074537 kubelet[1917]: E0120 00:49:48.073730 1917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zmzrd" podUID="9c9c61fc-bda8-4f9e-a70d-5bcfbaefa16e" Jan 20 00:49:48.565497 kubelet[1917]: E0120 00:49:48.564818 1917 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:49:48.675144 kubelet[1917]: E0120 00:49:48.673172 1917 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:49:49.688638 kubelet[1917]: E0120 00:49:49.687385 1917 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:49:50.068893 kubelet[1917]: E0120 00:49:50.067257 1917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is 
not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zmzrd" podUID="9c9c61fc-bda8-4f9e-a70d-5bcfbaefa16e" Jan 20 00:49:50.694928 kubelet[1917]: E0120 00:49:50.694265 1917 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:49:51.703687 kubelet[1917]: E0120 00:49:51.702957 1917 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:49:52.078816 kubelet[1917]: E0120 00:49:52.072907 1917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zmzrd" podUID="9c9c61fc-bda8-4f9e-a70d-5bcfbaefa16e" Jan 20 00:49:52.703720 kubelet[1917]: E0120 00:49:52.703669 1917 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:49:53.722727 kubelet[1917]: E0120 00:49:53.721732 1917 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:49:54.076174 kubelet[1917]: E0120 00:49:54.073009 1917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zmzrd" podUID="9c9c61fc-bda8-4f9e-a70d-5bcfbaefa16e" Jan 20 00:49:54.731482 kubelet[1917]: E0120 00:49:54.730830 1917 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:49:55.075618 update_engine[1563]: I20260120 00:49:55.072328 1563 
update_attempter.cc:509] Updating boot flags... Jan 20 00:49:55.223720 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2374) Jan 20 00:49:55.439181 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2375) Jan 20 00:49:55.563974 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2375) Jan 20 00:49:55.732027 kubelet[1917]: E0120 00:49:55.731863 1917 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:49:55.971752 containerd[1582]: time="2026-01-20T00:49:55.971213630Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:49:55.979839 containerd[1582]: time="2026-01-20T00:49:55.979697310Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Jan 20 00:49:55.982962 containerd[1582]: time="2026-01-20T00:49:55.982768542Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:49:55.992389 containerd[1582]: time="2026-01-20T00:49:55.992272589Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:49:55.994658 containerd[1582]: time="2026-01-20T00:49:55.994560377Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 9.744005926s" Jan 20 
00:49:55.994658 containerd[1582]: time="2026-01-20T00:49:55.994628832Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Jan 20 00:49:56.003856 containerd[1582]: time="2026-01-20T00:49:56.003626582Z" level=info msg="CreateContainer within sandbox \"288727555102af1efe86d47347f169d8a465e963e8a83c64310fd42c8e7bc9d3\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 20 00:49:56.060236 kubelet[1917]: E0120 00:49:56.060062 1917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zmzrd" podUID="9c9c61fc-bda8-4f9e-a70d-5bcfbaefa16e" Jan 20 00:49:56.072294 containerd[1582]: time="2026-01-20T00:49:56.072004567Z" level=info msg="CreateContainer within sandbox \"288727555102af1efe86d47347f169d8a465e963e8a83c64310fd42c8e7bc9d3\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"f7ca5ed27bef6e20c9408e4e0146391afbac8aec57ae4e0c17683232316919a3\"" Jan 20 00:49:56.072294 containerd[1582]: time="2026-01-20T00:49:56.072998505Z" level=info msg="StartContainer for \"f7ca5ed27bef6e20c9408e4e0146391afbac8aec57ae4e0c17683232316919a3\"" Jan 20 00:49:56.376772 containerd[1582]: time="2026-01-20T00:49:56.374774130Z" level=info msg="StartContainer for \"f7ca5ed27bef6e20c9408e4e0146391afbac8aec57ae4e0c17683232316919a3\" returns successfully" Jan 20 00:49:56.711331 kubelet[1917]: E0120 00:49:56.710988 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:49:56.735524 kubelet[1917]: E0120 00:49:56.735355 1917 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Jan 20 00:49:57.765170 kubelet[1917]: E0120 00:49:57.755601 1917 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:49:57.823178 kubelet[1917]: E0120 00:49:57.822035 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:49:58.089823 kubelet[1917]: E0120 00:49:58.085293 1917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zmzrd" podUID="9c9c61fc-bda8-4f9e-a70d-5bcfbaefa16e" Jan 20 00:49:58.766884 kubelet[1917]: E0120 00:49:58.766474 1917 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:49:59.771272 kubelet[1917]: E0120 00:49:59.771185 1917 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:49:59.782727 kubelet[1917]: I0120 00:49:59.782570 1917 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 20 00:49:59.796012 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f7ca5ed27bef6e20c9408e4e0146391afbac8aec57ae4e0c17683232316919a3-rootfs.mount: Deactivated successfully. 
Jan 20 00:50:00.260734 containerd[1582]: time="2026-01-20T00:50:00.260236773Z" level=info msg="shim disconnected" id=f7ca5ed27bef6e20c9408e4e0146391afbac8aec57ae4e0c17683232316919a3 namespace=k8s.io Jan 20 00:50:00.260734 containerd[1582]: time="2026-01-20T00:50:00.260420590Z" level=warning msg="cleaning up after shim disconnected" id=f7ca5ed27bef6e20c9408e4e0146391afbac8aec57ae4e0c17683232316919a3 namespace=k8s.io Jan 20 00:50:00.260734 containerd[1582]: time="2026-01-20T00:50:00.260439956Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 20 00:50:00.269364 containerd[1582]: time="2026-01-20T00:50:00.265638366Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zmzrd,Uid:9c9c61fc-bda8-4f9e-a70d-5bcfbaefa16e,Namespace:calico-system,Attempt:0,}" Jan 20 00:50:00.808946 kubelet[1917]: E0120 00:50:00.802399 1917 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:50:00.930990 kubelet[1917]: E0120 00:50:00.930737 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:50:00.939866 containerd[1582]: time="2026-01-20T00:50:00.939139285Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 20 00:50:01.057413 containerd[1582]: time="2026-01-20T00:50:01.057048291Z" level=error msg="Failed to destroy network for sandbox \"a4bd86c09ab99fa3dc57bf68f125c66a5fa039911b69981f12261a6b9d9bca0f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:50:01.077233 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a4bd86c09ab99fa3dc57bf68f125c66a5fa039911b69981f12261a6b9d9bca0f-shm.mount: Deactivated successfully. 
Jan 20 00:50:01.084165 containerd[1582]: time="2026-01-20T00:50:01.080008750Z" level=error msg="encountered an error cleaning up failed sandbox \"a4bd86c09ab99fa3dc57bf68f125c66a5fa039911b69981f12261a6b9d9bca0f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:50:01.084165 containerd[1582]: time="2026-01-20T00:50:01.080291940Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zmzrd,Uid:9c9c61fc-bda8-4f9e-a70d-5bcfbaefa16e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a4bd86c09ab99fa3dc57bf68f125c66a5fa039911b69981f12261a6b9d9bca0f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:50:01.084346 kubelet[1917]: E0120 00:50:01.082368 1917 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a4bd86c09ab99fa3dc57bf68f125c66a5fa039911b69981f12261a6b9d9bca0f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:50:01.084346 kubelet[1917]: E0120 00:50:01.082445 1917 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a4bd86c09ab99fa3dc57bf68f125c66a5fa039911b69981f12261a6b9d9bca0f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zmzrd" Jan 20 00:50:01.084346 kubelet[1917]: E0120 00:50:01.082508 1917 
kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a4bd86c09ab99fa3dc57bf68f125c66a5fa039911b69981f12261a6b9d9bca0f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zmzrd" Jan 20 00:50:01.084456 kubelet[1917]: E0120 00:50:01.082563 1917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-zmzrd_calico-system(9c9c61fc-bda8-4f9e-a70d-5bcfbaefa16e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-zmzrd_calico-system(9c9c61fc-bda8-4f9e-a70d-5bcfbaefa16e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a4bd86c09ab99fa3dc57bf68f125c66a5fa039911b69981f12261a6b9d9bca0f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-zmzrd" podUID="9c9c61fc-bda8-4f9e-a70d-5bcfbaefa16e" Jan 20 00:50:01.805206 kubelet[1917]: E0120 00:50:01.804273 1917 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:50:01.913170 kubelet[1917]: I0120 00:50:01.912984 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2x8sz\" (UniqueName: \"kubernetes.io/projected/a2f71d77-781e-4900-b8a4-919aa1dd894e-kube-api-access-2x8sz\") pod \"nginx-deployment-7fcdb87857-m7864\" (UID: \"a2f71d77-781e-4900-b8a4-919aa1dd894e\") " pod="default/nginx-deployment-7fcdb87857-m7864" Jan 20 00:50:01.953899 kubelet[1917]: I0120 00:50:01.953792 1917 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="a4bd86c09ab99fa3dc57bf68f125c66a5fa039911b69981f12261a6b9d9bca0f" Jan 20 00:50:01.955376 containerd[1582]: time="2026-01-20T00:50:01.955219010Z" level=info msg="StopPodSandbox for \"a4bd86c09ab99fa3dc57bf68f125c66a5fa039911b69981f12261a6b9d9bca0f\"" Jan 20 00:50:01.956002 containerd[1582]: time="2026-01-20T00:50:01.955497171Z" level=info msg="Ensure that sandbox a4bd86c09ab99fa3dc57bf68f125c66a5fa039911b69981f12261a6b9d9bca0f in task-service has been cleanup successfully" Jan 20 00:50:02.417389 containerd[1582]: time="2026-01-20T00:50:02.405306565Z" level=error msg="StopPodSandbox for \"a4bd86c09ab99fa3dc57bf68f125c66a5fa039911b69981f12261a6b9d9bca0f\" failed" error="failed to destroy network for sandbox \"a4bd86c09ab99fa3dc57bf68f125c66a5fa039911b69981f12261a6b9d9bca0f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:50:02.423849 kubelet[1917]: E0120 00:50:02.420915 1917 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a4bd86c09ab99fa3dc57bf68f125c66a5fa039911b69981f12261a6b9d9bca0f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a4bd86c09ab99fa3dc57bf68f125c66a5fa039911b69981f12261a6b9d9bca0f" Jan 20 00:50:02.429171 kubelet[1917]: E0120 00:50:02.428975 1917 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a4bd86c09ab99fa3dc57bf68f125c66a5fa039911b69981f12261a6b9d9bca0f"} Jan 20 00:50:02.429432 kubelet[1917]: E0120 00:50:02.429179 1917 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9c9c61fc-bda8-4f9e-a70d-5bcfbaefa16e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = 
failed to destroy network for sandbox \\\"a4bd86c09ab99fa3dc57bf68f125c66a5fa039911b69981f12261a6b9d9bca0f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 20 00:50:02.429432 kubelet[1917]: E0120 00:50:02.429253 1917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9c9c61fc-bda8-4f9e-a70d-5bcfbaefa16e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a4bd86c09ab99fa3dc57bf68f125c66a5fa039911b69981f12261a6b9d9bca0f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-zmzrd" podUID="9c9c61fc-bda8-4f9e-a70d-5bcfbaefa16e" Jan 20 00:50:02.463487 containerd[1582]: time="2026-01-20T00:50:02.463053187Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-m7864,Uid:a2f71d77-781e-4900-b8a4-919aa1dd894e,Namespace:default,Attempt:0,}" Jan 20 00:50:02.844309 kubelet[1917]: E0120 00:50:02.838280 1917 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:50:03.859918 kubelet[1917]: E0120 00:50:03.849171 1917 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:50:04.432867 containerd[1582]: time="2026-01-20T00:50:04.430173728Z" level=error msg="Failed to destroy network for sandbox \"2dc2fcd4544541536427a8afd68f307808585d8781a8a3bf024ef62dbbd05587\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:50:04.451043 containerd[1582]: 
time="2026-01-20T00:50:04.447535766Z" level=error msg="encountered an error cleaning up failed sandbox \"2dc2fcd4544541536427a8afd68f307808585d8781a8a3bf024ef62dbbd05587\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:50:04.450389 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2dc2fcd4544541536427a8afd68f307808585d8781a8a3bf024ef62dbbd05587-shm.mount: Deactivated successfully. Jan 20 00:50:04.454292 containerd[1582]: time="2026-01-20T00:50:04.451417094Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-m7864,Uid:a2f71d77-781e-4900-b8a4-919aa1dd894e,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2dc2fcd4544541536427a8afd68f307808585d8781a8a3bf024ef62dbbd05587\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:50:04.456835 kubelet[1917]: E0120 00:50:04.456304 1917 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2dc2fcd4544541536427a8afd68f307808585d8781a8a3bf024ef62dbbd05587\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:50:04.456835 kubelet[1917]: E0120 00:50:04.456703 1917 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2dc2fcd4544541536427a8afd68f307808585d8781a8a3bf024ef62dbbd05587\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-m7864" Jan 20 00:50:04.456835 kubelet[1917]: E0120 00:50:04.456774 1917 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2dc2fcd4544541536427a8afd68f307808585d8781a8a3bf024ef62dbbd05587\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-m7864" Jan 20 00:50:04.457489 kubelet[1917]: E0120 00:50:04.456997 1917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-7fcdb87857-m7864_default(a2f71d77-781e-4900-b8a4-919aa1dd894e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-7fcdb87857-m7864_default(a2f71d77-781e-4900-b8a4-919aa1dd894e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2dc2fcd4544541536427a8afd68f307808585d8781a8a3bf024ef62dbbd05587\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-7fcdb87857-m7864" podUID="a2f71d77-781e-4900-b8a4-919aa1dd894e" Jan 20 00:50:04.937165 kubelet[1917]: E0120 00:50:04.904034 1917 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:50:05.349037 kubelet[1917]: I0120 00:50:05.347982 1917 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2dc2fcd4544541536427a8afd68f307808585d8781a8a3bf024ef62dbbd05587" Jan 20 00:50:05.352960 containerd[1582]: time="2026-01-20T00:50:05.351186702Z" level=info msg="StopPodSandbox for \"2dc2fcd4544541536427a8afd68f307808585d8781a8a3bf024ef62dbbd05587\"" Jan 20 00:50:05.352960 
containerd[1582]: time="2026-01-20T00:50:05.351961881Z" level=info msg="Ensure that sandbox 2dc2fcd4544541536427a8afd68f307808585d8781a8a3bf024ef62dbbd05587 in task-service has been cleanup successfully" Jan 20 00:50:05.451704 containerd[1582]: time="2026-01-20T00:50:05.451398817Z" level=error msg="StopPodSandbox for \"2dc2fcd4544541536427a8afd68f307808585d8781a8a3bf024ef62dbbd05587\" failed" error="failed to destroy network for sandbox \"2dc2fcd4544541536427a8afd68f307808585d8781a8a3bf024ef62dbbd05587\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:50:05.456636 kubelet[1917]: E0120 00:50:05.454267 1917 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2dc2fcd4544541536427a8afd68f307808585d8781a8a3bf024ef62dbbd05587\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2dc2fcd4544541536427a8afd68f307808585d8781a8a3bf024ef62dbbd05587" Jan 20 00:50:05.456636 kubelet[1917]: E0120 00:50:05.454377 1917 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2dc2fcd4544541536427a8afd68f307808585d8781a8a3bf024ef62dbbd05587"} Jan 20 00:50:05.456636 kubelet[1917]: E0120 00:50:05.454429 1917 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a2f71d77-781e-4900-b8a4-919aa1dd894e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2dc2fcd4544541536427a8afd68f307808585d8781a8a3bf024ef62dbbd05587\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" Jan 20 00:50:05.456636 kubelet[1917]: E0120 00:50:05.454466 1917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a2f71d77-781e-4900-b8a4-919aa1dd894e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2dc2fcd4544541536427a8afd68f307808585d8781a8a3bf024ef62dbbd05587\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-7fcdb87857-m7864" podUID="a2f71d77-781e-4900-b8a4-919aa1dd894e" Jan 20 00:50:05.946450 kubelet[1917]: E0120 00:50:05.943027 1917 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:50:06.957915 kubelet[1917]: E0120 00:50:06.952537 1917 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:50:08.292567 kubelet[1917]: E0120 00:50:08.262049 1917 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:50:08.647402 kubelet[1917]: E0120 00:50:08.636891 1917 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:50:09.504422 kubelet[1917]: E0120 00:50:09.483820 1917 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:50:10.508767 kubelet[1917]: E0120 00:50:10.508223 1917 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:50:11.520013 kubelet[1917]: E0120 00:50:11.519653 1917 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:50:12.523064 kubelet[1917]: E0120 00:50:12.522641 1917 
file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:50:13.523845 kubelet[1917]: E0120 00:50:13.523675 1917 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:50:14.541507 kubelet[1917]: E0120 00:50:14.541133 1917 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:50:15.549900 kubelet[1917]: E0120 00:50:15.547132 1917 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:50:16.067754 containerd[1582]: time="2026-01-20T00:50:16.066001470Z" level=info msg="StopPodSandbox for \"a4bd86c09ab99fa3dc57bf68f125c66a5fa039911b69981f12261a6b9d9bca0f\"" Jan 20 00:50:16.377960 containerd[1582]: time="2026-01-20T00:50:16.376816140Z" level=error msg="StopPodSandbox for \"a4bd86c09ab99fa3dc57bf68f125c66a5fa039911b69981f12261a6b9d9bca0f\" failed" error="failed to destroy network for sandbox \"a4bd86c09ab99fa3dc57bf68f125c66a5fa039911b69981f12261a6b9d9bca0f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:50:16.384812 kubelet[1917]: E0120 00:50:16.383578 1917 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a4bd86c09ab99fa3dc57bf68f125c66a5fa039911b69981f12261a6b9d9bca0f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a4bd86c09ab99fa3dc57bf68f125c66a5fa039911b69981f12261a6b9d9bca0f" Jan 20 00:50:16.384812 kubelet[1917]: E0120 00:50:16.384008 1917 kuberuntime_manager.go:1546] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"a4bd86c09ab99fa3dc57bf68f125c66a5fa039911b69981f12261a6b9d9bca0f"} Jan 20 00:50:16.384812 kubelet[1917]: E0120 00:50:16.384193 1917 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9c9c61fc-bda8-4f9e-a70d-5bcfbaefa16e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a4bd86c09ab99fa3dc57bf68f125c66a5fa039911b69981f12261a6b9d9bca0f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 20 00:50:16.384812 kubelet[1917]: E0120 00:50:16.384445 1917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9c9c61fc-bda8-4f9e-a70d-5bcfbaefa16e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a4bd86c09ab99fa3dc57bf68f125c66a5fa039911b69981f12261a6b9d9bca0f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-zmzrd" podUID="9c9c61fc-bda8-4f9e-a70d-5bcfbaefa16e" Jan 20 00:50:16.550944 kubelet[1917]: E0120 00:50:16.550170 1917 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:50:17.552549 kubelet[1917]: E0120 00:50:17.551984 1917 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:50:18.563574 kubelet[1917]: E0120 00:50:18.562650 1917 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:50:19.071560 containerd[1582]: time="2026-01-20T00:50:19.069539061Z" level=info msg="StopPodSandbox for 
\"2dc2fcd4544541536427a8afd68f307808585d8781a8a3bf024ef62dbbd05587\"" Jan 20 00:50:19.546606 containerd[1582]: time="2026-01-20T00:50:19.543004124Z" level=error msg="StopPodSandbox for \"2dc2fcd4544541536427a8afd68f307808585d8781a8a3bf024ef62dbbd05587\" failed" error="failed to destroy network for sandbox \"2dc2fcd4544541536427a8afd68f307808585d8781a8a3bf024ef62dbbd05587\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:50:19.549395 kubelet[1917]: E0120 00:50:19.548712 1917 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2dc2fcd4544541536427a8afd68f307808585d8781a8a3bf024ef62dbbd05587\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2dc2fcd4544541536427a8afd68f307808585d8781a8a3bf024ef62dbbd05587" Jan 20 00:50:19.549395 kubelet[1917]: E0120 00:50:19.548796 1917 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2dc2fcd4544541536427a8afd68f307808585d8781a8a3bf024ef62dbbd05587"} Jan 20 00:50:19.549395 kubelet[1917]: E0120 00:50:19.548849 1917 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a2f71d77-781e-4900-b8a4-919aa1dd894e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2dc2fcd4544541536427a8afd68f307808585d8781a8a3bf024ef62dbbd05587\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 20 00:50:19.549395 kubelet[1917]: E0120 00:50:19.548882 1917 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"KillPodSandbox\" for \"a2f71d77-781e-4900-b8a4-919aa1dd894e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2dc2fcd4544541536427a8afd68f307808585d8781a8a3bf024ef62dbbd05587\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-7fcdb87857-m7864" podUID="a2f71d77-781e-4900-b8a4-919aa1dd894e" Jan 20 00:50:19.564767 kubelet[1917]: E0120 00:50:19.564476 1917 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:50:20.565864 kubelet[1917]: E0120 00:50:20.565662 1917 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:50:21.654806 kubelet[1917]: E0120 00:50:21.653828 1917 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:50:22.660707 kubelet[1917]: E0120 00:50:22.657895 1917 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:50:23.661011 kubelet[1917]: E0120 00:50:23.660739 1917 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:50:24.662135 kubelet[1917]: E0120 00:50:24.661955 1917 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:50:25.666994 kubelet[1917]: E0120 00:50:25.666582 1917 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:50:26.707429 kubelet[1917]: E0120 00:50:26.682762 1917 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 
00:50:27.710391 kubelet[1917]: E0120 00:50:27.709558 1917 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:50:28.085873 containerd[1582]: time="2026-01-20T00:50:28.080026215Z" level=info msg="StopPodSandbox for \"a4bd86c09ab99fa3dc57bf68f125c66a5fa039911b69981f12261a6b9d9bca0f\"" Jan 20 00:50:28.336458 containerd[1582]: time="2026-01-20T00:50:28.334776300Z" level=error msg="StopPodSandbox for \"a4bd86c09ab99fa3dc57bf68f125c66a5fa039911b69981f12261a6b9d9bca0f\" failed" error="failed to destroy network for sandbox \"a4bd86c09ab99fa3dc57bf68f125c66a5fa039911b69981f12261a6b9d9bca0f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:50:28.336636 kubelet[1917]: E0120 00:50:28.335773 1917 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a4bd86c09ab99fa3dc57bf68f125c66a5fa039911b69981f12261a6b9d9bca0f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a4bd86c09ab99fa3dc57bf68f125c66a5fa039911b69981f12261a6b9d9bca0f" Jan 20 00:50:28.336636 kubelet[1917]: E0120 00:50:28.335848 1917 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a4bd86c09ab99fa3dc57bf68f125c66a5fa039911b69981f12261a6b9d9bca0f"} Jan 20 00:50:28.336636 kubelet[1917]: E0120 00:50:28.335903 1917 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9c9c61fc-bda8-4f9e-a70d-5bcfbaefa16e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a4bd86c09ab99fa3dc57bf68f125c66a5fa039911b69981f12261a6b9d9bca0f\\\": plugin 
type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 20 00:50:28.336636 kubelet[1917]: E0120 00:50:28.335943 1917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9c9c61fc-bda8-4f9e-a70d-5bcfbaefa16e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a4bd86c09ab99fa3dc57bf68f125c66a5fa039911b69981f12261a6b9d9bca0f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-zmzrd" podUID="9c9c61fc-bda8-4f9e-a70d-5bcfbaefa16e" Jan 20 00:50:28.559153 kubelet[1917]: E0120 00:50:28.558623 1917 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:50:28.727725 kubelet[1917]: E0120 00:50:28.726817 1917 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:50:29.736048 kubelet[1917]: E0120 00:50:29.735658 1917 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:50:30.741841 kubelet[1917]: E0120 00:50:30.741227 1917 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:50:31.749646 kubelet[1917]: E0120 00:50:31.749037 1917 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:50:32.066537 containerd[1582]: time="2026-01-20T00:50:32.065034302Z" level=info msg="StopPodSandbox for \"2dc2fcd4544541536427a8afd68f307808585d8781a8a3bf024ef62dbbd05587\"" Jan 20 00:50:32.335488 containerd[1582]: time="2026-01-20T00:50:32.330150990Z" level=error 
msg="StopPodSandbox for \"2dc2fcd4544541536427a8afd68f307808585d8781a8a3bf024ef62dbbd05587\" failed" error="failed to destroy network for sandbox \"2dc2fcd4544541536427a8afd68f307808585d8781a8a3bf024ef62dbbd05587\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:50:32.341011 kubelet[1917]: E0120 00:50:32.331233 1917 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2dc2fcd4544541536427a8afd68f307808585d8781a8a3bf024ef62dbbd05587\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2dc2fcd4544541536427a8afd68f307808585d8781a8a3bf024ef62dbbd05587" Jan 20 00:50:32.341011 kubelet[1917]: E0120 00:50:32.340898 1917 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2dc2fcd4544541536427a8afd68f307808585d8781a8a3bf024ef62dbbd05587"} Jan 20 00:50:32.341011 kubelet[1917]: E0120 00:50:32.340970 1917 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a2f71d77-781e-4900-b8a4-919aa1dd894e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2dc2fcd4544541536427a8afd68f307808585d8781a8a3bf024ef62dbbd05587\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 20 00:50:32.341762 kubelet[1917]: E0120 00:50:32.341161 1917 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a2f71d77-781e-4900-b8a4-919aa1dd894e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for 
sandbox \\\"2dc2fcd4544541536427a8afd68f307808585d8781a8a3bf024ef62dbbd05587\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-7fcdb87857-m7864" podUID="a2f71d77-781e-4900-b8a4-919aa1dd894e" Jan 20 00:50:32.757639 kubelet[1917]: E0120 00:50:32.750652 1917 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:50:33.754824 kubelet[1917]: E0120 00:50:33.752620 1917 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:50:34.753456 kubelet[1917]: E0120 00:50:34.753324 1917 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:50:36.057379 kubelet[1917]: E0120 00:50:36.056175 1917 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:50:36.427338 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2755256701.mount: Deactivated successfully. 
Jan 20 00:50:36.527295 containerd[1582]: time="2026-01-20T00:50:36.527033265Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:50:36.534677 containerd[1582]: time="2026-01-20T00:50:36.531597222Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Jan 20 00:50:36.536677 containerd[1582]: time="2026-01-20T00:50:36.536266216Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:50:36.548952 containerd[1582]: time="2026-01-20T00:50:36.546671735Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:50:36.548952 containerd[1582]: time="2026-01-20T00:50:36.548334438Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 35.609081835s" Jan 20 00:50:36.548952 containerd[1582]: time="2026-01-20T00:50:36.548424847Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 20 00:50:36.675007 containerd[1582]: time="2026-01-20T00:50:36.674794326Z" level=info msg="CreateContainer within sandbox \"288727555102af1efe86d47347f169d8a465e963e8a83c64310fd42c8e7bc9d3\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 20 00:50:36.771691 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1145030717.mount: 
Deactivated successfully. Jan 20 00:50:36.816485 containerd[1582]: time="2026-01-20T00:50:36.810699970Z" level=info msg="CreateContainer within sandbox \"288727555102af1efe86d47347f169d8a465e963e8a83c64310fd42c8e7bc9d3\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"05e62bf6f71b341be789acbbcd7a38d769172c4f840854c0fc09e2eb6d291985\"" Jan 20 00:50:36.831687 containerd[1582]: time="2026-01-20T00:50:36.831211458Z" level=info msg="StartContainer for \"05e62bf6f71b341be789acbbcd7a38d769172c4f840854c0fc09e2eb6d291985\"" Jan 20 00:50:37.065704 kubelet[1917]: E0120 00:50:37.063716 1917 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:50:37.501145 containerd[1582]: time="2026-01-20T00:50:37.500992839Z" level=info msg="StartContainer for \"05e62bf6f71b341be789acbbcd7a38d769172c4f840854c0fc09e2eb6d291985\" returns successfully" Jan 20 00:50:38.084234 kubelet[1917]: E0120 00:50:38.083399 1917 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:50:38.200404 kubelet[1917]: E0120 00:50:38.200302 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:50:39.308711 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 20 00:50:39.309482 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jan 20 00:50:39.309540 kubelet[1917]: E0120 00:50:39.306970 1917 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:50:39.332023 kubelet[1917]: E0120 00:50:39.331469 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:50:40.308000 kubelet[1917]: E0120 00:50:40.307715 1917 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:50:41.152920 containerd[1582]: time="2026-01-20T00:50:41.152398458Z" level=info msg="StopPodSandbox for \"a4bd86c09ab99fa3dc57bf68f125c66a5fa039911b69981f12261a6b9d9bca0f\"" Jan 20 00:50:41.342989 kubelet[1917]: E0120 00:50:41.342417 1917 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:50:41.537308 kubelet[1917]: I0120 00:50:41.537066 1917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-lr5cg" podStartSLOduration=10.065759009 podStartE2EDuration="1m12.536984029s" podCreationTimestamp="2026-01-20 00:49:29 +0000 UTC" firstStartedPulling="2026-01-20 00:49:34.081963014 +0000 UTC m=+8.811355733" lastFinishedPulling="2026-01-20 00:50:36.553188033 +0000 UTC m=+71.282580753" observedRunningTime="2026-01-20 00:50:38.372703888 +0000 UTC m=+73.102096617" watchObservedRunningTime="2026-01-20 00:50:41.536984029 +0000 UTC m=+76.266376748" Jan 20 00:50:41.718513 containerd[1582]: 2026-01-20 00:50:41.530 [INFO][2753] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a4bd86c09ab99fa3dc57bf68f125c66a5fa039911b69981f12261a6b9d9bca0f" Jan 20 00:50:41.718513 containerd[1582]: 2026-01-20 00:50:41.532 [INFO][2753] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="a4bd86c09ab99fa3dc57bf68f125c66a5fa039911b69981f12261a6b9d9bca0f" iface="eth0" netns="/var/run/netns/cni-c6c3db58-99e8-73e1-b423-3b261347b799" Jan 20 00:50:41.718513 containerd[1582]: 2026-01-20 00:50:41.532 [INFO][2753] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a4bd86c09ab99fa3dc57bf68f125c66a5fa039911b69981f12261a6b9d9bca0f" iface="eth0" netns="/var/run/netns/cni-c6c3db58-99e8-73e1-b423-3b261347b799" Jan 20 00:50:41.718513 containerd[1582]: 2026-01-20 00:50:41.534 [INFO][2753] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a4bd86c09ab99fa3dc57bf68f125c66a5fa039911b69981f12261a6b9d9bca0f" iface="eth0" netns="/var/run/netns/cni-c6c3db58-99e8-73e1-b423-3b261347b799" Jan 20 00:50:41.718513 containerd[1582]: 2026-01-20 00:50:41.538 [INFO][2753] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a4bd86c09ab99fa3dc57bf68f125c66a5fa039911b69981f12261a6b9d9bca0f" Jan 20 00:50:41.718513 containerd[1582]: 2026-01-20 00:50:41.538 [INFO][2753] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a4bd86c09ab99fa3dc57bf68f125c66a5fa039911b69981f12261a6b9d9bca0f" Jan 20 00:50:41.718513 containerd[1582]: 2026-01-20 00:50:41.635 [INFO][2762] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a4bd86c09ab99fa3dc57bf68f125c66a5fa039911b69981f12261a6b9d9bca0f" HandleID="k8s-pod-network.a4bd86c09ab99fa3dc57bf68f125c66a5fa039911b69981f12261a6b9d9bca0f" Workload="10.0.0.99-k8s-csi--node--driver--zmzrd-eth0" Jan 20 00:50:41.718513 containerd[1582]: 2026-01-20 00:50:41.636 [INFO][2762] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:50:41.718513 containerd[1582]: 2026-01-20 00:50:41.638 [INFO][2762] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 00:50:41.718513 containerd[1582]: 2026-01-20 00:50:41.682 [WARNING][2762] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a4bd86c09ab99fa3dc57bf68f125c66a5fa039911b69981f12261a6b9d9bca0f" HandleID="k8s-pod-network.a4bd86c09ab99fa3dc57bf68f125c66a5fa039911b69981f12261a6b9d9bca0f" Workload="10.0.0.99-k8s-csi--node--driver--zmzrd-eth0" Jan 20 00:50:41.718513 containerd[1582]: 2026-01-20 00:50:41.682 [INFO][2762] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a4bd86c09ab99fa3dc57bf68f125c66a5fa039911b69981f12261a6b9d9bca0f" HandleID="k8s-pod-network.a4bd86c09ab99fa3dc57bf68f125c66a5fa039911b69981f12261a6b9d9bca0f" Workload="10.0.0.99-k8s-csi--node--driver--zmzrd-eth0" Jan 20 00:50:41.718513 containerd[1582]: 2026-01-20 00:50:41.693 [INFO][2762] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 00:50:41.718513 containerd[1582]: 2026-01-20 00:50:41.704 [INFO][2753] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a4bd86c09ab99fa3dc57bf68f125c66a5fa039911b69981f12261a6b9d9bca0f" Jan 20 00:50:41.720967 containerd[1582]: time="2026-01-20T00:50:41.720694582Z" level=info msg="TearDown network for sandbox \"a4bd86c09ab99fa3dc57bf68f125c66a5fa039911b69981f12261a6b9d9bca0f\" successfully" Jan 20 00:50:41.720967 containerd[1582]: time="2026-01-20T00:50:41.720739487Z" level=info msg="StopPodSandbox for \"a4bd86c09ab99fa3dc57bf68f125c66a5fa039911b69981f12261a6b9d9bca0f\" returns successfully" Jan 20 00:50:41.720827 systemd[1]: run-netns-cni\x2dc6c3db58\x2d99e8\x2d73e1\x2db423\x2d3b261347b799.mount: Deactivated successfully. 
Jan 20 00:50:41.728691 containerd[1582]: time="2026-01-20T00:50:41.727924062Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zmzrd,Uid:9c9c61fc-bda8-4f9e-a70d-5bcfbaefa16e,Namespace:calico-system,Attempt:1,}" Jan 20 00:50:42.639425 kubelet[1917]: E0120 00:50:42.627928 1917 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:50:43.576522 systemd-networkd[1246]: caliabef537e8e2: Link UP Jan 20 00:50:43.576948 systemd-networkd[1246]: caliabef537e8e2: Gained carrier Jan 20 00:50:43.625766 kernel: bpftool[2927]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 20 00:50:43.636857 containerd[1582]: 2026-01-20 00:50:41.977 [INFO][2770] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 20 00:50:43.636857 containerd[1582]: 2026-01-20 00:50:42.238 [INFO][2770] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.99-k8s-csi--node--driver--zmzrd-eth0 csi-node-driver- calico-system 9c9c61fc-bda8-4f9e-a70d-5bcfbaefa16e 1724 0 2026-01-20 00:49:29 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 10.0.0.99 csi-node-driver-zmzrd eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] caliabef537e8e2 [] [] }} ContainerID="4c53bc9b650f1bbfba932f02f9f7f244f592802dee534ac3413f0f231392d823" Namespace="calico-system" Pod="csi-node-driver-zmzrd" WorkloadEndpoint="10.0.0.99-k8s-csi--node--driver--zmzrd-" Jan 20 00:50:43.636857 containerd[1582]: 2026-01-20 00:50:42.238 [INFO][2770] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4c53bc9b650f1bbfba932f02f9f7f244f592802dee534ac3413f0f231392d823" 
Namespace="calico-system" Pod="csi-node-driver-zmzrd" WorkloadEndpoint="10.0.0.99-k8s-csi--node--driver--zmzrd-eth0" Jan 20 00:50:43.636857 containerd[1582]: 2026-01-20 00:50:43.300 [INFO][2875] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4c53bc9b650f1bbfba932f02f9f7f244f592802dee534ac3413f0f231392d823" HandleID="k8s-pod-network.4c53bc9b650f1bbfba932f02f9f7f244f592802dee534ac3413f0f231392d823" Workload="10.0.0.99-k8s-csi--node--driver--zmzrd-eth0" Jan 20 00:50:43.636857 containerd[1582]: 2026-01-20 00:50:43.302 [INFO][2875] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="4c53bc9b650f1bbfba932f02f9f7f244f592802dee534ac3413f0f231392d823" HandleID="k8s-pod-network.4c53bc9b650f1bbfba932f02f9f7f244f592802dee534ac3413f0f231392d823" Workload="10.0.0.99-k8s-csi--node--driver--zmzrd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fb40), Attrs:map[string]string{"namespace":"calico-system", "node":"10.0.0.99", "pod":"csi-node-driver-zmzrd", "timestamp":"2026-01-20 00:50:43.300879303 +0000 UTC"}, Hostname:"10.0.0.99", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 00:50:43.636857 containerd[1582]: 2026-01-20 00:50:43.302 [INFO][2875] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:50:43.636857 containerd[1582]: 2026-01-20 00:50:43.303 [INFO][2875] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 20 00:50:43.636857 containerd[1582]: 2026-01-20 00:50:43.303 [INFO][2875] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.99' Jan 20 00:50:43.636857 containerd[1582]: 2026-01-20 00:50:43.342 [INFO][2875] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4c53bc9b650f1bbfba932f02f9f7f244f592802dee534ac3413f0f231392d823" host="10.0.0.99" Jan 20 00:50:43.636857 containerd[1582]: 2026-01-20 00:50:43.362 [INFO][2875] ipam/ipam.go 394: Looking up existing affinities for host host="10.0.0.99" Jan 20 00:50:43.636857 containerd[1582]: 2026-01-20 00:50:43.396 [INFO][2875] ipam/ipam.go 511: Trying affinity for 192.168.72.128/26 host="10.0.0.99" Jan 20 00:50:43.636857 containerd[1582]: 2026-01-20 00:50:43.417 [INFO][2875] ipam/ipam.go 158: Attempting to load block cidr=192.168.72.128/26 host="10.0.0.99" Jan 20 00:50:43.636857 containerd[1582]: 2026-01-20 00:50:43.432 [INFO][2875] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.72.128/26 host="10.0.0.99" Jan 20 00:50:43.636857 containerd[1582]: 2026-01-20 00:50:43.432 [INFO][2875] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.72.128/26 handle="k8s-pod-network.4c53bc9b650f1bbfba932f02f9f7f244f592802dee534ac3413f0f231392d823" host="10.0.0.99" Jan 20 00:50:43.636857 containerd[1582]: 2026-01-20 00:50:43.443 [INFO][2875] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.4c53bc9b650f1bbfba932f02f9f7f244f592802dee534ac3413f0f231392d823 Jan 20 00:50:43.636857 containerd[1582]: 2026-01-20 00:50:43.471 [INFO][2875] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.72.128/26 handle="k8s-pod-network.4c53bc9b650f1bbfba932f02f9f7f244f592802dee534ac3413f0f231392d823" host="10.0.0.99" Jan 20 00:50:43.636857 containerd[1582]: 2026-01-20 00:50:43.514 [INFO][2875] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.72.129/26] block=192.168.72.128/26 
handle="k8s-pod-network.4c53bc9b650f1bbfba932f02f9f7f244f592802dee534ac3413f0f231392d823" host="10.0.0.99" Jan 20 00:50:43.636857 containerd[1582]: 2026-01-20 00:50:43.514 [INFO][2875] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.72.129/26] handle="k8s-pod-network.4c53bc9b650f1bbfba932f02f9f7f244f592802dee534ac3413f0f231392d823" host="10.0.0.99" Jan 20 00:50:43.636857 containerd[1582]: 2026-01-20 00:50:43.514 [INFO][2875] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 00:50:43.636857 containerd[1582]: 2026-01-20 00:50:43.514 [INFO][2875] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.72.129/26] IPv6=[] ContainerID="4c53bc9b650f1bbfba932f02f9f7f244f592802dee534ac3413f0f231392d823" HandleID="k8s-pod-network.4c53bc9b650f1bbfba932f02f9f7f244f592802dee534ac3413f0f231392d823" Workload="10.0.0.99-k8s-csi--node--driver--zmzrd-eth0" Jan 20 00:50:43.641770 kubelet[1917]: E0120 00:50:43.636680 1917 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:50:43.642294 containerd[1582]: 2026-01-20 00:50:43.527 [INFO][2770] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4c53bc9b650f1bbfba932f02f9f7f244f592802dee534ac3413f0f231392d823" Namespace="calico-system" Pod="csi-node-driver-zmzrd" WorkloadEndpoint="10.0.0.99-k8s-csi--node--driver--zmzrd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.99-k8s-csi--node--driver--zmzrd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9c9c61fc-bda8-4f9e-a70d-5bcfbaefa16e", ResourceVersion:"1724", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 49, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", 
"k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.99", ContainerID:"", Pod:"csi-node-driver-zmzrd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.72.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliabef537e8e2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:50:43.642294 containerd[1582]: 2026-01-20 00:50:43.528 [INFO][2770] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.72.129/32] ContainerID="4c53bc9b650f1bbfba932f02f9f7f244f592802dee534ac3413f0f231392d823" Namespace="calico-system" Pod="csi-node-driver-zmzrd" WorkloadEndpoint="10.0.0.99-k8s-csi--node--driver--zmzrd-eth0" Jan 20 00:50:43.642294 containerd[1582]: 2026-01-20 00:50:43.528 [INFO][2770] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliabef537e8e2 ContainerID="4c53bc9b650f1bbfba932f02f9f7f244f592802dee534ac3413f0f231392d823" Namespace="calico-system" Pod="csi-node-driver-zmzrd" WorkloadEndpoint="10.0.0.99-k8s-csi--node--driver--zmzrd-eth0" Jan 20 00:50:43.642294 containerd[1582]: 2026-01-20 00:50:43.568 [INFO][2770] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4c53bc9b650f1bbfba932f02f9f7f244f592802dee534ac3413f0f231392d823" Namespace="calico-system" Pod="csi-node-driver-zmzrd" WorkloadEndpoint="10.0.0.99-k8s-csi--node--driver--zmzrd-eth0" Jan 20 00:50:43.642294 containerd[1582]: 2026-01-20 00:50:43.570 
[INFO][2770] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4c53bc9b650f1bbfba932f02f9f7f244f592802dee534ac3413f0f231392d823" Namespace="calico-system" Pod="csi-node-driver-zmzrd" WorkloadEndpoint="10.0.0.99-k8s-csi--node--driver--zmzrd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.99-k8s-csi--node--driver--zmzrd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9c9c61fc-bda8-4f9e-a70d-5bcfbaefa16e", ResourceVersion:"1724", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 49, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.99", ContainerID:"4c53bc9b650f1bbfba932f02f9f7f244f592802dee534ac3413f0f231392d823", Pod:"csi-node-driver-zmzrd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.72.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliabef537e8e2", MAC:"ee:95:6d:a7:76:e2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:50:43.642294 containerd[1582]: 2026-01-20 00:50:43.614 [INFO][2770] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="4c53bc9b650f1bbfba932f02f9f7f244f592802dee534ac3413f0f231392d823" Namespace="calico-system" Pod="csi-node-driver-zmzrd" WorkloadEndpoint="10.0.0.99-k8s-csi--node--driver--zmzrd-eth0" Jan 20 00:50:43.732413 containerd[1582]: time="2026-01-20T00:50:43.731772451Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 00:50:43.732413 containerd[1582]: time="2026-01-20T00:50:43.732005876Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 00:50:43.732413 containerd[1582]: time="2026-01-20T00:50:43.732033377Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:50:43.732413 containerd[1582]: time="2026-01-20T00:50:43.732293642Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:50:43.895893 systemd-resolved[1474]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 20 00:50:44.011624 containerd[1582]: time="2026-01-20T00:50:44.011467964Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zmzrd,Uid:9c9c61fc-bda8-4f9e-a70d-5bcfbaefa16e,Namespace:calico-system,Attempt:1,} returns sandbox id \"4c53bc9b650f1bbfba932f02f9f7f244f592802dee534ac3413f0f231392d823\"" Jan 20 00:50:44.016983 containerd[1582]: time="2026-01-20T00:50:44.016718946Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 20 00:50:44.115651 containerd[1582]: time="2026-01-20T00:50:44.115572108Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 00:50:44.120115 containerd[1582]: time="2026-01-20T00:50:44.119858925Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 20 00:50:44.120242 containerd[1582]: time="2026-01-20T00:50:44.120060080Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 20 00:50:44.121439 kubelet[1917]: E0120 00:50:44.120488 1917 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 00:50:44.121439 kubelet[1917]: E0120 00:50:44.120569 1917 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 00:50:44.121570 kubelet[1917]: E0120 00:50:44.120919 1917 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pqpzv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-zmzrd_calico-system(9c9c61fc-bda8-4f9e-a70d-5bcfbaefa16e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 20 00:50:44.125248 containerd[1582]: time="2026-01-20T00:50:44.125195285Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 20 00:50:44.202481 containerd[1582]: time="2026-01-20T00:50:44.200193181Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 00:50:44.207827 containerd[1582]: time="2026-01-20T00:50:44.207635951Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 20 00:50:44.207827 containerd[1582]: time="2026-01-20T00:50:44.207701562Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 20 00:50:44.208186 kubelet[1917]: E0120 00:50:44.207999 1917 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 00:50:44.208186 kubelet[1917]: E0120 00:50:44.208139 1917 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 00:50:44.208752 kubelet[1917]: 
E0120 00:50:44.208314 1917 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pqpzv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-zmzrd_calico-system(9c9c61fc-bda8-4f9e-a70d-5bcfbaefa16e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 20 00:50:44.212528 kubelet[1917]: E0120 00:50:44.210609 1917 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zmzrd" podUID="9c9c61fc-bda8-4f9e-a70d-5bcfbaefa16e" Jan 20 00:50:44.724413 kubelet[1917]: E0120 00:50:44.663871 1917 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:50:44.750551 kubelet[1917]: E0120 00:50:44.750413 1917 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for 
\"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zmzrd" podUID="9c9c61fc-bda8-4f9e-a70d-5bcfbaefa16e" Jan 20 00:50:44.816679 systemd-networkd[1246]: vxlan.calico: Link UP Jan 20 00:50:44.816690 systemd-networkd[1246]: vxlan.calico: Gained carrier Jan 20 00:50:45.485747 systemd-networkd[1246]: caliabef537e8e2: Gained IPv6LL Jan 20 00:50:45.673341 kubelet[1917]: E0120 00:50:45.672770 1917 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:50:45.838807 kubelet[1917]: E0120 00:50:45.838037 1917 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zmzrd" podUID="9c9c61fc-bda8-4f9e-a70d-5bcfbaefa16e" 
Jan 20 00:50:46.100535 containerd[1582]: time="2026-01-20T00:50:46.098959778Z" level=info msg="StopPodSandbox for \"2dc2fcd4544541536427a8afd68f307808585d8781a8a3bf024ef62dbbd05587\"" Jan 20 00:50:46.604456 containerd[1582]: 2026-01-20 00:50:46.412 [INFO][3057] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2dc2fcd4544541536427a8afd68f307808585d8781a8a3bf024ef62dbbd05587" Jan 20 00:50:46.604456 containerd[1582]: 2026-01-20 00:50:46.419 [INFO][3057] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2dc2fcd4544541536427a8afd68f307808585d8781a8a3bf024ef62dbbd05587" iface="eth0" netns="/var/run/netns/cni-0a3a3c58-cae9-6a88-05bf-8c86aa048b59" Jan 20 00:50:46.604456 containerd[1582]: 2026-01-20 00:50:46.419 [INFO][3057] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2dc2fcd4544541536427a8afd68f307808585d8781a8a3bf024ef62dbbd05587" iface="eth0" netns="/var/run/netns/cni-0a3a3c58-cae9-6a88-05bf-8c86aa048b59" Jan 20 00:50:46.604456 containerd[1582]: 2026-01-20 00:50:46.420 [INFO][3057] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="2dc2fcd4544541536427a8afd68f307808585d8781a8a3bf024ef62dbbd05587" iface="eth0" netns="/var/run/netns/cni-0a3a3c58-cae9-6a88-05bf-8c86aa048b59" Jan 20 00:50:46.604456 containerd[1582]: 2026-01-20 00:50:46.420 [INFO][3057] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2dc2fcd4544541536427a8afd68f307808585d8781a8a3bf024ef62dbbd05587" Jan 20 00:50:46.604456 containerd[1582]: 2026-01-20 00:50:46.420 [INFO][3057] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2dc2fcd4544541536427a8afd68f307808585d8781a8a3bf024ef62dbbd05587" Jan 20 00:50:46.604456 containerd[1582]: 2026-01-20 00:50:46.541 [INFO][3071] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="2dc2fcd4544541536427a8afd68f307808585d8781a8a3bf024ef62dbbd05587" HandleID="k8s-pod-network.2dc2fcd4544541536427a8afd68f307808585d8781a8a3bf024ef62dbbd05587" Workload="10.0.0.99-k8s-nginx--deployment--7fcdb87857--m7864-eth0" Jan 20 00:50:46.604456 containerd[1582]: 2026-01-20 00:50:46.545 [INFO][3071] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:50:46.604456 containerd[1582]: 2026-01-20 00:50:46.546 [INFO][3071] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 00:50:46.604456 containerd[1582]: 2026-01-20 00:50:46.570 [WARNING][3071] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2dc2fcd4544541536427a8afd68f307808585d8781a8a3bf024ef62dbbd05587" HandleID="k8s-pod-network.2dc2fcd4544541536427a8afd68f307808585d8781a8a3bf024ef62dbbd05587" Workload="10.0.0.99-k8s-nginx--deployment--7fcdb87857--m7864-eth0" Jan 20 00:50:46.604456 containerd[1582]: 2026-01-20 00:50:46.570 [INFO][3071] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="2dc2fcd4544541536427a8afd68f307808585d8781a8a3bf024ef62dbbd05587" HandleID="k8s-pod-network.2dc2fcd4544541536427a8afd68f307808585d8781a8a3bf024ef62dbbd05587" Workload="10.0.0.99-k8s-nginx--deployment--7fcdb87857--m7864-eth0" Jan 20 00:50:46.604456 containerd[1582]: 2026-01-20 00:50:46.580 [INFO][3071] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 00:50:46.604456 containerd[1582]: 2026-01-20 00:50:46.588 [INFO][3057] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2dc2fcd4544541536427a8afd68f307808585d8781a8a3bf024ef62dbbd05587" Jan 20 00:50:46.615901 containerd[1582]: time="2026-01-20T00:50:46.615836313Z" level=info msg="TearDown network for sandbox \"2dc2fcd4544541536427a8afd68f307808585d8781a8a3bf024ef62dbbd05587\" successfully" Jan 20 00:50:46.615901 containerd[1582]: time="2026-01-20T00:50:46.615895074Z" level=info msg="StopPodSandbox for \"2dc2fcd4544541536427a8afd68f307808585d8781a8a3bf024ef62dbbd05587\" returns successfully" Jan 20 00:50:46.618854 containerd[1582]: time="2026-01-20T00:50:46.618046681Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-m7864,Uid:a2f71d77-781e-4900-b8a4-919aa1dd894e,Namespace:default,Attempt:1,}" Jan 20 00:50:46.620803 systemd[1]: run-netns-cni\x2d0a3a3c58\x2dcae9\x2d6a88\x2d05bf\x2d8c86aa048b59.mount: Deactivated successfully. 
Jan 20 00:50:46.677966 kubelet[1917]: E0120 00:50:46.677865 1917 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:50:46.821769 systemd-networkd[1246]: vxlan.calico: Gained IPv6LL Jan 20 00:50:47.277580 systemd-networkd[1246]: cali9c560e5addf: Link UP Jan 20 00:50:47.281786 systemd-networkd[1246]: cali9c560e5addf: Gained carrier Jan 20 00:50:47.344589 containerd[1582]: 2026-01-20 00:50:46.856 [INFO][3079] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.99-k8s-nginx--deployment--7fcdb87857--m7864-eth0 nginx-deployment-7fcdb87857- default a2f71d77-781e-4900-b8a4-919aa1dd894e 1760 0 2026-01-20 00:50:01 +0000 UTC map[app:nginx pod-template-hash:7fcdb87857 projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.0.0.99 nginx-deployment-7fcdb87857-m7864 eth0 default [] [] [kns.default ksa.default.default] cali9c560e5addf [] [] }} ContainerID="c6d1b5c9608828288a7f535aaa449dd1bc1fe09ef08848cf70ad48ccb9ad6db8" Namespace="default" Pod="nginx-deployment-7fcdb87857-m7864" WorkloadEndpoint="10.0.0.99-k8s-nginx--deployment--7fcdb87857--m7864-" Jan 20 00:50:47.344589 containerd[1582]: 2026-01-20 00:50:46.856 [INFO][3079] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c6d1b5c9608828288a7f535aaa449dd1bc1fe09ef08848cf70ad48ccb9ad6db8" Namespace="default" Pod="nginx-deployment-7fcdb87857-m7864" WorkloadEndpoint="10.0.0.99-k8s-nginx--deployment--7fcdb87857--m7864-eth0" Jan 20 00:50:47.344589 containerd[1582]: 2026-01-20 00:50:47.034 [INFO][3094] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c6d1b5c9608828288a7f535aaa449dd1bc1fe09ef08848cf70ad48ccb9ad6db8" HandleID="k8s-pod-network.c6d1b5c9608828288a7f535aaa449dd1bc1fe09ef08848cf70ad48ccb9ad6db8" 
Workload="10.0.0.99-k8s-nginx--deployment--7fcdb87857--m7864-eth0" Jan 20 00:50:47.344589 containerd[1582]: 2026-01-20 00:50:47.034 [INFO][3094] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c6d1b5c9608828288a7f535aaa449dd1bc1fe09ef08848cf70ad48ccb9ad6db8" HandleID="k8s-pod-network.c6d1b5c9608828288a7f535aaa449dd1bc1fe09ef08848cf70ad48ccb9ad6db8" Workload="10.0.0.99-k8s-nginx--deployment--7fcdb87857--m7864-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0005a6ec0), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.99", "pod":"nginx-deployment-7fcdb87857-m7864", "timestamp":"2026-01-20 00:50:47.034530647 +0000 UTC"}, Hostname:"10.0.0.99", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 00:50:47.344589 containerd[1582]: 2026-01-20 00:50:47.034 [INFO][3094] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:50:47.344589 containerd[1582]: 2026-01-20 00:50:47.034 [INFO][3094] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 20 00:50:47.344589 containerd[1582]: 2026-01-20 00:50:47.035 [INFO][3094] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.99' Jan 20 00:50:47.344589 containerd[1582]: 2026-01-20 00:50:47.073 [INFO][3094] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c6d1b5c9608828288a7f535aaa449dd1bc1fe09ef08848cf70ad48ccb9ad6db8" host="10.0.0.99" Jan 20 00:50:47.344589 containerd[1582]: 2026-01-20 00:50:47.110 [INFO][3094] ipam/ipam.go 394: Looking up existing affinities for host host="10.0.0.99" Jan 20 00:50:47.344589 containerd[1582]: 2026-01-20 00:50:47.139 [INFO][3094] ipam/ipam.go 511: Trying affinity for 192.168.72.128/26 host="10.0.0.99" Jan 20 00:50:47.344589 containerd[1582]: 2026-01-20 00:50:47.152 [INFO][3094] ipam/ipam.go 158: Attempting to load block cidr=192.168.72.128/26 host="10.0.0.99" Jan 20 00:50:47.344589 containerd[1582]: 2026-01-20 00:50:47.167 [INFO][3094] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.72.128/26 host="10.0.0.99" Jan 20 00:50:47.344589 containerd[1582]: 2026-01-20 00:50:47.167 [INFO][3094] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.72.128/26 handle="k8s-pod-network.c6d1b5c9608828288a7f535aaa449dd1bc1fe09ef08848cf70ad48ccb9ad6db8" host="10.0.0.99" Jan 20 00:50:47.344589 containerd[1582]: 2026-01-20 00:50:47.177 [INFO][3094] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c6d1b5c9608828288a7f535aaa449dd1bc1fe09ef08848cf70ad48ccb9ad6db8 Jan 20 00:50:47.344589 containerd[1582]: 2026-01-20 00:50:47.205 [INFO][3094] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.72.128/26 handle="k8s-pod-network.c6d1b5c9608828288a7f535aaa449dd1bc1fe09ef08848cf70ad48ccb9ad6db8" host="10.0.0.99" Jan 20 00:50:47.344589 containerd[1582]: 2026-01-20 00:50:47.241 [INFO][3094] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.72.130/26] block=192.168.72.128/26 
handle="k8s-pod-network.c6d1b5c9608828288a7f535aaa449dd1bc1fe09ef08848cf70ad48ccb9ad6db8" host="10.0.0.99" Jan 20 00:50:47.344589 containerd[1582]: 2026-01-20 00:50:47.244 [INFO][3094] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.72.130/26] handle="k8s-pod-network.c6d1b5c9608828288a7f535aaa449dd1bc1fe09ef08848cf70ad48ccb9ad6db8" host="10.0.0.99" Jan 20 00:50:47.344589 containerd[1582]: 2026-01-20 00:50:47.245 [INFO][3094] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 00:50:47.344589 containerd[1582]: 2026-01-20 00:50:47.245 [INFO][3094] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.72.130/26] IPv6=[] ContainerID="c6d1b5c9608828288a7f535aaa449dd1bc1fe09ef08848cf70ad48ccb9ad6db8" HandleID="k8s-pod-network.c6d1b5c9608828288a7f535aaa449dd1bc1fe09ef08848cf70ad48ccb9ad6db8" Workload="10.0.0.99-k8s-nginx--deployment--7fcdb87857--m7864-eth0" Jan 20 00:50:47.346181 containerd[1582]: 2026-01-20 00:50:47.258 [INFO][3079] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c6d1b5c9608828288a7f535aaa449dd1bc1fe09ef08848cf70ad48ccb9ad6db8" Namespace="default" Pod="nginx-deployment-7fcdb87857-m7864" WorkloadEndpoint="10.0.0.99-k8s-nginx--deployment--7fcdb87857--m7864-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.99-k8s-nginx--deployment--7fcdb87857--m7864-eth0", GenerateName:"nginx-deployment-7fcdb87857-", Namespace:"default", SelfLink:"", UID:"a2f71d77-781e-4900-b8a4-919aa1dd894e", ResourceVersion:"1760", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 50, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"7fcdb87857", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.99", ContainerID:"", Pod:"nginx-deployment-7fcdb87857-m7864", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.72.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali9c560e5addf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:50:47.346181 containerd[1582]: 2026-01-20 00:50:47.258 [INFO][3079] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.72.130/32] ContainerID="c6d1b5c9608828288a7f535aaa449dd1bc1fe09ef08848cf70ad48ccb9ad6db8" Namespace="default" Pod="nginx-deployment-7fcdb87857-m7864" WorkloadEndpoint="10.0.0.99-k8s-nginx--deployment--7fcdb87857--m7864-eth0" Jan 20 00:50:47.346181 containerd[1582]: 2026-01-20 00:50:47.258 [INFO][3079] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9c560e5addf ContainerID="c6d1b5c9608828288a7f535aaa449dd1bc1fe09ef08848cf70ad48ccb9ad6db8" Namespace="default" Pod="nginx-deployment-7fcdb87857-m7864" WorkloadEndpoint="10.0.0.99-k8s-nginx--deployment--7fcdb87857--m7864-eth0" Jan 20 00:50:47.346181 containerd[1582]: 2026-01-20 00:50:47.285 [INFO][3079] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c6d1b5c9608828288a7f535aaa449dd1bc1fe09ef08848cf70ad48ccb9ad6db8" Namespace="default" Pod="nginx-deployment-7fcdb87857-m7864" WorkloadEndpoint="10.0.0.99-k8s-nginx--deployment--7fcdb87857--m7864-eth0" Jan 20 00:50:47.346181 containerd[1582]: 2026-01-20 00:50:47.285 [INFO][3079] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c6d1b5c9608828288a7f535aaa449dd1bc1fe09ef08848cf70ad48ccb9ad6db8" Namespace="default" Pod="nginx-deployment-7fcdb87857-m7864" 
WorkloadEndpoint="10.0.0.99-k8s-nginx--deployment--7fcdb87857--m7864-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.99-k8s-nginx--deployment--7fcdb87857--m7864-eth0", GenerateName:"nginx-deployment-7fcdb87857-", Namespace:"default", SelfLink:"", UID:"a2f71d77-781e-4900-b8a4-919aa1dd894e", ResourceVersion:"1760", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 50, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"7fcdb87857", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.99", ContainerID:"c6d1b5c9608828288a7f535aaa449dd1bc1fe09ef08848cf70ad48ccb9ad6db8", Pod:"nginx-deployment-7fcdb87857-m7864", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.72.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali9c560e5addf", MAC:"7a:3b:0b:37:13:28", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:50:47.346181 containerd[1582]: 2026-01-20 00:50:47.329 [INFO][3079] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c6d1b5c9608828288a7f535aaa449dd1bc1fe09ef08848cf70ad48ccb9ad6db8" Namespace="default" Pod="nginx-deployment-7fcdb87857-m7864" WorkloadEndpoint="10.0.0.99-k8s-nginx--deployment--7fcdb87857--m7864-eth0" Jan 20 00:50:47.469220 containerd[1582]: time="2026-01-20T00:50:47.468296220Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 00:50:47.469220 containerd[1582]: time="2026-01-20T00:50:47.468406761Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 00:50:47.469220 containerd[1582]: time="2026-01-20T00:50:47.468458029Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:50:47.469220 containerd[1582]: time="2026-01-20T00:50:47.468670993Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:50:47.569922 systemd-resolved[1474]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 20 00:50:47.690633 kubelet[1917]: E0120 00:50:47.690556 1917 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:50:47.752595 containerd[1582]: time="2026-01-20T00:50:47.752343054Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-m7864,Uid:a2f71d77-781e-4900-b8a4-919aa1dd894e,Namespace:default,Attempt:1,} returns sandbox id \"c6d1b5c9608828288a7f535aaa449dd1bc1fe09ef08848cf70ad48ccb9ad6db8\"" Jan 20 00:50:47.767556 containerd[1582]: time="2026-01-20T00:50:47.765964302Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 20 00:50:48.553869 kubelet[1917]: E0120 00:50:48.551676 1917 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:50:48.691799 kubelet[1917]: E0120 00:50:48.691725 1917 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:50:49.130008 systemd-networkd[1246]: cali9c560e5addf: Gained IPv6LL Jan 20 00:50:49.693770 kubelet[1917]: E0120 
00:50:49.693155 1917 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:50:50.694137 kubelet[1917]: E0120 00:50:50.693896 1917 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:50:51.694226 kubelet[1917]: E0120 00:50:51.694139 1917 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:50:52.705541 kubelet[1917]: E0120 00:50:52.698529 1917 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:50:53.700173 kubelet[1917]: E0120 00:50:53.699490 1917 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:50:53.868891 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount567859197.mount: Deactivated successfully. Jan 20 00:50:54.700840 kubelet[1917]: E0120 00:50:54.700753 1917 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:50:55.701952 kubelet[1917]: E0120 00:50:55.701786 1917 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:50:56.702990 kubelet[1917]: E0120 00:50:56.702914 1917 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:50:57.703839 kubelet[1917]: E0120 00:50:57.703199 1917 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:50:57.777560 containerd[1582]: time="2026-01-20T00:50:57.777417072Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:50:57.781117 containerd[1582]: 
time="2026-01-20T00:50:57.780966678Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=63836480" Jan 20 00:50:57.783055 containerd[1582]: time="2026-01-20T00:50:57.781853865Z" level=info msg="ImageCreate event name:\"sha256:94a30ed18e45363486a55379bd1fbb8c479f36524ae816779f3553afe6b787ed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:50:57.794604 containerd[1582]: time="2026-01-20T00:50:57.794400253Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:32c5137cb8c7cf61e75836f150e983b9be21fecc642ada89fd936c8cd6c0faa0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:50:57.799378 containerd[1582]: time="2026-01-20T00:50:57.799288961Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:94a30ed18e45363486a55379bd1fbb8c479f36524ae816779f3553afe6b787ed\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:32c5137cb8c7cf61e75836f150e983b9be21fecc642ada89fd936c8cd6c0faa0\", size \"63836358\" in 10.033243164s" Jan 20 00:50:57.799378 containerd[1582]: time="2026-01-20T00:50:57.799372569Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:94a30ed18e45363486a55379bd1fbb8c479f36524ae816779f3553afe6b787ed\"" Jan 20 00:50:57.812054 containerd[1582]: time="2026-01-20T00:50:57.811911820Z" level=info msg="CreateContainer within sandbox \"c6d1b5c9608828288a7f535aaa449dd1bc1fe09ef08848cf70ad48ccb9ad6db8\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Jan 20 00:50:57.852641 containerd[1582]: time="2026-01-20T00:50:57.852543866Z" level=info msg="CreateContainer within sandbox \"c6d1b5c9608828288a7f535aaa449dd1bc1fe09ef08848cf70ad48ccb9ad6db8\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"9fa69468dbcf848917234625d08b5774f556117a9bd11094d4125fccf1422659\"" Jan 20 00:50:57.854576 containerd[1582]: time="2026-01-20T00:50:57.853430054Z" 
level=info msg="StartContainer for \"9fa69468dbcf848917234625d08b5774f556117a9bd11094d4125fccf1422659\"" Jan 20 00:50:58.285930 containerd[1582]: time="2026-01-20T00:50:58.285842196Z" level=info msg="StartContainer for \"9fa69468dbcf848917234625d08b5774f556117a9bd11094d4125fccf1422659\" returns successfully" Jan 20 00:50:58.704194 kubelet[1917]: E0120 00:50:58.703920 1917 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:50:59.706631 kubelet[1917]: E0120 00:50:59.704894 1917 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:51:00.708163 kubelet[1917]: E0120 00:51:00.706915 1917 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:51:01.088057 containerd[1582]: time="2026-01-20T00:51:01.085303589Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 20 00:51:01.200556 containerd[1582]: time="2026-01-20T00:51:01.200287312Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 00:51:01.207618 containerd[1582]: time="2026-01-20T00:51:01.207220044Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 20 00:51:01.207618 containerd[1582]: time="2026-01-20T00:51:01.207394251Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 20 00:51:01.210793 kubelet[1917]: E0120 00:51:01.207829 1917 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 00:51:01.210793 kubelet[1917]: E0120 00:51:01.207944 1917 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 00:51:01.210793 kubelet[1917]: E0120 00:51:01.208301 1917 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pqpzv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELin
uxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-zmzrd_calico-system(9c9c61fc-bda8-4f9e-a70d-5bcfbaefa16e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 20 00:51:01.212751 containerd[1582]: time="2026-01-20T00:51:01.212624306Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 20 00:51:01.300877 containerd[1582]: time="2026-01-20T00:51:01.300777596Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 00:51:01.306862 containerd[1582]: time="2026-01-20T00:51:01.306661345Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 20 00:51:01.307184 containerd[1582]: time="2026-01-20T00:51:01.306881311Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 20 00:51:01.309993 kubelet[1917]: E0120 00:51:01.307682 1917 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 00:51:01.309993 kubelet[1917]: E0120 00:51:01.307801 1917 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 00:51:01.310287 kubelet[1917]: E0120 00:51:01.309791 1917 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pqpzv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:
,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-zmzrd_calico-system(9c9c61fc-bda8-4f9e-a70d-5bcfbaefa16e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 20 00:51:01.312165 kubelet[1917]: E0120 00:51:01.311467 1917 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zmzrd" podUID="9c9c61fc-bda8-4f9e-a70d-5bcfbaefa16e" Jan 20 
00:51:01.750353 kubelet[1917]: E0120 00:51:01.735530 1917 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:51:02.750894 kubelet[1917]: E0120 00:51:02.750558 1917 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:51:03.751931 kubelet[1917]: E0120 00:51:03.751782 1917 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:51:04.753246 kubelet[1917]: E0120 00:51:04.752661 1917 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:51:05.903047 kubelet[1917]: E0120 00:51:05.815284 1917 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:51:05.919917 kubelet[1917]: I0120 00:51:05.919710 1917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-7fcdb87857-m7864" podStartSLOduration=54.877818816 podStartE2EDuration="1m4.919561116s" podCreationTimestamp="2026-01-20 00:50:01 +0000 UTC" firstStartedPulling="2026-01-20 00:50:47.76526413 +0000 UTC m=+82.494656850" lastFinishedPulling="2026-01-20 00:50:57.807006431 +0000 UTC m=+92.536399150" observedRunningTime="2026-01-20 00:50:59.035224057 +0000 UTC m=+93.764616776" watchObservedRunningTime="2026-01-20 00:51:05.919561116 +0000 UTC m=+100.648953836" Jan 20 00:51:06.016400 kubelet[1917]: I0120 00:51:06.014918 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pklc\" (UniqueName: \"kubernetes.io/projected/ea958895-e3d7-4310-804a-da64a232c3cc-kube-api-access-5pklc\") pod \"nfs-server-provisioner-0\" (UID: \"ea958895-e3d7-4310-804a-da64a232c3cc\") " pod="default/nfs-server-provisioner-0" Jan 20 00:51:06.016400 kubelet[1917]: I0120 00:51:06.015645 1917 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/ea958895-e3d7-4310-804a-da64a232c3cc-data\") pod \"nfs-server-provisioner-0\" (UID: \"ea958895-e3d7-4310-804a-da64a232c3cc\") " pod="default/nfs-server-provisioner-0" Jan 20 00:51:06.837224 containerd[1582]: time="2026-01-20T00:51:06.836981362Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:ea958895-e3d7-4310-804a-da64a232c3cc,Namespace:default,Attempt:0,}" Jan 20 00:51:06.906232 kubelet[1917]: E0120 00:51:06.904451 1917 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:51:07.781131 systemd-networkd[1246]: cali60e51b789ff: Link UP Jan 20 00:51:07.783740 systemd-networkd[1246]: cali60e51b789ff: Gained carrier Jan 20 00:51:07.854233 containerd[1582]: 2026-01-20 00:51:07.447 [INFO][3262] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.99-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default ea958895-e3d7-4310-804a-da64a232c3cc 1859 0 2026-01-20 00:51:05 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 10.0.0.99 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 
662 0 } {statd-udp UDP 662 0 }] [] }} ContainerID="794b53d16d35af1ceed90ba19ac932bb76b85f6cda1244502b13b42880bb3bd0" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.99-k8s-nfs--server--provisioner--0-" Jan 20 00:51:07.854233 containerd[1582]: 2026-01-20 00:51:07.449 [INFO][3262] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="794b53d16d35af1ceed90ba19ac932bb76b85f6cda1244502b13b42880bb3bd0" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.99-k8s-nfs--server--provisioner--0-eth0" Jan 20 00:51:07.854233 containerd[1582]: 2026-01-20 00:51:07.595 [INFO][3277] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="794b53d16d35af1ceed90ba19ac932bb76b85f6cda1244502b13b42880bb3bd0" HandleID="k8s-pod-network.794b53d16d35af1ceed90ba19ac932bb76b85f6cda1244502b13b42880bb3bd0" Workload="10.0.0.99-k8s-nfs--server--provisioner--0-eth0" Jan 20 00:51:07.854233 containerd[1582]: 2026-01-20 00:51:07.595 [INFO][3277] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="794b53d16d35af1ceed90ba19ac932bb76b85f6cda1244502b13b42880bb3bd0" HandleID="k8s-pod-network.794b53d16d35af1ceed90ba19ac932bb76b85f6cda1244502b13b42880bb3bd0" Workload="10.0.0.99-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000397f00), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.99", "pod":"nfs-server-provisioner-0", "timestamp":"2026-01-20 00:51:07.595394415 +0000 UTC"}, Hostname:"10.0.0.99", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 00:51:07.854233 containerd[1582]: 2026-01-20 00:51:07.595 [INFO][3277] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Jan 20 00:51:07.854233 containerd[1582]: 2026-01-20 00:51:07.595 [INFO][3277] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 00:51:07.854233 containerd[1582]: 2026-01-20 00:51:07.596 [INFO][3277] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.99' Jan 20 00:51:07.854233 containerd[1582]: 2026-01-20 00:51:07.636 [INFO][3277] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.794b53d16d35af1ceed90ba19ac932bb76b85f6cda1244502b13b42880bb3bd0" host="10.0.0.99" Jan 20 00:51:07.854233 containerd[1582]: 2026-01-20 00:51:07.663 [INFO][3277] ipam/ipam.go 394: Looking up existing affinities for host host="10.0.0.99" Jan 20 00:51:07.854233 containerd[1582]: 2026-01-20 00:51:07.694 [INFO][3277] ipam/ipam.go 511: Trying affinity for 192.168.72.128/26 host="10.0.0.99" Jan 20 00:51:07.854233 containerd[1582]: 2026-01-20 00:51:07.705 [INFO][3277] ipam/ipam.go 158: Attempting to load block cidr=192.168.72.128/26 host="10.0.0.99" Jan 20 00:51:07.854233 containerd[1582]: 2026-01-20 00:51:07.713 [INFO][3277] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.72.128/26 host="10.0.0.99" Jan 20 00:51:07.854233 containerd[1582]: 2026-01-20 00:51:07.713 [INFO][3277] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.72.128/26 handle="k8s-pod-network.794b53d16d35af1ceed90ba19ac932bb76b85f6cda1244502b13b42880bb3bd0" host="10.0.0.99" Jan 20 00:51:07.854233 containerd[1582]: 2026-01-20 00:51:07.719 [INFO][3277] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.794b53d16d35af1ceed90ba19ac932bb76b85f6cda1244502b13b42880bb3bd0 Jan 20 00:51:07.854233 containerd[1582]: 2026-01-20 00:51:07.734 [INFO][3277] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.72.128/26 handle="k8s-pod-network.794b53d16d35af1ceed90ba19ac932bb76b85f6cda1244502b13b42880bb3bd0" host="10.0.0.99" Jan 20 00:51:07.854233 containerd[1582]: 2026-01-20 00:51:07.757 [INFO][3277] 
ipam/ipam.go 1262: Successfully claimed IPs: [192.168.72.131/26] block=192.168.72.128/26 handle="k8s-pod-network.794b53d16d35af1ceed90ba19ac932bb76b85f6cda1244502b13b42880bb3bd0" host="10.0.0.99" Jan 20 00:51:07.854233 containerd[1582]: 2026-01-20 00:51:07.757 [INFO][3277] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.72.131/26] handle="k8s-pod-network.794b53d16d35af1ceed90ba19ac932bb76b85f6cda1244502b13b42880bb3bd0" host="10.0.0.99" Jan 20 00:51:07.854233 containerd[1582]: 2026-01-20 00:51:07.757 [INFO][3277] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 00:51:07.854233 containerd[1582]: 2026-01-20 00:51:07.757 [INFO][3277] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.72.131/26] IPv6=[] ContainerID="794b53d16d35af1ceed90ba19ac932bb76b85f6cda1244502b13b42880bb3bd0" HandleID="k8s-pod-network.794b53d16d35af1ceed90ba19ac932bb76b85f6cda1244502b13b42880bb3bd0" Workload="10.0.0.99-k8s-nfs--server--provisioner--0-eth0" Jan 20 00:51:07.857052 containerd[1582]: 2026-01-20 00:51:07.766 [INFO][3262] cni-plugin/k8s.go 418: Populated endpoint ContainerID="794b53d16d35af1ceed90ba19ac932bb76b85f6cda1244502b13b42880bb3bd0" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.99-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.99-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"ea958895-e3d7-4310-804a-da64a232c3cc", ResourceVersion:"1859", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 51, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", 
"heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.99", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.72.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, 
HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:51:07.857052 containerd[1582]: 2026-01-20 00:51:07.766 [INFO][3262] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.72.131/32] ContainerID="794b53d16d35af1ceed90ba19ac932bb76b85f6cda1244502b13b42880bb3bd0" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.99-k8s-nfs--server--provisioner--0-eth0" Jan 20 00:51:07.857052 containerd[1582]: 2026-01-20 00:51:07.769 [INFO][3262] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="794b53d16d35af1ceed90ba19ac932bb76b85f6cda1244502b13b42880bb3bd0" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.99-k8s-nfs--server--provisioner--0-eth0" Jan 20 00:51:07.857052 containerd[1582]: 2026-01-20 00:51:07.784 [INFO][3262] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="794b53d16d35af1ceed90ba19ac932bb76b85f6cda1244502b13b42880bb3bd0" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.99-k8s-nfs--server--provisioner--0-eth0" Jan 20 00:51:07.863567 containerd[1582]: 2026-01-20 00:51:07.787 [INFO][3262] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="794b53d16d35af1ceed90ba19ac932bb76b85f6cda1244502b13b42880bb3bd0" Namespace="default" Pod="nfs-server-provisioner-0" 
WorkloadEndpoint="10.0.0.99-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.99-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"ea958895-e3d7-4310-804a-da64a232c3cc", ResourceVersion:"1859", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 51, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.99", ContainerID:"794b53d16d35af1ceed90ba19ac932bb76b85f6cda1244502b13b42880bb3bd0", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.72.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"16:2a:a8:7b:70:31", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:51:07.863567 containerd[1582]: 2026-01-20 00:51:07.828 [INFO][3262] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="794b53d16d35af1ceed90ba19ac932bb76b85f6cda1244502b13b42880bb3bd0" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.99-k8s-nfs--server--provisioner--0-eth0" Jan 20 00:51:07.906058 kubelet[1917]: E0120 00:51:07.905772 1917 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Jan 20 00:51:07.969344 containerd[1582]: time="2026-01-20T00:51:07.968854380Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 00:51:07.969344 containerd[1582]: time="2026-01-20T00:51:07.968968927Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 00:51:07.969344 containerd[1582]: time="2026-01-20T00:51:07.969013952Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:51:07.972624 containerd[1582]: time="2026-01-20T00:51:07.970826965Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:51:08.100638 systemd-resolved[1474]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 20 00:51:08.240700 containerd[1582]: time="2026-01-20T00:51:08.237343769Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:ea958895-e3d7-4310-804a-da64a232c3cc,Namespace:default,Attempt:0,} returns sandbox id \"794b53d16d35af1ceed90ba19ac932bb76b85f6cda1244502b13b42880bb3bd0\"" Jan 20 00:51:08.249304 containerd[1582]: time="2026-01-20T00:51:08.246818049Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Jan 20 00:51:08.555546 kubelet[1917]: E0120 00:51:08.550875 1917 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:51:08.985248 kubelet[1917]: E0120 00:51:08.984884 1917 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:51:09.226373 systemd-networkd[1246]: cali60e51b789ff: Gained IPv6LL Jan 20 00:51:09.601878 systemd[1]: 
run-containerd-runc-k8s.io-05e62bf6f71b341be789acbbcd7a38d769172c4f840854c0fc09e2eb6d291985-runc.oQPQWT.mount: Deactivated successfully. Jan 20 00:51:09.955842 kubelet[1917]: E0120 00:51:09.955758 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:51:09.987807 kubelet[1917]: E0120 00:51:09.987480 1917 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:51:11.000722 kubelet[1917]: E0120 00:51:10.998697 1917 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:51:11.262651 kubelet[1917]: E0120 00:51:11.260123 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:51:12.035732 kubelet[1917]: E0120 00:51:12.035357 1917 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:51:12.217417 kubelet[1917]: E0120 00:51:12.216817 1917 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve 
reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zmzrd" podUID="9c9c61fc-bda8-4f9e-a70d-5bcfbaefa16e" Jan 20 00:51:13.136830 kubelet[1917]: E0120 00:51:13.124911 1917 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:51:14.133060 kubelet[1917]: E0120 00:51:14.132567 1917 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:51:15.133976 kubelet[1917]: E0120 00:51:15.133191 1917 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:51:15.210600 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1786981850.mount: Deactivated successfully. Jan 20 00:51:16.134004 kubelet[1917]: E0120 00:51:16.133953 1917 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:51:17.135728 kubelet[1917]: E0120 00:51:17.135558 1917 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:51:18.145883 kubelet[1917]: E0120 00:51:18.141757 1917 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:51:19.146203 kubelet[1917]: E0120 00:51:19.145819 1917 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:51:20.147146 kubelet[1917]: E0120 00:51:20.146937 1917 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:51:21.200552 kubelet[1917]: E0120 00:51:21.150271 1917 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 
00:51:23.727821 kubelet[1917]: E0120 00:51:23.720188 1917 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:51:25.225338 kubelet[1917]: E0120 00:51:25.219756 1917 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:51:26.325280 kubelet[1917]: E0120 00:51:26.255606 1917 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:51:27.775394 kubelet[1917]: E0120 00:51:27.767742 1917 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:51:28.504529 kubelet[1917]: E0120 00:51:28.503771 1917 kubelet.go:2573] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="7.349s" Jan 20 00:51:28.726166 kubelet[1917]: E0120 00:51:28.721451 1917 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:51:28.943642 kubelet[1917]: E0120 00:51:28.919461 1917 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:51:29.959952 kubelet[1917]: E0120 00:51:29.952870 1917 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:51:31.286502 containerd[1582]: time="2026-01-20T00:51:31.285650147Z" level=info msg="StopPodSandbox for \"2dc2fcd4544541536427a8afd68f307808585d8781a8a3bf024ef62dbbd05587\"" Jan 20 00:51:31.375489 kubelet[1917]: E0120 00:51:31.366622 1917 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:51:32.730402 kubelet[1917]: E0120 00:51:32.725868 1917 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 
00:51:33.743172 kubelet[1917]: E0120 00:51:33.737493 1917 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:51:34.157228 kubelet[1917]: E0120 00:51:34.155499 1917 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.92:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/10.0.0.99?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 20 00:51:34.173954 kubelet[1917]: E0120 00:51:34.172306 1917 kubelet.go:2573] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.651s" Jan 20 00:51:34.672485 containerd[1582]: 2026-01-20 00:51:34.489 [WARNING][3400] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="2dc2fcd4544541536427a8afd68f307808585d8781a8a3bf024ef62dbbd05587" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.99-k8s-nginx--deployment--7fcdb87857--m7864-eth0", GenerateName:"nginx-deployment-7fcdb87857-", Namespace:"default", SelfLink:"", UID:"a2f71d77-781e-4900-b8a4-919aa1dd894e", ResourceVersion:"1817", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 50, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"7fcdb87857", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.99", ContainerID:"c6d1b5c9608828288a7f535aaa449dd1bc1fe09ef08848cf70ad48ccb9ad6db8", Pod:"nginx-deployment-7fcdb87857-m7864", 
Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.72.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali9c560e5addf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:51:34.672485 containerd[1582]: 2026-01-20 00:51:34.491 [INFO][3400] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2dc2fcd4544541536427a8afd68f307808585d8781a8a3bf024ef62dbbd05587" Jan 20 00:51:34.672485 containerd[1582]: 2026-01-20 00:51:34.491 [INFO][3400] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2dc2fcd4544541536427a8afd68f307808585d8781a8a3bf024ef62dbbd05587" iface="eth0" netns="" Jan 20 00:51:34.672485 containerd[1582]: 2026-01-20 00:51:34.491 [INFO][3400] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2dc2fcd4544541536427a8afd68f307808585d8781a8a3bf024ef62dbbd05587" Jan 20 00:51:34.672485 containerd[1582]: 2026-01-20 00:51:34.491 [INFO][3400] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2dc2fcd4544541536427a8afd68f307808585d8781a8a3bf024ef62dbbd05587" Jan 20 00:51:34.672485 containerd[1582]: 2026-01-20 00:51:34.610 [INFO][3408] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="2dc2fcd4544541536427a8afd68f307808585d8781a8a3bf024ef62dbbd05587" HandleID="k8s-pod-network.2dc2fcd4544541536427a8afd68f307808585d8781a8a3bf024ef62dbbd05587" Workload="10.0.0.99-k8s-nginx--deployment--7fcdb87857--m7864-eth0" Jan 20 00:51:34.672485 containerd[1582]: 2026-01-20 00:51:34.611 [INFO][3408] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:51:34.672485 containerd[1582]: 2026-01-20 00:51:34.611 [INFO][3408] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 20 00:51:34.672485 containerd[1582]: 2026-01-20 00:51:34.639 [WARNING][3408] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="2dc2fcd4544541536427a8afd68f307808585d8781a8a3bf024ef62dbbd05587" HandleID="k8s-pod-network.2dc2fcd4544541536427a8afd68f307808585d8781a8a3bf024ef62dbbd05587" Workload="10.0.0.99-k8s-nginx--deployment--7fcdb87857--m7864-eth0" Jan 20 00:51:34.672485 containerd[1582]: 2026-01-20 00:51:34.639 [INFO][3408] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="2dc2fcd4544541536427a8afd68f307808585d8781a8a3bf024ef62dbbd05587" HandleID="k8s-pod-network.2dc2fcd4544541536427a8afd68f307808585d8781a8a3bf024ef62dbbd05587" Workload="10.0.0.99-k8s-nginx--deployment--7fcdb87857--m7864-eth0" Jan 20 00:51:34.672485 containerd[1582]: 2026-01-20 00:51:34.654 [INFO][3408] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 00:51:34.672485 containerd[1582]: 2026-01-20 00:51:34.660 [INFO][3400] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="2dc2fcd4544541536427a8afd68f307808585d8781a8a3bf024ef62dbbd05587" Jan 20 00:51:34.672485 containerd[1582]: time="2026-01-20T00:51:34.671910146Z" level=info msg="TearDown network for sandbox \"2dc2fcd4544541536427a8afd68f307808585d8781a8a3bf024ef62dbbd05587\" successfully" Jan 20 00:51:34.677409 containerd[1582]: time="2026-01-20T00:51:34.675325230Z" level=info msg="StopPodSandbox for \"2dc2fcd4544541536427a8afd68f307808585d8781a8a3bf024ef62dbbd05587\" returns successfully" Jan 20 00:51:34.682029 containerd[1582]: time="2026-01-20T00:51:34.681461034Z" level=info msg="RemovePodSandbox for \"2dc2fcd4544541536427a8afd68f307808585d8781a8a3bf024ef62dbbd05587\"" Jan 20 00:51:34.690175 containerd[1582]: time="2026-01-20T00:51:34.681571171Z" level=info msg="Forcibly stopping sandbox \"2dc2fcd4544541536427a8afd68f307808585d8781a8a3bf024ef62dbbd05587\"" Jan 20 00:51:34.758131 kubelet[1917]: E0120 00:51:34.756425 1917 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:51:35.033203 containerd[1582]: 2026-01-20 00:51:34.871 [WARNING][3425] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2dc2fcd4544541536427a8afd68f307808585d8781a8a3bf024ef62dbbd05587" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.99-k8s-nginx--deployment--7fcdb87857--m7864-eth0", GenerateName:"nginx-deployment-7fcdb87857-", Namespace:"default", SelfLink:"", UID:"a2f71d77-781e-4900-b8a4-919aa1dd894e", ResourceVersion:"1817", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 50, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"7fcdb87857", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.99", ContainerID:"c6d1b5c9608828288a7f535aaa449dd1bc1fe09ef08848cf70ad48ccb9ad6db8", Pod:"nginx-deployment-7fcdb87857-m7864", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.72.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali9c560e5addf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:51:35.033203 containerd[1582]: 2026-01-20 00:51:34.873 [INFO][3425] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2dc2fcd4544541536427a8afd68f307808585d8781a8a3bf024ef62dbbd05587" Jan 20 00:51:35.033203 containerd[1582]: 2026-01-20 00:51:34.873 [INFO][3425] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="2dc2fcd4544541536427a8afd68f307808585d8781a8a3bf024ef62dbbd05587" iface="eth0" netns="" Jan 20 00:51:35.033203 containerd[1582]: 2026-01-20 00:51:34.873 [INFO][3425] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2dc2fcd4544541536427a8afd68f307808585d8781a8a3bf024ef62dbbd05587" Jan 20 00:51:35.033203 containerd[1582]: 2026-01-20 00:51:34.874 [INFO][3425] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2dc2fcd4544541536427a8afd68f307808585d8781a8a3bf024ef62dbbd05587" Jan 20 00:51:35.033203 containerd[1582]: 2026-01-20 00:51:34.964 [INFO][3433] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="2dc2fcd4544541536427a8afd68f307808585d8781a8a3bf024ef62dbbd05587" HandleID="k8s-pod-network.2dc2fcd4544541536427a8afd68f307808585d8781a8a3bf024ef62dbbd05587" Workload="10.0.0.99-k8s-nginx--deployment--7fcdb87857--m7864-eth0" Jan 20 00:51:35.033203 containerd[1582]: 2026-01-20 00:51:34.965 [INFO][3433] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:51:35.033203 containerd[1582]: 2026-01-20 00:51:34.965 [INFO][3433] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 00:51:35.033203 containerd[1582]: 2026-01-20 00:51:35.010 [WARNING][3433] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2dc2fcd4544541536427a8afd68f307808585d8781a8a3bf024ef62dbbd05587" HandleID="k8s-pod-network.2dc2fcd4544541536427a8afd68f307808585d8781a8a3bf024ef62dbbd05587" Workload="10.0.0.99-k8s-nginx--deployment--7fcdb87857--m7864-eth0" Jan 20 00:51:35.033203 containerd[1582]: 2026-01-20 00:51:35.010 [INFO][3433] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="2dc2fcd4544541536427a8afd68f307808585d8781a8a3bf024ef62dbbd05587" HandleID="k8s-pod-network.2dc2fcd4544541536427a8afd68f307808585d8781a8a3bf024ef62dbbd05587" Workload="10.0.0.99-k8s-nginx--deployment--7fcdb87857--m7864-eth0" Jan 20 00:51:35.033203 containerd[1582]: 2026-01-20 00:51:35.021 [INFO][3433] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 00:51:35.033203 containerd[1582]: 2026-01-20 00:51:35.026 [INFO][3425] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2dc2fcd4544541536427a8afd68f307808585d8781a8a3bf024ef62dbbd05587" Jan 20 00:51:35.033203 containerd[1582]: time="2026-01-20T00:51:35.032340900Z" level=info msg="TearDown network for sandbox \"2dc2fcd4544541536427a8afd68f307808585d8781a8a3bf024ef62dbbd05587\" successfully" Jan 20 00:51:35.171955 containerd[1582]: time="2026-01-20T00:51:35.171791005Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2dc2fcd4544541536427a8afd68f307808585d8781a8a3bf024ef62dbbd05587\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 20 00:51:35.171955 containerd[1582]: time="2026-01-20T00:51:35.171932350Z" level=info msg="RemovePodSandbox \"2dc2fcd4544541536427a8afd68f307808585d8781a8a3bf024ef62dbbd05587\" returns successfully" Jan 20 00:51:35.173189 containerd[1582]: time="2026-01-20T00:51:35.172748656Z" level=info msg="StopPodSandbox for \"a4bd86c09ab99fa3dc57bf68f125c66a5fa039911b69981f12261a6b9d9bca0f\"" Jan 20 00:51:35.443497 containerd[1582]: 2026-01-20 00:51:35.318 [WARNING][3451] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="a4bd86c09ab99fa3dc57bf68f125c66a5fa039911b69981f12261a6b9d9bca0f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.99-k8s-csi--node--driver--zmzrd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9c9c61fc-bda8-4f9e-a70d-5bcfbaefa16e", ResourceVersion:"1940", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 49, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.99", ContainerID:"4c53bc9b650f1bbfba932f02f9f7f244f592802dee534ac3413f0f231392d823", Pod:"csi-node-driver-zmzrd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.72.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliabef537e8e2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:51:35.443497 containerd[1582]: 2026-01-20 00:51:35.318 [INFO][3451] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a4bd86c09ab99fa3dc57bf68f125c66a5fa039911b69981f12261a6b9d9bca0f" Jan 20 00:51:35.443497 containerd[1582]: 2026-01-20 00:51:35.318 [INFO][3451] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a4bd86c09ab99fa3dc57bf68f125c66a5fa039911b69981f12261a6b9d9bca0f" iface="eth0" netns="" Jan 20 00:51:35.443497 containerd[1582]: 2026-01-20 00:51:35.318 [INFO][3451] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a4bd86c09ab99fa3dc57bf68f125c66a5fa039911b69981f12261a6b9d9bca0f" Jan 20 00:51:35.443497 containerd[1582]: 2026-01-20 00:51:35.318 [INFO][3451] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a4bd86c09ab99fa3dc57bf68f125c66a5fa039911b69981f12261a6b9d9bca0f" Jan 20 00:51:35.443497 containerd[1582]: 2026-01-20 00:51:35.398 [INFO][3459] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a4bd86c09ab99fa3dc57bf68f125c66a5fa039911b69981f12261a6b9d9bca0f" HandleID="k8s-pod-network.a4bd86c09ab99fa3dc57bf68f125c66a5fa039911b69981f12261a6b9d9bca0f" Workload="10.0.0.99-k8s-csi--node--driver--zmzrd-eth0" Jan 20 00:51:35.443497 containerd[1582]: 2026-01-20 00:51:35.398 [INFO][3459] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:51:35.443497 containerd[1582]: 2026-01-20 00:51:35.398 [INFO][3459] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 00:51:35.443497 containerd[1582]: 2026-01-20 00:51:35.417 [WARNING][3459] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a4bd86c09ab99fa3dc57bf68f125c66a5fa039911b69981f12261a6b9d9bca0f" HandleID="k8s-pod-network.a4bd86c09ab99fa3dc57bf68f125c66a5fa039911b69981f12261a6b9d9bca0f" Workload="10.0.0.99-k8s-csi--node--driver--zmzrd-eth0" Jan 20 00:51:35.443497 containerd[1582]: 2026-01-20 00:51:35.417 [INFO][3459] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a4bd86c09ab99fa3dc57bf68f125c66a5fa039911b69981f12261a6b9d9bca0f" HandleID="k8s-pod-network.a4bd86c09ab99fa3dc57bf68f125c66a5fa039911b69981f12261a6b9d9bca0f" Workload="10.0.0.99-k8s-csi--node--driver--zmzrd-eth0" Jan 20 00:51:35.443497 containerd[1582]: 2026-01-20 00:51:35.429 [INFO][3459] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 00:51:35.443497 containerd[1582]: 2026-01-20 00:51:35.435 [INFO][3451] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a4bd86c09ab99fa3dc57bf68f125c66a5fa039911b69981f12261a6b9d9bca0f" Jan 20 00:51:35.447300 containerd[1582]: time="2026-01-20T00:51:35.443523858Z" level=info msg="TearDown network for sandbox \"a4bd86c09ab99fa3dc57bf68f125c66a5fa039911b69981f12261a6b9d9bca0f\" successfully" Jan 20 00:51:35.447300 containerd[1582]: time="2026-01-20T00:51:35.443560347Z" level=info msg="StopPodSandbox for \"a4bd86c09ab99fa3dc57bf68f125c66a5fa039911b69981f12261a6b9d9bca0f\" returns successfully" Jan 20 00:51:35.447300 containerd[1582]: time="2026-01-20T00:51:35.446481633Z" level=info msg="RemovePodSandbox for \"a4bd86c09ab99fa3dc57bf68f125c66a5fa039911b69981f12261a6b9d9bca0f\"" Jan 20 00:51:35.447300 containerd[1582]: time="2026-01-20T00:51:35.446520576Z" level=info msg="Forcibly stopping sandbox \"a4bd86c09ab99fa3dc57bf68f125c66a5fa039911b69981f12261a6b9d9bca0f\"" Jan 20 00:51:35.702010 containerd[1582]: 2026-01-20 00:51:35.609 [WARNING][3475] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a4bd86c09ab99fa3dc57bf68f125c66a5fa039911b69981f12261a6b9d9bca0f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.99-k8s-csi--node--driver--zmzrd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9c9c61fc-bda8-4f9e-a70d-5bcfbaefa16e", ResourceVersion:"1940", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 49, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.99", ContainerID:"4c53bc9b650f1bbfba932f02f9f7f244f592802dee534ac3413f0f231392d823", Pod:"csi-node-driver-zmzrd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.72.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliabef537e8e2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:51:35.702010 containerd[1582]: 2026-01-20 00:51:35.609 [INFO][3475] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a4bd86c09ab99fa3dc57bf68f125c66a5fa039911b69981f12261a6b9d9bca0f" Jan 20 00:51:35.702010 containerd[1582]: 2026-01-20 00:51:35.609 [INFO][3475] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="a4bd86c09ab99fa3dc57bf68f125c66a5fa039911b69981f12261a6b9d9bca0f" iface="eth0" netns="" Jan 20 00:51:35.702010 containerd[1582]: 2026-01-20 00:51:35.609 [INFO][3475] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a4bd86c09ab99fa3dc57bf68f125c66a5fa039911b69981f12261a6b9d9bca0f" Jan 20 00:51:35.702010 containerd[1582]: 2026-01-20 00:51:35.609 [INFO][3475] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a4bd86c09ab99fa3dc57bf68f125c66a5fa039911b69981f12261a6b9d9bca0f" Jan 20 00:51:35.702010 containerd[1582]: 2026-01-20 00:51:35.660 [INFO][3483] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a4bd86c09ab99fa3dc57bf68f125c66a5fa039911b69981f12261a6b9d9bca0f" HandleID="k8s-pod-network.a4bd86c09ab99fa3dc57bf68f125c66a5fa039911b69981f12261a6b9d9bca0f" Workload="10.0.0.99-k8s-csi--node--driver--zmzrd-eth0" Jan 20 00:51:35.702010 containerd[1582]: 2026-01-20 00:51:35.661 [INFO][3483] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:51:35.702010 containerd[1582]: 2026-01-20 00:51:35.661 [INFO][3483] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 00:51:35.702010 containerd[1582]: 2026-01-20 00:51:35.682 [WARNING][3483] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a4bd86c09ab99fa3dc57bf68f125c66a5fa039911b69981f12261a6b9d9bca0f" HandleID="k8s-pod-network.a4bd86c09ab99fa3dc57bf68f125c66a5fa039911b69981f12261a6b9d9bca0f" Workload="10.0.0.99-k8s-csi--node--driver--zmzrd-eth0" Jan 20 00:51:35.702010 containerd[1582]: 2026-01-20 00:51:35.682 [INFO][3483] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a4bd86c09ab99fa3dc57bf68f125c66a5fa039911b69981f12261a6b9d9bca0f" HandleID="k8s-pod-network.a4bd86c09ab99fa3dc57bf68f125c66a5fa039911b69981f12261a6b9d9bca0f" Workload="10.0.0.99-k8s-csi--node--driver--zmzrd-eth0" Jan 20 00:51:35.702010 containerd[1582]: 2026-01-20 00:51:35.691 [INFO][3483] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 00:51:35.702010 containerd[1582]: 2026-01-20 00:51:35.697 [INFO][3475] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a4bd86c09ab99fa3dc57bf68f125c66a5fa039911b69981f12261a6b9d9bca0f" Jan 20 00:51:35.702010 containerd[1582]: time="2026-01-20T00:51:35.701897310Z" level=info msg="TearDown network for sandbox \"a4bd86c09ab99fa3dc57bf68f125c66a5fa039911b69981f12261a6b9d9bca0f\" successfully" Jan 20 00:51:35.717189 containerd[1582]: time="2026-01-20T00:51:35.717028746Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a4bd86c09ab99fa3dc57bf68f125c66a5fa039911b69981f12261a6b9d9bca0f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 20 00:51:35.717189 containerd[1582]: time="2026-01-20T00:51:35.717186313Z" level=info msg="RemovePodSandbox \"a4bd86c09ab99fa3dc57bf68f125c66a5fa039911b69981f12261a6b9d9bca0f\" returns successfully" Jan 20 00:51:35.758530 kubelet[1917]: E0120 00:51:35.758472 1917 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:51:36.762214 kubelet[1917]: E0120 00:51:36.761433 1917 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:51:37.369908 containerd[1582]: time="2026-01-20T00:51:37.369744519Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:51:37.372971 containerd[1582]: time="2026-01-20T00:51:37.372842910Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039406" Jan 20 00:51:37.375791 containerd[1582]: time="2026-01-20T00:51:37.374776909Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:51:37.382894 containerd[1582]: time="2026-01-20T00:51:37.382716988Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:51:37.385791 containerd[1582]: time="2026-01-20T00:51:37.384607520Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest 
\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 29.13774061s" Jan 20 00:51:37.385791 containerd[1582]: time="2026-01-20T00:51:37.384733397Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Jan 20 00:51:37.389474 containerd[1582]: time="2026-01-20T00:51:37.388965610Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 20 00:51:37.395757 containerd[1582]: time="2026-01-20T00:51:37.395310356Z" level=info msg="CreateContainer within sandbox \"794b53d16d35af1ceed90ba19ac932bb76b85f6cda1244502b13b42880bb3bd0\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Jan 20 00:51:37.442902 containerd[1582]: time="2026-01-20T00:51:37.442764246Z" level=info msg="CreateContainer within sandbox \"794b53d16d35af1ceed90ba19ac932bb76b85f6cda1244502b13b42880bb3bd0\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"dc7fc03b7485375c13b1ba438a294bd5b40a0a7cbb47c6fb7aeb6d2b1841cf15\"" Jan 20 00:51:37.443819 containerd[1582]: time="2026-01-20T00:51:37.443746481Z" level=info msg="StartContainer for \"dc7fc03b7485375c13b1ba438a294bd5b40a0a7cbb47c6fb7aeb6d2b1841cf15\"" Jan 20 00:51:37.475590 containerd[1582]: time="2026-01-20T00:51:37.475365800Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 00:51:37.478764 containerd[1582]: time="2026-01-20T00:51:37.477770457Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 20 00:51:37.478764 containerd[1582]: time="2026-01-20T00:51:37.477863349Z" level=info 
msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 20 00:51:37.478941 kubelet[1917]: E0120 00:51:37.478227 1917 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 00:51:37.478941 kubelet[1917]: E0120 00:51:37.478321 1917 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 00:51:37.478941 kubelet[1917]: E0120 00:51:37.478601 1917 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pqpzv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-zmzrd_calico-system(9c9c61fc-bda8-4f9e-a70d-5bcfbaefa16e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 20 00:51:37.490765 containerd[1582]: time="2026-01-20T00:51:37.486271863Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 20 00:51:37.568237 containerd[1582]: time="2026-01-20T00:51:37.568187775Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 00:51:37.572155 containerd[1582]: time="2026-01-20T00:51:37.572007856Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 20 00:51:37.572344 containerd[1582]: time="2026-01-20T00:51:37.572296740Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 20 00:51:37.574471 kubelet[1917]: E0120 00:51:37.574354 1917 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 00:51:37.574471 kubelet[1917]: E0120 00:51:37.574429 1917 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 00:51:37.576789 kubelet[1917]: 
E0120 00:51:37.574592 1917 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pqpzv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-zmzrd_calico-system(9c9c61fc-bda8-4f9e-a70d-5bcfbaefa16e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 20 00:51:37.576789 kubelet[1917]: E0120 00:51:37.576008 1917 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zmzrd" podUID="9c9c61fc-bda8-4f9e-a70d-5bcfbaefa16e" Jan 20 00:51:37.606682 containerd[1582]: time="2026-01-20T00:51:37.606412236Z" level=info msg="StartContainer for \"dc7fc03b7485375c13b1ba438a294bd5b40a0a7cbb47c6fb7aeb6d2b1841cf15\" returns successfully" Jan 20 00:51:37.767401 kubelet[1917]: E0120 00:51:37.767219 1917 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:51:38.768573 kubelet[1917]: E0120 00:51:38.767480 1917 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:51:39.768893 kubelet[1917]: E0120 00:51:39.768702 1917 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Jan 20 00:51:40.769408 kubelet[1917]: E0120 00:51:40.769287 1917 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:51:41.782596 kubelet[1917]: E0120 00:51:41.780864 1917 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:51:42.784302 kubelet[1917]: E0120 00:51:42.783585 1917 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:51:43.787291 kubelet[1917]: E0120 00:51:43.785679 1917 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:51:44.787998 kubelet[1917]: E0120 00:51:44.787500 1917 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:51:45.307509 kubelet[1917]: I0120 00:51:45.305009 1917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=11.160920053 podStartE2EDuration="40.304928209s" podCreationTimestamp="2026-01-20 00:51:05 +0000 UTC" firstStartedPulling="2026-01-20 00:51:08.244311921 +0000 UTC m=+102.973704640" lastFinishedPulling="2026-01-20 00:51:37.388320067 +0000 UTC m=+132.117712796" observedRunningTime="2026-01-20 00:51:38.257466804 +0000 UTC m=+132.986859533" watchObservedRunningTime="2026-01-20 00:51:45.304928209 +0000 UTC m=+140.034320959" Jan 20 00:51:45.462469 kubelet[1917]: I0120 00:51:45.462340 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-d707f87e-8f0e-4ea6-954e-dfda315c5500\" (UniqueName: \"kubernetes.io/nfs/7074ec98-4ec6-483b-8367-ab867862b25a-pvc-d707f87e-8f0e-4ea6-954e-dfda315c5500\") pod \"test-pod-1\" (UID: \"7074ec98-4ec6-483b-8367-ab867862b25a\") " pod="default/test-pod-1" Jan 20 00:51:45.462469 
kubelet[1917]: I0120 00:51:45.462452 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzzs5\" (UniqueName: \"kubernetes.io/projected/7074ec98-4ec6-483b-8367-ab867862b25a-kube-api-access-dzzs5\") pod \"test-pod-1\" (UID: \"7074ec98-4ec6-483b-8367-ab867862b25a\") " pod="default/test-pod-1" Jan 20 00:51:45.726280 kernel: FS-Cache: Loaded Jan 20 00:51:45.793889 kubelet[1917]: E0120 00:51:45.790905 1917 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:51:46.033311 kernel: RPC: Registered named UNIX socket transport module. Jan 20 00:51:46.033496 kernel: RPC: Registered udp transport module. Jan 20 00:51:46.033549 kernel: RPC: Registered tcp transport module. Jan 20 00:51:46.041617 kernel: RPC: Registered tcp-with-tls transport module. Jan 20 00:51:46.047582 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Jan 20 00:51:46.671300 kernel: NFS: Registering the id_resolver key type Jan 20 00:51:46.671475 kernel: Key type id_resolver registered Jan 20 00:51:46.671527 kernel: Key type id_legacy registered Jan 20 00:51:46.801004 kubelet[1917]: E0120 00:51:46.800886 1917 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:51:46.809981 nfsidmap[3626]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Jan 20 00:51:46.834227 nfsidmap[3629]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Jan 20 00:51:47.134699 containerd[1582]: time="2026-01-20T00:51:47.133836619Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:7074ec98-4ec6-483b-8367-ab867862b25a,Namespace:default,Attempt:0,}" Jan 20 00:51:47.762645 systemd-networkd[1246]: cali5ec59c6bf6e: Link UP Jan 20 00:51:47.772913 
systemd-networkd[1246]: cali5ec59c6bf6e: Gained carrier Jan 20 00:51:47.820763 kubelet[1917]: E0120 00:51:47.817436 1917 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:51:47.890229 containerd[1582]: 2026-01-20 00:51:47.392 [INFO][3632] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.99-k8s-test--pod--1-eth0 default 7074ec98-4ec6-483b-8367-ab867862b25a 2015 0 2026-01-20 00:51:07 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.0.0.99 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] [] }} ContainerID="706ece3bc845a73737aaf384c9ef8f97a2d09f3b6361ffaae44e490860f020ad" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.99-k8s-test--pod--1-" Jan 20 00:51:47.890229 containerd[1582]: 2026-01-20 00:51:47.392 [INFO][3632] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="706ece3bc845a73737aaf384c9ef8f97a2d09f3b6361ffaae44e490860f020ad" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.99-k8s-test--pod--1-eth0" Jan 20 00:51:47.890229 containerd[1582]: 2026-01-20 00:51:47.559 [INFO][3647] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="706ece3bc845a73737aaf384c9ef8f97a2d09f3b6361ffaae44e490860f020ad" HandleID="k8s-pod-network.706ece3bc845a73737aaf384c9ef8f97a2d09f3b6361ffaae44e490860f020ad" Workload="10.0.0.99-k8s-test--pod--1-eth0" Jan 20 00:51:47.890229 containerd[1582]: 2026-01-20 00:51:47.559 [INFO][3647] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="706ece3bc845a73737aaf384c9ef8f97a2d09f3b6361ffaae44e490860f020ad" HandleID="k8s-pod-network.706ece3bc845a73737aaf384c9ef8f97a2d09f3b6361ffaae44e490860f020ad" Workload="10.0.0.99-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0xc000420f60), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.99", "pod":"test-pod-1", "timestamp":"2026-01-20 00:51:47.559378879 +0000 UTC"}, Hostname:"10.0.0.99", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 00:51:47.890229 containerd[1582]: 2026-01-20 00:51:47.559 [INFO][3647] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:51:47.890229 containerd[1582]: 2026-01-20 00:51:47.559 [INFO][3647] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 00:51:47.890229 containerd[1582]: 2026-01-20 00:51:47.560 [INFO][3647] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.99' Jan 20 00:51:47.890229 containerd[1582]: 2026-01-20 00:51:47.604 [INFO][3647] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.706ece3bc845a73737aaf384c9ef8f97a2d09f3b6361ffaae44e490860f020ad" host="10.0.0.99" Jan 20 00:51:47.890229 containerd[1582]: 2026-01-20 00:51:47.624 [INFO][3647] ipam/ipam.go 394: Looking up existing affinities for host host="10.0.0.99" Jan 20 00:51:47.890229 containerd[1582]: 2026-01-20 00:51:47.643 [INFO][3647] ipam/ipam.go 511: Trying affinity for 192.168.72.128/26 host="10.0.0.99" Jan 20 00:51:47.890229 containerd[1582]: 2026-01-20 00:51:47.655 [INFO][3647] ipam/ipam.go 158: Attempting to load block cidr=192.168.72.128/26 host="10.0.0.99" Jan 20 00:51:47.890229 containerd[1582]: 2026-01-20 00:51:47.662 [INFO][3647] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.72.128/26 host="10.0.0.99" Jan 20 00:51:47.890229 containerd[1582]: 2026-01-20 00:51:47.663 [INFO][3647] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.72.128/26 handle="k8s-pod-network.706ece3bc845a73737aaf384c9ef8f97a2d09f3b6361ffaae44e490860f020ad" host="10.0.0.99" Jan 20 
00:51:47.890229 containerd[1582]: 2026-01-20 00:51:47.674 [INFO][3647] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.706ece3bc845a73737aaf384c9ef8f97a2d09f3b6361ffaae44e490860f020ad Jan 20 00:51:47.890229 containerd[1582]: 2026-01-20 00:51:47.709 [INFO][3647] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.72.128/26 handle="k8s-pod-network.706ece3bc845a73737aaf384c9ef8f97a2d09f3b6361ffaae44e490860f020ad" host="10.0.0.99" Jan 20 00:51:47.890229 containerd[1582]: 2026-01-20 00:51:47.734 [INFO][3647] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.72.132/26] block=192.168.72.128/26 handle="k8s-pod-network.706ece3bc845a73737aaf384c9ef8f97a2d09f3b6361ffaae44e490860f020ad" host="10.0.0.99" Jan 20 00:51:47.890229 containerd[1582]: 2026-01-20 00:51:47.734 [INFO][3647] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.72.132/26] handle="k8s-pod-network.706ece3bc845a73737aaf384c9ef8f97a2d09f3b6361ffaae44e490860f020ad" host="10.0.0.99" Jan 20 00:51:47.890229 containerd[1582]: 2026-01-20 00:51:47.734 [INFO][3647] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 20 00:51:47.890229 containerd[1582]: 2026-01-20 00:51:47.735 [INFO][3647] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.72.132/26] IPv6=[] ContainerID="706ece3bc845a73737aaf384c9ef8f97a2d09f3b6361ffaae44e490860f020ad" HandleID="k8s-pod-network.706ece3bc845a73737aaf384c9ef8f97a2d09f3b6361ffaae44e490860f020ad" Workload="10.0.0.99-k8s-test--pod--1-eth0" Jan 20 00:51:47.890229 containerd[1582]: 2026-01-20 00:51:47.746 [INFO][3632] cni-plugin/k8s.go 418: Populated endpoint ContainerID="706ece3bc845a73737aaf384c9ef8f97a2d09f3b6361ffaae44e490860f020ad" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.99-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.99-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"7074ec98-4ec6-483b-8367-ab867862b25a", ResourceVersion:"2015", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 51, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.99", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.72.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:51:47.893051 containerd[1582]: 2026-01-20 00:51:47.746 [INFO][3632] cni-plugin/k8s.go 419: 
Calico CNI using IPs: [192.168.72.132/32] ContainerID="706ece3bc845a73737aaf384c9ef8f97a2d09f3b6361ffaae44e490860f020ad" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.99-k8s-test--pod--1-eth0" Jan 20 00:51:47.893051 containerd[1582]: 2026-01-20 00:51:47.746 [INFO][3632] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="706ece3bc845a73737aaf384c9ef8f97a2d09f3b6361ffaae44e490860f020ad" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.99-k8s-test--pod--1-eth0" Jan 20 00:51:47.893051 containerd[1582]: 2026-01-20 00:51:47.772 [INFO][3632] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="706ece3bc845a73737aaf384c9ef8f97a2d09f3b6361ffaae44e490860f020ad" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.99-k8s-test--pod--1-eth0" Jan 20 00:51:47.893051 containerd[1582]: 2026-01-20 00:51:47.778 [INFO][3632] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="706ece3bc845a73737aaf384c9ef8f97a2d09f3b6361ffaae44e490860f020ad" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.99-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.99-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"7074ec98-4ec6-483b-8367-ab867862b25a", ResourceVersion:"2015", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 51, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", 
Node:"10.0.0.99", ContainerID:"706ece3bc845a73737aaf384c9ef8f97a2d09f3b6361ffaae44e490860f020ad", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.72.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"56:b2:14:59:71:1b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:51:47.893051 containerd[1582]: 2026-01-20 00:51:47.871 [INFO][3632] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="706ece3bc845a73737aaf384c9ef8f97a2d09f3b6361ffaae44e490860f020ad" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.99-k8s-test--pod--1-eth0" Jan 20 00:51:47.986969 containerd[1582]: time="2026-01-20T00:51:47.984621709Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 00:51:47.986969 containerd[1582]: time="2026-01-20T00:51:47.984896705Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 00:51:47.986969 containerd[1582]: time="2026-01-20T00:51:47.984921501Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:51:47.986969 containerd[1582]: time="2026-01-20T00:51:47.985150622Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:51:48.085657 systemd-resolved[1474]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 20 00:51:48.183662 containerd[1582]: time="2026-01-20T00:51:48.183520397Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:7074ec98-4ec6-483b-8367-ab867862b25a,Namespace:default,Attempt:0,} returns sandbox id \"706ece3bc845a73737aaf384c9ef8f97a2d09f3b6361ffaae44e490860f020ad\"" Jan 20 00:51:48.188365 containerd[1582]: time="2026-01-20T00:51:48.188029494Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 20 00:51:48.396820 containerd[1582]: time="2026-01-20T00:51:48.395623878Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:51:48.396820 containerd[1582]: time="2026-01-20T00:51:48.396407699Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Jan 20 00:51:48.478324 containerd[1582]: time="2026-01-20T00:51:48.478117771Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:94a30ed18e45363486a55379bd1fbb8c479f36524ae816779f3553afe6b787ed\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:32c5137cb8c7cf61e75836f150e983b9be21fecc642ada89fd936c8cd6c0faa0\", size \"63836358\" in 289.917517ms" Jan 20 00:51:48.481370 containerd[1582]: time="2026-01-20T00:51:48.479708213Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:94a30ed18e45363486a55379bd1fbb8c479f36524ae816779f3553afe6b787ed\"" Jan 20 00:51:48.490384 containerd[1582]: time="2026-01-20T00:51:48.490274762Z" level=info msg="CreateContainer within sandbox \"706ece3bc845a73737aaf384c9ef8f97a2d09f3b6361ffaae44e490860f020ad\" for container &ContainerMetadata{Name:test,Attempt:0,}" Jan 20 00:51:48.564501 kubelet[1917]: E0120 
00:51:48.564145 1917 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:51:48.596526 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1277053576.mount: Deactivated successfully. Jan 20 00:51:48.602694 containerd[1582]: time="2026-01-20T00:51:48.602512021Z" level=info msg="CreateContainer within sandbox \"706ece3bc845a73737aaf384c9ef8f97a2d09f3b6361ffaae44e490860f020ad\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"ef945612006bbc501eb1f153f0050b027e554dcb11ff2f43119c51092e95e215\"" Jan 20 00:51:48.611207 containerd[1582]: time="2026-01-20T00:51:48.608449157Z" level=info msg="StartContainer for \"ef945612006bbc501eb1f153f0050b027e554dcb11ff2f43119c51092e95e215\"" Jan 20 00:51:48.845639 kubelet[1917]: E0120 00:51:48.831473 1917 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:51:49.012267 containerd[1582]: time="2026-01-20T00:51:49.010329923Z" level=info msg="StartContainer for \"ef945612006bbc501eb1f153f0050b027e554dcb11ff2f43119c51092e95e215\" returns successfully" Jan 20 00:51:49.222192 systemd-networkd[1246]: cali5ec59c6bf6e: Gained IPv6LL Jan 20 00:51:49.855266 kubelet[1917]: E0120 00:51:49.854325 1917 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:51:50.074524 kubelet[1917]: E0120 00:51:50.074289 1917 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with 
ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zmzrd" podUID="9c9c61fc-bda8-4f9e-a70d-5bcfbaefa16e" Jan 20 00:51:50.117580 kubelet[1917]: I0120 00:51:50.117166 1917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=42.819994517 podStartE2EDuration="43.117141694s" podCreationTimestamp="2026-01-20 00:51:07 +0000 UTC" firstStartedPulling="2026-01-20 00:51:48.187165202 +0000 UTC m=+142.916557931" lastFinishedPulling="2026-01-20 00:51:48.484312379 +0000 UTC m=+143.213705108" observedRunningTime="2026-01-20 00:51:49.554679425 +0000 UTC m=+144.284072165" watchObservedRunningTime="2026-01-20 00:51:50.117141694 +0000 UTC m=+144.846534413" Jan 20 00:51:50.863203 kubelet[1917]: E0120 00:51:50.860750 1917 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:51:51.865498 kubelet[1917]: E0120 00:51:51.862809 1917 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:51:52.868672 kubelet[1917]: E0120 00:51:52.866975 1917 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 20 00:51:53.868968 kubelet[1917]: E0120 00:51:53.867421 1917 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"