Jan 17 00:39:18.862969 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 16 22:25:55 -00 2026 Jan 17 00:39:18.863000 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=5950c0a3c50d11b7bc07a3e3bf06049ed0b5a605b5e0b52a981b78f1c63eeedd Jan 17 00:39:18.863019 kernel: BIOS-provided physical RAM map: Jan 17 00:39:18.863030 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Jan 17 00:39:18.863039 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Jan 17 00:39:18.863049 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Jan 17 00:39:18.863059 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Jan 17 00:39:18.863069 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Jan 17 00:39:18.863079 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable Jan 17 00:39:18.863088 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS Jan 17 00:39:18.863105 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable Jan 17 00:39:18.863114 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved Jan 17 00:39:18.863122 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20 Jan 17 00:39:18.863130 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved Jan 17 00:39:18.863142 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data Jan 17 00:39:18.863153 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Jan 17 00:39:18.863171 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable Jan 17 00:39:18.863181 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved Jan 17 00:39:18.863189 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Jan 17 00:39:18.863199 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Jan 17 00:39:18.863210 kernel: NX (Execute Disable) protection: active Jan 17 00:39:18.863219 kernel: APIC: Static calls initialized Jan 17 00:39:18.863227 kernel: efi: EFI v2.7 by EDK II Jan 17 00:39:18.863237 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b675198 Jan 17 00:39:18.863247 kernel: SMBIOS 2.8 present. 
Jan 17 00:39:18.863258 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015 Jan 17 00:39:18.863268 kernel: Hypervisor detected: KVM Jan 17 00:39:18.863282 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 17 00:39:18.863291 kernel: kvm-clock: using sched offset of 12738012032 cycles Jan 17 00:39:18.863301 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 17 00:39:18.863310 kernel: tsc: Detected 2445.424 MHz processor Jan 17 00:39:18.863320 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 17 00:39:18.863329 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 17 00:39:18.863339 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000 Jan 17 00:39:18.863348 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Jan 17 00:39:18.863396 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 17 00:39:18.863410 kernel: Using GB pages for direct mapping Jan 17 00:39:18.863419 kernel: Secure boot disabled Jan 17 00:39:18.863428 kernel: ACPI: Early table checksum verification disabled Jan 17 00:39:18.863438 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Jan 17 00:39:18.868392 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Jan 17 00:39:18.868412 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 00:39:18.868423 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 00:39:18.868440 kernel: ACPI: FACS 0x000000009CBDD000 000040 Jan 17 00:39:18.868485 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 00:39:18.868498 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 00:39:18.868510 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 00:39:18.868521 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 00:39:18.868532 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) Jan 17 00:39:18.868543 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] Jan 17 00:39:18.868559 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9] Jan 17 00:39:18.868571 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Jan 17 00:39:18.868582 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] Jan 17 00:39:18.868592 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] Jan 17 00:39:18.868603 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] Jan 17 00:39:18.868614 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] Jan 17 00:39:18.868624 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] Jan 17 00:39:18.868635 kernel: No NUMA configuration found Jan 17 00:39:18.868646 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff] Jan 17 00:39:18.868661 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff] Jan 17 00:39:18.868672 kernel: Zone ranges: Jan 17 00:39:18.868683 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 17 00:39:18.868694 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff] Jan 17 00:39:18.868705 kernel: Normal empty Jan 17 00:39:18.868716 kernel: Movable zone start for each node Jan 17 00:39:18.868726 kernel: Early memory node ranges Jan 17 00:39:18.868737 
kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Jan 17 00:39:18.868748 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Jan 17 00:39:18.868762 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Jan 17 00:39:18.868773 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff] Jan 17 00:39:18.868783 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff] Jan 17 00:39:18.868794 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff] Jan 17 00:39:18.868918 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff] Jan 17 00:39:18.868931 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 17 00:39:18.868942 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Jan 17 00:39:18.868952 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Jan 17 00:39:18.868961 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 17 00:39:18.868973 kernel: On node 0, zone DMA: 240 pages in unavailable ranges Jan 17 00:39:18.868989 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Jan 17 00:39:18.869000 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges Jan 17 00:39:18.869012 kernel: ACPI: PM-Timer IO Port: 0x608 Jan 17 00:39:18.869021 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 17 00:39:18.869032 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 17 00:39:18.869043 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 17 00:39:18.869054 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 17 00:39:18.869066 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 17 00:39:18.869075 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 17 00:39:18.869091 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 17 00:39:18.869102 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 17 00:39:18.869113 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jan 17 00:39:18.869124 kernel: TSC deadline timer available Jan 17 00:39:18.869134 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Jan 17 00:39:18.869145 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 17 00:39:18.869156 kernel: kvm-guest: KVM setup pv remote TLB flush Jan 17 00:39:18.869167 kernel: kvm-guest: setup PV sched yield Jan 17 00:39:18.869177 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Jan 17 00:39:18.869192 kernel: Booting paravirtualized kernel on KVM Jan 17 00:39:18.869203 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 17 00:39:18.869215 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Jan 17 00:39:18.869226 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288 Jan 17 00:39:18.869236 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152 Jan 17 00:39:18.869248 kernel: pcpu-alloc: [0] 0 1 2 3 Jan 17 00:39:18.869260 kernel: kvm-guest: PV spinlocks enabled Jan 17 00:39:18.869270 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 17 00:39:18.869281 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=5950c0a3c50d11b7bc07a3e3bf06049ed0b5a605b5e0b52a981b78f1c63eeedd Jan 17 
00:39:18.869300 kernel: random: crng init done Jan 17 00:39:18.869310 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 17 00:39:18.869320 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 17 00:39:18.869330 kernel: Fallback order for Node 0: 0 Jan 17 00:39:18.869345 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759 Jan 17 00:39:18.869412 kernel: Policy zone: DMA32 Jan 17 00:39:18.869426 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 17 00:39:18.869437 kernel: Memory: 2400616K/2567000K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42884K init, 2312K bss, 166124K reserved, 0K cma-reserved) Jan 17 00:39:18.869483 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jan 17 00:39:18.869929 kernel: ftrace: allocating 37989 entries in 149 pages Jan 17 00:39:18.869951 kernel: ftrace: allocated 149 pages with 4 groups Jan 17 00:39:18.870027 kernel: Dynamic Preempt: voluntary Jan 17 00:39:18.870040 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 17 00:39:18.870068 kernel: rcu: RCU event tracing is enabled. Jan 17 00:39:18.870082 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jan 17 00:39:18.870093 kernel: Trampoline variant of Tasks RCU enabled. Jan 17 00:39:18.870103 kernel: Rude variant of Tasks RCU enabled. Jan 17 00:39:18.870119 kernel: Tracing variant of Tasks RCU enabled. Jan 17 00:39:18.870133 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 17 00:39:18.870144 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jan 17 00:39:18.870161 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Jan 17 00:39:18.870172 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 17 00:39:18.870184 kernel: Console: colour dummy device 80x25 Jan 17 00:39:18.870196 kernel: printk: console [ttyS0] enabled Jan 17 00:39:18.870206 kernel: ACPI: Core revision 20230628 Jan 17 00:39:18.870222 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jan 17 00:39:18.870234 kernel: APIC: Switch to symmetric I/O mode setup Jan 17 00:39:18.870246 kernel: x2apic enabled Jan 17 00:39:18.870260 kernel: APIC: Switched APIC routing to: physical x2apic Jan 17 00:39:18.870270 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Jan 17 00:39:18.870281 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Jan 17 00:39:18.870293 kernel: kvm-guest: setup PV IPIs Jan 17 00:39:18.870305 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jan 17 00:39:18.870318 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Jan 17 00:39:18.870339 kernel: Calibrating delay loop (skipped) preset value.. 
4890.84 BogoMIPS (lpj=2445424) Jan 17 00:39:18.870395 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jan 17 00:39:18.870410 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Jan 17 00:39:18.870422 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Jan 17 00:39:18.870432 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 17 00:39:18.870444 kernel: Spectre V2 : Mitigation: Retpolines Jan 17 00:39:18.870487 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jan 17 00:39:18.870499 kernel: Speculative Store Bypass: Vulnerable Jan 17 00:39:18.870512 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Jan 17 00:39:18.870529 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Jan 17 00:39:18.870542 kernel: active return thunk: srso_alias_return_thunk Jan 17 00:39:18.870552 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Jan 17 00:39:18.870563 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM Jan 17 00:39:18.870575 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode Jan 17 00:39:18.870587 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 17 00:39:18.870600 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 17 00:39:18.870610 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 17 00:39:18.870626 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 17 00:39:18.870637 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Jan 17 00:39:18.870650 kernel: Freeing SMP alternatives memory: 32K Jan 17 00:39:18.870661 kernel: pid_max: default: 32768 minimum: 301 Jan 17 00:39:18.870672 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 17 00:39:18.870683 kernel: landlock: Up and running. Jan 17 00:39:18.870695 kernel: SELinux: Initializing. Jan 17 00:39:18.870707 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 17 00:39:18.870718 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 17 00:39:18.870735 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1) Jan 17 00:39:18.870750 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 17 00:39:18.870763 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 17 00:39:18.870773 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 17 00:39:18.870784 kernel: Performance Events: PMU not available due to virtualization, using software events only. Jan 17 00:39:18.870795 kernel: signal: max sigframe size: 1776 Jan 17 00:39:18.870827 kernel: rcu: Hierarchical SRCU implementation. Jan 17 00:39:18.870839 kernel: rcu: Max phase no-delay instances is 400. Jan 17 00:39:18.870851 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 17 00:39:18.870867 kernel: smp: Bringing up secondary CPUs ... Jan 17 00:39:18.870879 kernel: smpboot: x86: Booting SMP configuration: Jan 17 00:39:18.870890 kernel: .... 
node #0, CPUs: #1 #2 #3 Jan 17 00:39:18.870901 kernel: smp: Brought up 1 node, 4 CPUs Jan 17 00:39:18.870914 kernel: smpboot: Max logical packages: 1 Jan 17 00:39:18.870924 kernel: smpboot: Total of 4 processors activated (19563.39 BogoMIPS) Jan 17 00:39:18.870935 kernel: devtmpfs: initialized Jan 17 00:39:18.870947 kernel: x86/mm: Memory block size: 128MB Jan 17 00:39:18.870961 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Jan 17 00:39:18.870978 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Jan 17 00:39:18.870989 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes) Jan 17 00:39:18.871001 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Jan 17 00:39:18.871013 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Jan 17 00:39:18.871024 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 17 00:39:18.871038 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jan 17 00:39:18.871048 kernel: pinctrl core: initialized pinctrl subsystem Jan 17 00:39:18.871058 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 17 00:39:18.871069 kernel: audit: initializing netlink subsys (disabled) Jan 17 00:39:18.871086 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 17 00:39:18.871097 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 17 00:39:18.871108 kernel: audit: type=2000 audit(1768610353.547:1): state=initialized audit_enabled=0 res=1 Jan 17 00:39:18.871119 kernel: cpuidle: using governor menu Jan 17 00:39:18.871131 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 17 00:39:18.871144 kernel: dca service started, version 1.12.1 Jan 17 00:39:18.871154 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Jan 17 00:39:18.871166 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Jan 17 00:39:18.871181 kernel: PCI: Using configuration type 1 for base access Jan 17 00:39:18.871194 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 17 00:39:18.871206 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 17 00:39:18.871216 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 17 00:39:18.871227 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 17 00:39:18.871239 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 17 00:39:18.871251 kernel: ACPI: Added _OSI(Module Device) Jan 17 00:39:18.871264 kernel: ACPI: Added _OSI(Processor Device) Jan 17 00:39:18.871274 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 17 00:39:18.871291 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 17 00:39:18.871302 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 17 00:39:18.871312 kernel: ACPI: Interpreter enabled Jan 17 00:39:18.871324 kernel: ACPI: PM: (supports S0 S3 S5) Jan 17 00:39:18.871335 kernel: ACPI: Using IOAPIC for interrupt routing Jan 17 00:39:18.871347 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 17 00:39:18.871406 kernel: PCI: Using E820 reservations for host bridge windows Jan 17 00:39:18.871420 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jan 17 00:39:18.871432 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 17 00:39:18.872627 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 17 00:39:18.874006 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Jan 17 00:39:18.874196 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Jan 17 00:39:18.874215 kernel: PCI host bridge to bus 0000:00 Jan 17 00:39:18.874568 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 17 00:39:18.877703 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 17 00:39:18.877910 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 17 00:39:18.878091 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Jan 17 00:39:18.878257 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jan 17 00:39:18.878508 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window] Jan 17 00:39:18.878678 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 17 00:39:18.878961 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Jan 17 00:39:18.879203 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Jan 17 00:39:18.882994 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] Jan 17 00:39:18.883250 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff] Jan 17 00:39:18.883547 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Jan 17 00:39:18.883750 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb Jan 17 00:39:18.883954 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 17 00:39:18.884224 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Jan 17 00:39:18.884683 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f] Jan 17 00:39:18.884901 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff] Jan 17 00:39:18.886746 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref] Jan 17 00:39:18.886978 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Jan 17 00:39:18.887145 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f] Jan 17 00:39:18.887519 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff] 
Jan 17 00:39:18.887682 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref] Jan 17 00:39:18.887936 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Jan 17 00:39:18.888115 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff] Jan 17 00:39:18.888427 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff] Jan 17 00:39:18.888626 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref] Jan 17 00:39:18.888789 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref] Jan 17 00:39:18.889134 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Jan 17 00:39:18.889521 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jan 17 00:39:18.889724 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Jan 17 00:39:18.889898 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df] Jan 17 00:39:18.890054 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff] Jan 17 00:39:18.890404 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Jan 17 00:39:18.890608 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf] Jan 17 00:39:18.890623 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 17 00:39:18.890634 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 17 00:39:18.890645 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 17 00:39:18.890661 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 17 00:39:18.890671 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jan 17 00:39:18.890682 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jan 17 00:39:18.890692 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jan 17 00:39:18.890702 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jan 17 00:39:18.890712 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jan 17 00:39:18.890722 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Jan 17 00:39:18.890733 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jan 17 00:39:18.890743 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jan 17 00:39:18.890756 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jan 17 00:39:18.890767 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jan 17 00:39:18.890777 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jan 17 00:39:18.890787 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jan 17 00:39:18.890797 kernel: iommu: Default domain type: Translated Jan 17 00:39:18.890807 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 17 00:39:18.890817 kernel: efivars: Registered efivars operations Jan 17 00:39:18.890827 kernel: PCI: Using ACPI for IRQ routing Jan 17 00:39:18.890838 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 17 00:39:18.890854 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Jan 17 00:39:18.890865 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff] Jan 17 00:39:18.890876 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff] Jan 17 00:39:18.890886 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff] Jan 17 00:39:18.891043 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Jan 17 00:39:18.891315 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Jan 17 00:39:18.891556 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 17 00:39:18.891572 kernel: vgaarb: loaded Jan 17 00:39:18.891590 kernel: hpet0: at MMIO 
0xfed00000, IRQs 2, 8, 0 Jan 17 00:39:18.891621 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jan 17 00:39:18.891632 kernel: clocksource: Switched to clocksource kvm-clock Jan 17 00:39:18.891643 kernel: VFS: Disk quotas dquot_6.6.0 Jan 17 00:39:18.891653 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 17 00:39:18.891664 kernel: pnp: PnP ACPI init Jan 17 00:39:18.891902 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Jan 17 00:39:18.891919 kernel: pnp: PnP ACPI: found 6 devices Jan 17 00:39:18.891930 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 17 00:39:18.891946 kernel: NET: Registered PF_INET protocol family Jan 17 00:39:18.891956 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 17 00:39:18.891967 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 17 00:39:18.891977 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 17 00:39:18.891988 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 17 00:39:18.891998 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 17 00:39:18.892008 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 17 00:39:18.892018 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 17 00:39:18.892032 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 17 00:39:18.892042 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 17 00:39:18.892053 kernel: NET: Registered PF_XDP protocol family Jan 17 00:39:18.892325 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window Jan 17 00:39:18.892568 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref] Jan 17 00:39:18.892717 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 17 00:39:18.892856 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 17 00:39:18.892994 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 17 00:39:18.893140 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Jan 17 00:39:18.893484 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Jan 17 00:39:18.893637 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window] Jan 17 00:39:18.893651 kernel: PCI: CLS 0 bytes, default 64 Jan 17 00:39:18.893661 kernel: Initialise system trusted keyrings Jan 17 00:39:18.893671 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 17 00:39:18.893682 kernel: Key type asymmetric registered Jan 17 00:39:18.893692 kernel: Asymmetric key parser 'x509' registered Jan 17 00:39:18.893702 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 17 00:39:18.893718 kernel: io scheduler mq-deadline registered Jan 17 00:39:18.893729 kernel: io scheduler kyber registered Jan 17 00:39:18.893739 kernel: io scheduler bfq registered Jan 17 00:39:18.893749 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 17 00:39:18.893760 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jan 17 00:39:18.893771 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jan 17 00:39:18.893781 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jan 17 00:39:18.893791 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 17 00:39:18.893801 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 
115200) is a 16550A Jan 17 00:39:18.893816 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 17 00:39:18.893826 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 17 00:39:18.893836 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 17 00:39:18.894063 kernel: rtc_cmos 00:04: RTC can wake from S4 Jan 17 00:39:18.894080 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 17 00:39:18.894329 kernel: rtc_cmos 00:04: registered as rtc0 Jan 17 00:39:18.894562 kernel: rtc_cmos 00:04: setting system clock to 2026-01-17T00:39:17 UTC (1768610357) Jan 17 00:39:18.894711 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Jan 17 00:39:18.894730 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Jan 17 00:39:18.894741 kernel: efifb: probing for efifb Jan 17 00:39:18.894810 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k Jan 17 00:39:18.894821 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1 Jan 17 00:39:18.894831 kernel: efifb: scrolling: redraw Jan 17 00:39:18.894842 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0 Jan 17 00:39:18.894853 kernel: Console: switching to colour frame buffer device 100x37 Jan 17 00:39:18.894863 kernel: fb0: EFI VGA frame buffer device Jan 17 00:39:18.894874 kernel: pstore: Using crash dump compression: deflate Jan 17 00:39:18.894889 kernel: pstore: Registered efi_pstore as persistent store backend Jan 17 00:39:18.894899 kernel: NET: Registered PF_INET6 protocol family Jan 17 00:39:18.894909 kernel: Segment Routing with IPv6 Jan 17 00:39:18.894920 kernel: In-situ OAM (IOAM) with IPv6 Jan 17 00:39:18.894930 kernel: NET: Registered PF_PACKET protocol family Jan 17 00:39:18.894940 kernel: Key type dns_resolver registered Jan 17 00:39:18.894950 kernel: IPI shorthand broadcast: enabled Jan 17 00:39:18.894982 kernel: sched_clock: Marking stable (2944018779, 935322092)->(4705563461, -826222590) Jan 17 00:39:18.894996 kernel: registered taskstats version 1 Jan 17 00:39:18.895009 kernel: Loading compiled-in X.509 certificates Jan 17 00:39:18.895020 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: b6a847a3a522371f15b0d5425f12279a240740e4' Jan 17 00:39:18.895031 kernel: Key type .fscrypt registered Jan 17 00:39:18.895042 kernel: Key type fscrypt-provisioning registered Jan 17 00:39:18.895052 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jan 17 00:39:18.895063 kernel: ima: Allocated hash algorithm: sha1 Jan 17 00:39:18.895074 kernel: ima: No architecture policies found Jan 17 00:39:18.895084 kernel: clk: Disabling unused clocks Jan 17 00:39:18.895095 kernel: Freeing unused kernel image (initmem) memory: 42884K Jan 17 00:39:18.895109 kernel: Write protecting the kernel read-only data: 36864k Jan 17 00:39:18.895120 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K Jan 17 00:39:18.895130 kernel: Run /init as init process Jan 17 00:39:18.895141 kernel: with arguments: Jan 17 00:39:18.895244 kernel: /init Jan 17 00:39:18.895258 kernel: with environment: Jan 17 00:39:18.895271 kernel: HOME=/ Jan 17 00:39:18.895283 kernel: TERM=linux Jan 17 00:39:18.895320 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 17 00:39:18.895336 systemd[1]: Detected virtualization kvm. Jan 17 00:39:18.895344 systemd[1]: Detected architecture x86-64. Jan 17 00:39:18.895391 systemd[1]: Running in initrd. Jan 17 00:39:18.895400 systemd[1]: No hostname configured, using default hostname. Jan 17 00:39:18.895407 systemd[1]: Hostname set to . Jan 17 00:39:18.895415 systemd[1]: Initializing machine ID from VM UUID. Jan 17 00:39:18.895422 systemd[1]: Queued start job for default target initrd.target. Jan 17 00:39:18.895433 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 00:39:18.895440 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 00:39:18.895471 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 17 00:39:18.895479 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 17 00:39:18.895487 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 17 00:39:18.895500 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 17 00:39:18.895509 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 17 00:39:18.895516 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 17 00:39:18.895524 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 00:39:18.895531 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 17 00:39:18.895539 systemd[1]: Reached target paths.target - Path Units. Jan 17 00:39:18.895549 systemd[1]: Reached target slices.target - Slice Units. Jan 17 00:39:18.895556 systemd[1]: Reached target swap.target - Swaps. Jan 17 00:39:18.895564 systemd[1]: Reached target timers.target - Timer Units. Jan 17 00:39:18.895571 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 17 00:39:18.895579 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 17 00:39:18.895586 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 17 00:39:18.895593 systemd[1]: Listening on systemd-journald.socket - Journal Socket. 
Jan 17 00:39:18.895600 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 17 00:39:18.895608 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 17 00:39:18.895617 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 00:39:18.895624 systemd[1]: Reached target sockets.target - Socket Units. Jan 17 00:39:18.895632 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 17 00:39:18.895639 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 17 00:39:18.895646 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 17 00:39:18.895654 systemd[1]: Starting systemd-fsck-usr.service... Jan 17 00:39:18.895661 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 17 00:39:18.895668 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 17 00:39:18.895675 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 00:39:18.895712 systemd-journald[194]: Collecting audit messages is disabled. Jan 17 00:39:18.895730 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 17 00:39:18.895738 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 00:39:18.895745 systemd[1]: Finished systemd-fsck-usr.service. Jan 17 00:39:18.895757 systemd-journald[194]: Journal started Jan 17 00:39:18.895773 systemd-journald[194]: Runtime Journal (/run/log/journal/2eeb49dac785401a8256df9307900494) is 6.0M, max 48.3M, 42.2M free. Jan 17 00:39:18.921156 systemd-modules-load[195]: Inserted module 'overlay' Jan 17 00:39:18.946099 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 17 00:39:18.958055 systemd[1]: Started systemd-journald.service - Journal Service. Jan 17 00:39:18.959235 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:39:18.973232 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 17 00:39:19.044498 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 17 00:39:19.050632 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 00:39:19.065562 kernel: Bridge firewalling registered Jan 17 00:39:19.051734 systemd-modules-load[195]: Inserted module 'br_netfilter' Jan 17 00:39:19.063828 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 17 00:39:19.082172 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 17 00:39:19.083189 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 17 00:39:19.129396 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 00:39:19.175193 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 00:39:19.183941 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 00:39:19.269195 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 17 00:39:19.288958 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 17 00:39:19.333044 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jan 17 00:39:19.356089 dracut-cmdline[227]: dracut-dracut-053 Jan 17 00:39:19.372481 dracut-cmdline[227]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=5950c0a3c50d11b7bc07a3e3bf06049ed0b5a605b5e0b52a981b78f1c63eeedd Jan 17 00:39:19.362838 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 17 00:39:19.451220 systemd-resolved[239]: Positive Trust Anchors: Jan 17 00:39:19.451258 systemd-resolved[239]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 17 00:39:19.451304 systemd-resolved[239]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 17 00:39:19.462071 systemd-resolved[239]: Defaulting to hostname 'linux'. Jan 17 00:39:19.467544 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 17 00:39:19.493204 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 17 00:39:19.596551 kernel: SCSI subsystem initialized Jan 17 00:39:19.623626 kernel: Loading iSCSI transport class v2.0-870. Jan 17 00:39:19.652947 kernel: iscsi: registered transport (tcp) Jan 17 00:39:19.692936 kernel: iscsi: registered transport (qla4xxx) Jan 17 00:39:19.693026 kernel: QLogic iSCSI HBA Driver Jan 17 00:39:19.907655 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 17 00:39:19.950213 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 17 00:39:20.018483 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 17 00:39:20.018827 kernel: device-mapper: uevent: version 1.0.3 Jan 17 00:39:20.026286 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 17 00:39:20.106626 kernel: raid6: avx2x4 gen() 20935 MB/s Jan 17 00:39:20.128592 kernel: raid6: avx2x2 gen() 19507 MB/s Jan 17 00:39:20.149911 kernel: raid6: avx2x1 gen() 11039 MB/s Jan 17 00:39:20.149993 kernel: raid6: using algorithm avx2x4 gen() 20935 MB/s Jan 17 00:39:20.172070 kernel: raid6: .... xor() 4360 MB/s, rmw enabled Jan 17 00:39:20.172165 kernel: raid6: using avx2x2 recovery algorithm Jan 17 00:39:20.207576 kernel: xor: automatically using best checksumming function avx Jan 17 00:39:20.584838 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 17 00:39:20.614745 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 17 00:39:20.639775 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 00:39:20.667276 systemd-udevd[417]: Using default interface naming scheme 'v255'. Jan 17 00:39:20.679933 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Jan 17 00:39:20.698839 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 17 00:39:20.751518 dracut-pre-trigger[422]: rd.md=0: removing MD RAID activation Jan 17 00:39:20.824612 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 17 00:39:20.858640 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 17 00:39:20.997603 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 00:39:21.025577 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 17 00:39:21.053500 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 17 00:39:21.058750 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 17 00:39:21.070249 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 00:39:21.085293 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 17 00:39:21.109488 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jan 17 00:39:21.110642 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 17 00:39:21.147240 kernel: cryptd: max_cpu_qlen set to 1000 Jan 17 00:39:21.147269 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jan 17 00:39:21.147695 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 17 00:39:21.147712 kernel: GPT:9289727 != 19775487 Jan 17 00:39:21.147727 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 17 00:39:21.149562 kernel: GPT:9289727 != 19775487 Jan 17 00:39:21.151522 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 17 00:39:21.159323 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 17 00:39:21.153493 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 17 00:39:21.162716 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 17 00:39:21.162922 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 00:39:21.172568 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 00:39:21.184423 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 00:39:21.220881 kernel: BTRFS: device fsid a67b5ac0-cdfd-426d-9386-e029282f433a devid 1 transid 33 /dev/vda3 scanned by (udev-worker) (464) Jan 17 00:39:21.184685 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:39:21.239071 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (469) Jan 17 00:39:21.197584 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 00:39:21.249733 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 00:39:21.285114 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:39:21.297899 kernel: libata version 3.00 loaded. Jan 17 00:39:21.316005 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 17 00:39:21.353698 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 17 00:39:21.382882 kernel: ahci 0000:00:1f.2: version 3.0 Jan 17 00:39:21.398439 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 17 00:39:21.398540 kernel: AVX2 version of gcm_enc/dec engaged. 
Jan 17 00:39:21.398557 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Jan 17 00:39:21.400447 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 17 00:39:21.404806 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 17 00:39:21.424744 kernel: scsi host0: ahci Jan 17 00:39:21.425026 kernel: scsi host1: ahci Jan 17 00:39:21.425230 kernel: scsi host2: ahci Jan 17 00:39:21.425548 kernel: scsi host3: ahci Jan 17 00:39:21.441708 kernel: scsi host4: ahci Jan 17 00:39:21.441942 kernel: scsi host5: ahci Jan 17 00:39:21.442099 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 Jan 17 00:39:21.442111 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 Jan 17 00:39:21.442120 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 Jan 17 00:39:21.442130 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 Jan 17 00:39:21.442139 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 Jan 17 00:39:21.439674 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 17 00:39:21.484180 kernel: AES CTR mode by8 optimization enabled Jan 17 00:39:21.484210 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 Jan 17 00:39:21.491292 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 17 00:39:21.511742 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 17 00:39:21.520098 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 00:39:21.561077 disk-uuid[549]: Primary Header is updated. Jan 17 00:39:21.561077 disk-uuid[549]: Secondary Entries is updated. Jan 17 00:39:21.561077 disk-uuid[549]: Secondary Header is updated. Jan 17 00:39:21.568954 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 17 00:39:21.579596 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 00:39:21.613772 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 17 00:39:21.613805 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 17 00:39:21.768684 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 17 00:39:21.768915 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jan 17 00:39:21.781033 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 17 00:39:21.786757 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 17 00:39:21.797823 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 17 00:39:21.814324 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 17 00:39:21.814449 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jan 17 00:39:21.817502 kernel: ata3.00: applying bridge limits Jan 17 00:39:21.829828 kernel: ata3.00: configured for UDMA/100 Jan 17 00:39:21.829979 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 17 00:39:22.029749 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jan 17 00:39:22.030323 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 17 00:39:22.063918 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jan 17 00:39:22.615985 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 17 00:39:22.616895 disk-uuid[561]: The operation has completed successfully. 
Jan 17 00:39:22.719127 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 17 00:39:22.719809 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 17 00:39:22.747443 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 17 00:39:22.774700 sh[598]: Success Jan 17 00:39:22.832955 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Jan 17 00:39:22.951677 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 17 00:39:22.961832 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 17 00:39:22.971287 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 17 00:39:23.019901 kernel: BTRFS info (device dm-0): first mount of filesystem a67b5ac0-cdfd-426d-9386-e029282f433a Jan 17 00:39:23.019977 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 17 00:39:23.026848 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 17 00:39:23.026896 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 17 00:39:23.029431 kernel: BTRFS info (device dm-0): using free space tree Jan 17 00:39:23.070092 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 17 00:39:23.079152 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 17 00:39:23.093689 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 17 00:39:23.111061 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 17 00:39:23.142718 kernel: BTRFS info (device vda6): first mount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0 Jan 17 00:39:23.142755 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 17 00:39:23.142786 kernel: BTRFS info (device vda6): using free space tree Jan 17 00:39:23.153453 kernel: BTRFS info (device vda6): auto enabling async discard Jan 17 00:39:23.172101 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 17 00:39:23.181447 kernel: BTRFS info (device vda6): last unmount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0 Jan 17 00:39:23.197588 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 17 00:39:23.205737 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Jan 17 00:39:23.287581 ignition[688]: Ignition 2.19.0 Jan 17 00:39:23.287618 ignition[688]: Stage: fetch-offline Jan 17 00:39:23.287684 ignition[688]: no configs at "/usr/lib/ignition/base.d" Jan 17 00:39:23.287703 ignition[688]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 17 00:39:23.287822 ignition[688]: parsed url from cmdline: "" Jan 17 00:39:23.287829 ignition[688]: no config URL provided Jan 17 00:39:23.287840 ignition[688]: reading system config file "/usr/lib/ignition/user.ign" Jan 17 00:39:23.287855 ignition[688]: no config at "/usr/lib/ignition/user.ign" Jan 17 00:39:23.287895 ignition[688]: op(1): [started] loading QEMU firmware config module Jan 17 00:39:23.287905 ignition[688]: op(1): executing: "modprobe" "qemu_fw_cfg" Jan 17 00:39:23.327267 ignition[688]: op(1): [finished] loading QEMU firmware config module Jan 17 00:39:23.328654 ignition[688]: parsing config with SHA512: 4b3e3b6a2536c31d4819fabecfa05d062fff9731c04c5d98600bdde82681aa7c8a1cf1fc4d818b779975af374df30a254423880e762e811c5a89fe06c20d4e20 Jan 17 00:39:23.331138 unknown[688]: fetched base config from "system" Jan 17 00:39:23.331146 unknown[688]: fetched user config from "qemu" Jan 17 00:39:23.331350 ignition[688]: fetch-offline: fetch-offline passed Jan 17 00:39:23.334647 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 17 00:39:23.331517 ignition[688]: Ignition finished successfully Jan 17 00:39:23.388077 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 17 00:39:23.410755 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 17 00:39:23.459143 systemd-networkd[787]: lo: Link UP Jan 17 00:39:23.459167 systemd-networkd[787]: lo: Gained carrier Jan 17 00:39:23.461272 systemd-networkd[787]: Enumeration completed Jan 17 00:39:23.461516 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 17 00:39:23.462249 systemd-networkd[787]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 00:39:23.462255 systemd-networkd[787]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 17 00:39:23.463852 systemd-networkd[787]: eth0: Link UP Jan 17 00:39:23.463857 systemd-networkd[787]: eth0: Gained carrier Jan 17 00:39:23.463866 systemd-networkd[787]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 00:39:23.499322 systemd[1]: Reached target network.target - Network. Jan 17 00:39:23.505879 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 17 00:39:23.530512 systemd-networkd[787]: eth0: DHCPv4 address 10.0.0.113/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 17 00:39:23.543758 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 17 00:39:23.615546 ignition[789]: Ignition 2.19.0 Jan 17 00:39:23.615580 ignition[789]: Stage: kargs Jan 17 00:39:23.615836 ignition[789]: no configs at "/usr/lib/ignition/base.d" Jan 17 00:39:23.615853 ignition[789]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 17 00:39:23.634115 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
Jan 17 00:39:23.617313 ignition[789]: kargs: kargs passed Jan 17 00:39:23.617442 ignition[789]: Ignition finished successfully Jan 17 00:39:23.662633 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 17 00:39:23.692028 ignition[798]: Ignition 2.19.0 Jan 17 00:39:23.692054 ignition[798]: Stage: disks Jan 17 00:39:23.692858 ignition[798]: no configs at "/usr/lib/ignition/base.d" Jan 17 00:39:23.692873 ignition[798]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 17 00:39:23.693767 ignition[798]: disks: disks passed Jan 17 00:39:23.693823 ignition[798]: Ignition finished successfully Jan 17 00:39:23.711868 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 17 00:39:23.719164 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 17 00:39:23.722863 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 17 00:39:23.727739 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 17 00:39:23.731593 systemd[1]: Reached target sysinit.target - System Initialization. Jan 17 00:39:23.740773 systemd[1]: Reached target basic.target - Basic System. Jan 17 00:39:23.765696 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 17 00:39:23.806173 systemd-fsck[808]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 17 00:39:23.811956 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 17 00:39:23.836158 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 17 00:39:24.048584 kernel: EXT4-fs (vda9): mounted filesystem ab055cfb-d92d-4784-aa05-26ea844796bc r/w with ordered data mode. Quota mode: none. Jan 17 00:39:24.049916 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 17 00:39:24.050744 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 17 00:39:24.079546 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 17 00:39:24.088753 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 17 00:39:24.101818 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (816) Jan 17 00:39:24.091769 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 17 00:39:24.091839 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 17 00:39:24.131116 kernel: BTRFS info (device vda6): first mount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0 Jan 17 00:39:24.131148 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 17 00:39:24.131164 kernel: BTRFS info (device vda6): using free space tree Jan 17 00:39:24.131178 kernel: BTRFS info (device vda6): auto enabling async discard Jan 17 00:39:24.091876 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 17 00:39:24.103771 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 17 00:39:24.140931 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 17 00:39:24.160864 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Jan 17 00:39:24.250419 initrd-setup-root[840]: cut: /sysroot/etc/passwd: No such file or directory Jan 17 00:39:24.259830 initrd-setup-root[847]: cut: /sysroot/etc/group: No such file or directory Jan 17 00:39:24.273944 initrd-setup-root[854]: cut: /sysroot/etc/shadow: No such file or directory Jan 17 00:39:24.280581 initrd-setup-root[861]: cut: /sysroot/etc/gshadow: No such file or directory Jan 17 00:39:24.474447 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 17 00:39:24.498018 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 17 00:39:24.521744 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 17 00:39:24.539700 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 17 00:39:24.549406 kernel: BTRFS info (device vda6): last unmount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0 Jan 17 00:39:24.592243 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 17 00:39:24.609093 ignition[929]: INFO : Ignition 2.19.0 Jan 17 00:39:24.609093 ignition[929]: INFO : Stage: mount Jan 17 00:39:24.624683 ignition[929]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 00:39:24.624683 ignition[929]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 17 00:39:24.624683 ignition[929]: INFO : mount: mount passed Jan 17 00:39:24.624683 ignition[929]: INFO : Ignition finished successfully Jan 17 00:39:24.618928 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 17 00:39:24.638743 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 17 00:39:25.062653 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 17 00:39:25.088811 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (942) Jan 17 00:39:25.097808 kernel: BTRFS info (device vda6): first mount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0 Jan 17 00:39:25.097870 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 17 00:39:25.101580 kernel: BTRFS info (device vda6): using free space tree Jan 17 00:39:25.119401 kernel: BTRFS info (device vda6): auto enabling async discard Jan 17 00:39:25.123605 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 17 00:39:25.192615 ignition[959]: INFO : Ignition 2.19.0 Jan 17 00:39:25.206970 ignition[959]: INFO : Stage: files Jan 17 00:39:25.211013 ignition[959]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 00:39:25.211013 ignition[959]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 17 00:39:25.211013 ignition[959]: DEBUG : files: compiled without relabeling support, skipping Jan 17 00:39:25.239122 ignition[959]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 17 00:39:25.239122 ignition[959]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 17 00:39:25.239122 ignition[959]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 17 00:39:25.239122 ignition[959]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 17 00:39:25.239122 ignition[959]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 17 00:39:25.235288 unknown[959]: wrote ssh authorized keys file for user: core Jan 17 00:39:25.268523 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Jan 17 00:39:25.268523 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Jan 17 00:39:25.268523 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 17 00:39:25.268523 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 17 00:39:25.268523 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 17 00:39:25.268523 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 17 00:39:25.268523 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 17 00:39:25.268523 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Jan 17 00:39:25.302561 systemd-networkd[787]: eth0: Gained IPv6LL Jan 17 00:39:25.621983 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Jan 17 00:39:26.530745 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 17 00:39:26.555068 ignition[959]: INFO : files: op(7): [started] processing unit "coreos-metadata.service" Jan 17 00:39:26.563113 ignition[959]: INFO : files: op(7): op(8): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 17 00:39:26.563113 ignition[959]: INFO : files: op(7): op(8): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 17 00:39:26.563113 ignition[959]: INFO : files: op(7): [finished] processing unit "coreos-metadata.service" Jan 17 00:39:26.563113 ignition[959]: INFO : files: op(9): [started] setting preset to disabled for 
"coreos-metadata.service" Jan 17 00:39:26.815004 ignition[959]: INFO : files: op(9): op(a): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 17 00:39:26.845668 ignition[959]: INFO : files: op(9): op(a): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 17 00:39:26.845668 ignition[959]: INFO : files: op(9): [finished] setting preset to disabled for "coreos-metadata.service" Jan 17 00:39:26.845668 ignition[959]: INFO : files: createResultFile: createFiles: op(b): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 17 00:39:26.845668 ignition[959]: INFO : files: createResultFile: createFiles: op(b): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 17 00:39:26.845668 ignition[959]: INFO : files: files passed Jan 17 00:39:26.902207 ignition[959]: INFO : Ignition finished successfully Jan 17 00:39:26.883438 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 17 00:39:26.915440 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 17 00:39:26.948421 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 17 00:39:26.964003 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 17 00:39:26.964174 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 17 00:39:26.998699 initrd-setup-root-after-ignition[987]: grep: /sysroot/oem/oem-release: No such file or directory Jan 17 00:39:26.994898 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 00:39:27.026032 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 00:39:27.026032 initrd-setup-root-after-ignition[989]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 17 00:39:27.021246 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 17 00:39:27.111236 initrd-setup-root-after-ignition[993]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 00:39:27.127301 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 17 00:39:27.259039 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 17 00:39:27.259223 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 17 00:39:27.271296 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 17 00:39:27.291839 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 17 00:39:27.295246 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 17 00:39:27.332159 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 17 00:39:27.414979 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 00:39:27.455767 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 17 00:39:27.485689 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 17 00:39:27.504056 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 00:39:27.510227 systemd[1]: Stopped target timers.target - Timer Units. Jan 17 00:39:27.523620 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 17 00:39:27.524038 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. 
Jan 17 00:39:27.584578 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 17 00:39:27.617073 systemd[1]: Stopped target basic.target - Basic System. Jan 17 00:39:27.629635 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 17 00:39:27.673064 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 17 00:39:27.698152 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 17 00:39:27.766195 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 17 00:39:27.781330 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 17 00:39:27.787827 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 17 00:39:27.809877 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 17 00:39:27.815078 systemd[1]: Stopped target swap.target - Swaps. Jan 17 00:39:27.834136 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 17 00:39:27.835078 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 17 00:39:27.853779 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 17 00:39:27.873692 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 00:39:27.884830 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 17 00:39:27.885320 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 00:39:27.907262 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 17 00:39:27.907543 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 17 00:39:27.970855 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 17 00:39:27.971124 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 17 00:39:27.995016 systemd[1]: Stopped target paths.target - Path Units. Jan 17 00:39:28.297263 ignition[1013]: INFO : Ignition 2.19.0 Jan 17 00:39:28.297263 ignition[1013]: INFO : Stage: umount Jan 17 00:39:28.297263 ignition[1013]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 00:39:28.297263 ignition[1013]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 17 00:39:28.297263 ignition[1013]: INFO : umount: umount passed Jan 17 00:39:28.297263 ignition[1013]: INFO : Ignition finished successfully Jan 17 00:39:27.999159 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 17 00:39:27.999799 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 00:39:28.008261 systemd[1]: Stopped target slices.target - Slice Units. Jan 17 00:39:28.053277 systemd[1]: Stopped target sockets.target - Socket Units. Jan 17 00:39:28.074522 systemd[1]: iscsid.socket: Deactivated successfully. Jan 17 00:39:28.074697 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 17 00:39:28.074874 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 17 00:39:28.074975 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 17 00:39:28.075165 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 17 00:39:28.075327 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 00:39:28.075592 systemd[1]: ignition-files.service: Deactivated successfully. Jan 17 00:39:28.075715 systemd[1]: Stopped ignition-files.service - Ignition (files). 
Jan 17 00:39:28.165302 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 17 00:39:28.167142 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 17 00:39:28.167445 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 00:39:28.175613 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 17 00:39:28.199409 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 17 00:39:28.204448 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 00:39:28.208817 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 17 00:39:28.208990 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 17 00:39:28.219987 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 17 00:39:28.220225 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 17 00:39:28.230394 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 17 00:39:28.230816 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 17 00:39:28.235225 systemd[1]: Stopped target network.target - Network. Jan 17 00:39:28.235283 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 17 00:39:28.236025 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 17 00:39:28.236118 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 17 00:39:28.236174 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 17 00:39:28.236250 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 17 00:39:28.236307 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 17 00:39:28.236621 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 17 00:39:28.236685 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 17 00:39:28.236945 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 17 00:39:28.237297 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 17 00:39:28.262893 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 17 00:39:28.298626 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 17 00:39:28.298829 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 17 00:39:28.307795 systemd-networkd[787]: eth0: DHCPv6 lease lost Jan 17 00:39:28.308101 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 17 00:39:28.308227 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 00:39:28.316714 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 17 00:39:28.317902 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 17 00:39:28.578717 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 17 00:39:28.581568 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 17 00:39:28.604050 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 17 00:39:28.604155 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 17 00:39:28.615109 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 17 00:39:28.615214 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 17 00:39:28.659286 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 17 00:39:28.659523 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. 
Jan 17 00:39:28.659609 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 17 00:39:28.662095 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 17 00:39:28.662177 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 17 00:39:28.663570 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 17 00:39:28.663636 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 17 00:39:28.663965 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 00:39:28.679438 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 17 00:39:28.679657 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 17 00:39:28.714261 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 17 00:39:28.716991 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 00:39:28.767778 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 17 00:39:28.767867 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 17 00:39:28.769118 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 17 00:39:28.769165 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 00:39:28.779054 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 17 00:39:28.779125 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 17 00:39:28.794064 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 17 00:39:28.794150 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 17 00:39:28.802571 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 17 00:39:28.802660 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 00:39:28.831281 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 17 00:39:28.838738 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 17 00:39:28.838833 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 00:39:28.844708 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 00:39:28.844778 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:39:28.854743 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 17 00:39:28.854893 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 17 00:39:28.863760 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 17 00:39:28.896968 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 17 00:39:28.927931 systemd[1]: Switching root. Jan 17 00:39:28.986159 systemd-journald[194]: Journal stopped Jan 17 00:39:32.123449 systemd-journald[194]: Received SIGTERM from PID 1 (systemd). 
Jan 17 00:39:32.128445 kernel: SELinux: policy capability network_peer_controls=1 Jan 17 00:39:32.128500 kernel: SELinux: policy capability open_perms=1 Jan 17 00:39:32.128519 kernel: SELinux: policy capability extended_socket_class=1 Jan 17 00:39:32.128535 kernel: SELinux: policy capability always_check_network=0 Jan 17 00:39:32.128549 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 17 00:39:32.128564 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 17 00:39:32.128579 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 17 00:39:32.128600 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 17 00:39:32.128638 kernel: audit: type=1403 audit(1768610369.351:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 17 00:39:32.128666 systemd[1]: Successfully loaded SELinux policy in 98.335ms. Jan 17 00:39:32.128690 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 26.807ms. Jan 17 00:39:32.128707 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 17 00:39:32.128723 systemd[1]: Detected virtualization kvm. Jan 17 00:39:32.128739 systemd[1]: Detected architecture x86-64. Jan 17 00:39:32.128755 systemd[1]: Detected first boot. Jan 17 00:39:32.128770 systemd[1]: Initializing machine ID from VM UUID. Jan 17 00:39:32.128786 zram_generator::config[1057]: No configuration found. Jan 17 00:39:32.128825 systemd[1]: Populated /etc with preset unit settings. Jan 17 00:39:32.128845 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 17 00:39:32.128861 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 17 00:39:32.128899 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 17 00:39:32.128918 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 17 00:39:32.128959 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 17 00:39:32.128976 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 17 00:39:32.128992 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 17 00:39:32.129008 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 17 00:39:32.129025 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 17 00:39:32.129054 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 17 00:39:32.129077 systemd[1]: Created slice user.slice - User and Session Slice. Jan 17 00:39:32.129094 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 00:39:32.129111 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 00:39:32.129152 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 17 00:39:32.129169 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 17 00:39:32.129186 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. 
Jan 17 00:39:32.129202 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 17 00:39:32.129218 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 17 00:39:32.129234 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 00:39:32.129250 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 17 00:39:32.129267 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 17 00:39:32.129306 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 17 00:39:32.129324 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 17 00:39:32.129340 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 00:39:32.129403 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 17 00:39:32.129422 systemd[1]: Reached target slices.target - Slice Units. Jan 17 00:39:32.129438 systemd[1]: Reached target swap.target - Swaps. Jan 17 00:39:32.129454 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 17 00:39:32.129470 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 17 00:39:32.129541 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 17 00:39:32.129559 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 17 00:39:32.129575 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 00:39:32.129591 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 17 00:39:32.129607 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 17 00:39:32.129623 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 17 00:39:32.129639 systemd[1]: Mounting media.mount - External Media Directory... Jan 17 00:39:32.129656 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:39:32.129679 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 17 00:39:32.129717 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 17 00:39:32.129734 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 17 00:39:32.129751 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 17 00:39:32.129768 systemd[1]: Reached target machines.target - Containers. Jan 17 00:39:32.129787 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 17 00:39:32.129804 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 00:39:32.129821 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 17 00:39:32.129837 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 17 00:39:32.129853 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 00:39:32.129891 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 17 00:39:32.129907 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 00:39:32.129923 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... 
Jan 17 00:39:32.129939 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 00:39:32.129957 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 17 00:39:32.129973 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 17 00:39:32.129990 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 17 00:39:32.130006 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 17 00:39:32.130042 systemd[1]: Stopped systemd-fsck-usr.service. Jan 17 00:39:32.130058 kernel: fuse: init (API version 7.39) Jan 17 00:39:32.130075 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 17 00:39:32.130091 kernel: loop: module loaded Jan 17 00:39:32.130107 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 17 00:39:32.130170 systemd-journald[1141]: Collecting audit messages is disabled. Jan 17 00:39:32.130202 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 17 00:39:32.130219 systemd-journald[1141]: Journal started Jan 17 00:39:32.130280 systemd-journald[1141]: Runtime Journal (/run/log/journal/2eeb49dac785401a8256df9307900494) is 6.0M, max 48.3M, 42.2M free. Jan 17 00:39:30.792125 systemd[1]: Queued start job for default target multi-user.target. Jan 17 00:39:30.827877 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 17 00:39:30.830172 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 17 00:39:30.843798 systemd[1]: systemd-journald.service: Consumed 1.542s CPU time. Jan 17 00:39:32.172467 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 17 00:39:32.188664 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 17 00:39:32.207897 systemd[1]: verity-setup.service: Deactivated successfully. Jan 17 00:39:32.238912 systemd[1]: Stopped verity-setup.service. Jan 17 00:39:32.252959 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:39:32.284700 systemd[1]: Started systemd-journald.service - Journal Service. Jan 17 00:39:32.268051 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 17 00:39:32.271864 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 17 00:39:32.275769 systemd[1]: Mounted media.mount - External Media Directory. Jan 17 00:39:32.279251 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 17 00:39:32.283583 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 17 00:39:32.293679 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 17 00:39:32.313170 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 17 00:39:32.318471 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 00:39:32.340184 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 17 00:39:32.340780 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 17 00:39:32.345857 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 00:39:32.346157 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 00:39:32.351096 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Jan 17 00:39:32.351654 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 00:39:32.356830 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 17 00:39:32.357045 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 17 00:39:32.373001 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 00:39:32.373898 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 00:39:32.385936 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 17 00:39:32.396308 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 17 00:39:32.406836 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 17 00:39:32.424448 kernel: ACPI: bus type drm_connector registered Jan 17 00:39:32.429211 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 17 00:39:32.429633 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 17 00:39:32.452793 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 17 00:39:32.472068 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 17 00:39:32.504685 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 17 00:39:32.523601 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 17 00:39:32.523672 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 17 00:39:32.537187 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 17 00:39:32.552756 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 17 00:39:32.563818 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 17 00:39:32.573228 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 00:39:32.583721 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 17 00:39:32.611761 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 17 00:39:32.621871 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 00:39:32.627586 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 17 00:39:32.640598 systemd-journald[1141]: Time spent on flushing to /var/log/journal/2eeb49dac785401a8256df9307900494 is 121.279ms for 962 entries. Jan 17 00:39:32.640598 systemd-journald[1141]: System Journal (/var/log/journal/2eeb49dac785401a8256df9307900494) is 8.0M, max 195.6M, 187.6M free. Jan 17 00:39:32.962545 systemd-journald[1141]: Received client request to flush runtime journal. Jan 17 00:39:32.962614 kernel: loop0: detected capacity change from 0 to 142488 Jan 17 00:39:32.649939 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 00:39:32.656556 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 17 00:39:32.694009 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 17 00:39:32.755071 systemd[1]: Starting systemd-sysusers.service - Create System Users... 
Jan 17 00:39:32.805926 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 00:39:32.845917 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 17 00:39:32.885941 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 17 00:39:32.899200 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 17 00:39:32.927680 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 17 00:39:32.971068 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 17 00:39:32.988251 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 17 00:39:33.020433 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 17 00:39:33.038465 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 17 00:39:33.063447 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 17 00:39:33.071217 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 17 00:39:33.082557 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 17 00:39:33.164705 kernel: hrtimer: interrupt took 15505124 ns Jan 17 00:39:33.255455 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 17 00:39:33.264685 kernel: loop1: detected capacity change from 0 to 229808 Jan 17 00:39:33.267412 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 17 00:39:33.278735 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 17 00:39:33.289816 udevadm[1186]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 17 00:39:33.369125 kernel: loop2: detected capacity change from 0 to 140768 Jan 17 00:39:33.373337 systemd-tmpfiles[1189]: ACLs are not supported, ignoring. Jan 17 00:39:33.373419 systemd-tmpfiles[1189]: ACLs are not supported, ignoring. Jan 17 00:39:33.425583 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 00:39:33.591461 kernel: loop3: detected capacity change from 0 to 142488 Jan 17 00:39:33.718343 kernel: loop4: detected capacity change from 0 to 229808 Jan 17 00:39:33.839821 kernel: loop5: detected capacity change from 0 to 140768 Jan 17 00:39:33.907350 (sd-merge)[1195]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jan 17 00:39:33.910565 (sd-merge)[1195]: Merged extensions into '/usr'. Jan 17 00:39:33.946228 systemd[1]: Reloading requested from client PID 1171 ('systemd-sysext') (unit systemd-sysext.service)... Jan 17 00:39:33.946255 systemd[1]: Reloading... Jan 17 00:39:34.130424 zram_generator::config[1221]: No configuration found. Jan 17 00:39:34.408055 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 00:39:34.489528 systemd[1]: Reloading finished in 542 ms. Jan 17 00:39:34.549564 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 17 00:39:34.560181 ldconfig[1166]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 17 00:39:34.576692 systemd[1]: Starting ensure-sysext.service... 
Jan 17 00:39:34.599076 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 17 00:39:34.604763 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 17 00:39:34.609033 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 17 00:39:34.640659 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 00:39:34.648005 systemd[1]: Reloading requested from client PID 1257 ('systemctl') (unit ensure-sysext.service)... Jan 17 00:39:34.648028 systemd[1]: Reloading... Jan 17 00:39:34.672933 systemd-tmpfiles[1258]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 17 00:39:34.674789 systemd-tmpfiles[1258]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 17 00:39:34.676104 systemd-tmpfiles[1258]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 17 00:39:34.676584 systemd-tmpfiles[1258]: ACLs are not supported, ignoring. Jan 17 00:39:34.676735 systemd-tmpfiles[1258]: ACLs are not supported, ignoring. Jan 17 00:39:34.704218 systemd-tmpfiles[1258]: Detected autofs mount point /boot during canonicalization of boot. Jan 17 00:39:34.704238 systemd-tmpfiles[1258]: Skipping /boot Jan 17 00:39:34.730109 systemd-tmpfiles[1258]: Detected autofs mount point /boot during canonicalization of boot. Jan 17 00:39:34.730153 systemd-tmpfiles[1258]: Skipping /boot Jan 17 00:39:34.773411 systemd-udevd[1262]: Using default interface naming scheme 'v255'. Jan 17 00:39:34.788564 zram_generator::config[1283]: No configuration found. Jan 17 00:39:35.094426 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1315) Jan 17 00:39:35.113232 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 00:39:35.267608 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jan 17 00:39:35.274673 kernel: ACPI: button: Power Button [PWRF] Jan 17 00:39:35.317947 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 17 00:39:35.318601 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 17 00:39:35.330559 systemd[1]: Reloading finished in 681 ms. Jan 17 00:39:35.370014 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 00:39:35.398456 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 00:39:35.485531 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Jan 17 00:39:35.492412 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 17 00:39:35.514467 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jan 17 00:39:35.515394 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 17 00:39:35.564266 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jan 17 00:39:35.573393 systemd[1]: Finished ensure-sysext.service. Jan 17 00:39:35.633262 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Jan 17 00:39:35.663233 kernel: mousedev: PS/2 mouse device common for all mice Jan 17 00:39:35.658682 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 17 00:39:35.701738 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 17 00:39:35.707759 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 00:39:35.710880 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 00:39:35.719306 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 17 00:39:35.739311 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 00:39:35.748624 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 00:39:35.757066 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 00:39:35.767782 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 17 00:39:35.774086 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 17 00:39:35.783213 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 17 00:39:35.801731 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 17 00:39:35.827698 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 17 00:39:35.846169 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 17 00:39:35.853590 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 00:39:35.864103 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:39:35.873768 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 00:39:35.877586 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 00:39:35.889054 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 17 00:39:35.889652 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 17 00:39:35.894593 augenrules[1385]: No rules Jan 17 00:39:35.897164 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 17 00:39:35.905560 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 00:39:35.905999 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 00:39:35.919143 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 00:39:35.919534 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 00:39:35.926722 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 17 00:39:35.965016 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 17 00:39:35.967273 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 17 00:39:35.985995 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 00:39:35.986112 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Jan 17 00:39:35.997087 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 17 00:39:36.008040 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 17 00:39:36.017172 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 17 00:39:36.022154 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 17 00:39:36.034892 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 17 00:39:36.162138 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:39:36.244647 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 17 00:39:36.560177 systemd-networkd[1372]: lo: Link UP Jan 17 00:39:36.577938 systemd-networkd[1372]: lo: Gained carrier Jan 17 00:39:36.586659 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 17 00:39:36.595836 systemd-networkd[1372]: Enumeration completed Jan 17 00:39:36.598824 systemd-networkd[1372]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 00:39:36.598830 systemd-networkd[1372]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 17 00:39:36.603337 systemd-networkd[1372]: eth0: Link UP Jan 17 00:39:36.603345 systemd-networkd[1372]: eth0: Gained carrier Jan 17 00:39:36.603572 systemd-networkd[1372]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 00:39:36.606784 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 17 00:39:36.623157 kernel: kvm_amd: TSC scaling supported Jan 17 00:39:36.630340 kernel: kvm_amd: Nested Virtualization enabled Jan 17 00:39:36.630450 kernel: kvm_amd: Nested Paging enabled Jan 17 00:39:36.630477 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jan 17 00:39:36.630569 kernel: kvm_amd: PMU virtualization is disabled Jan 17 00:39:36.629830 systemd[1]: Reached target time-set.target - System Time Set. Jan 17 00:39:36.698535 systemd-networkd[1372]: eth0: DHCPv4 address 10.0.0.113/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 17 00:39:36.699324 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 17 00:39:36.701625 systemd-timesyncd[1380]: Network configuration changed, trying to establish connection. Jan 17 00:39:36.706553 systemd-timesyncd[1380]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 17 00:39:36.706642 systemd-timesyncd[1380]: Initial clock synchronization to Sat 2026-01-17 00:39:36.930906 UTC. Jan 17 00:39:36.738712 systemd-resolved[1373]: Positive Trust Anchors: Jan 17 00:39:36.738729 systemd-resolved[1373]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 17 00:39:36.738774 systemd-resolved[1373]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 17 00:39:36.745274 systemd-resolved[1373]: Defaulting to hostname 'linux'. Jan 17 00:39:36.748868 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 17 00:39:36.753837 systemd[1]: Reached target network.target - Network. Jan 17 00:39:36.757329 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 17 00:39:36.873084 kernel: EDAC MC: Ver: 3.0.0 Jan 17 00:39:36.950135 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 17 00:39:36.969031 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 17 00:39:37.013545 lvm[1419]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 17 00:39:37.061234 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 17 00:39:37.076234 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 17 00:39:37.097900 systemd[1]: Reached target sysinit.target - System Initialization. Jan 17 00:39:37.109328 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 17 00:39:37.120865 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 17 00:39:37.141082 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 17 00:39:37.148283 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 17 00:39:37.162000 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 17 00:39:37.168113 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 17 00:39:37.171098 systemd[1]: Reached target paths.target - Path Units. Jan 17 00:39:37.179771 systemd[1]: Reached target timers.target - Timer Units. Jan 17 00:39:37.187070 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 17 00:39:37.196351 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 17 00:39:37.213111 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 17 00:39:37.222163 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 17 00:39:37.230728 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 17 00:39:37.249218 systemd[1]: Reached target sockets.target - Socket Units. Jan 17 00:39:37.260792 systemd[1]: Reached target basic.target - Basic System. Jan 17 00:39:37.267606 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 17 00:39:37.267678 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. 
Jan 17 00:39:37.286598 systemd[1]: Starting containerd.service - containerd container runtime... Jan 17 00:39:37.321336 lvm[1423]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 17 00:39:37.322089 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 17 00:39:37.340229 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 17 00:39:37.376498 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 17 00:39:37.383714 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 17 00:39:37.392460 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 17 00:39:37.407472 jq[1426]: false Jan 17 00:39:37.403785 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 17 00:39:37.418704 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 17 00:39:37.432507 extend-filesystems[1427]: Found loop3 Jan 17 00:39:37.432507 extend-filesystems[1427]: Found loop4 Jan 17 00:39:37.432507 extend-filesystems[1427]: Found loop5 Jan 17 00:39:37.432507 extend-filesystems[1427]: Found sr0 Jan 17 00:39:37.432507 extend-filesystems[1427]: Found vda Jan 17 00:39:37.432507 extend-filesystems[1427]: Found vda1 Jan 17 00:39:37.432507 extend-filesystems[1427]: Found vda2 Jan 17 00:39:37.432507 extend-filesystems[1427]: Found vda3 Jan 17 00:39:37.432507 extend-filesystems[1427]: Found usr Jan 17 00:39:37.432507 extend-filesystems[1427]: Found vda4 Jan 17 00:39:37.432507 extend-filesystems[1427]: Found vda6 Jan 17 00:39:37.432507 extend-filesystems[1427]: Found vda7 Jan 17 00:39:37.432507 extend-filesystems[1427]: Found vda9 Jan 17 00:39:37.432507 extend-filesystems[1427]: Checking size of /dev/vda9 Jan 17 00:39:37.714516 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 17 00:39:37.714629 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1307) Jan 17 00:39:37.714655 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 17 00:39:37.436263 dbus-daemon[1425]: [system] SELinux support is enabled Jan 17 00:39:37.468102 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 17 00:39:37.715536 extend-filesystems[1427]: Resized partition /dev/vda9 Jan 17 00:39:37.481800 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 17 00:39:37.722279 extend-filesystems[1442]: resize2fs 1.47.1 (20-May-2024) Jan 17 00:39:37.487018 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 17 00:39:37.743753 extend-filesystems[1442]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 17 00:39:37.743753 extend-filesystems[1442]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 17 00:39:37.743753 extend-filesystems[1442]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jan 17 00:39:37.510112 systemd[1]: Starting update-engine.service - Update Engine... Jan 17 00:39:37.799152 extend-filesystems[1427]: Resized filesystem in /dev/vda9 Jan 17 00:39:37.536228 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... 
Jan 17 00:39:37.815724 update_engine[1443]: I20260117 00:39:37.657136 1443 main.cc:92] Flatcar Update Engine starting Jan 17 00:39:37.815724 update_engine[1443]: I20260117 00:39:37.661509 1443 update_check_scheduler.cc:74] Next update check in 3m19s Jan 17 00:39:37.547927 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 17 00:39:37.816663 jq[1446]: true Jan 17 00:39:37.563064 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 17 00:39:37.591159 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 17 00:39:37.817441 jq[1453]: true Jan 17 00:39:37.591595 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 17 00:39:37.593308 systemd[1]: motdgen.service: Deactivated successfully. Jan 17 00:39:37.594211 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 17 00:39:37.611334 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 17 00:39:37.611623 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 17 00:39:37.695043 (ntainerd)[1450]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 17 00:39:37.736196 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 17 00:39:37.738889 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 17 00:39:37.754822 systemd[1]: Started update-engine.service - Update Engine. Jan 17 00:39:37.794571 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 17 00:39:37.803903 systemd-logind[1436]: Watching system buttons on /dev/input/event1 (Power Button) Jan 17 00:39:37.803935 systemd-logind[1436]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 17 00:39:37.811131 systemd-logind[1436]: New seat seat0. Jan 17 00:39:37.812821 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 17 00:39:37.812982 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 17 00:39:37.822911 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 17 00:39:37.823273 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 17 00:39:37.839173 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 17 00:39:37.853428 sshd_keygen[1448]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 17 00:39:37.858087 systemd[1]: Started systemd-logind.service - User Login Management. Jan 17 00:39:37.891533 systemd-networkd[1372]: eth0: Gained IPv6LL Jan 17 00:39:37.895506 bash[1474]: Updated "/home/core/.ssh/authorized_keys" Jan 17 00:39:37.898077 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 17 00:39:37.908196 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 17 00:39:37.916067 systemd[1]: Reached target network-online.target - Network is Online. Jan 17 00:39:37.961548 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... 
Jan 17 00:39:37.988438 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:39:38.006678 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 17 00:39:38.012214 locksmithd[1475]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 17 00:39:38.013945 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 17 00:39:38.016542 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 17 00:39:38.073022 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 17 00:39:38.082996 systemd[1]: Started sshd@0-10.0.0.113:22-10.0.0.1:38810.service - OpenSSH per-connection server daemon (10.0.0.1:38810). Jan 17 00:39:38.103154 systemd[1]: issuegen.service: Deactivated successfully. Jan 17 00:39:38.104682 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 17 00:39:38.125901 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 17 00:39:38.135802 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 17 00:39:38.137495 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 17 00:39:38.147247 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 17 00:39:38.153325 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 17 00:39:38.219782 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 17 00:39:38.243235 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 17 00:39:38.259679 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 17 00:39:38.264829 systemd[1]: Reached target getty.target - Login Prompts. Jan 17 00:39:38.362135 sshd[1503]: Accepted publickey for core from 10.0.0.1 port 38810 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:39:38.371990 sshd[1503]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:39:38.395289 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 17 00:39:38.408913 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 17 00:39:38.416324 systemd-logind[1436]: New session 1 of user core. Jan 17 00:39:38.441865 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 17 00:39:38.459245 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 17 00:39:38.485898 (systemd)[1528]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 17 00:39:38.528594 containerd[1450]: time="2026-01-17T00:39:38.527891264Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 17 00:39:38.639938 containerd[1450]: time="2026-01-17T00:39:38.639639820Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 17 00:39:38.648076 containerd[1450]: time="2026-01-17T00:39:38.646626894Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.119-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:39:38.648076 containerd[1450]: time="2026-01-17T00:39:38.646675754Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." 
type=io.containerd.event.v1 Jan 17 00:39:38.648076 containerd[1450]: time="2026-01-17T00:39:38.646698592Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 17 00:39:38.648076 containerd[1450]: time="2026-01-17T00:39:38.647037216Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 17 00:39:38.648076 containerd[1450]: time="2026-01-17T00:39:38.647058064Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 17 00:39:38.648076 containerd[1450]: time="2026-01-17T00:39:38.647138528Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:39:38.648076 containerd[1450]: time="2026-01-17T00:39:38.647156287Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 17 00:39:38.648076 containerd[1450]: time="2026-01-17T00:39:38.647462915Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:39:38.648076 containerd[1450]: time="2026-01-17T00:39:38.647484542Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 17 00:39:38.648076 containerd[1450]: time="2026-01-17T00:39:38.647503922Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:39:38.648076 containerd[1450]: time="2026-01-17T00:39:38.647517461Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 17 00:39:38.648533 containerd[1450]: time="2026-01-17T00:39:38.647653346Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 17 00:39:38.649828 containerd[1450]: time="2026-01-17T00:39:38.648680619Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 17 00:39:38.650151 containerd[1450]: time="2026-01-17T00:39:38.650123304Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:39:38.650225 containerd[1450]: time="2026-01-17T00:39:38.650209292Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 17 00:39:38.650504 containerd[1450]: time="2026-01-17T00:39:38.650480466Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 17 00:39:38.650662 containerd[1450]: time="2026-01-17T00:39:38.650636275Z" level=info msg="metadata content store policy set" policy=shared Jan 17 00:39:38.665044 containerd[1450]: time="2026-01-17T00:39:38.664942951Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 17 00:39:38.665161 containerd[1450]: time="2026-01-17T00:39:38.665058974Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." 
type=io.containerd.differ.v1 Jan 17 00:39:38.665161 containerd[1450]: time="2026-01-17T00:39:38.665096667Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 17 00:39:38.665161 containerd[1450]: time="2026-01-17T00:39:38.665120172Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 17 00:39:38.665161 containerd[1450]: time="2026-01-17T00:39:38.665140927Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 17 00:39:38.665358 containerd[1450]: time="2026-01-17T00:39:38.665329840Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 17 00:39:38.666112 containerd[1450]: time="2026-01-17T00:39:38.666039566Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 17 00:39:38.666462 containerd[1450]: time="2026-01-17T00:39:38.666357988Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 17 00:39:38.666462 containerd[1450]: time="2026-01-17T00:39:38.666446542Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 17 00:39:38.666526 containerd[1450]: time="2026-01-17T00:39:38.666467707Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 17 00:39:38.666526 containerd[1450]: time="2026-01-17T00:39:38.666488452Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 17 00:39:38.666526 containerd[1450]: time="2026-01-17T00:39:38.666511445Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 17 00:39:38.666608 containerd[1450]: time="2026-01-17T00:39:38.666532385Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 17 00:39:38.666608 containerd[1450]: time="2026-01-17T00:39:38.666557872Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 17 00:39:38.666608 containerd[1450]: time="2026-01-17T00:39:38.666589611Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 17 00:39:38.666678 containerd[1450]: time="2026-01-17T00:39:38.666608344Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 17 00:39:38.666678 containerd[1450]: time="2026-01-17T00:39:38.666627786Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 17 00:39:38.666725 containerd[1450]: time="2026-01-17T00:39:38.666648439Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 17 00:39:38.666725 containerd[1450]: time="2026-01-17T00:39:38.666706742Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 17 00:39:38.666822 containerd[1450]: time="2026-01-17T00:39:38.666726091Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 17 00:39:38.666822 containerd[1450]: time="2026-01-17T00:39:38.666742586Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." 
type=io.containerd.grpc.v1 Jan 17 00:39:38.666822 containerd[1450]: time="2026-01-17T00:39:38.666791775Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 17 00:39:38.666822 containerd[1450]: time="2026-01-17T00:39:38.666816636Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 17 00:39:38.666930 containerd[1450]: time="2026-01-17T00:39:38.666840902Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 17 00:39:38.666930 containerd[1450]: time="2026-01-17T00:39:38.666859769Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 17 00:39:38.666930 containerd[1450]: time="2026-01-17T00:39:38.666879928Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 17 00:39:38.666930 containerd[1450]: time="2026-01-17T00:39:38.666899123Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 17 00:39:38.667021 containerd[1450]: time="2026-01-17T00:39:38.666925771Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 17 00:39:38.667021 containerd[1450]: time="2026-01-17T00:39:38.666948086Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 17 00:39:38.667021 containerd[1450]: time="2026-01-17T00:39:38.666970166Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 17 00:39:38.667021 containerd[1450]: time="2026-01-17T00:39:38.666989648Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 17 00:39:38.667021 containerd[1450]: time="2026-01-17T00:39:38.667009623Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 17 00:39:38.667144 containerd[1450]: time="2026-01-17T00:39:38.667037995Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 17 00:39:38.667144 containerd[1450]: time="2026-01-17T00:39:38.667071355Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 17 00:39:38.667192 containerd[1450]: time="2026-01-17T00:39:38.667137522Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 17 00:39:38.667280 containerd[1450]: time="2026-01-17T00:39:38.667217452Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 17 00:39:38.667280 containerd[1450]: time="2026-01-17T00:39:38.667248688Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 17 00:39:38.667280 containerd[1450]: time="2026-01-17T00:39:38.667266394Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 17 00:39:38.667353 containerd[1450]: time="2026-01-17T00:39:38.667283117Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 17 00:39:38.667353 containerd[1450]: time="2026-01-17T00:39:38.667296964Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." 
type=io.containerd.grpc.v1 Jan 17 00:39:38.667353 containerd[1450]: time="2026-01-17T00:39:38.667329779Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 17 00:39:38.667533 containerd[1450]: time="2026-01-17T00:39:38.667350760Z" level=info msg="NRI interface is disabled by configuration." Jan 17 00:39:38.667533 containerd[1450]: time="2026-01-17T00:39:38.667369628Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 17 00:39:38.668308 containerd[1450]: time="2026-01-17T00:39:38.668133559Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 17 00:39:38.668308 containerd[1450]: time="2026-01-17T00:39:38.668248544Z" level=info msg="Connect containerd service" Jan 17 00:39:38.668308 containerd[1450]: time="2026-01-17T00:39:38.668340783Z" level=info msg="using legacy CRI server" Jan 17 00:39:38.668308 containerd[1450]: time="2026-01-17T00:39:38.668358232Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 17 00:39:38.669088 containerd[1450]: 
time="2026-01-17T00:39:38.668530208Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 17 00:39:38.670468 containerd[1450]: time="2026-01-17T00:39:38.670099346Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 17 00:39:38.670468 containerd[1450]: time="2026-01-17T00:39:38.670309831Z" level=info msg="Start subscribing containerd event" Jan 17 00:39:38.670468 containerd[1450]: time="2026-01-17T00:39:38.670462396Z" level=info msg="Start recovering state" Jan 17 00:39:38.670581 containerd[1450]: time="2026-01-17T00:39:38.670549225Z" level=info msg="Start event monitor" Jan 17 00:39:38.670581 containerd[1450]: time="2026-01-17T00:39:38.670576991Z" level=info msg="Start snapshots syncer" Jan 17 00:39:38.670657 containerd[1450]: time="2026-01-17T00:39:38.670589699Z" level=info msg="Start cni network conf syncer for default" Jan 17 00:39:38.670842 containerd[1450]: time="2026-01-17T00:39:38.670789522Z" level=info msg="Start streaming server" Jan 17 00:39:38.671121 containerd[1450]: time="2026-01-17T00:39:38.671095732Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 17 00:39:38.671312 containerd[1450]: time="2026-01-17T00:39:38.671289468Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 17 00:39:38.671972 systemd[1]: Started containerd.service - containerd container runtime. Jan 17 00:39:38.672888 containerd[1450]: time="2026-01-17T00:39:38.672848095Z" level=info msg="containerd successfully booted in 0.147245s" Jan 17 00:39:38.798886 systemd[1528]: Queued start job for default target default.target. Jan 17 00:39:38.814059 systemd[1528]: Created slice app.slice - User Application Slice. Jan 17 00:39:38.814128 systemd[1528]: Reached target paths.target - Paths. Jan 17 00:39:38.814150 systemd[1528]: Reached target timers.target - Timers. Jan 17 00:39:38.818113 systemd[1528]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 17 00:39:38.848256 systemd[1528]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 17 00:39:38.848535 systemd[1528]: Reached target sockets.target - Sockets. Jan 17 00:39:38.848565 systemd[1528]: Reached target basic.target - Basic System. Jan 17 00:39:38.848795 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 17 00:39:38.848970 systemd[1528]: Reached target default.target - Main User Target. Jan 17 00:39:38.849026 systemd[1528]: Startup finished in 326ms. Jan 17 00:39:38.867764 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 17 00:39:39.017870 systemd[1]: Started sshd@1-10.0.0.113:22-10.0.0.1:38930.service - OpenSSH per-connection server daemon (10.0.0.1:38930). Jan 17 00:39:39.223827 sshd[1543]: Accepted publickey for core from 10.0.0.1 port 38930 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:39:39.229011 sshd[1543]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:39:39.244875 systemd-logind[1436]: New session 2 of user core. Jan 17 00:39:39.255522 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 17 00:39:39.356412 sshd[1543]: pam_unix(sshd:session): session closed for user core Jan 17 00:39:39.386755 systemd[1]: sshd@1-10.0.0.113:22-10.0.0.1:38930.service: Deactivated successfully. 
Jan 17 00:39:39.399721 systemd[1]: session-2.scope: Deactivated successfully. Jan 17 00:39:39.406209 systemd-logind[1436]: Session 2 logged out. Waiting for processes to exit. Jan 17 00:39:39.408883 systemd[1]: Started sshd@2-10.0.0.113:22-10.0.0.1:38934.service - OpenSSH per-connection server daemon (10.0.0.1:38934). Jan 17 00:39:39.416128 systemd-logind[1436]: Removed session 2. Jan 17 00:39:39.499335 sshd[1550]: Accepted publickey for core from 10.0.0.1 port 38934 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:39:39.507278 sshd[1550]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:39:39.529032 systemd-logind[1436]: New session 3 of user core. Jan 17 00:39:39.558229 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 17 00:39:39.672609 sshd[1550]: pam_unix(sshd:session): session closed for user core Jan 17 00:39:39.683657 systemd[1]: sshd@2-10.0.0.113:22-10.0.0.1:38934.service: Deactivated successfully. Jan 17 00:39:39.686820 systemd[1]: session-3.scope: Deactivated successfully. Jan 17 00:39:39.692718 systemd-logind[1436]: Session 3 logged out. Waiting for processes to exit. Jan 17 00:39:39.701208 systemd-logind[1436]: Removed session 3. Jan 17 00:39:41.694766 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:39:41.707082 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 17 00:39:41.715308 (kubelet)[1561]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:39:41.725732 systemd[1]: Startup finished in 3.266s (kernel) + 11.342s (initrd) + 12.470s (userspace) = 27.080s. Jan 17 00:39:44.408125 kubelet[1561]: E0117 00:39:44.406888 1561 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:39:44.423313 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:39:44.423676 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 00:39:44.426961 systemd[1]: kubelet.service: Consumed 3.699s CPU time. Jan 17 00:39:49.856524 systemd[1]: Started sshd@3-10.0.0.113:22-10.0.0.1:46408.service - OpenSSH per-connection server daemon (10.0.0.1:46408). Jan 17 00:39:49.923703 sshd[1574]: Accepted publickey for core from 10.0.0.1 port 46408 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:39:49.927840 sshd[1574]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:39:49.957260 systemd-logind[1436]: New session 4 of user core. Jan 17 00:39:49.965450 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 17 00:39:50.158671 sshd[1574]: pam_unix(sshd:session): session closed for user core Jan 17 00:39:50.172842 systemd[1]: sshd@3-10.0.0.113:22-10.0.0.1:46408.service: Deactivated successfully. Jan 17 00:39:50.176315 systemd[1]: session-4.scope: Deactivated successfully. Jan 17 00:39:50.189493 systemd-logind[1436]: Session 4 logged out. Waiting for processes to exit. Jan 17 00:39:50.208939 systemd[1]: Started sshd@4-10.0.0.113:22-10.0.0.1:46424.service - OpenSSH per-connection server daemon (10.0.0.1:46424). Jan 17 00:39:50.218786 systemd-logind[1436]: Removed session 4. 
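The kubelet above exits because /var/lib/kubelet/config.yaml does not exist; on a kubeadm-managed node that file is written by kubeadm init or kubeadm join. For illustration, a sketch that writes a minimal KubeletConfiguration of the usual shape; the field values are generic assumptions, not the configuration this node later received:

# Illustrative sketch: write a minimal KubeletConfiguration to the path the kubelet
# above cannot find.  The values are generic assumptions; kubeadm normally generates
# this file, so this is only a picture of its shape.
import os

KUBELET_CONFIG = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
staticPodPath: /etc/kubernetes/manifests
clusterDomain: cluster.local
authentication:
  anonymous:
    enabled: false
  webhook:
    enabled: true
authorization:
  mode: Webhook
"""

path = "/var/lib/kubelet/config.yaml"
os.makedirs(os.path.dirname(path), exist_ok=True)
with open(path, "w") as f:
    f.write(KUBELET_CONFIG)

The deprecation warnings further down (for --container-runtime-endpoint, --pod-infra-container-image and --volume-plugin-dir) point at the same file: those flags are meant to migrate into this per-node config rather than the unit's command line.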
Jan 17 00:39:50.306924 sshd[1581]: Accepted publickey for core from 10.0.0.1 port 46424 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:39:50.312720 sshd[1581]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:39:50.329249 systemd-logind[1436]: New session 5 of user core. Jan 17 00:39:50.343848 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 17 00:39:50.442758 sshd[1581]: pam_unix(sshd:session): session closed for user core Jan 17 00:39:50.474663 systemd[1]: sshd@4-10.0.0.113:22-10.0.0.1:46424.service: Deactivated successfully. Jan 17 00:39:50.493200 systemd[1]: session-5.scope: Deactivated successfully. Jan 17 00:39:50.495310 systemd-logind[1436]: Session 5 logged out. Waiting for processes to exit. Jan 17 00:39:50.517495 systemd[1]: Started sshd@5-10.0.0.113:22-10.0.0.1:46432.service - OpenSSH per-connection server daemon (10.0.0.1:46432). Jan 17 00:39:50.520263 systemd-logind[1436]: Removed session 5. Jan 17 00:39:50.623529 sshd[1588]: Accepted publickey for core from 10.0.0.1 port 46432 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:39:50.625196 sshd[1588]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:39:50.642615 systemd-logind[1436]: New session 6 of user core. Jan 17 00:39:50.659803 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 17 00:39:50.788662 sshd[1588]: pam_unix(sshd:session): session closed for user core Jan 17 00:39:50.828150 systemd[1]: sshd@5-10.0.0.113:22-10.0.0.1:46432.service: Deactivated successfully. Jan 17 00:39:50.832120 systemd[1]: session-6.scope: Deactivated successfully. Jan 17 00:39:50.837985 systemd-logind[1436]: Session 6 logged out. Waiting for processes to exit. Jan 17 00:39:50.858081 systemd[1]: Started sshd@6-10.0.0.113:22-10.0.0.1:46438.service - OpenSSH per-connection server daemon (10.0.0.1:46438). Jan 17 00:39:50.865194 systemd-logind[1436]: Removed session 6. Jan 17 00:39:50.939973 sshd[1595]: Accepted publickey for core from 10.0.0.1 port 46438 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:39:50.946461 sshd[1595]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:39:50.961256 systemd-logind[1436]: New session 7 of user core. Jan 17 00:39:50.974832 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 17 00:39:51.108687 sudo[1598]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 17 00:39:51.109331 sudo[1598]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 00:39:51.153576 sudo[1598]: pam_unix(sudo:session): session closed for user root Jan 17 00:39:51.163901 sshd[1595]: pam_unix(sshd:session): session closed for user core Jan 17 00:39:51.192697 systemd[1]: sshd@6-10.0.0.113:22-10.0.0.1:46438.service: Deactivated successfully. Jan 17 00:39:51.200956 systemd[1]: session-7.scope: Deactivated successfully. Jan 17 00:39:51.209265 systemd-logind[1436]: Session 7 logged out. Waiting for processes to exit. Jan 17 00:39:51.236811 systemd[1]: Started sshd@7-10.0.0.113:22-10.0.0.1:46442.service - OpenSSH per-connection server daemon (10.0.0.1:46442). Jan 17 00:39:51.243668 systemd-logind[1436]: Removed session 7. 
Jan 17 00:39:51.319809 sshd[1603]: Accepted publickey for core from 10.0.0.1 port 46442 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:39:51.326134 sshd[1603]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:39:51.342524 systemd-logind[1436]: New session 8 of user core. Jan 17 00:39:51.357804 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 17 00:39:51.437993 sudo[1607]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 17 00:39:51.440319 sudo[1607]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 00:39:51.452174 sudo[1607]: pam_unix(sudo:session): session closed for user root Jan 17 00:39:51.463881 sudo[1606]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 17 00:39:51.466224 sudo[1606]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 00:39:51.518749 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 17 00:39:51.533154 auditctl[1610]: No rules Jan 17 00:39:51.536043 systemd[1]: audit-rules.service: Deactivated successfully. Jan 17 00:39:51.536492 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 17 00:39:51.540431 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 17 00:39:51.635091 augenrules[1628]: No rules Jan 17 00:39:51.637679 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 17 00:39:51.639852 sudo[1606]: pam_unix(sudo:session): session closed for user root Jan 17 00:39:51.648235 sshd[1603]: pam_unix(sshd:session): session closed for user core Jan 17 00:39:51.663880 systemd[1]: sshd@7-10.0.0.113:22-10.0.0.1:46442.service: Deactivated successfully. Jan 17 00:39:51.671525 systemd[1]: session-8.scope: Deactivated successfully. Jan 17 00:39:51.674684 systemd-logind[1436]: Session 8 logged out. Waiting for processes to exit. Jan 17 00:39:51.685225 systemd[1]: Started sshd@8-10.0.0.113:22-10.0.0.1:46454.service - OpenSSH per-connection server daemon (10.0.0.1:46454). Jan 17 00:39:51.696660 systemd-logind[1436]: Removed session 8. Jan 17 00:39:51.753698 sshd[1636]: Accepted publickey for core from 10.0.0.1 port 46454 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:39:51.755898 sshd[1636]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:39:51.777819 systemd-logind[1436]: New session 9 of user core. Jan 17 00:39:51.788059 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 17 00:39:51.895070 sudo[1639]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 17 00:39:51.895850 sudo[1639]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 00:39:51.968825 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 17 00:39:52.029496 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 17 00:39:52.029817 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 17 00:39:54.441600 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 17 00:39:54.466178 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:39:54.973558 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 17 00:39:54.974221 systemd[1]: kubelet.service: Failed with result 'signal'. 
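The audit-rules restart above ends with both auditctl and augenrules reporting "No rules", since the two files under /etc/audit/rules.d were removed just before. A sketch of that flush-and-reload cycle with the standard auditd tools, assuming root; it is an approximation of what the unit does, not its actual ExecStart:

# Illustrative sketch of the flush-and-reload cycle seen above: delete the loaded audit
# rules, rebuild them from /etc/audit/rules.d, then list the result (empty here).
import subprocess

def reload_audit_rules() -> None:
    subprocess.run(["auditctl", "-D"], check=True)        # delete all currently loaded rules
    subprocess.run(["augenrules", "--load"], check=True)  # merge /etc/audit/rules.d and load
    subprocess.run(["auditctl", "-l"], check=True)        # list what ended up loaded

if __name__ == "__main__":
    reload_audit_rules()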
Jan 17 00:39:54.975724 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:39:55.043952 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:39:55.174223 systemd[1]: Reloading requested from client PID 1688 ('systemctl') (unit session-9.scope)... Jan 17 00:39:55.174755 systemd[1]: Reloading... Jan 17 00:39:55.376638 zram_generator::config[1735]: No configuration found. Jan 17 00:39:55.723935 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 00:39:55.870306 systemd[1]: Reloading finished in 693 ms. Jan 17 00:39:56.088235 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 17 00:39:56.091650 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 17 00:39:56.092205 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:39:56.106844 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:39:56.550984 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:39:56.612721 (kubelet)[1775]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 00:39:57.099219 kubelet[1775]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 00:39:57.099219 kubelet[1775]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 17 00:39:57.099219 kubelet[1775]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 00:39:57.099219 kubelet[1775]: I0117 00:39:57.097713 1775 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 00:39:58.369547 kubelet[1775]: I0117 00:39:58.369344 1775 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 17 00:39:58.369547 kubelet[1775]: I0117 00:39:58.369478 1775 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 00:39:58.378522 kubelet[1775]: I0117 00:39:58.377205 1775 server.go:956] "Client rotation is on, will bootstrap in background" Jan 17 00:39:58.590186 kubelet[1775]: I0117 00:39:58.589851 1775 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 00:39:58.628087 kubelet[1775]: E0117 00:39:58.624780 1775 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 17 00:39:58.628087 kubelet[1775]: I0117 00:39:58.624880 1775 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 17 00:39:58.658319 kubelet[1775]: I0117 00:39:58.657114 1775 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 17 00:39:58.658319 kubelet[1775]: I0117 00:39:58.658067 1775 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 00:39:58.659541 kubelet[1775]: I0117 00:39:58.658978 1775 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.0.0.113","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 17 00:39:58.659541 kubelet[1775]: I0117 00:39:58.659244 1775 topology_manager.go:138] "Creating topology manager with none policy" Jan 17 00:39:58.659541 kubelet[1775]: I0117 00:39:58.659262 1775 container_manager_linux.go:303] "Creating device plugin manager" Jan 17 00:39:58.660771 kubelet[1775]: I0117 00:39:58.660187 1775 state_mem.go:36] "Initialized new in-memory state store" Jan 17 00:39:58.675067 kubelet[1775]: I0117 00:39:58.674928 1775 kubelet.go:480] "Attempting to sync node with API server" Jan 17 00:39:58.675067 kubelet[1775]: I0117 00:39:58.674985 1775 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 17 00:39:58.675067 kubelet[1775]: I0117 00:39:58.675029 1775 kubelet.go:386] "Adding apiserver pod source" Jan 17 00:39:58.675067 kubelet[1775]: I0117 00:39:58.675057 1775 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 00:39:58.684209 kubelet[1775]: E0117 00:39:58.683821 1775 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:39:58.684209 kubelet[1775]: E0117 00:39:58.683927 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:39:58.696336 kubelet[1775]: I0117 00:39:58.694961 1775 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 00:39:58.696336 kubelet[1775]: I0117 00:39:58.695646 1775 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 17 00:39:58.703863 kubelet[1775]: W0117 
00:39:58.702632 1775 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 17 00:39:58.830142 kubelet[1775]: E0117 00:39:58.829508 1775 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"10.0.0.113\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 17 00:39:58.830142 kubelet[1775]: E0117 00:39:58.829859 1775 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 17 00:39:58.835681 kubelet[1775]: I0117 00:39:58.835584 1775 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 17 00:39:58.835797 kubelet[1775]: I0117 00:39:58.835762 1775 server.go:1289] "Started kubelet" Jan 17 00:39:58.836260 kubelet[1775]: I0117 00:39:58.836171 1775 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 00:39:58.837750 kubelet[1775]: I0117 00:39:58.837643 1775 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 17 00:39:58.850671 kubelet[1775]: I0117 00:39:58.848876 1775 server.go:317] "Adding debug handlers to kubelet server" Jan 17 00:39:58.850671 kubelet[1775]: I0117 00:39:58.850325 1775 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 00:39:58.852556 kubelet[1775]: I0117 00:39:58.852161 1775 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 00:39:58.855261 kubelet[1775]: I0117 00:39:58.854790 1775 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 17 00:39:58.860224 kubelet[1775]: I0117 00:39:58.859965 1775 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 17 00:39:58.862013 kubelet[1775]: E0117 00:39:58.860481 1775 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.113\" not found" Jan 17 00:39:58.862013 kubelet[1775]: I0117 00:39:58.860924 1775 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 17 00:39:58.862013 kubelet[1775]: I0117 00:39:58.861055 1775 reconciler.go:26] "Reconciler: start to sync state" Jan 17 00:39:58.866429 kubelet[1775]: I0117 00:39:58.866406 1775 factory.go:223] Registration of the systemd container factory successfully Jan 17 00:39:58.867691 kubelet[1775]: I0117 00:39:58.867664 1775 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 00:39:58.880226 kubelet[1775]: E0117 00:39:58.878434 1775 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 17 00:39:58.897902 kubelet[1775]: I0117 00:39:58.880585 1775 factory.go:223] Registration of the containerd container factory successfully Jan 17 00:39:58.902174 kubelet[1775]: E0117 
00:39:58.880735 1775 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.113\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Jan 17 00:39:58.917398 kubelet[1775]: E0117 00:39:58.915809 1775 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 17 00:39:58.931706 kubelet[1775]: I0117 00:39:58.931562 1775 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 17 00:39:58.931706 kubelet[1775]: I0117 00:39:58.931649 1775 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 17 00:39:58.931706 kubelet[1775]: I0117 00:39:58.931666 1775 state_mem.go:36] "Initialized new in-memory state store" Jan 17 00:39:58.963269 kubelet[1775]: E0117 00:39:58.963195 1775 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.113\" not found" Jan 17 00:39:59.000458 kubelet[1775]: I0117 00:39:59.000395 1775 policy_none.go:49] "None policy: Start" Jan 17 00:39:59.000458 kubelet[1775]: I0117 00:39:59.000451 1775 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 17 00:39:59.000458 kubelet[1775]: I0117 00:39:59.000470 1775 state_mem.go:35] "Initializing new in-memory state store" Jan 17 00:39:59.019058 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 17 00:39:59.067860 kubelet[1775]: E0117 00:39:59.064804 1775 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.113\" not found" Jan 17 00:39:59.164947 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 17 00:39:59.166656 kubelet[1775]: E0117 00:39:59.166481 1775 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.113\" not found" Jan 17 00:39:59.258749 kubelet[1775]: E0117 00:39:59.258648 1775 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.113\" not found" node="10.0.0.113" Jan 17 00:39:59.267936 kubelet[1775]: E0117 00:39:59.267526 1775 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.113\" not found" Jan 17 00:39:59.278918 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 17 00:39:59.301329 kubelet[1775]: E0117 00:39:59.300992 1775 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 17 00:39:59.301329 kubelet[1775]: I0117 00:39:59.301314 1775 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 17 00:39:59.301831 kubelet[1775]: I0117 00:39:59.301344 1775 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 17 00:39:59.303054 kubelet[1775]: I0117 00:39:59.302592 1775 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 00:39:59.307855 kubelet[1775]: E0117 00:39:59.307778 1775 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 17 00:39:59.307855 kubelet[1775]: E0117 00:39:59.307843 1775 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.113\" not found" Jan 17 00:39:59.363848 kubelet[1775]: I0117 00:39:59.363262 1775 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 17 00:39:59.368275 kubelet[1775]: I0117 00:39:59.367706 1775 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 17 00:39:59.368275 kubelet[1775]: I0117 00:39:59.367765 1775 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 17 00:39:59.368275 kubelet[1775]: I0117 00:39:59.367818 1775 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 17 00:39:59.368275 kubelet[1775]: I0117 00:39:59.367828 1775 kubelet.go:2436] "Starting kubelet main sync loop" Jan 17 00:39:59.368275 kubelet[1775]: E0117 00:39:59.367963 1775 kubelet.go:2460] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Jan 17 00:39:59.394154 kubelet[1775]: I0117 00:39:59.393704 1775 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 17 00:39:59.406428 kubelet[1775]: I0117 00:39:59.404980 1775 kubelet_node_status.go:75] "Attempting to register node" node="10.0.0.113" Jan 17 00:39:59.524413 kubelet[1775]: I0117 00:39:59.523793 1775 kubelet_node_status.go:78] "Successfully registered node" node="10.0.0.113" Jan 17 00:39:59.524413 kubelet[1775]: E0117 00:39:59.523852 1775 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"10.0.0.113\": node \"10.0.0.113\" not found" Jan 17 00:39:59.636068 kubelet[1775]: E0117 00:39:59.635876 1775 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.113\" not found" Jan 17 00:39:59.687125 kubelet[1775]: E0117 00:39:59.685642 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:39:59.737997 kubelet[1775]: E0117 00:39:59.737205 1775 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.113\" not found" Jan 17 00:39:59.838261 kubelet[1775]: E0117 00:39:59.837883 1775 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.113\" not found" Jan 17 00:39:59.882879 sudo[1639]: pam_unix(sudo:session): session closed for user root Jan 17 00:39:59.891192 sshd[1636]: pam_unix(sshd:session): session closed for user core Jan 17 00:39:59.915994 systemd[1]: sshd@8-10.0.0.113:22-10.0.0.1:46454.service: Deactivated successfully. Jan 17 00:39:59.925960 systemd[1]: session-9.scope: Deactivated successfully. Jan 17 00:39:59.926301 systemd[1]: session-9.scope: Consumed 2.118s CPU time, 82.5M memory peak, 0B memory swap peak. Jan 17 00:39:59.934485 systemd-logind[1436]: Session 9 logged out. Waiting for processes to exit. Jan 17 00:39:59.940935 kubelet[1775]: E0117 00:39:59.940854 1775 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.113\" not found" Jan 17 00:39:59.946946 systemd-logind[1436]: Removed session 9. 
Jan 17 00:40:00.043552 kubelet[1775]: E0117 00:40:00.042940 1775 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.113\" not found" Jan 17 00:40:00.143794 kubelet[1775]: E0117 00:40:00.143460 1775 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.113\" not found" Jan 17 00:40:00.262173 kubelet[1775]: I0117 00:40:00.258524 1775 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Jan 17 00:40:00.262173 kubelet[1775]: I0117 00:40:00.259629 1775 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Jan 17 00:40:00.267071 containerd[1450]: time="2026-01-17T00:40:00.259419513Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 17 00:40:00.687878 kubelet[1775]: E0117 00:40:00.687322 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:40:00.687878 kubelet[1775]: I0117 00:40:00.687793 1775 apiserver.go:52] "Watching apiserver" Jan 17 00:40:00.743703 kubelet[1775]: E0117 00:40:00.742553 1775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t94cc" podUID="8d6ae2bf-19ec-4641-84d8-c96ea5455451" Jan 17 00:40:00.800206 systemd[1]: Created slice kubepods-besteffort-pode29f4afe_a1ad_4288_a8f3_a8f033600551.slice - libcontainer container kubepods-besteffort-pode29f4afe_a1ad_4288_a8f3_a8f033600551.slice. Jan 17 00:40:00.806789 kubelet[1775]: I0117 00:40:00.806736 1775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8d6ae2bf-19ec-4641-84d8-c96ea5455451-kubelet-dir\") pod \"csi-node-driver-t94cc\" (UID: \"8d6ae2bf-19ec-4641-84d8-c96ea5455451\") " pod="calico-system/csi-node-driver-t94cc" Jan 17 00:40:00.806887 kubelet[1775]: I0117 00:40:00.806801 1775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e29f4afe-a1ad-4288-a8f3-a8f033600551-kube-proxy\") pod \"kube-proxy-hcjdc\" (UID: \"e29f4afe-a1ad-4288-a8f3-a8f033600551\") " pod="kube-system/kube-proxy-hcjdc" Jan 17 00:40:00.806887 kubelet[1775]: I0117 00:40:00.806831 1775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e29f4afe-a1ad-4288-a8f3-a8f033600551-xtables-lock\") pod \"kube-proxy-hcjdc\" (UID: \"e29f4afe-a1ad-4288-a8f3-a8f033600551\") " pod="kube-system/kube-proxy-hcjdc" Jan 17 00:40:00.806887 kubelet[1775]: I0117 00:40:00.806859 1775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c2a70fcf-b0df-4588-99ec-0e31dc9a6440-lib-modules\") pod \"calico-node-n9ksp\" (UID: \"c2a70fcf-b0df-4588-99ec-0e31dc9a6440\") " pod="calico-system/calico-node-n9ksp" Jan 17 00:40:00.806887 kubelet[1775]: I0117 00:40:00.806883 1775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/c2a70fcf-b0df-4588-99ec-0e31dc9a6440-node-certs\") pod \"calico-node-n9ksp\" (UID: 
\"c2a70fcf-b0df-4588-99ec-0e31dc9a6440\") " pod="calico-system/calico-node-n9ksp" Jan 17 00:40:00.807059 kubelet[1775]: I0117 00:40:00.806907 1775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/c2a70fcf-b0df-4588-99ec-0e31dc9a6440-policysync\") pod \"calico-node-n9ksp\" (UID: \"c2a70fcf-b0df-4588-99ec-0e31dc9a6440\") " pod="calico-system/calico-node-n9ksp" Jan 17 00:40:00.807059 kubelet[1775]: I0117 00:40:00.806935 1775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c2a70fcf-b0df-4588-99ec-0e31dc9a6440-tigera-ca-bundle\") pod \"calico-node-n9ksp\" (UID: \"c2a70fcf-b0df-4588-99ec-0e31dc9a6440\") " pod="calico-system/calico-node-n9ksp" Jan 17 00:40:00.807059 kubelet[1775]: I0117 00:40:00.806965 1775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c2a70fcf-b0df-4588-99ec-0e31dc9a6440-xtables-lock\") pod \"calico-node-n9ksp\" (UID: \"c2a70fcf-b0df-4588-99ec-0e31dc9a6440\") " pod="calico-system/calico-node-n9ksp" Jan 17 00:40:00.807059 kubelet[1775]: I0117 00:40:00.806999 1775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/8d6ae2bf-19ec-4641-84d8-c96ea5455451-varrun\") pod \"csi-node-driver-t94cc\" (UID: \"8d6ae2bf-19ec-4641-84d8-c96ea5455451\") " pod="calico-system/csi-node-driver-t94cc" Jan 17 00:40:00.807059 kubelet[1775]: I0117 00:40:00.807031 1775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6rjf\" (UniqueName: \"kubernetes.io/projected/e29f4afe-a1ad-4288-a8f3-a8f033600551-kube-api-access-c6rjf\") pod \"kube-proxy-hcjdc\" (UID: \"e29f4afe-a1ad-4288-a8f3-a8f033600551\") " pod="kube-system/kube-proxy-hcjdc" Jan 17 00:40:00.807250 kubelet[1775]: I0117 00:40:00.807059 1775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/c2a70fcf-b0df-4588-99ec-0e31dc9a6440-cni-bin-dir\") pod \"calico-node-n9ksp\" (UID: \"c2a70fcf-b0df-4588-99ec-0e31dc9a6440\") " pod="calico-system/calico-node-n9ksp" Jan 17 00:40:00.807250 kubelet[1775]: I0117 00:40:00.807083 1775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/c2a70fcf-b0df-4588-99ec-0e31dc9a6440-cni-log-dir\") pod \"calico-node-n9ksp\" (UID: \"c2a70fcf-b0df-4588-99ec-0e31dc9a6440\") " pod="calico-system/calico-node-n9ksp" Jan 17 00:40:00.807250 kubelet[1775]: I0117 00:40:00.807112 1775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c2a70fcf-b0df-4588-99ec-0e31dc9a6440-var-lib-calico\") pod \"calico-node-n9ksp\" (UID: \"c2a70fcf-b0df-4588-99ec-0e31dc9a6440\") " pod="calico-system/calico-node-n9ksp" Jan 17 00:40:00.807250 kubelet[1775]: I0117 00:40:00.807139 1775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/8d6ae2bf-19ec-4641-84d8-c96ea5455451-socket-dir\") pod \"csi-node-driver-t94cc\" (UID: \"8d6ae2bf-19ec-4641-84d8-c96ea5455451\") " pod="calico-system/csi-node-driver-t94cc" Jan 17 
00:40:00.807250 kubelet[1775]: I0117 00:40:00.807168 1775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v64sf\" (UniqueName: \"kubernetes.io/projected/8d6ae2bf-19ec-4641-84d8-c96ea5455451-kube-api-access-v64sf\") pod \"csi-node-driver-t94cc\" (UID: \"8d6ae2bf-19ec-4641-84d8-c96ea5455451\") " pod="calico-system/csi-node-driver-t94cc" Jan 17 00:40:00.807649 kubelet[1775]: I0117 00:40:00.807198 1775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/c2a70fcf-b0df-4588-99ec-0e31dc9a6440-cni-net-dir\") pod \"calico-node-n9ksp\" (UID: \"c2a70fcf-b0df-4588-99ec-0e31dc9a6440\") " pod="calico-system/calico-node-n9ksp" Jan 17 00:40:00.807649 kubelet[1775]: I0117 00:40:00.807221 1775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/c2a70fcf-b0df-4588-99ec-0e31dc9a6440-var-run-calico\") pod \"calico-node-n9ksp\" (UID: \"c2a70fcf-b0df-4588-99ec-0e31dc9a6440\") " pod="calico-system/calico-node-n9ksp" Jan 17 00:40:00.807649 kubelet[1775]: I0117 00:40:00.807251 1775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/8d6ae2bf-19ec-4641-84d8-c96ea5455451-registration-dir\") pod \"csi-node-driver-t94cc\" (UID: \"8d6ae2bf-19ec-4641-84d8-c96ea5455451\") " pod="calico-system/csi-node-driver-t94cc" Jan 17 00:40:00.807649 kubelet[1775]: I0117 00:40:00.807281 1775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e29f4afe-a1ad-4288-a8f3-a8f033600551-lib-modules\") pod \"kube-proxy-hcjdc\" (UID: \"e29f4afe-a1ad-4288-a8f3-a8f033600551\") " pod="kube-system/kube-proxy-hcjdc" Jan 17 00:40:00.807649 kubelet[1775]: I0117 00:40:00.807314 1775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/c2a70fcf-b0df-4588-99ec-0e31dc9a6440-flexvol-driver-host\") pod \"calico-node-n9ksp\" (UID: \"c2a70fcf-b0df-4588-99ec-0e31dc9a6440\") " pod="calico-system/calico-node-n9ksp" Jan 17 00:40:00.807857 kubelet[1775]: I0117 00:40:00.807346 1775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rr4vd\" (UniqueName: \"kubernetes.io/projected/c2a70fcf-b0df-4588-99ec-0e31dc9a6440-kube-api-access-rr4vd\") pod \"calico-node-n9ksp\" (UID: \"c2a70fcf-b0df-4588-99ec-0e31dc9a6440\") " pod="calico-system/calico-node-n9ksp" Jan 17 00:40:00.862646 kubelet[1775]: I0117 00:40:00.862555 1775 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 17 00:40:00.863916 systemd[1]: Created slice kubepods-besteffort-podc2a70fcf_b0df_4588_99ec_0e31dc9a6440.slice - libcontainer container kubepods-besteffort-podc2a70fcf_b0df_4588_99ec_0e31dc9a6440.slice. 
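The reconciler lines above list, one volume at a time, what kube-proxy-hcjdc, calico-node-n9ksp and csi-node-driver-t94cc are waiting to have attached. A small sketch that groups those journal lines by pod, which makes the list easier to scan; it can be fed with output such as journalctl -u kubelet:

# Sketch: group the "VerifyControllerAttachedVolume started for volume ..." lines above
# by pod.  Reads journal text from stdin and prints one line per pod.
import re
import sys
from collections import defaultdict

PATTERN = re.compile(r'for volume \\"(?P<volume>[^"\\]+)\\".*?pod="(?P<pod>[^"]+)"')

def volumes_by_pod(lines):
    result = defaultdict(list)
    for line in lines:
        for m in PATTERN.finditer(line):
            result[m.group("pod")].append(m.group("volume"))
    return result

if __name__ == "__main__":
    for pod, vols in sorted(volumes_by_pod(sys.stdin).items()):
        print(f"{pod}: {', '.join(sorted(vols))}")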
Jan 17 00:40:00.911758 kubelet[1775]: E0117 00:40:00.911574 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:40:00.912942 kubelet[1775]: W0117 00:40:00.911598 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:40:00.912942 kubelet[1775]: E0117 00:40:00.912206 1775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:40:00.912942 kubelet[1775]: E0117 00:40:00.912596 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:40:00.912942 kubelet[1775]: W0117 00:40:00.912609 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:40:00.912942 kubelet[1775]: E0117 00:40:00.912623 1775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:40:00.916023 kubelet[1775]: E0117 00:40:00.915850 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:40:00.916023 kubelet[1775]: W0117 00:40:00.915867 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:40:00.916023 kubelet[1775]: E0117 00:40:00.915883 1775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:40:00.920256 kubelet[1775]: E0117 00:40:00.920125 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:40:00.920256 kubelet[1775]: W0117 00:40:00.920140 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:40:00.920256 kubelet[1775]: E0117 00:40:00.920154 1775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:40:00.920597 kubelet[1775]: E0117 00:40:00.920584 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:40:00.923585 kubelet[1775]: W0117 00:40:00.923547 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:40:00.923701 kubelet[1775]: E0117 00:40:00.923584 1775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:40:00.924124 kubelet[1775]: E0117 00:40:00.923976 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:40:00.924124 kubelet[1775]: W0117 00:40:00.923995 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:40:00.924124 kubelet[1775]: E0117 00:40:00.924012 1775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:40:00.924551 kubelet[1775]: E0117 00:40:00.924531 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:40:00.924779 kubelet[1775]: W0117 00:40:00.924615 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:40:00.924779 kubelet[1775]: E0117 00:40:00.924635 1775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:40:00.925024 kubelet[1775]: E0117 00:40:00.925009 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:40:00.925088 kubelet[1775]: W0117 00:40:00.925076 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:40:00.925217 kubelet[1775]: E0117 00:40:00.925201 1775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:40:00.925574 kubelet[1775]: E0117 00:40:00.925555 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:40:00.925800 kubelet[1775]: W0117 00:40:00.925638 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:40:00.925800 kubelet[1775]: E0117 00:40:00.925659 1775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:40:00.926087 kubelet[1775]: E0117 00:40:00.926069 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:40:00.926166 kubelet[1775]: W0117 00:40:00.926148 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:40:00.926247 kubelet[1775]: E0117 00:40:00.926231 1775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:40:00.926823 kubelet[1775]: E0117 00:40:00.926589 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:40:00.926823 kubelet[1775]: W0117 00:40:00.926603 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:40:00.926823 kubelet[1775]: E0117 00:40:00.926616 1775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:40:00.929763 kubelet[1775]: E0117 00:40:00.928475 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:40:00.929763 kubelet[1775]: W0117 00:40:00.928494 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:40:00.929763 kubelet[1775]: E0117 00:40:00.928509 1775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:40:00.930098 kubelet[1775]: E0117 00:40:00.930081 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:40:00.930163 kubelet[1775]: W0117 00:40:00.930150 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:40:00.930226 kubelet[1775]: E0117 00:40:00.930214 1775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:40:00.930576 kubelet[1775]: E0117 00:40:00.930561 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:40:00.930640 kubelet[1775]: W0117 00:40:00.930628 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:40:00.930755 kubelet[1775]: E0117 00:40:00.930735 1775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:40:00.931046 kubelet[1775]: E0117 00:40:00.931034 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:40:00.931107 kubelet[1775]: W0117 00:40:00.931097 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:40:00.931156 kubelet[1775]: E0117 00:40:00.931146 1775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:40:00.931521 kubelet[1775]: E0117 00:40:00.931503 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:40:00.931602 kubelet[1775]: W0117 00:40:00.931585 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:40:00.937112 kubelet[1775]: E0117 00:40:00.937033 1775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:40:00.937829 kubelet[1775]: E0117 00:40:00.937785 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:40:00.937829 kubelet[1775]: W0117 00:40:00.937820 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:40:00.938841 kubelet[1775]: E0117 00:40:00.937836 1775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:40:00.938841 kubelet[1775]: E0117 00:40:00.938626 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:40:00.938841 kubelet[1775]: W0117 00:40:00.938639 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:40:00.938841 kubelet[1775]: E0117 00:40:00.938659 1775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:40:00.940112 kubelet[1775]: E0117 00:40:00.939930 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:40:00.940112 kubelet[1775]: W0117 00:40:00.939946 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:40:00.940112 kubelet[1775]: E0117 00:40:00.939962 1775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:40:00.959604 kubelet[1775]: E0117 00:40:00.956864 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:40:00.959604 kubelet[1775]: W0117 00:40:00.956892 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:40:00.959604 kubelet[1775]: E0117 00:40:00.956921 1775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:40:00.963755 kubelet[1775]: E0117 00:40:00.963669 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:40:00.963755 kubelet[1775]: W0117 00:40:00.963745 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:40:00.963843 kubelet[1775]: E0117 00:40:00.963776 1775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:40:00.964794 kubelet[1775]: E0117 00:40:00.964711 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:40:00.964794 kubelet[1775]: W0117 00:40:00.964786 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:40:00.964895 kubelet[1775]: E0117 00:40:00.964803 1775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:40:00.965952 kubelet[1775]: E0117 00:40:00.965798 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:40:00.965952 kubelet[1775]: W0117 00:40:00.965818 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:40:00.965952 kubelet[1775]: E0117 00:40:00.965835 1775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:40:00.966975 kubelet[1775]: E0117 00:40:00.966660 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:40:00.966975 kubelet[1775]: W0117 00:40:00.966680 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:40:00.968251 kubelet[1775]: E0117 00:40:00.966698 1775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:40:00.968251 kubelet[1775]: E0117 00:40:00.967876 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:40:00.968251 kubelet[1775]: W0117 00:40:00.967888 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:40:00.968251 kubelet[1775]: E0117 00:40:00.967903 1775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:40:00.970011 kubelet[1775]: E0117 00:40:00.968922 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:40:00.970011 kubelet[1775]: W0117 00:40:00.969295 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:40:00.978538 kubelet[1775]: E0117 00:40:00.970528 1775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:40:00.978538 kubelet[1775]: E0117 00:40:00.972220 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:40:00.978538 kubelet[1775]: W0117 00:40:00.972235 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:40:00.978538 kubelet[1775]: E0117 00:40:00.972251 1775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:40:00.984850 kubelet[1775]: E0117 00:40:00.981895 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:40:00.984850 kubelet[1775]: W0117 00:40:00.982299 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:40:00.984850 kubelet[1775]: E0117 00:40:00.982585 1775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:40:00.988820 kubelet[1775]: E0117 00:40:00.988507 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:40:00.988898 kubelet[1775]: W0117 00:40:00.988827 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:40:00.988898 kubelet[1775]: E0117 00:40:00.988852 1775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:40:00.989806 kubelet[1775]: E0117 00:40:00.989715 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:40:00.989806 kubelet[1775]: W0117 00:40:00.989786 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:40:00.990139 kubelet[1775]: E0117 00:40:00.989951 1775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:40:00.990726 kubelet[1775]: E0117 00:40:00.990673 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:40:00.990726 kubelet[1775]: W0117 00:40:00.990709 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:40:00.990726 kubelet[1775]: E0117 00:40:00.990727 1775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:40:00.991315 kubelet[1775]: E0117 00:40:00.991273 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:40:00.991315 kubelet[1775]: W0117 00:40:00.991304 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:40:00.991896 kubelet[1775]: E0117 00:40:00.991320 1775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:40:00.991896 kubelet[1775]: E0117 00:40:00.991790 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:40:00.991896 kubelet[1775]: W0117 00:40:00.991804 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:40:00.991896 kubelet[1775]: E0117 00:40:00.991818 1775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:40:00.992509 kubelet[1775]: E0117 00:40:00.992320 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:40:00.992566 kubelet[1775]: W0117 00:40:00.992536 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:40:00.992566 kubelet[1775]: E0117 00:40:00.992551 1775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:40:00.993167 kubelet[1775]: E0117 00:40:00.993098 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:40:00.993167 kubelet[1775]: W0117 00:40:00.993130 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:40:00.993167 kubelet[1775]: E0117 00:40:00.993145 1775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:40:00.994017 kubelet[1775]: E0117 00:40:00.993952 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:40:00.994017 kubelet[1775]: W0117 00:40:00.993988 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:40:00.994017 kubelet[1775]: E0117 00:40:00.994000 1775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:40:01.003120 kubelet[1775]: E0117 00:40:01.002569 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:40:01.003120 kubelet[1775]: W0117 00:40:01.002996 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:40:01.005690 kubelet[1775]: E0117 00:40:01.003316 1775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:40:01.005690 kubelet[1775]: E0117 00:40:01.005187 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:40:01.005690 kubelet[1775]: W0117 00:40:01.005199 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:40:01.005690 kubelet[1775]: E0117 00:40:01.005215 1775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:40:01.005690 kubelet[1775]: E0117 00:40:01.005602 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:40:01.005690 kubelet[1775]: W0117 00:40:01.005690 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:40:01.006158 kubelet[1775]: E0117 00:40:01.005706 1775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:40:01.006158 kubelet[1775]: E0117 00:40:01.006030 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:40:01.006158 kubelet[1775]: W0117 00:40:01.006040 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:40:01.006158 kubelet[1775]: E0117 00:40:01.006056 1775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:40:01.007020 kubelet[1775]: E0117 00:40:01.006969 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:40:01.007020 kubelet[1775]: W0117 00:40:01.007005 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:40:01.007133 kubelet[1775]: E0117 00:40:01.007022 1775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:40:01.008176 kubelet[1775]: E0117 00:40:01.008144 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:40:01.008176 kubelet[1775]: W0117 00:40:01.008172 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:40:01.008258 kubelet[1775]: E0117 00:40:01.008186 1775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:40:01.009060 kubelet[1775]: E0117 00:40:01.009011 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:40:01.009060 kubelet[1775]: W0117 00:40:01.009045 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:40:01.009166 kubelet[1775]: E0117 00:40:01.009063 1775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:40:01.010073 kubelet[1775]: E0117 00:40:01.010024 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:40:01.010073 kubelet[1775]: W0117 00:40:01.010059 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:40:01.010171 kubelet[1775]: E0117 00:40:01.010075 1775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:40:01.011672 kubelet[1775]: E0117 00:40:01.011616 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:40:01.011672 kubelet[1775]: W0117 00:40:01.011650 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:40:01.011672 kubelet[1775]: E0117 00:40:01.011668 1775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:40:01.012181 kubelet[1775]: E0117 00:40:01.012114 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:40:01.012234 kubelet[1775]: W0117 00:40:01.012186 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:40:01.012234 kubelet[1775]: E0117 00:40:01.012203 1775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:40:01.026911 kubelet[1775]: E0117 00:40:01.013645 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:40:01.043686 kubelet[1775]: W0117 00:40:01.037218 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:40:01.055734 kubelet[1775]: E0117 00:40:01.049970 1775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:40:01.057617 kubelet[1775]: E0117 00:40:01.056731 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:40:01.057617 kubelet[1775]: W0117 00:40:01.056751 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:40:01.057617 kubelet[1775]: E0117 00:40:01.056774 1775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:40:01.060466 kubelet[1775]: E0117 00:40:01.059260 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:40:01.060466 kubelet[1775]: W0117 00:40:01.059300 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:40:01.060466 kubelet[1775]: E0117 00:40:01.059321 1775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:40:01.060466 kubelet[1775]: E0117 00:40:01.059770 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:40:01.060466 kubelet[1775]: W0117 00:40:01.059784 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:40:01.060466 kubelet[1775]: E0117 00:40:01.059796 1775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:40:01.061624 kubelet[1775]: E0117 00:40:01.061276 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:40:01.061624 kubelet[1775]: W0117 00:40:01.061308 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:40:01.061624 kubelet[1775]: E0117 00:40:01.061322 1775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:40:01.062110 kubelet[1775]: E0117 00:40:01.062058 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:40:01.062110 kubelet[1775]: W0117 00:40:01.062086 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:40:01.062110 kubelet[1775]: E0117 00:40:01.062104 1775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:40:01.064022 kubelet[1775]: E0117 00:40:01.063470 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:40:01.064022 kubelet[1775]: W0117 00:40:01.063503 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:40:01.064022 kubelet[1775]: E0117 00:40:01.063521 1775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:40:01.064022 kubelet[1775]: E0117 00:40:01.063768 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:40:01.064022 kubelet[1775]: W0117 00:40:01.063779 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:40:01.064022 kubelet[1775]: E0117 00:40:01.063790 1775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:40:01.066407 kubelet[1775]: E0117 00:40:01.065056 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:40:01.066407 kubelet[1775]: W0117 00:40:01.065614 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:40:01.067193 kubelet[1775]: E0117 00:40:01.066877 1775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:40:01.071280 kubelet[1775]: E0117 00:40:01.070478 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:40:01.071280 kubelet[1775]: W0117 00:40:01.070897 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:40:01.074134 kubelet[1775]: E0117 00:40:01.072475 1775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:40:01.074512 kubelet[1775]: E0117 00:40:01.074492 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:40:01.074964 kubelet[1775]: W0117 00:40:01.074631 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:40:01.074964 kubelet[1775]: E0117 00:40:01.074654 1775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:40:01.077084 kubelet[1775]: E0117 00:40:01.076964 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:40:01.077084 kubelet[1775]: W0117 00:40:01.076999 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:40:01.077084 kubelet[1775]: E0117 00:40:01.077018 1775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:40:01.077472 kubelet[1775]: E0117 00:40:01.077440 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:40:01.077472 kubelet[1775]: W0117 00:40:01.077452 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:40:01.077472 kubelet[1775]: E0117 00:40:01.077464 1775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:40:01.077921 kubelet[1775]: E0117 00:40:01.077780 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:40:01.077921 kubelet[1775]: W0117 00:40:01.077795 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:40:01.077921 kubelet[1775]: E0117 00:40:01.077810 1775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:40:01.079807 kubelet[1775]: E0117 00:40:01.079338 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:40:01.079807 kubelet[1775]: W0117 00:40:01.079397 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:40:01.079807 kubelet[1775]: E0117 00:40:01.079413 1775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:40:01.082508 kubelet[1775]: E0117 00:40:01.081484 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:40:01.082508 kubelet[1775]: W0117 00:40:01.081498 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:40:01.082508 kubelet[1775]: E0117 00:40:01.081511 1775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:40:01.083262 kubelet[1775]: E0117 00:40:01.082956 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:40:01.083262 kubelet[1775]: W0117 00:40:01.082972 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:40:01.083262 kubelet[1775]: E0117 00:40:01.082986 1775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:40:01.084277 kubelet[1775]: E0117 00:40:01.084157 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:40:01.084277 kubelet[1775]: W0117 00:40:01.084191 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:40:01.084277 kubelet[1775]: E0117 00:40:01.084208 1775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:40:01.085558 kubelet[1775]: E0117 00:40:01.084815 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:40:01.085558 kubelet[1775]: W0117 00:40:01.084847 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:40:01.085558 kubelet[1775]: E0117 00:40:01.084912 1775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:40:01.086080 kubelet[1775]: E0117 00:40:01.086049 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:40:01.086080 kubelet[1775]: W0117 00:40:01.086075 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:40:01.086160 kubelet[1775]: E0117 00:40:01.086091 1775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:40:01.087529 kubelet[1775]: E0117 00:40:01.087498 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:40:01.087529 kubelet[1775]: W0117 00:40:01.087526 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:40:01.087672 kubelet[1775]: E0117 00:40:01.087540 1775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:40:01.089957 kubelet[1775]: E0117 00:40:01.088123 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:40:01.089957 kubelet[1775]: W0117 00:40:01.089121 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:40:01.089957 kubelet[1775]: E0117 00:40:01.089954 1775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:40:01.090559 kubelet[1775]: E0117 00:40:01.090525 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:40:01.090559 kubelet[1775]: W0117 00:40:01.090554 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:40:01.090653 kubelet[1775]: E0117 00:40:01.090571 1775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:40:01.090967 kubelet[1775]: E0117 00:40:01.090915 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:40:01.090967 kubelet[1775]: W0117 00:40:01.090950 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:40:01.090967 kubelet[1775]: E0117 00:40:01.090967 1775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:40:01.091678 kubelet[1775]: E0117 00:40:01.091630 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:40:01.091678 kubelet[1775]: W0117 00:40:01.091662 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:40:01.092080 kubelet[1775]: E0117 00:40:01.091680 1775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:40:01.092080 kubelet[1775]: E0117 00:40:01.092068 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:40:01.092080 kubelet[1775]: W0117 00:40:01.092079 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:40:01.092179 kubelet[1775]: E0117 00:40:01.092092 1775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:40:01.092863 kubelet[1775]: E0117 00:40:01.092813 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:40:01.092863 kubelet[1775]: W0117 00:40:01.092841 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:40:01.093261 kubelet[1775]: E0117 00:40:01.092857 1775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:40:01.093314 kubelet[1775]: E0117 00:40:01.093271 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:40:01.093314 kubelet[1775]: W0117 00:40:01.093282 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:40:01.093314 kubelet[1775]: E0117 00:40:01.093295 1775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:40:01.093811 kubelet[1775]: E0117 00:40:01.093658 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:40:01.093811 kubelet[1775]: W0117 00:40:01.093670 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:40:01.093811 kubelet[1775]: E0117 00:40:01.093683 1775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:40:01.094149 kubelet[1775]: E0117 00:40:01.094066 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:40:01.094149 kubelet[1775]: W0117 00:40:01.094078 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:40:01.094149 kubelet[1775]: E0117 00:40:01.094090 1775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:40:01.095009 kubelet[1775]: E0117 00:40:01.094659 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:40:01.095009 kubelet[1775]: W0117 00:40:01.094672 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:40:01.095009 kubelet[1775]: E0117 00:40:01.094683 1775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:40:01.096407 kubelet[1775]: E0117 00:40:01.096304 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:40:01.096407 kubelet[1775]: W0117 00:40:01.096333 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:40:01.096407 kubelet[1775]: E0117 00:40:01.096350 1775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:40:01.096843 kubelet[1775]: E0117 00:40:01.096802 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:40:01.096843 kubelet[1775]: W0117 00:40:01.096833 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:40:01.097236 kubelet[1775]: E0117 00:40:01.096848 1775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:40:01.097326 kubelet[1775]: E0117 00:40:01.097281 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:40:01.097326 kubelet[1775]: W0117 00:40:01.097317 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:40:01.097474 kubelet[1775]: E0117 00:40:01.097333 1775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:40:01.098277 kubelet[1775]: E0117 00:40:01.098202 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:40:01.098277 kubelet[1775]: W0117 00:40:01.098230 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:40:01.098277 kubelet[1775]: E0117 00:40:01.098245 1775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:40:01.099092 kubelet[1775]: E0117 00:40:01.099020 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:40:01.099092 kubelet[1775]: W0117 00:40:01.099048 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:40:01.099092 kubelet[1775]: E0117 00:40:01.099065 1775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:40:01.100203 kubelet[1775]: E0117 00:40:01.100118 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:40:01.100203 kubelet[1775]: W0117 00:40:01.100193 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:40:01.100451 kubelet[1775]: E0117 00:40:01.100209 1775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:40:01.101158 kubelet[1775]: E0117 00:40:01.101093 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:40:01.101158 kubelet[1775]: W0117 00:40:01.101126 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:40:01.101158 kubelet[1775]: E0117 00:40:01.101141 1775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:40:01.101915 kubelet[1775]: E0117 00:40:01.101823 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:40:01.101915 kubelet[1775]: W0117 00:40:01.101858 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:40:01.101915 kubelet[1775]: E0117 00:40:01.101874 1775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:40:01.102861 kubelet[1775]: E0117 00:40:01.102802 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:40:01.102861 kubelet[1775]: W0117 00:40:01.102835 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:40:01.102861 kubelet[1775]: E0117 00:40:01.102851 1775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:40:01.103733 kubelet[1775]: E0117 00:40:01.103677 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:40:01.103733 kubelet[1775]: W0117 00:40:01.103704 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:40:01.103733 kubelet[1775]: E0117 00:40:01.103723 1775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:40:01.104408 kubelet[1775]: E0117 00:40:01.104318 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:40:01.104408 kubelet[1775]: W0117 00:40:01.104394 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:40:01.104502 kubelet[1775]: E0117 00:40:01.104412 1775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:40:01.104790 kubelet[1775]: E0117 00:40:01.104733 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:40:01.104790 kubelet[1775]: W0117 00:40:01.104764 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:40:01.104790 kubelet[1775]: E0117 00:40:01.104780 1775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:40:01.105856 kubelet[1775]: E0117 00:40:01.105800 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:40:01.105856 kubelet[1775]: W0117 00:40:01.105832 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:40:01.105856 kubelet[1775]: E0117 00:40:01.105849 1775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:40:01.106499 kubelet[1775]: E0117 00:40:01.106447 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:40:01.106499 kubelet[1775]: W0117 00:40:01.106474 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:40:01.106499 kubelet[1775]: E0117 00:40:01.106490 1775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:40:01.108501 kubelet[1775]: E0117 00:40:01.108203 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:40:01.108501 kubelet[1775]: W0117 00:40:01.108236 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:40:01.108501 kubelet[1775]: E0117 00:40:01.108251 1775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:40:01.110446 kubelet[1775]: E0117 00:40:01.110199 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:40:01.110446 kubelet[1775]: W0117 00:40:01.110215 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:40:01.110446 kubelet[1775]: E0117 00:40:01.110230 1775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:40:01.111963 kubelet[1775]: E0117 00:40:01.111437 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:40:01.111963 kubelet[1775]: W0117 00:40:01.111922 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:40:01.112064 kubelet[1775]: E0117 00:40:01.111968 1775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:40:01.112662 kubelet[1775]: E0117 00:40:01.112600 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:40:01.112662 kubelet[1775]: W0117 00:40:01.112632 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:40:01.112662 kubelet[1775]: E0117 00:40:01.112650 1775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:40:01.114030 kubelet[1775]: E0117 00:40:01.113972 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:40:01.114030 kubelet[1775]: W0117 00:40:01.114003 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:40:01.114030 kubelet[1775]: E0117 00:40:01.114019 1775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:40:01.114781 kubelet[1775]: E0117 00:40:01.114727 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:40:01.114781 kubelet[1775]: W0117 00:40:01.114757 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:40:01.114781 kubelet[1775]: E0117 00:40:01.114773 1775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:40:01.115228 kubelet[1775]: E0117 00:40:01.115193 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:40:01.115228 kubelet[1775]: W0117 00:40:01.115208 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:40:01.115228 kubelet[1775]: E0117 00:40:01.115222 1775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:40:01.116245 kubelet[1775]: E0117 00:40:01.116174 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:40:01.116245 kubelet[1775]: W0117 00:40:01.116204 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:40:01.116245 kubelet[1775]: E0117 00:40:01.116221 1775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:40:01.116735 kubelet[1775]: E0117 00:40:01.116637 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:40:01.116735 kubelet[1775]: W0117 00:40:01.116648 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:40:01.116735 kubelet[1775]: E0117 00:40:01.116661 1775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:40:01.117042 kubelet[1775]: E0117 00:40:01.116998 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:40:01.117042 kubelet[1775]: W0117 00:40:01.117014 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:40:01.117042 kubelet[1775]: E0117 00:40:01.117042 1775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:40:01.117682 kubelet[1775]: E0117 00:40:01.117610 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:40:01.117682 kubelet[1775]: W0117 00:40:01.117642 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:40:01.117682 kubelet[1775]: E0117 00:40:01.117657 1775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:40:01.119301 kubelet[1775]: E0117 00:40:01.119074 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:40:01.119301 kubelet[1775]: W0117 00:40:01.119104 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:40:01.119301 kubelet[1775]: E0117 00:40:01.119120 1775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:40:01.119847 kubelet[1775]: E0117 00:40:01.119641 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:40:01.119847 kubelet[1775]: W0117 00:40:01.119656 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:40:01.119847 kubelet[1775]: E0117 00:40:01.119670 1775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:40:01.120732 kubelet[1775]: E0117 00:40:01.120117 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:40:01.120732 kubelet[1775]: W0117 00:40:01.120281 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:40:01.120939 kubelet[1775]: E0117 00:40:01.120883 1775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:40:01.123226 kubelet[1775]: E0117 00:40:01.122032 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:40:01.123226 kubelet[1775]: W0117 00:40:01.122063 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:40:01.123226 kubelet[1775]: E0117 00:40:01.122077 1775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:40:01.123226 kubelet[1775]: E0117 00:40:01.122329 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:40:01.123226 kubelet[1775]: W0117 00:40:01.122340 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:40:01.123226 kubelet[1775]: E0117 00:40:01.122410 1775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:40:01.123226 kubelet[1775]: E0117 00:40:01.122676 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:40:01.123226 kubelet[1775]: W0117 00:40:01.122689 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:40:01.123226 kubelet[1775]: E0117 00:40:01.122704 1775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:40:01.123226 kubelet[1775]: E0117 00:40:01.123002 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:40:01.123625 kubelet[1775]: W0117 00:40:01.123014 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:40:01.123625 kubelet[1775]: E0117 00:40:01.123028 1775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:40:01.125065 kubelet[1775]: E0117 00:40:01.124806 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:40:01.125065 kubelet[1775]: W0117 00:40:01.124822 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:40:01.125065 kubelet[1775]: E0117 00:40:01.124834 1775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:40:01.126157 kubelet[1775]: E0117 00:40:01.125890 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:40:01.126157 kubelet[1775]: W0117 00:40:01.125960 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:40:01.126157 kubelet[1775]: E0117 00:40:01.125975 1775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:40:01.127280 kubelet[1775]: E0117 00:40:01.126859 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:40:01.127280 kubelet[1775]: W0117 00:40:01.126889 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:40:01.127280 kubelet[1775]: E0117 00:40:01.126903 1775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:40:01.127495 kubelet[1775]: E0117 00:40:01.127479 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:40:01.127535 kubelet[1775]: W0117 00:40:01.127496 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:40:01.127535 kubelet[1775]: E0117 00:40:01.127513 1775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:40:01.128099 kubelet[1775]: E0117 00:40:01.128023 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:40:01.128099 kubelet[1775]: W0117 00:40:01.128058 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:40:01.128099 kubelet[1775]: E0117 00:40:01.128074 1775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:40:01.129576 kubelet[1775]: E0117 00:40:01.129101 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:40:01.129576 kubelet[1775]: W0117 00:40:01.129130 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:40:01.129576 kubelet[1775]: E0117 00:40:01.129145 1775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:40:01.134897 kubelet[1775]: E0117 00:40:01.134090 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:40:01.134897 kubelet[1775]: W0117 00:40:01.134114 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:40:01.134897 kubelet[1775]: E0117 00:40:01.134134 1775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:40:01.134897 kubelet[1775]: E0117 00:40:01.134613 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:40:01.134897 kubelet[1775]: W0117 00:40:01.134625 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:40:01.134897 kubelet[1775]: E0117 00:40:01.134638 1775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:40:01.135189 kubelet[1775]: E0117 00:40:01.134969 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:40:01.135189 kubelet[1775]: W0117 00:40:01.134980 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:40:01.135189 kubelet[1775]: E0117 00:40:01.134993 1775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:40:01.140478 kubelet[1775]: E0117 00:40:01.140339 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:40:01.140478 kubelet[1775]: W0117 00:40:01.140413 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:40:01.140478 kubelet[1775]: E0117 00:40:01.140440 1775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:40:01.142017 kubelet[1775]: E0117 00:40:01.141252 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:40:01.142017 kubelet[1775]: W0117 00:40:01.141666 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:40:01.142556 kubelet[1775]: E0117 00:40:01.142311 1775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:40:01.144490 kubelet[1775]: E0117 00:40:01.143529 1775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:40:01.144490 kubelet[1775]: E0117 00:40:01.143687 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:40:01.144490 kubelet[1775]: W0117 00:40:01.144046 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:40:01.144490 kubelet[1775]: E0117 00:40:01.144063 1775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:40:01.151629 containerd[1450]: time="2026-01-17T00:40:01.150773332Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hcjdc,Uid:e29f4afe-a1ad-4288-a8f3-a8f033600551,Namespace:kube-system,Attempt:0,}" Jan 17 00:40:01.181289 kubelet[1775]: E0117 00:40:01.180943 1775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:40:01.181870 containerd[1450]: time="2026-01-17T00:40:01.181827467Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-n9ksp,Uid:c2a70fcf-b0df-4588-99ec-0e31dc9a6440,Namespace:calico-system,Attempt:0,}" Jan 17 00:40:01.203252 kubelet[1775]: E0117 00:40:01.202150 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:40:01.203252 kubelet[1775]: W0117 00:40:01.202189 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:40:01.203252 kubelet[1775]: E0117 00:40:01.202211 1775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:40:01.203252 kubelet[1775]: E0117 00:40:01.202749 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:40:01.203252 kubelet[1775]: W0117 00:40:01.202761 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:40:01.203252 kubelet[1775]: E0117 00:40:01.202792 1775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:40:01.203252 kubelet[1775]: E0117 00:40:01.203206 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:40:01.203252 kubelet[1775]: W0117 00:40:01.203216 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:40:01.203252 kubelet[1775]: E0117 00:40:01.203228 1775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:40:01.203685 kubelet[1775]: E0117 00:40:01.203645 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:40:01.203685 kubelet[1775]: W0117 00:40:01.203656 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:40:01.203685 kubelet[1775]: E0117 00:40:01.203669 1775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:40:01.206287 kubelet[1775]: E0117 00:40:01.205537 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:40:01.206287 kubelet[1775]: W0117 00:40:01.206029 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:40:01.206748 kubelet[1775]: E0117 00:40:01.206517 1775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:40:01.210083 kubelet[1775]: E0117 00:40:01.208155 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:40:01.210083 kubelet[1775]: W0117 00:40:01.209223 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:40:01.210568 kubelet[1775]: E0117 00:40:01.210415 1775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:40:01.211865 kubelet[1775]: E0117 00:40:01.211536 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:40:01.211865 kubelet[1775]: W0117 00:40:01.211708 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:40:01.212233 kubelet[1775]: E0117 00:40:01.212042 1775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:40:01.213633 kubelet[1775]: E0117 00:40:01.213487 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:40:01.213633 kubelet[1775]: W0117 00:40:01.213518 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:40:01.213633 kubelet[1775]: E0117 00:40:01.213533 1775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:40:01.215420 kubelet[1775]: E0117 00:40:01.215176 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:40:01.215420 kubelet[1775]: W0117 00:40:01.215193 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:40:01.215420 kubelet[1775]: E0117 00:40:01.215253 1775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:40:01.216433 kubelet[1775]: E0117 00:40:01.215716 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:40:01.216433 kubelet[1775]: W0117 00:40:01.215731 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:40:01.216433 kubelet[1775]: E0117 00:40:01.215744 1775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:40:01.216433 kubelet[1775]: E0117 00:40:01.216187 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:40:01.216433 kubelet[1775]: W0117 00:40:01.216199 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:40:01.216433 kubelet[1775]: E0117 00:40:01.216211 1775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:40:01.216713 kubelet[1775]: E0117 00:40:01.216693 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:40:01.216713 kubelet[1775]: W0117 00:40:01.216705 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:40:01.216783 kubelet[1775]: E0117 00:40:01.216720 1775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:40:01.219774 kubelet[1775]: E0117 00:40:01.219489 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:40:01.219774 kubelet[1775]: W0117 00:40:01.219505 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:40:01.219774 kubelet[1775]: E0117 00:40:01.219519 1775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:40:01.220499 kubelet[1775]: E0117 00:40:01.220218 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:40:01.220499 kubelet[1775]: W0117 00:40:01.220230 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:40:01.220499 kubelet[1775]: E0117 00:40:01.220242 1775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:40:01.223240 kubelet[1775]: E0117 00:40:01.223191 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:40:01.223240 kubelet[1775]: W0117 00:40:01.223229 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:40:01.223319 kubelet[1775]: E0117 00:40:01.223246 1775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:40:01.224971 kubelet[1775]: E0117 00:40:01.223893 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:40:01.224971 kubelet[1775]: W0117 00:40:01.223923 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:40:01.224971 kubelet[1775]: E0117 00:40:01.223936 1775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:40:01.224971 kubelet[1775]: E0117 00:40:01.224520 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:40:01.224971 kubelet[1775]: W0117 00:40:01.224532 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:40:01.224971 kubelet[1775]: E0117 00:40:01.224544 1775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:40:01.224971 kubelet[1775]: E0117 00:40:01.224944 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:40:01.224971 kubelet[1775]: W0117 00:40:01.224953 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:40:01.224971 kubelet[1775]: E0117 00:40:01.224962 1775 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:40:01.705918 kubelet[1775]: E0117 00:40:01.705632 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:40:02.305291 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3396340642.mount: Deactivated successfully. Jan 17 00:40:02.321739 containerd[1450]: time="2026-01-17T00:40:02.321650502Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:40:02.325680 containerd[1450]: time="2026-01-17T00:40:02.325335946Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 17 00:40:02.327166 containerd[1450]: time="2026-01-17T00:40:02.327071632Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:40:02.335539 containerd[1450]: time="2026-01-17T00:40:02.334714526Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:40:02.339058 containerd[1450]: time="2026-01-17T00:40:02.338922875Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 00:40:02.342782 containerd[1450]: time="2026-01-17T00:40:02.342624484Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:40:02.343820 containerd[1450]: time="2026-01-17T00:40:02.343478749Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.161394995s" Jan 17 00:40:02.349408 containerd[1450]: time="2026-01-17T00:40:02.348943320Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.197862336s" Jan 17 00:40:02.371397 kubelet[1775]: E0117 00:40:02.370920 1775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network 
not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t94cc" podUID="8d6ae2bf-19ec-4641-84d8-c96ea5455451" Jan 17 00:40:02.709761 kubelet[1775]: E0117 00:40:02.707905 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:40:02.752425 containerd[1450]: time="2026-01-17T00:40:02.751499931Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:40:02.752425 containerd[1450]: time="2026-01-17T00:40:02.751570212Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:40:02.752425 containerd[1450]: time="2026-01-17T00:40:02.751589246Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:40:02.752425 containerd[1450]: time="2026-01-17T00:40:02.751740649Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:40:02.805735 containerd[1450]: time="2026-01-17T00:40:02.805081742Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:40:02.805735 containerd[1450]: time="2026-01-17T00:40:02.805161802Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:40:02.805735 containerd[1450]: time="2026-01-17T00:40:02.805181447Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:40:02.805735 containerd[1450]: time="2026-01-17T00:40:02.805502947Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:40:03.100871 systemd[1]: Started cri-containerd-fe53a1bbe62975f289ac9e10dd4ca543c5f47ef9c3b32a735ed56bec3139b328.scope - libcontainer container fe53a1bbe62975f289ac9e10dd4ca543c5f47ef9c3b32a735ed56bec3139b328. Jan 17 00:40:03.162632 systemd[1]: Started cri-containerd-ef6b2f7158dde1808fa4296d0beef3f8808d79af4b38f30851020dc887936fe2.scope - libcontainer container ef6b2f7158dde1808fa4296d0beef3f8808d79af4b38f30851020dc887936fe2. 
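Note on the repeated driver-call.go / plugins.go errors above: the kubelet is probing the FlexVolume plugin directory before Calico's flexvol-driver init container (created further down in this log) has installed the nodeagent~uds/uds binary, so the executable is missing, the "init" call produces no stdout, and the JSON unmarshal fails with "unexpected end of JSON input". As a rough illustration of the documented FlexVolume call-out contract (not Calico's actual driver; the file name is hypothetical), a driver only has to answer "init" with a small JSON status object on stdout:

    // flexvol_init_sketch.go - minimal sketch of the FlexVolume "init" call-out.
    // The kubelet executes the driver binary with a sub-command ("init", "mount", ...)
    // and parses whatever the driver prints on stdout as JSON.
    package main

    import (
        "encoding/json"
        "fmt"
        "os"
    )

    // driverStatus mirrors the JSON shape the kubelet expects back from a driver.
    type driverStatus struct {
        Status       string          `json:"status"`                 // "Success", "Failure", "Not supported"
        Message      string          `json:"message,omitempty"`
        Capabilities map[string]bool `json:"capabilities,omitempty"` // only meaningful for "init"
    }

    func main() {
        if len(os.Args) < 2 {
            os.Exit(1)
        }
        switch os.Args[1] {
        case "init":
            // Printing nothing here is exactly what yields
            // "unexpected end of JSON input" in driver-call.go.
            out, _ := json.Marshal(driverStatus{
                Status:       "Success",
                Capabilities: map[string]bool{"attach": false},
            })
            fmt.Println(string(out))
        default:
            out, _ := json.Marshal(driverStatus{Status: "Not supported"})
            fmt.Println(string(out))
        }
    }

The directory name nodeagent~uds follows the vendor~driver convention, so the probe failures stop once a working driver binary appears at that path.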
Jan 17 00:40:03.476536 containerd[1450]: time="2026-01-17T00:40:03.476491335Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-n9ksp,Uid:c2a70fcf-b0df-4588-99ec-0e31dc9a6440,Namespace:calico-system,Attempt:0,} returns sandbox id \"fe53a1bbe62975f289ac9e10dd4ca543c5f47ef9c3b32a735ed56bec3139b328\"" Jan 17 00:40:03.478572 kubelet[1775]: E0117 00:40:03.478541 1775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:40:03.483085 containerd[1450]: time="2026-01-17T00:40:03.480819482Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hcjdc,Uid:e29f4afe-a1ad-4288-a8f3-a8f033600551,Namespace:kube-system,Attempt:0,} returns sandbox id \"ef6b2f7158dde1808fa4296d0beef3f8808d79af4b38f30851020dc887936fe2\"" Jan 17 00:40:03.484581 kubelet[1775]: E0117 00:40:03.484122 1775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:40:03.484997 containerd[1450]: time="2026-01-17T00:40:03.484962187Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 17 00:40:03.709428 kubelet[1775]: E0117 00:40:03.708680 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:40:04.372176 kubelet[1775]: E0117 00:40:04.368538 1775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t94cc" podUID="8d6ae2bf-19ec-4641-84d8-c96ea5455451" Jan 17 00:40:04.519153 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1920432776.mount: Deactivated successfully. 
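Note on the recurring dns.go:153 "Nameserver limits exceeded" warnings: the node's resolv.conf lists more nameservers than the kubelet will propagate into pod resolv.conf files (three, matching glibc's MAXNS), so only 1.1.1.1, 1.0.0.1 and 8.8.8.8 are applied and the rest are dropped. A rough way to reproduce the check outside the kubelet (path and limit hard-coded here as assumptions, not the kubelet's own code):

    // resolvcheck_sketch.go - counts "nameserver" entries in a resolv.conf and
    // reports the ones that would be dropped, mimicking the kubelet warning.
    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    const maxNameservers = 3 // mirrors glibc's MAXNS

    func main() {
        f, err := os.Open("/etc/resolv.conf")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        defer f.Close()

        var servers []string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            fields := strings.Fields(sc.Text())
            if len(fields) >= 2 && fields[0] == "nameserver" {
                servers = append(servers, fields[1])
            }
        }
        if len(servers) > maxNameservers {
            fmt.Printf("Nameserver limits exceeded: keeping %v, dropping %v\n",
                servers[:maxNameservers], servers[maxNameservers:])
        }
    }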
Jan 17 00:40:04.710441 kubelet[1775]: E0117 00:40:04.709576 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:40:04.964188 containerd[1450]: time="2026-01-17T00:40:04.962129341Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:40:04.965649 containerd[1450]: time="2026-01-17T00:40:04.964257053Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=5941492" Jan 17 00:40:04.971418 containerd[1450]: time="2026-01-17T00:40:04.969842853Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:40:04.979999 containerd[1450]: time="2026-01-17T00:40:04.978869893Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:40:04.998061 containerd[1450]: time="2026-01-17T00:40:04.990441454Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.504918223s" Jan 17 00:40:05.001628 containerd[1450]: time="2026-01-17T00:40:04.990519047Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Jan 17 00:40:05.017964 containerd[1450]: time="2026-01-17T00:40:05.016157636Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\"" Jan 17 00:40:05.051051 containerd[1450]: time="2026-01-17T00:40:05.050785447Z" level=info msg="CreateContainer within sandbox \"fe53a1bbe62975f289ac9e10dd4ca543c5f47ef9c3b32a735ed56bec3139b328\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 17 00:40:05.138175 containerd[1450]: time="2026-01-17T00:40:05.138057962Z" level=info msg="CreateContainer within sandbox \"fe53a1bbe62975f289ac9e10dd4ca543c5f47ef9c3b32a735ed56bec3139b328\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"c1b91f422c576318262e740517f6dfae8af1efbbe4d0eb9e181fd2173bcd02ec\"" Jan 17 00:40:05.141345 containerd[1450]: time="2026-01-17T00:40:05.139566281Z" level=info msg="StartContainer for \"c1b91f422c576318262e740517f6dfae8af1efbbe4d0eb9e181fd2173bcd02ec\"" Jan 17 00:40:05.362345 systemd[1]: Started cri-containerd-c1b91f422c576318262e740517f6dfae8af1efbbe4d0eb9e181fd2173bcd02ec.scope - libcontainer container c1b91f422c576318262e740517f6dfae8af1efbbe4d0eb9e181fd2173bcd02ec. Jan 17 00:40:05.479430 containerd[1450]: time="2026-01-17T00:40:05.479295175Z" level=info msg="StartContainer for \"c1b91f422c576318262e740517f6dfae8af1efbbe4d0eb9e181fd2173bcd02ec\" returns successfully" Jan 17 00:40:05.569181 systemd[1]: cri-containerd-c1b91f422c576318262e740517f6dfae8af1efbbe4d0eb9e181fd2173bcd02ec.scope: Deactivated successfully. 
Jan 17 00:40:05.640801 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c1b91f422c576318262e740517f6dfae8af1efbbe4d0eb9e181fd2173bcd02ec-rootfs.mount: Deactivated successfully. Jan 17 00:40:05.688082 containerd[1450]: time="2026-01-17T00:40:05.687975756Z" level=info msg="shim disconnected" id=c1b91f422c576318262e740517f6dfae8af1efbbe4d0eb9e181fd2173bcd02ec namespace=k8s.io Jan 17 00:40:05.688902 containerd[1450]: time="2026-01-17T00:40:05.688423029Z" level=warning msg="cleaning up after shim disconnected" id=c1b91f422c576318262e740517f6dfae8af1efbbe4d0eb9e181fd2173bcd02ec namespace=k8s.io Jan 17 00:40:05.688902 containerd[1450]: time="2026-01-17T00:40:05.688444394Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:40:05.712524 kubelet[1775]: E0117 00:40:05.712465 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:40:06.384668 kubelet[1775]: E0117 00:40:06.384069 1775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t94cc" podUID="8d6ae2bf-19ec-4641-84d8-c96ea5455451" Jan 17 00:40:06.523990 kubelet[1775]: E0117 00:40:06.523656 1775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:40:06.713859 kubelet[1775]: E0117 00:40:06.713814 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:40:07.715859 kubelet[1775]: E0117 00:40:07.715512 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:40:07.845321 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3265025857.mount: Deactivated successfully. 
Jan 17 00:40:08.369018 kubelet[1775]: E0117 00:40:08.368928 1775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t94cc" podUID="8d6ae2bf-19ec-4641-84d8-c96ea5455451" Jan 17 00:40:08.519475 containerd[1450]: time="2026-01-17T00:40:08.519318613Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:40:08.520586 containerd[1450]: time="2026-01-17T00:40:08.520454348Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=31930096" Jan 17 00:40:08.523011 containerd[1450]: time="2026-01-17T00:40:08.522929918Z" level=info msg="ImageCreate event name:\"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:40:08.526089 containerd[1450]: time="2026-01-17T00:40:08.525860451Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:40:08.527619 containerd[1450]: time="2026-01-17T00:40:08.526934879Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"31929115\" in 3.510701435s" Jan 17 00:40:08.527619 containerd[1450]: time="2026-01-17T00:40:08.526979543Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\"" Jan 17 00:40:08.532327 containerd[1450]: time="2026-01-17T00:40:08.532029288Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 17 00:40:08.538951 containerd[1450]: time="2026-01-17T00:40:08.538897443Z" level=info msg="CreateContainer within sandbox \"ef6b2f7158dde1808fa4296d0beef3f8808d79af4b38f30851020dc887936fe2\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 17 00:40:08.577226 containerd[1450]: time="2026-01-17T00:40:08.577139415Z" level=info msg="CreateContainer within sandbox \"ef6b2f7158dde1808fa4296d0beef3f8808d79af4b38f30851020dc887936fe2\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"81a519dda6ec357457d2b1e21e226796288e3a170f361757a84d6f373a3269b5\"" Jan 17 00:40:08.578231 containerd[1450]: time="2026-01-17T00:40:08.578093101Z" level=info msg="StartContainer for \"81a519dda6ec357457d2b1e21e226796288e3a170f361757a84d6f373a3269b5\"" Jan 17 00:40:08.637641 systemd[1]: Started cri-containerd-81a519dda6ec357457d2b1e21e226796288e3a170f361757a84d6f373a3269b5.scope - libcontainer container 81a519dda6ec357457d2b1e21e226796288e3a170f361757a84d6f373a3269b5. 
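Note on the ImageCreate/ImageUpdate events and the "Pulled image ... in 3.510701435s" line above: this is containerd's CRI plugin resolving and unpacking registry.k8s.io/kube-proxy:v1.33.7 into its k8s.io namespace. A standalone pull through containerd's Go client looks roughly like the sketch below; the socket path and image reference are taken from this log, everything else is illustrative rather than the CRI plugin's actual code path:

    // pull_sketch.go - rough sketch of pulling an image via containerd's Go client,
    // producing the same kind of ImageCreate events seen in the log above.
    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // Kubernetes-managed images live in the "k8s.io" containerd namespace.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        img, err := client.Pull(ctx, "registry.k8s.io/kube-proxy:v1.33.7", containerd.WithPullUnpack)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("pulled", img.Name(), "digest", img.Target().Digest)
    }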
Jan 17 00:40:08.716082 kubelet[1775]: E0117 00:40:08.715834 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:40:08.767456 containerd[1450]: time="2026-01-17T00:40:08.767314292Z" level=info msg="StartContainer for \"81a519dda6ec357457d2b1e21e226796288e3a170f361757a84d6f373a3269b5\" returns successfully" Jan 17 00:40:09.543107 kubelet[1775]: E0117 00:40:09.542315 1775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:40:09.595237 kubelet[1775]: I0117 00:40:09.595060 1775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-hcjdc" podStartSLOduration=5.549515066 podStartE2EDuration="10.594962451s" podCreationTimestamp="2026-01-17 00:39:59 +0000 UTC" firstStartedPulling="2026-01-17 00:40:03.485465395 +0000 UTC m=+6.820204167" lastFinishedPulling="2026-01-17 00:40:08.530912779 +0000 UTC m=+11.865651552" observedRunningTime="2026-01-17 00:40:09.588095526 +0000 UTC m=+12.922834320" watchObservedRunningTime="2026-01-17 00:40:09.594962451 +0000 UTC m=+12.929701225" Jan 17 00:40:09.717869 kubelet[1775]: E0117 00:40:09.717794 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:40:10.370587 kubelet[1775]: E0117 00:40:10.369862 1775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t94cc" podUID="8d6ae2bf-19ec-4641-84d8-c96ea5455451" Jan 17 00:40:10.544218 kubelet[1775]: E0117 00:40:10.544183 1775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:40:10.719264 kubelet[1775]: E0117 00:40:10.719145 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:40:11.720920 kubelet[1775]: E0117 00:40:11.720558 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:40:12.370309 kubelet[1775]: E0117 00:40:12.369541 1775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t94cc" podUID="8d6ae2bf-19ec-4641-84d8-c96ea5455451" Jan 17 00:40:12.637284 containerd[1450]: time="2026-01-17T00:40:12.636244376Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:40:12.641753 containerd[1450]: time="2026-01-17T00:40:12.641130496Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Jan 17 00:40:12.646155 containerd[1450]: time="2026-01-17T00:40:12.645987464Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:40:12.651502 containerd[1450]: time="2026-01-17T00:40:12.651455048Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:40:12.652876 containerd[1450]: time="2026-01-17T00:40:12.652679481Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 4.120603799s" Jan 17 00:40:12.652876 containerd[1450]: time="2026-01-17T00:40:12.652722986Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Jan 17 00:40:12.663234 containerd[1450]: time="2026-01-17T00:40:12.663047981Z" level=info msg="CreateContainer within sandbox \"fe53a1bbe62975f289ac9e10dd4ca543c5f47ef9c3b32a735ed56bec3139b328\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 17 00:40:12.696442 containerd[1450]: time="2026-01-17T00:40:12.696320428Z" level=info msg="CreateContainer within sandbox \"fe53a1bbe62975f289ac9e10dd4ca543c5f47ef9c3b32a735ed56bec3139b328\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"8fc37692003a0bf13c20e531b6b71d45044daf95bbf264749978acc12d2c2717\"" Jan 17 00:40:12.698712 containerd[1450]: time="2026-01-17T00:40:12.698647741Z" level=info msg="StartContainer for \"8fc37692003a0bf13c20e531b6b71d45044daf95bbf264749978acc12d2c2717\"" Jan 17 00:40:12.723322 kubelet[1775]: E0117 00:40:12.723276 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:40:12.769786 systemd[1]: Started cri-containerd-8fc37692003a0bf13c20e531b6b71d45044daf95bbf264749978acc12d2c2717.scope - libcontainer container 8fc37692003a0bf13c20e531b6b71d45044daf95bbf264749978acc12d2c2717. Jan 17 00:40:12.843476 containerd[1450]: time="2026-01-17T00:40:12.842572610Z" level=info msg="StartContainer for \"8fc37692003a0bf13c20e531b6b71d45044daf95bbf264749978acc12d2c2717\" returns successfully" Jan 17 00:40:13.564796 kubelet[1775]: E0117 00:40:13.564702 1775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:40:13.733171 kubelet[1775]: E0117 00:40:13.733060 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:40:14.306216 systemd[1]: cri-containerd-8fc37692003a0bf13c20e531b6b71d45044daf95bbf264749978acc12d2c2717.scope: Deactivated successfully. Jan 17 00:40:14.313269 systemd[1]: cri-containerd-8fc37692003a0bf13c20e531b6b71d45044daf95bbf264749978acc12d2c2717.scope: Consumed 1.106s CPU time. Jan 17 00:40:14.406613 kubelet[1775]: I0117 00:40:14.405414 1775 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 17 00:40:14.440553 systemd[1]: Created slice kubepods-besteffort-pod8d6ae2bf_19ec_4641_84d8_c96ea5455451.slice - libcontainer container kubepods-besteffort-pod8d6ae2bf_19ec_4641_84d8_c96ea5455451.slice. 
Jan 17 00:40:14.718019 containerd[1450]: time="2026-01-17T00:40:14.717895609Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-t94cc,Uid:8d6ae2bf-19ec-4641-84d8-c96ea5455451,Namespace:calico-system,Attempt:0,}" Jan 17 00:40:14.747481 kubelet[1775]: E0117 00:40:14.747436 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:40:14.836239 kubelet[1775]: E0117 00:40:14.833882 1775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:40:15.049069 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8fc37692003a0bf13c20e531b6b71d45044daf95bbf264749978acc12d2c2717-rootfs.mount: Deactivated successfully. Jan 17 00:40:15.612647 containerd[1450]: time="2026-01-17T00:40:15.611604399Z" level=info msg="shim disconnected" id=8fc37692003a0bf13c20e531b6b71d45044daf95bbf264749978acc12d2c2717 namespace=k8s.io Jan 17 00:40:15.612647 containerd[1450]: time="2026-01-17T00:40:15.612043553Z" level=warning msg="cleaning up after shim disconnected" id=8fc37692003a0bf13c20e531b6b71d45044daf95bbf264749978acc12d2c2717 namespace=k8s.io Jan 17 00:40:15.612647 containerd[1450]: time="2026-01-17T00:40:15.612056331Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:40:15.799205 kubelet[1775]: E0117 00:40:15.798830 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:40:15.921054 containerd[1450]: time="2026-01-17T00:40:15.915731049Z" level=error msg="Failed to destroy network for sandbox \"b504686a979420ff4218c8c24abab4389602d9b1b9e903e13826aca43f67b012\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:40:15.931608 containerd[1450]: time="2026-01-17T00:40:15.926426686Z" level=error msg="encountered an error cleaning up failed sandbox \"b504686a979420ff4218c8c24abab4389602d9b1b9e903e13826aca43f67b012\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:40:15.935206 containerd[1450]: time="2026-01-17T00:40:15.932049565Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-t94cc,Uid:8d6ae2bf-19ec-4641-84d8-c96ea5455451,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b504686a979420ff4218c8c24abab4389602d9b1b9e903e13826aca43f67b012\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:40:15.938341 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b504686a979420ff4218c8c24abab4389602d9b1b9e903e13826aca43f67b012-shm.mount: Deactivated successfully. 
Jan 17 00:40:15.939081 kubelet[1775]: E0117 00:40:15.938735 1775 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b504686a979420ff4218c8c24abab4389602d9b1b9e903e13826aca43f67b012\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:40:15.942319 kubelet[1775]: E0117 00:40:15.939152 1775 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b504686a979420ff4218c8c24abab4389602d9b1b9e903e13826aca43f67b012\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-t94cc" Jan 17 00:40:15.942319 kubelet[1775]: E0117 00:40:15.939272 1775 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b504686a979420ff4218c8c24abab4389602d9b1b9e903e13826aca43f67b012\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-t94cc" Jan 17 00:40:15.942319 kubelet[1775]: E0117 00:40:15.939560 1775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-t94cc_calico-system(8d6ae2bf-19ec-4641-84d8-c96ea5455451)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-t94cc_calico-system(8d6ae2bf-19ec-4641-84d8-c96ea5455451)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b504686a979420ff4218c8c24abab4389602d9b1b9e903e13826aca43f67b012\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-t94cc" podUID="8d6ae2bf-19ec-4641-84d8-c96ea5455451" Jan 17 00:40:16.804169 kubelet[1775]: E0117 00:40:16.802521 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:40:16.915216 kubelet[1775]: E0117 00:40:16.914086 1775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:40:16.918110 containerd[1450]: time="2026-01-17T00:40:16.916894485Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 17 00:40:16.918511 kubelet[1775]: I0117 00:40:16.917465 1775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b504686a979420ff4218c8c24abab4389602d9b1b9e903e13826aca43f67b012" Jan 17 00:40:16.921598 containerd[1450]: time="2026-01-17T00:40:16.921534826Z" level=info msg="StopPodSandbox for \"b504686a979420ff4218c8c24abab4389602d9b1b9e903e13826aca43f67b012\"" Jan 17 00:40:16.924725 containerd[1450]: time="2026-01-17T00:40:16.921954945Z" level=info msg="Ensure that sandbox b504686a979420ff4218c8c24abab4389602d9b1b9e903e13826aca43f67b012 in task-service has been cleanup successfully" Jan 17 00:40:17.312645 containerd[1450]: time="2026-01-17T00:40:17.301897581Z" level=error msg="StopPodSandbox for 
\"b504686a979420ff4218c8c24abab4389602d9b1b9e903e13826aca43f67b012\" failed" error="failed to destroy network for sandbox \"b504686a979420ff4218c8c24abab4389602d9b1b9e903e13826aca43f67b012\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:40:17.345820 kubelet[1775]: E0117 00:40:17.324231 1775 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b504686a979420ff4218c8c24abab4389602d9b1b9e903e13826aca43f67b012\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b504686a979420ff4218c8c24abab4389602d9b1b9e903e13826aca43f67b012" Jan 17 00:40:17.345820 kubelet[1775]: E0117 00:40:17.343032 1775 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b504686a979420ff4218c8c24abab4389602d9b1b9e903e13826aca43f67b012"} Jan 17 00:40:17.345820 kubelet[1775]: E0117 00:40:17.343711 1775 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8d6ae2bf-19ec-4641-84d8-c96ea5455451\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b504686a979420ff4218c8c24abab4389602d9b1b9e903e13826aca43f67b012\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:40:17.345820 kubelet[1775]: E0117 00:40:17.343863 1775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8d6ae2bf-19ec-4641-84d8-c96ea5455451\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b504686a979420ff4218c8c24abab4389602d9b1b9e903e13826aca43f67b012\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-t94cc" podUID="8d6ae2bf-19ec-4641-84d8-c96ea5455451" Jan 17 00:40:17.831736 kubelet[1775]: E0117 00:40:17.809712 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:40:18.930736 kubelet[1775]: E0117 00:40:18.840760 1775 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:40:19.144484 kubelet[1775]: E0117 00:40:19.137970 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:40:19.799820 kubelet[1775]: I0117 00:40:19.797239 1775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzkpq\" (UniqueName: \"kubernetes.io/projected/dbef3b46-8e01-4606-ab86-1c1363b0cf16-kube-api-access-bzkpq\") pod \"nginx-deployment-7fcdb87857-pdmjc\" (UID: \"dbef3b46-8e01-4606-ab86-1c1363b0cf16\") " pod="default/nginx-deployment-7fcdb87857-pdmjc" Jan 17 00:40:20.199878 kubelet[1775]: E0117 00:40:20.142128 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:40:21.192825 kubelet[1775]: E0117 00:40:21.192545 1775 
file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:40:22.554201 systemd[1]: Created slice kubepods-besteffort-poddbef3b46_8e01_4606_ab86_1c1363b0cf16.slice - libcontainer container kubepods-besteffort-poddbef3b46_8e01_4606_ab86_1c1363b0cf16.slice. Jan 17 00:40:22.582263 kubelet[1775]: E0117 00:40:22.559980 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:40:22.585523 kubelet[1775]: E0117 00:40:22.585448 1775 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.165s" Jan 17 00:40:22.588451 containerd[1450]: time="2026-01-17T00:40:22.588219461Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-pdmjc,Uid:dbef3b46-8e01-4606-ab86-1c1363b0cf16,Namespace:default,Attempt:0,}" Jan 17 00:40:22.689933 update_engine[1443]: I20260117 00:40:22.684962 1443 update_attempter.cc:509] Updating boot flags... Jan 17 00:40:22.821707 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2440) Jan 17 00:40:22.904576 containerd[1450]: time="2026-01-17T00:40:22.904469395Z" level=error msg="Failed to destroy network for sandbox \"168720d321cda5cd994c78f904dab880ec78a321c5bcf791b35dd0856ca6a75d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:40:22.905349 containerd[1450]: time="2026-01-17T00:40:22.905265186Z" level=error msg="encountered an error cleaning up failed sandbox \"168720d321cda5cd994c78f904dab880ec78a321c5bcf791b35dd0856ca6a75d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:40:22.905349 containerd[1450]: time="2026-01-17T00:40:22.905424773Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-pdmjc,Uid:dbef3b46-8e01-4606-ab86-1c1363b0cf16,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"168720d321cda5cd994c78f904dab880ec78a321c5bcf791b35dd0856ca6a75d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:40:22.905895 kubelet[1775]: E0117 00:40:22.905785 1775 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"168720d321cda5cd994c78f904dab880ec78a321c5bcf791b35dd0856ca6a75d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:40:22.906090 kubelet[1775]: E0117 00:40:22.906019 1775 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"168720d321cda5cd994c78f904dab880ec78a321c5bcf791b35dd0856ca6a75d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-pdmjc" Jan 17 00:40:22.906192 kubelet[1775]: E0117 
00:40:22.906125 1775 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"168720d321cda5cd994c78f904dab880ec78a321c5bcf791b35dd0856ca6a75d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-pdmjc" Jan 17 00:40:22.907952 kubelet[1775]: E0117 00:40:22.906324 1775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-7fcdb87857-pdmjc_default(dbef3b46-8e01-4606-ab86-1c1363b0cf16)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-7fcdb87857-pdmjc_default(dbef3b46-8e01-4606-ab86-1c1363b0cf16)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"168720d321cda5cd994c78f904dab880ec78a321c5bcf791b35dd0856ca6a75d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-7fcdb87857-pdmjc" podUID="dbef3b46-8e01-4606-ab86-1c1363b0cf16" Jan 17 00:40:22.908294 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-168720d321cda5cd994c78f904dab880ec78a321c5bcf791b35dd0856ca6a75d-shm.mount: Deactivated successfully. Jan 17 00:40:22.957694 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2440) Jan 17 00:40:23.572897 kubelet[1775]: E0117 00:40:23.567441 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:40:23.615068 kubelet[1775]: I0117 00:40:23.612708 1775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="168720d321cda5cd994c78f904dab880ec78a321c5bcf791b35dd0856ca6a75d" Jan 17 00:40:23.615659 containerd[1450]: time="2026-01-17T00:40:23.615223323Z" level=info msg="StopPodSandbox for \"168720d321cda5cd994c78f904dab880ec78a321c5bcf791b35dd0856ca6a75d\"" Jan 17 00:40:23.623686 containerd[1450]: time="2026-01-17T00:40:23.617279506Z" level=info msg="Ensure that sandbox 168720d321cda5cd994c78f904dab880ec78a321c5bcf791b35dd0856ca6a75d in task-service has been cleanup successfully" Jan 17 00:40:23.721715 containerd[1450]: time="2026-01-17T00:40:23.721337478Z" level=error msg="StopPodSandbox for \"168720d321cda5cd994c78f904dab880ec78a321c5bcf791b35dd0856ca6a75d\" failed" error="failed to destroy network for sandbox \"168720d321cda5cd994c78f904dab880ec78a321c5bcf791b35dd0856ca6a75d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:40:23.721935 kubelet[1775]: E0117 00:40:23.721830 1775 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"168720d321cda5cd994c78f904dab880ec78a321c5bcf791b35dd0856ca6a75d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="168720d321cda5cd994c78f904dab880ec78a321c5bcf791b35dd0856ca6a75d" Jan 17 00:40:23.722001 kubelet[1775]: E0117 00:40:23.721938 1775 kuberuntime_manager.go:1586] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"168720d321cda5cd994c78f904dab880ec78a321c5bcf791b35dd0856ca6a75d"} Jan 17 00:40:23.722001 kubelet[1775]: E0117 00:40:23.721986 1775 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"dbef3b46-8e01-4606-ab86-1c1363b0cf16\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"168720d321cda5cd994c78f904dab880ec78a321c5bcf791b35dd0856ca6a75d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:40:23.722126 kubelet[1775]: E0117 00:40:23.722021 1775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"dbef3b46-8e01-4606-ab86-1c1363b0cf16\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"168720d321cda5cd994c78f904dab880ec78a321c5bcf791b35dd0856ca6a75d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-7fcdb87857-pdmjc" podUID="dbef3b46-8e01-4606-ab86-1c1363b0cf16" Jan 17 00:40:24.623228 kubelet[1775]: E0117 00:40:24.622969 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:40:25.625143 kubelet[1775]: E0117 00:40:25.624741 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:40:26.670812 kubelet[1775]: E0117 00:40:26.648729 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:40:27.679595 kubelet[1775]: E0117 00:40:27.679103 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:40:28.681466 kubelet[1775]: E0117 00:40:28.681052 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:40:29.405050 containerd[1450]: time="2026-01-17T00:40:29.404337381Z" level=info msg="StopPodSandbox for \"b504686a979420ff4218c8c24abab4389602d9b1b9e903e13826aca43f67b012\"" Jan 17 00:40:29.587821 containerd[1450]: time="2026-01-17T00:40:29.587739689Z" level=error msg="StopPodSandbox for \"b504686a979420ff4218c8c24abab4389602d9b1b9e903e13826aca43f67b012\" failed" error="failed to destroy network for sandbox \"b504686a979420ff4218c8c24abab4389602d9b1b9e903e13826aca43f67b012\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:40:29.691080 kubelet[1775]: E0117 00:40:29.689616 1775 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b504686a979420ff4218c8c24abab4389602d9b1b9e903e13826aca43f67b012\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b504686a979420ff4218c8c24abab4389602d9b1b9e903e13826aca43f67b012" Jan 17 00:40:29.691080 kubelet[1775]: E0117 00:40:29.690264 1775 kuberuntime_manager.go:1586] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"b504686a979420ff4218c8c24abab4389602d9b1b9e903e13826aca43f67b012"} Jan 17 00:40:29.691080 kubelet[1775]: E0117 00:40:29.690591 1775 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8d6ae2bf-19ec-4641-84d8-c96ea5455451\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b504686a979420ff4218c8c24abab4389602d9b1b9e903e13826aca43f67b012\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:40:29.691080 kubelet[1775]: E0117 00:40:29.690675 1775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8d6ae2bf-19ec-4641-84d8-c96ea5455451\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b504686a979420ff4218c8c24abab4389602d9b1b9e903e13826aca43f67b012\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-t94cc" podUID="8d6ae2bf-19ec-4641-84d8-c96ea5455451" Jan 17 00:40:29.738691 kubelet[1775]: E0117 00:40:29.712540 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:40:30.726991 kubelet[1775]: E0117 00:40:30.726495 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:40:31.729624 kubelet[1775]: E0117 00:40:31.729201 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:40:32.810809 kubelet[1775]: E0117 00:40:32.810216 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:40:33.856657 kubelet[1775]: E0117 00:40:33.854064 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:40:34.882283 kubelet[1775]: E0117 00:40:34.881497 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:40:35.888928 kubelet[1775]: E0117 00:40:35.887539 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:40:36.406733 containerd[1450]: time="2026-01-17T00:40:36.402591684Z" level=info msg="StopPodSandbox for \"168720d321cda5cd994c78f904dab880ec78a321c5bcf791b35dd0856ca6a75d\"" Jan 17 00:40:36.642678 containerd[1450]: time="2026-01-17T00:40:36.638839263Z" level=error msg="StopPodSandbox for \"168720d321cda5cd994c78f904dab880ec78a321c5bcf791b35dd0856ca6a75d\" failed" error="failed to destroy network for sandbox \"168720d321cda5cd994c78f904dab880ec78a321c5bcf791b35dd0856ca6a75d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:40:36.645666 kubelet[1775]: E0117 00:40:36.642990 1775 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"168720d321cda5cd994c78f904dab880ec78a321c5bcf791b35dd0856ca6a75d\": plugin type=\"calico\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="168720d321cda5cd994c78f904dab880ec78a321c5bcf791b35dd0856ca6a75d" Jan 17 00:40:36.645666 kubelet[1775]: E0117 00:40:36.643111 1775 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"168720d321cda5cd994c78f904dab880ec78a321c5bcf791b35dd0856ca6a75d"} Jan 17 00:40:36.645666 kubelet[1775]: E0117 00:40:36.643211 1775 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"dbef3b46-8e01-4606-ab86-1c1363b0cf16\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"168720d321cda5cd994c78f904dab880ec78a321c5bcf791b35dd0856ca6a75d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:40:36.645666 kubelet[1775]: E0117 00:40:36.643248 1775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"dbef3b46-8e01-4606-ab86-1c1363b0cf16\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"168720d321cda5cd994c78f904dab880ec78a321c5bcf791b35dd0856ca6a75d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-7fcdb87857-pdmjc" podUID="dbef3b46-8e01-4606-ab86-1c1363b0cf16" Jan 17 00:40:36.906337 kubelet[1775]: E0117 00:40:36.900937 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:40:37.902505 kubelet[1775]: E0117 00:40:37.901956 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:40:38.678298 kubelet[1775]: E0117 00:40:38.677728 1775 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:40:38.906944 kubelet[1775]: E0117 00:40:38.905058 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:40:39.941204 kubelet[1775]: E0117 00:40:39.940874 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:40:40.980598 kubelet[1775]: E0117 00:40:40.966993 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:40:42.009906 kubelet[1775]: E0117 00:40:42.008480 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:40:42.385553 containerd[1450]: time="2026-01-17T00:40:42.382999716Z" level=info msg="StopPodSandbox for \"b504686a979420ff4218c8c24abab4389602d9b1b9e903e13826aca43f67b012\"" Jan 17 00:40:42.828946 containerd[1450]: time="2026-01-17T00:40:42.826665652Z" level=error msg="StopPodSandbox for \"b504686a979420ff4218c8c24abab4389602d9b1b9e903e13826aca43f67b012\" failed" error="failed to destroy network for sandbox \"b504686a979420ff4218c8c24abab4389602d9b1b9e903e13826aca43f67b012\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/" Jan 17 00:40:42.830904 kubelet[1775]: E0117 00:40:42.827179 1775 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b504686a979420ff4218c8c24abab4389602d9b1b9e903e13826aca43f67b012\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b504686a979420ff4218c8c24abab4389602d9b1b9e903e13826aca43f67b012" Jan 17 00:40:42.830904 kubelet[1775]: E0117 00:40:42.827648 1775 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b504686a979420ff4218c8c24abab4389602d9b1b9e903e13826aca43f67b012"} Jan 17 00:40:42.830904 kubelet[1775]: E0117 00:40:42.827772 1775 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8d6ae2bf-19ec-4641-84d8-c96ea5455451\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b504686a979420ff4218c8c24abab4389602d9b1b9e903e13826aca43f67b012\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:40:42.830904 kubelet[1775]: E0117 00:40:42.827914 1775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8d6ae2bf-19ec-4641-84d8-c96ea5455451\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b504686a979420ff4218c8c24abab4389602d9b1b9e903e13826aca43f67b012\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-t94cc" podUID="8d6ae2bf-19ec-4641-84d8-c96ea5455451" Jan 17 00:40:43.018797 kubelet[1775]: E0117 00:40:43.017840 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:40:44.022327 kubelet[1775]: E0117 00:40:44.021579 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:40:45.037276 kubelet[1775]: E0117 00:40:45.036914 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:40:46.071626 kubelet[1775]: E0117 00:40:46.070386 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:40:47.074942 kubelet[1775]: E0117 00:40:47.074552 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:40:48.076643 kubelet[1775]: E0117 00:40:48.075689 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:40:48.398980 containerd[1450]: time="2026-01-17T00:40:48.389294380Z" level=info msg="StopPodSandbox for \"168720d321cda5cd994c78f904dab880ec78a321c5bcf791b35dd0856ca6a75d\"" Jan 17 00:40:48.634848 containerd[1450]: time="2026-01-17T00:40:48.632229537Z" level=error msg="StopPodSandbox for \"168720d321cda5cd994c78f904dab880ec78a321c5bcf791b35dd0856ca6a75d\" failed" error="failed to destroy network for sandbox 
\"168720d321cda5cd994c78f904dab880ec78a321c5bcf791b35dd0856ca6a75d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:40:48.640980 kubelet[1775]: E0117 00:40:48.635143 1775 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"168720d321cda5cd994c78f904dab880ec78a321c5bcf791b35dd0856ca6a75d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="168720d321cda5cd994c78f904dab880ec78a321c5bcf791b35dd0856ca6a75d" Jan 17 00:40:48.640980 kubelet[1775]: E0117 00:40:48.635654 1775 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"168720d321cda5cd994c78f904dab880ec78a321c5bcf791b35dd0856ca6a75d"} Jan 17 00:40:48.640980 kubelet[1775]: E0117 00:40:48.635736 1775 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"dbef3b46-8e01-4606-ab86-1c1363b0cf16\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"168720d321cda5cd994c78f904dab880ec78a321c5bcf791b35dd0856ca6a75d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:40:48.640980 kubelet[1775]: E0117 00:40:48.635768 1775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"dbef3b46-8e01-4606-ab86-1c1363b0cf16\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"168720d321cda5cd994c78f904dab880ec78a321c5bcf791b35dd0856ca6a75d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-7fcdb87857-pdmjc" podUID="dbef3b46-8e01-4606-ab86-1c1363b0cf16" Jan 17 00:40:49.076739 kubelet[1775]: E0117 00:40:49.076018 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:40:49.377234 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3022827972.mount: Deactivated successfully. 
Jan 17 00:40:49.470835 containerd[1450]: time="2026-01-17T00:40:49.470695734Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:40:49.477761 containerd[1450]: time="2026-01-17T00:40:49.473918373Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Jan 17 00:40:49.477761 containerd[1450]: time="2026-01-17T00:40:49.475855335Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:40:49.483851 containerd[1450]: time="2026-01-17T00:40:49.482250434Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:40:49.489622 containerd[1450]: time="2026-01-17T00:40:49.489440461Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 32.572498188s" Jan 17 00:40:49.489622 containerd[1450]: time="2026-01-17T00:40:49.489495266Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 17 00:40:49.532811 containerd[1450]: time="2026-01-17T00:40:49.532724338Z" level=info msg="CreateContainer within sandbox \"fe53a1bbe62975f289ac9e10dd4ca543c5f47ef9c3b32a735ed56bec3139b328\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 17 00:40:49.601310 containerd[1450]: time="2026-01-17T00:40:49.601035691Z" level=info msg="CreateContainer within sandbox \"fe53a1bbe62975f289ac9e10dd4ca543c5f47ef9c3b32a735ed56bec3139b328\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"2572bf6c9ee1d3ec7c9c754a86aee40d695b8e6d3808834d21ab494618d2ccf8\"" Jan 17 00:40:49.603607 containerd[1450]: time="2026-01-17T00:40:49.602550999Z" level=info msg="StartContainer for \"2572bf6c9ee1d3ec7c9c754a86aee40d695b8e6d3808834d21ab494618d2ccf8\"" Jan 17 00:40:49.939145 systemd[1]: Started cri-containerd-2572bf6c9ee1d3ec7c9c754a86aee40d695b8e6d3808834d21ab494618d2ccf8.scope - libcontainer container 2572bf6c9ee1d3ec7c9c754a86aee40d695b8e6d3808834d21ab494618d2ccf8. Jan 17 00:40:50.089062 kubelet[1775]: E0117 00:40:50.088715 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:40:50.140444 containerd[1450]: time="2026-01-17T00:40:50.139538560Z" level=info msg="StartContainer for \"2572bf6c9ee1d3ec7c9c754a86aee40d695b8e6d3808834d21ab494618d2ccf8\" returns successfully" Jan 17 00:40:50.801032 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 17 00:40:50.803780 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jan 17 00:40:51.068992 kubelet[1775]: E0117 00:40:51.066404 1775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:40:51.091145 kubelet[1775]: E0117 00:40:51.091045 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:40:52.081662 kubelet[1775]: E0117 00:40:52.081158 1775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:40:52.092630 kubelet[1775]: E0117 00:40:52.092037 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:40:53.096433 kubelet[1775]: E0117 00:40:53.092669 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:40:54.113799 kubelet[1775]: E0117 00:40:54.112860 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:40:54.215730 kernel: bpftool[2799]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 17 00:40:54.991343 systemd-networkd[1372]: vxlan.calico: Link UP Jan 17 00:40:54.991409 systemd-networkd[1372]: vxlan.calico: Gained carrier Jan 17 00:40:55.122660 kubelet[1775]: E0117 00:40:55.121842 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:40:56.100525 systemd-networkd[1372]: vxlan.calico: Gained IPv6LL Jan 17 00:40:56.123076 kubelet[1775]: E0117 00:40:56.122912 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:40:57.124024 kubelet[1775]: E0117 00:40:57.123742 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:40:57.389176 containerd[1450]: time="2026-01-17T00:40:57.387687137Z" level=info msg="StopPodSandbox for \"b504686a979420ff4218c8c24abab4389602d9b1b9e903e13826aca43f67b012\"" Jan 17 00:40:57.541321 kubelet[1775]: I0117 00:40:57.541117 1775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-n9ksp" podStartSLOduration=12.534858625 podStartE2EDuration="58.541072082s" podCreationTimestamp="2026-01-17 00:39:59 +0000 UTC" firstStartedPulling="2026-01-17 00:40:03.484400009 +0000 UTC m=+6.819138782" lastFinishedPulling="2026-01-17 00:40:49.490613466 +0000 UTC m=+52.825352239" observedRunningTime="2026-01-17 00:40:51.194666845 +0000 UTC m=+54.529405618" watchObservedRunningTime="2026-01-17 00:40:57.541072082 +0000 UTC m=+60.875810886" Jan 17 00:40:57.739756 containerd[1450]: 2026-01-17 00:40:57.546 [INFO][2885] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b504686a979420ff4218c8c24abab4389602d9b1b9e903e13826aca43f67b012" Jan 17 00:40:57.739756 containerd[1450]: 2026-01-17 00:40:57.546 [INFO][2885] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b504686a979420ff4218c8c24abab4389602d9b1b9e903e13826aca43f67b012" iface="eth0" netns="/var/run/netns/cni-2e948e02-9a8f-58c4-22ad-f6604b110467" Jan 17 00:40:57.739756 containerd[1450]: 2026-01-17 00:40:57.546 [INFO][2885] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="b504686a979420ff4218c8c24abab4389602d9b1b9e903e13826aca43f67b012" iface="eth0" netns="/var/run/netns/cni-2e948e02-9a8f-58c4-22ad-f6604b110467" Jan 17 00:40:57.739756 containerd[1450]: 2026-01-17 00:40:57.546 [INFO][2885] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b504686a979420ff4218c8c24abab4389602d9b1b9e903e13826aca43f67b012" iface="eth0" netns="/var/run/netns/cni-2e948e02-9a8f-58c4-22ad-f6604b110467" Jan 17 00:40:57.739756 containerd[1450]: 2026-01-17 00:40:57.546 [INFO][2885] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b504686a979420ff4218c8c24abab4389602d9b1b9e903e13826aca43f67b012" Jan 17 00:40:57.739756 containerd[1450]: 2026-01-17 00:40:57.546 [INFO][2885] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b504686a979420ff4218c8c24abab4389602d9b1b9e903e13826aca43f67b012" Jan 17 00:40:57.739756 containerd[1450]: 2026-01-17 00:40:57.676 [INFO][2894] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b504686a979420ff4218c8c24abab4389602d9b1b9e903e13826aca43f67b012" HandleID="k8s-pod-network.b504686a979420ff4218c8c24abab4389602d9b1b9e903e13826aca43f67b012" Workload="10.0.0.113-k8s-csi--node--driver--t94cc-eth0" Jan 17 00:40:57.739756 containerd[1450]: 2026-01-17 00:40:57.677 [INFO][2894] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:40:57.739756 containerd[1450]: 2026-01-17 00:40:57.677 [INFO][2894] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:40:57.739756 containerd[1450]: 2026-01-17 00:40:57.713 [WARNING][2894] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="b504686a979420ff4218c8c24abab4389602d9b1b9e903e13826aca43f67b012" HandleID="k8s-pod-network.b504686a979420ff4218c8c24abab4389602d9b1b9e903e13826aca43f67b012" Workload="10.0.0.113-k8s-csi--node--driver--t94cc-eth0" Jan 17 00:40:57.739756 containerd[1450]: 2026-01-17 00:40:57.713 [INFO][2894] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b504686a979420ff4218c8c24abab4389602d9b1b9e903e13826aca43f67b012" HandleID="k8s-pod-network.b504686a979420ff4218c8c24abab4389602d9b1b9e903e13826aca43f67b012" Workload="10.0.0.113-k8s-csi--node--driver--t94cc-eth0" Jan 17 00:40:57.739756 containerd[1450]: 2026-01-17 00:40:57.721 [INFO][2894] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:40:57.739756 containerd[1450]: 2026-01-17 00:40:57.735 [INFO][2885] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b504686a979420ff4218c8c24abab4389602d9b1b9e903e13826aca43f67b012" Jan 17 00:40:57.750238 containerd[1450]: time="2026-01-17T00:40:57.748983998Z" level=info msg="TearDown network for sandbox \"b504686a979420ff4218c8c24abab4389602d9b1b9e903e13826aca43f67b012\" successfully" Jan 17 00:40:57.752110 systemd[1]: run-netns-cni\x2d2e948e02\x2d9a8f\x2d58c4\x2d22ad\x2df6604b110467.mount: Deactivated successfully. 
Jan 17 00:40:57.761195 containerd[1450]: time="2026-01-17T00:40:57.754869775Z" level=info msg="StopPodSandbox for \"b504686a979420ff4218c8c24abab4389602d9b1b9e903e13826aca43f67b012\" returns successfully" Jan 17 00:40:57.761195 containerd[1450]: time="2026-01-17T00:40:57.757112166Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-t94cc,Uid:8d6ae2bf-19ec-4641-84d8-c96ea5455451,Namespace:calico-system,Attempt:1,}" Jan 17 00:40:58.129632 kubelet[1775]: E0117 00:40:58.128573 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:40:58.418504 systemd-networkd[1372]: calid015edb206e: Link UP Jan 17 00:40:58.424051 systemd-networkd[1372]: calid015edb206e: Gained carrier Jan 17 00:40:58.501140 containerd[1450]: 2026-01-17 00:40:58.069 [INFO][2902] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.113-k8s-csi--node--driver--t94cc-eth0 csi-node-driver- calico-system 8d6ae2bf-19ec-4641-84d8-c96ea5455451 1574 0 2026-01-17 00:39:59 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 10.0.0.113 csi-node-driver-t94cc eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calid015edb206e [] [] }} ContainerID="6f64323bc364369358c8fad6b92b3d1e8e9c4961fc7195c8cb2fcf1346b10ef9" Namespace="calico-system" Pod="csi-node-driver-t94cc" WorkloadEndpoint="10.0.0.113-k8s-csi--node--driver--t94cc-" Jan 17 00:40:58.501140 containerd[1450]: 2026-01-17 00:40:58.069 [INFO][2902] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6f64323bc364369358c8fad6b92b3d1e8e9c4961fc7195c8cb2fcf1346b10ef9" Namespace="calico-system" Pod="csi-node-driver-t94cc" WorkloadEndpoint="10.0.0.113-k8s-csi--node--driver--t94cc-eth0" Jan 17 00:40:58.501140 containerd[1450]: 2026-01-17 00:40:58.179 [INFO][2917] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6f64323bc364369358c8fad6b92b3d1e8e9c4961fc7195c8cb2fcf1346b10ef9" HandleID="k8s-pod-network.6f64323bc364369358c8fad6b92b3d1e8e9c4961fc7195c8cb2fcf1346b10ef9" Workload="10.0.0.113-k8s-csi--node--driver--t94cc-eth0" Jan 17 00:40:58.501140 containerd[1450]: 2026-01-17 00:40:58.179 [INFO][2917] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="6f64323bc364369358c8fad6b92b3d1e8e9c4961fc7195c8cb2fcf1346b10ef9" HandleID="k8s-pod-network.6f64323bc364369358c8fad6b92b3d1e8e9c4961fc7195c8cb2fcf1346b10ef9" Workload="10.0.0.113-k8s-csi--node--driver--t94cc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0005991a0), Attrs:map[string]string{"namespace":"calico-system", "node":"10.0.0.113", "pod":"csi-node-driver-t94cc", "timestamp":"2026-01-17 00:40:58.179116421 +0000 UTC"}, Hostname:"10.0.0.113", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:40:58.501140 containerd[1450]: 2026-01-17 00:40:58.179 [INFO][2917] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:40:58.501140 containerd[1450]: 2026-01-17 00:40:58.179 [INFO][2917] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 00:40:58.501140 containerd[1450]: 2026-01-17 00:40:58.179 [INFO][2917] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.113' Jan 17 00:40:58.501140 containerd[1450]: 2026-01-17 00:40:58.213 [INFO][2917] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6f64323bc364369358c8fad6b92b3d1e8e9c4961fc7195c8cb2fcf1346b10ef9" host="10.0.0.113" Jan 17 00:40:58.501140 containerd[1450]: 2026-01-17 00:40:58.250 [INFO][2917] ipam/ipam.go 394: Looking up existing affinities for host host="10.0.0.113" Jan 17 00:40:58.501140 containerd[1450]: 2026-01-17 00:40:58.295 [INFO][2917] ipam/ipam.go 511: Trying affinity for 192.168.6.192/26 host="10.0.0.113" Jan 17 00:40:58.501140 containerd[1450]: 2026-01-17 00:40:58.323 [INFO][2917] ipam/ipam.go 158: Attempting to load block cidr=192.168.6.192/26 host="10.0.0.113" Jan 17 00:40:58.501140 containerd[1450]: 2026-01-17 00:40:58.346 [INFO][2917] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.6.192/26 host="10.0.0.113" Jan 17 00:40:58.501140 containerd[1450]: 2026-01-17 00:40:58.347 [INFO][2917] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.6.192/26 handle="k8s-pod-network.6f64323bc364369358c8fad6b92b3d1e8e9c4961fc7195c8cb2fcf1346b10ef9" host="10.0.0.113" Jan 17 00:40:58.501140 containerd[1450]: 2026-01-17 00:40:58.352 [INFO][2917] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.6f64323bc364369358c8fad6b92b3d1e8e9c4961fc7195c8cb2fcf1346b10ef9 Jan 17 00:40:58.501140 containerd[1450]: 2026-01-17 00:40:58.376 [INFO][2917] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.6.192/26 handle="k8s-pod-network.6f64323bc364369358c8fad6b92b3d1e8e9c4961fc7195c8cb2fcf1346b10ef9" host="10.0.0.113" Jan 17 00:40:58.501140 containerd[1450]: 2026-01-17 00:40:58.390 [INFO][2917] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.6.193/26] block=192.168.6.192/26 handle="k8s-pod-network.6f64323bc364369358c8fad6b92b3d1e8e9c4961fc7195c8cb2fcf1346b10ef9" host="10.0.0.113" Jan 17 00:40:58.501140 containerd[1450]: 2026-01-17 00:40:58.390 [INFO][2917] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.6.193/26] handle="k8s-pod-network.6f64323bc364369358c8fad6b92b3d1e8e9c4961fc7195c8cb2fcf1346b10ef9" host="10.0.0.113" Jan 17 00:40:58.501140 containerd[1450]: 2026-01-17 00:40:58.390 [INFO][2917] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 17 00:40:58.501140 containerd[1450]: 2026-01-17 00:40:58.390 [INFO][2917] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.6.193/26] IPv6=[] ContainerID="6f64323bc364369358c8fad6b92b3d1e8e9c4961fc7195c8cb2fcf1346b10ef9" HandleID="k8s-pod-network.6f64323bc364369358c8fad6b92b3d1e8e9c4961fc7195c8cb2fcf1346b10ef9" Workload="10.0.0.113-k8s-csi--node--driver--t94cc-eth0" Jan 17 00:40:58.502720 containerd[1450]: 2026-01-17 00:40:58.402 [INFO][2902] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6f64323bc364369358c8fad6b92b3d1e8e9c4961fc7195c8cb2fcf1346b10ef9" Namespace="calico-system" Pod="csi-node-driver-t94cc" WorkloadEndpoint="10.0.0.113-k8s-csi--node--driver--t94cc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.113-k8s-csi--node--driver--t94cc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8d6ae2bf-19ec-4641-84d8-c96ea5455451", ResourceVersion:"1574", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 39, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.113", ContainerID:"", Pod:"csi-node-driver-t94cc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.6.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid015edb206e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:40:58.502720 containerd[1450]: 2026-01-17 00:40:58.403 [INFO][2902] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.6.193/32] ContainerID="6f64323bc364369358c8fad6b92b3d1e8e9c4961fc7195c8cb2fcf1346b10ef9" Namespace="calico-system" Pod="csi-node-driver-t94cc" WorkloadEndpoint="10.0.0.113-k8s-csi--node--driver--t94cc-eth0" Jan 17 00:40:58.502720 containerd[1450]: 2026-01-17 00:40:58.403 [INFO][2902] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid015edb206e ContainerID="6f64323bc364369358c8fad6b92b3d1e8e9c4961fc7195c8cb2fcf1346b10ef9" Namespace="calico-system" Pod="csi-node-driver-t94cc" WorkloadEndpoint="10.0.0.113-k8s-csi--node--driver--t94cc-eth0" Jan 17 00:40:58.502720 containerd[1450]: 2026-01-17 00:40:58.421 [INFO][2902] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6f64323bc364369358c8fad6b92b3d1e8e9c4961fc7195c8cb2fcf1346b10ef9" Namespace="calico-system" Pod="csi-node-driver-t94cc" WorkloadEndpoint="10.0.0.113-k8s-csi--node--driver--t94cc-eth0" Jan 17 00:40:58.502720 containerd[1450]: 2026-01-17 00:40:58.425 [INFO][2902] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6f64323bc364369358c8fad6b92b3d1e8e9c4961fc7195c8cb2fcf1346b10ef9" Namespace="calico-system" Pod="csi-node-driver-t94cc" 
WorkloadEndpoint="10.0.0.113-k8s-csi--node--driver--t94cc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.113-k8s-csi--node--driver--t94cc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8d6ae2bf-19ec-4641-84d8-c96ea5455451", ResourceVersion:"1574", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 39, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.113", ContainerID:"6f64323bc364369358c8fad6b92b3d1e8e9c4961fc7195c8cb2fcf1346b10ef9", Pod:"csi-node-driver-t94cc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.6.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid015edb206e", MAC:"5e:78:09:31:9a:d3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:40:58.502720 containerd[1450]: 2026-01-17 00:40:58.485 [INFO][2902] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6f64323bc364369358c8fad6b92b3d1e8e9c4961fc7195c8cb2fcf1346b10ef9" Namespace="calico-system" Pod="csi-node-driver-t94cc" WorkloadEndpoint="10.0.0.113-k8s-csi--node--driver--t94cc-eth0" Jan 17 00:40:58.586961 containerd[1450]: time="2026-01-17T00:40:58.586183725Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:40:58.586961 containerd[1450]: time="2026-01-17T00:40:58.586293285Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:40:58.586961 containerd[1450]: time="2026-01-17T00:40:58.586319495Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:40:58.586961 containerd[1450]: time="2026-01-17T00:40:58.586751816Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:40:58.715150 kubelet[1775]: E0117 00:40:58.708733 1775 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:40:58.783677 systemd[1]: Started cri-containerd-6f64323bc364369358c8fad6b92b3d1e8e9c4961fc7195c8cb2fcf1346b10ef9.scope - libcontainer container 6f64323bc364369358c8fad6b92b3d1e8e9c4961fc7195c8cb2fcf1346b10ef9. 
Jan 17 00:40:58.821837 systemd-resolved[1373]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 17 00:40:58.877202 containerd[1450]: time="2026-01-17T00:40:58.877070494Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-t94cc,Uid:8d6ae2bf-19ec-4641-84d8-c96ea5455451,Namespace:calico-system,Attempt:1,} returns sandbox id \"6f64323bc364369358c8fad6b92b3d1e8e9c4961fc7195c8cb2fcf1346b10ef9\"" Jan 17 00:40:58.882978 containerd[1450]: time="2026-01-17T00:40:58.882934918Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 17 00:40:58.889847 containerd[1450]: time="2026-01-17T00:40:58.889478444Z" level=info msg="StopPodSandbox for \"b504686a979420ff4218c8c24abab4389602d9b1b9e903e13826aca43f67b012\"" Jan 17 00:40:58.991812 containerd[1450]: time="2026-01-17T00:40:58.991237160Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:40:58.996332 containerd[1450]: time="2026-01-17T00:40:58.996235536Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 17 00:40:58.996494 containerd[1450]: time="2026-01-17T00:40:58.996424128Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 17 00:40:58.996791 kubelet[1775]: E0117 00:40:58.996687 1775 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:40:58.996860 kubelet[1775]: E0117 00:40:58.996798 1775 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:40:58.997304 kubelet[1775]: E0117 00:40:58.997191 1775 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v64sf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-t94cc_calico-system(8d6ae2bf-19ec-4641-84d8-c96ea5455451): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 17 00:40:59.004051 containerd[1450]: time="2026-01-17T00:40:59.003700644Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 17 00:40:59.121577 containerd[1450]: time="2026-01-17T00:40:59.120852997Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:40:59.126897 containerd[1450]: time="2026-01-17T00:40:59.125259873Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 17 00:40:59.126897 containerd[1450]: time="2026-01-17T00:40:59.125884102Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 17 00:40:59.133626 kubelet[1775]: E0117 00:40:59.132541 1775 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:40:59.133626 kubelet[1775]: E0117 00:40:59.133340 1775 
kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:40:59.137255 kubelet[1775]: E0117 00:40:59.135348 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:40:59.137255 kubelet[1775]: E0117 00:40:59.135465 1775 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v64sf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-t94cc_calico-system(8d6ae2bf-19ec-4641-84d8-c96ea5455451): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 17 00:40:59.137255 kubelet[1775]: E0117 00:40:59.136779 1775 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-t94cc" podUID="8d6ae2bf-19ec-4641-84d8-c96ea5455451" Jan 17 00:40:59.185135 kubelet[1775]: E0117 00:40:59.185052 1775 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-t94cc" podUID="8d6ae2bf-19ec-4641-84d8-c96ea5455451" Jan 17 00:40:59.250742 containerd[1450]: 2026-01-17 00:40:59.037 [WARNING][2987] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b504686a979420ff4218c8c24abab4389602d9b1b9e903e13826aca43f67b012" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.113-k8s-csi--node--driver--t94cc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8d6ae2bf-19ec-4641-84d8-c96ea5455451", ResourceVersion:"1577", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 39, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.113", ContainerID:"6f64323bc364369358c8fad6b92b3d1e8e9c4961fc7195c8cb2fcf1346b10ef9", Pod:"csi-node-driver-t94cc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.6.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid015edb206e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:40:59.250742 containerd[1450]: 2026-01-17 00:40:59.037 [INFO][2987] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b504686a979420ff4218c8c24abab4389602d9b1b9e903e13826aca43f67b012" Jan 17 00:40:59.250742 containerd[1450]: 2026-01-17 00:40:59.037 [INFO][2987] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="b504686a979420ff4218c8c24abab4389602d9b1b9e903e13826aca43f67b012" iface="eth0" netns="" Jan 17 00:40:59.250742 containerd[1450]: 2026-01-17 00:40:59.037 [INFO][2987] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b504686a979420ff4218c8c24abab4389602d9b1b9e903e13826aca43f67b012" Jan 17 00:40:59.250742 containerd[1450]: 2026-01-17 00:40:59.037 [INFO][2987] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b504686a979420ff4218c8c24abab4389602d9b1b9e903e13826aca43f67b012" Jan 17 00:40:59.250742 containerd[1450]: 2026-01-17 00:40:59.176 [INFO][2995] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b504686a979420ff4218c8c24abab4389602d9b1b9e903e13826aca43f67b012" HandleID="k8s-pod-network.b504686a979420ff4218c8c24abab4389602d9b1b9e903e13826aca43f67b012" Workload="10.0.0.113-k8s-csi--node--driver--t94cc-eth0" Jan 17 00:40:59.250742 containerd[1450]: 2026-01-17 00:40:59.177 [INFO][2995] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:40:59.250742 containerd[1450]: 2026-01-17 00:40:59.177 [INFO][2995] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:40:59.250742 containerd[1450]: 2026-01-17 00:40:59.211 [WARNING][2995] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="b504686a979420ff4218c8c24abab4389602d9b1b9e903e13826aca43f67b012" HandleID="k8s-pod-network.b504686a979420ff4218c8c24abab4389602d9b1b9e903e13826aca43f67b012" Workload="10.0.0.113-k8s-csi--node--driver--t94cc-eth0" Jan 17 00:40:59.250742 containerd[1450]: 2026-01-17 00:40:59.211 [INFO][2995] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b504686a979420ff4218c8c24abab4389602d9b1b9e903e13826aca43f67b012" HandleID="k8s-pod-network.b504686a979420ff4218c8c24abab4389602d9b1b9e903e13826aca43f67b012" Workload="10.0.0.113-k8s-csi--node--driver--t94cc-eth0" Jan 17 00:40:59.250742 containerd[1450]: 2026-01-17 00:40:59.237 [INFO][2995] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:40:59.250742 containerd[1450]: 2026-01-17 00:40:59.242 [INFO][2987] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b504686a979420ff4218c8c24abab4389602d9b1b9e903e13826aca43f67b012" Jan 17 00:40:59.250742 containerd[1450]: time="2026-01-17T00:40:59.249630911Z" level=info msg="TearDown network for sandbox \"b504686a979420ff4218c8c24abab4389602d9b1b9e903e13826aca43f67b012\" successfully" Jan 17 00:40:59.250742 containerd[1450]: time="2026-01-17T00:40:59.249736755Z" level=info msg="StopPodSandbox for \"b504686a979420ff4218c8c24abab4389602d9b1b9e903e13826aca43f67b012\" returns successfully" Jan 17 00:40:59.256168 containerd[1450]: time="2026-01-17T00:40:59.256088583Z" level=info msg="RemovePodSandbox for \"b504686a979420ff4218c8c24abab4389602d9b1b9e903e13826aca43f67b012\"" Jan 17 00:40:59.256457 containerd[1450]: time="2026-01-17T00:40:59.256320048Z" level=info msg="Forcibly stopping sandbox \"b504686a979420ff4218c8c24abab4389602d9b1b9e903e13826aca43f67b012\"" Jan 17 00:40:59.493318 containerd[1450]: 2026-01-17 00:40:59.372 [WARNING][3012] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b504686a979420ff4218c8c24abab4389602d9b1b9e903e13826aca43f67b012" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.113-k8s-csi--node--driver--t94cc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8d6ae2bf-19ec-4641-84d8-c96ea5455451", ResourceVersion:"1588", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 39, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.113", ContainerID:"6f64323bc364369358c8fad6b92b3d1e8e9c4961fc7195c8cb2fcf1346b10ef9", Pod:"csi-node-driver-t94cc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.6.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid015edb206e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:40:59.493318 containerd[1450]: 2026-01-17 00:40:59.373 [INFO][3012] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b504686a979420ff4218c8c24abab4389602d9b1b9e903e13826aca43f67b012" Jan 17 00:40:59.493318 containerd[1450]: 2026-01-17 00:40:59.373 [INFO][3012] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b504686a979420ff4218c8c24abab4389602d9b1b9e903e13826aca43f67b012" iface="eth0" netns="" Jan 17 00:40:59.493318 containerd[1450]: 2026-01-17 00:40:59.373 [INFO][3012] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b504686a979420ff4218c8c24abab4389602d9b1b9e903e13826aca43f67b012" Jan 17 00:40:59.493318 containerd[1450]: 2026-01-17 00:40:59.373 [INFO][3012] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b504686a979420ff4218c8c24abab4389602d9b1b9e903e13826aca43f67b012" Jan 17 00:40:59.493318 containerd[1450]: 2026-01-17 00:40:59.450 [INFO][3021] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b504686a979420ff4218c8c24abab4389602d9b1b9e903e13826aca43f67b012" HandleID="k8s-pod-network.b504686a979420ff4218c8c24abab4389602d9b1b9e903e13826aca43f67b012" Workload="10.0.0.113-k8s-csi--node--driver--t94cc-eth0" Jan 17 00:40:59.493318 containerd[1450]: 2026-01-17 00:40:59.451 [INFO][3021] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:40:59.493318 containerd[1450]: 2026-01-17 00:40:59.452 [INFO][3021] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:40:59.493318 containerd[1450]: 2026-01-17 00:40:59.477 [WARNING][3021] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b504686a979420ff4218c8c24abab4389602d9b1b9e903e13826aca43f67b012" HandleID="k8s-pod-network.b504686a979420ff4218c8c24abab4389602d9b1b9e903e13826aca43f67b012" Workload="10.0.0.113-k8s-csi--node--driver--t94cc-eth0" Jan 17 00:40:59.493318 containerd[1450]: 2026-01-17 00:40:59.477 [INFO][3021] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b504686a979420ff4218c8c24abab4389602d9b1b9e903e13826aca43f67b012" HandleID="k8s-pod-network.b504686a979420ff4218c8c24abab4389602d9b1b9e903e13826aca43f67b012" Workload="10.0.0.113-k8s-csi--node--driver--t94cc-eth0" Jan 17 00:40:59.493318 containerd[1450]: 2026-01-17 00:40:59.483 [INFO][3021] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:40:59.493318 containerd[1450]: 2026-01-17 00:40:59.487 [INFO][3012] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b504686a979420ff4218c8c24abab4389602d9b1b9e903e13826aca43f67b012" Jan 17 00:40:59.493318 containerd[1450]: time="2026-01-17T00:40:59.491607519Z" level=info msg="TearDown network for sandbox \"b504686a979420ff4218c8c24abab4389602d9b1b9e903e13826aca43f67b012\" successfully" Jan 17 00:40:59.504272 containerd[1450]: time="2026-01-17T00:40:59.503972761Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b504686a979420ff4218c8c24abab4389602d9b1b9e903e13826aca43f67b012\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:40:59.504272 containerd[1450]: time="2026-01-17T00:40:59.504156043Z" level=info msg="RemovePodSandbox \"b504686a979420ff4218c8c24abab4389602d9b1b9e903e13826aca43f67b012\" returns successfully" Jan 17 00:41:00.143340 kubelet[1775]: E0117 00:41:00.138163 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:41:00.203312 kubelet[1775]: E0117 00:41:00.203251 1775 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-t94cc" podUID="8d6ae2bf-19ec-4641-84d8-c96ea5455451" Jan 17 00:41:00.380635 containerd[1450]: time="2026-01-17T00:41:00.380132579Z" level=info msg="StopPodSandbox for \"168720d321cda5cd994c78f904dab880ec78a321c5bcf791b35dd0856ca6a75d\"" Jan 17 00:41:00.396185 systemd-networkd[1372]: calid015edb206e: Gained IPv6LL Jan 17 00:41:01.078519 containerd[1450]: 2026-01-17 00:41:00.685 [INFO][3041] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="168720d321cda5cd994c78f904dab880ec78a321c5bcf791b35dd0856ca6a75d" Jan 17 00:41:01.078519 containerd[1450]: 2026-01-17 00:41:00.685 [INFO][3041] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="168720d321cda5cd994c78f904dab880ec78a321c5bcf791b35dd0856ca6a75d" iface="eth0" netns="/var/run/netns/cni-6a5b5261-76ec-70d3-51ec-29a573fe86c2" Jan 17 00:41:01.078519 containerd[1450]: 2026-01-17 00:41:00.686 [INFO][3041] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="168720d321cda5cd994c78f904dab880ec78a321c5bcf791b35dd0856ca6a75d" iface="eth0" netns="/var/run/netns/cni-6a5b5261-76ec-70d3-51ec-29a573fe86c2" Jan 17 00:41:01.078519 containerd[1450]: 2026-01-17 00:41:00.686 [INFO][3041] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="168720d321cda5cd994c78f904dab880ec78a321c5bcf791b35dd0856ca6a75d" iface="eth0" netns="/var/run/netns/cni-6a5b5261-76ec-70d3-51ec-29a573fe86c2" Jan 17 00:41:01.078519 containerd[1450]: 2026-01-17 00:41:00.686 [INFO][3041] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="168720d321cda5cd994c78f904dab880ec78a321c5bcf791b35dd0856ca6a75d" Jan 17 00:41:01.078519 containerd[1450]: 2026-01-17 00:41:00.686 [INFO][3041] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="168720d321cda5cd994c78f904dab880ec78a321c5bcf791b35dd0856ca6a75d" Jan 17 00:41:01.078519 containerd[1450]: 2026-01-17 00:41:00.928 [INFO][3050] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="168720d321cda5cd994c78f904dab880ec78a321c5bcf791b35dd0856ca6a75d" HandleID="k8s-pod-network.168720d321cda5cd994c78f904dab880ec78a321c5bcf791b35dd0856ca6a75d" Workload="10.0.0.113-k8s-nginx--deployment--7fcdb87857--pdmjc-eth0" Jan 17 00:41:01.078519 containerd[1450]: 2026-01-17 00:41:00.929 [INFO][3050] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:41:01.078519 containerd[1450]: 2026-01-17 00:41:00.929 [INFO][3050] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:41:01.078519 containerd[1450]: 2026-01-17 00:41:00.968 [WARNING][3050] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="168720d321cda5cd994c78f904dab880ec78a321c5bcf791b35dd0856ca6a75d" HandleID="k8s-pod-network.168720d321cda5cd994c78f904dab880ec78a321c5bcf791b35dd0856ca6a75d" Workload="10.0.0.113-k8s-nginx--deployment--7fcdb87857--pdmjc-eth0" Jan 17 00:41:01.078519 containerd[1450]: 2026-01-17 00:41:00.968 [INFO][3050] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="168720d321cda5cd994c78f904dab880ec78a321c5bcf791b35dd0856ca6a75d" HandleID="k8s-pod-network.168720d321cda5cd994c78f904dab880ec78a321c5bcf791b35dd0856ca6a75d" Workload="10.0.0.113-k8s-nginx--deployment--7fcdb87857--pdmjc-eth0" Jan 17 00:41:01.078519 containerd[1450]: 2026-01-17 00:41:00.991 [INFO][3050] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:41:01.078519 containerd[1450]: 2026-01-17 00:41:01.007 [INFO][3041] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="168720d321cda5cd994c78f904dab880ec78a321c5bcf791b35dd0856ca6a75d" Jan 17 00:41:01.113660 containerd[1450]: time="2026-01-17T00:41:01.100595229Z" level=info msg="TearDown network for sandbox \"168720d321cda5cd994c78f904dab880ec78a321c5bcf791b35dd0856ca6a75d\" successfully" Jan 17 00:41:01.113660 containerd[1450]: time="2026-01-17T00:41:01.101228274Z" level=info msg="StopPodSandbox for \"168720d321cda5cd994c78f904dab880ec78a321c5bcf791b35dd0856ca6a75d\" returns successfully" Jan 17 00:41:01.113660 containerd[1450]: time="2026-01-17T00:41:01.110979330Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-pdmjc,Uid:dbef3b46-8e01-4606-ab86-1c1363b0cf16,Namespace:default,Attempt:1,}" Jan 17 00:41:01.124203 systemd[1]: run-netns-cni\x2d6a5b5261\x2d76ec\x2d70d3\x2d51ec\x2d29a573fe86c2.mount: Deactivated successfully. Jan 17 00:41:01.146528 kubelet[1775]: E0117 00:41:01.146476 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:41:02.010589 systemd-networkd[1372]: calif946bac0474: Link UP Jan 17 00:41:02.010857 systemd-networkd[1372]: calif946bac0474: Gained carrier Jan 17 00:41:02.066738 containerd[1450]: 2026-01-17 00:41:01.580 [INFO][3065] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.113-k8s-nginx--deployment--7fcdb87857--pdmjc-eth0 nginx-deployment-7fcdb87857- default dbef3b46-8e01-4606-ab86-1c1363b0cf16 1599 0 2026-01-17 00:40:19 +0000 UTC map[app:nginx pod-template-hash:7fcdb87857 projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.0.0.113 nginx-deployment-7fcdb87857-pdmjc eth0 default [] [] [kns.default ksa.default.default] calif946bac0474 [] [] }} ContainerID="cf55861151bd9fb4d24640c84183012b557e8470f4a01a89dd3a2bf61e127e56" Namespace="default" Pod="nginx-deployment-7fcdb87857-pdmjc" WorkloadEndpoint="10.0.0.113-k8s-nginx--deployment--7fcdb87857--pdmjc-" Jan 17 00:41:02.066738 containerd[1450]: 2026-01-17 00:41:01.580 [INFO][3065] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cf55861151bd9fb4d24640c84183012b557e8470f4a01a89dd3a2bf61e127e56" Namespace="default" Pod="nginx-deployment-7fcdb87857-pdmjc" WorkloadEndpoint="10.0.0.113-k8s-nginx--deployment--7fcdb87857--pdmjc-eth0" Jan 17 00:41:02.066738 containerd[1450]: 2026-01-17 00:41:01.764 [INFO][3079] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cf55861151bd9fb4d24640c84183012b557e8470f4a01a89dd3a2bf61e127e56" HandleID="k8s-pod-network.cf55861151bd9fb4d24640c84183012b557e8470f4a01a89dd3a2bf61e127e56" Workload="10.0.0.113-k8s-nginx--deployment--7fcdb87857--pdmjc-eth0" Jan 17 00:41:02.066738 containerd[1450]: 2026-01-17 00:41:01.765 [INFO][3079] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="cf55861151bd9fb4d24640c84183012b557e8470f4a01a89dd3a2bf61e127e56" HandleID="k8s-pod-network.cf55861151bd9fb4d24640c84183012b557e8470f4a01a89dd3a2bf61e127e56" Workload="10.0.0.113-k8s-nginx--deployment--7fcdb87857--pdmjc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d50e0), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.113", "pod":"nginx-deployment-7fcdb87857-pdmjc", "timestamp":"2026-01-17 00:41:01.764230713 +0000 UTC"}, Hostname:"10.0.0.113", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:41:02.066738 containerd[1450]: 2026-01-17 00:41:01.767 [INFO][3079] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:41:02.066738 containerd[1450]: 2026-01-17 00:41:01.769 [INFO][3079] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:41:02.066738 containerd[1450]: 2026-01-17 00:41:01.769 [INFO][3079] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.113' Jan 17 00:41:02.066738 containerd[1450]: 2026-01-17 00:41:01.814 [INFO][3079] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cf55861151bd9fb4d24640c84183012b557e8470f4a01a89dd3a2bf61e127e56" host="10.0.0.113" Jan 17 00:41:02.066738 containerd[1450]: 2026-01-17 00:41:01.868 [INFO][3079] ipam/ipam.go 394: Looking up existing affinities for host host="10.0.0.113" Jan 17 00:41:02.066738 containerd[1450]: 2026-01-17 00:41:01.906 [INFO][3079] ipam/ipam.go 511: Trying affinity for 192.168.6.192/26 host="10.0.0.113" Jan 17 00:41:02.066738 containerd[1450]: 2026-01-17 00:41:01.920 [INFO][3079] ipam/ipam.go 158: Attempting to load block cidr=192.168.6.192/26 host="10.0.0.113" Jan 17 00:41:02.066738 containerd[1450]: 2026-01-17 00:41:01.933 [INFO][3079] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.6.192/26 host="10.0.0.113" Jan 17 00:41:02.066738 containerd[1450]: 2026-01-17 00:41:01.934 [INFO][3079] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.6.192/26 handle="k8s-pod-network.cf55861151bd9fb4d24640c84183012b557e8470f4a01a89dd3a2bf61e127e56" host="10.0.0.113" Jan 17 00:41:02.066738 containerd[1450]: 2026-01-17 00:41:01.943 [INFO][3079] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.cf55861151bd9fb4d24640c84183012b557e8470f4a01a89dd3a2bf61e127e56 Jan 17 00:41:02.066738 containerd[1450]: 2026-01-17 00:41:01.959 [INFO][3079] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.6.192/26 handle="k8s-pod-network.cf55861151bd9fb4d24640c84183012b557e8470f4a01a89dd3a2bf61e127e56" host="10.0.0.113" Jan 17 00:41:02.066738 containerd[1450]: 2026-01-17 00:41:01.993 [INFO][3079] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.6.194/26] block=192.168.6.192/26 handle="k8s-pod-network.cf55861151bd9fb4d24640c84183012b557e8470f4a01a89dd3a2bf61e127e56" host="10.0.0.113" Jan 17 00:41:02.066738 containerd[1450]: 2026-01-17 00:41:01.993 [INFO][3079] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.6.194/26] handle="k8s-pod-network.cf55861151bd9fb4d24640c84183012b557e8470f4a01a89dd3a2bf61e127e56" host="10.0.0.113" Jan 17 00:41:02.066738 containerd[1450]: 2026-01-17 00:41:01.993 [INFO][3079] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
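[editor's note] The IPAM lines above show Calico's standard flow on this node: take the host-wide lock, confirm the affinity for block 192.168.6.192/26, then hand out the next free address (192.168.6.194 here, after .193 went to the csi-node-driver pod). The /26 block arithmetic is plain prefix math; a small standard-library Go sketch to sanity-check which addresses fall inside the affine block (192.168.7.1 below is just an out-of-block example, not from this log):

    // block_check.go — a minimal sketch of the /26 block math behind the IPAM
    // log lines; standard library only.
    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        block := netip.MustParsePrefix("192.168.6.192/26") // the node's affine block
        for _, s := range []string{"192.168.6.193", "192.168.6.194", "192.168.7.1"} {
            ip := netip.MustParseAddr(s)
            fmt.Printf("%s in %s: %v\n", ip, block, block.Contains(ip))
        }
        // A /26 covers 2^(32-26) = 64 addresses, so one per-node block is plenty
        // for the handful of pods scheduled on 10.0.0.113 in this log.
        fmt.Println("addresses per block:", 1<<(32-block.Bits()))
    }
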
Jan 17 00:41:02.066738 containerd[1450]: 2026-01-17 00:41:01.993 [INFO][3079] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.6.194/26] IPv6=[] ContainerID="cf55861151bd9fb4d24640c84183012b557e8470f4a01a89dd3a2bf61e127e56" HandleID="k8s-pod-network.cf55861151bd9fb4d24640c84183012b557e8470f4a01a89dd3a2bf61e127e56" Workload="10.0.0.113-k8s-nginx--deployment--7fcdb87857--pdmjc-eth0" Jan 17 00:41:02.067861 containerd[1450]: 2026-01-17 00:41:01.998 [INFO][3065] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cf55861151bd9fb4d24640c84183012b557e8470f4a01a89dd3a2bf61e127e56" Namespace="default" Pod="nginx-deployment-7fcdb87857-pdmjc" WorkloadEndpoint="10.0.0.113-k8s-nginx--deployment--7fcdb87857--pdmjc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.113-k8s-nginx--deployment--7fcdb87857--pdmjc-eth0", GenerateName:"nginx-deployment-7fcdb87857-", Namespace:"default", SelfLink:"", UID:"dbef3b46-8e01-4606-ab86-1c1363b0cf16", ResourceVersion:"1599", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 40, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"7fcdb87857", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.113", ContainerID:"", Pod:"nginx-deployment-7fcdb87857-pdmjc", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.6.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"calif946bac0474", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:41:02.067861 containerd[1450]: 2026-01-17 00:41:01.998 [INFO][3065] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.6.194/32] ContainerID="cf55861151bd9fb4d24640c84183012b557e8470f4a01a89dd3a2bf61e127e56" Namespace="default" Pod="nginx-deployment-7fcdb87857-pdmjc" WorkloadEndpoint="10.0.0.113-k8s-nginx--deployment--7fcdb87857--pdmjc-eth0" Jan 17 00:41:02.067861 containerd[1450]: 2026-01-17 00:41:01.998 [INFO][3065] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif946bac0474 ContainerID="cf55861151bd9fb4d24640c84183012b557e8470f4a01a89dd3a2bf61e127e56" Namespace="default" Pod="nginx-deployment-7fcdb87857-pdmjc" WorkloadEndpoint="10.0.0.113-k8s-nginx--deployment--7fcdb87857--pdmjc-eth0" Jan 17 00:41:02.067861 containerd[1450]: 2026-01-17 00:41:02.008 [INFO][3065] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cf55861151bd9fb4d24640c84183012b557e8470f4a01a89dd3a2bf61e127e56" Namespace="default" Pod="nginx-deployment-7fcdb87857-pdmjc" WorkloadEndpoint="10.0.0.113-k8s-nginx--deployment--7fcdb87857--pdmjc-eth0" Jan 17 00:41:02.067861 containerd[1450]: 2026-01-17 00:41:02.014 [INFO][3065] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="cf55861151bd9fb4d24640c84183012b557e8470f4a01a89dd3a2bf61e127e56" Namespace="default" Pod="nginx-deployment-7fcdb87857-pdmjc" 
WorkloadEndpoint="10.0.0.113-k8s-nginx--deployment--7fcdb87857--pdmjc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.113-k8s-nginx--deployment--7fcdb87857--pdmjc-eth0", GenerateName:"nginx-deployment-7fcdb87857-", Namespace:"default", SelfLink:"", UID:"dbef3b46-8e01-4606-ab86-1c1363b0cf16", ResourceVersion:"1599", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 40, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"7fcdb87857", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.113", ContainerID:"cf55861151bd9fb4d24640c84183012b557e8470f4a01a89dd3a2bf61e127e56", Pod:"nginx-deployment-7fcdb87857-pdmjc", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.6.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"calif946bac0474", MAC:"6e:fa:5b:ee:f8:3c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:41:02.067861 containerd[1450]: 2026-01-17 00:41:02.045 [INFO][3065] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cf55861151bd9fb4d24640c84183012b557e8470f4a01a89dd3a2bf61e127e56" Namespace="default" Pod="nginx-deployment-7fcdb87857-pdmjc" WorkloadEndpoint="10.0.0.113-k8s-nginx--deployment--7fcdb87857--pdmjc-eth0" Jan 17 00:41:02.148041 kubelet[1775]: E0117 00:41:02.147880 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:41:02.179974 containerd[1450]: time="2026-01-17T00:41:02.177516173Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:41:02.179974 containerd[1450]: time="2026-01-17T00:41:02.179883557Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:41:02.179974 containerd[1450]: time="2026-01-17T00:41:02.179901241Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:41:02.181839 containerd[1450]: time="2026-01-17T00:41:02.180022303Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:41:02.290737 systemd[1]: Started cri-containerd-cf55861151bd9fb4d24640c84183012b557e8470f4a01a89dd3a2bf61e127e56.scope - libcontainer container cf55861151bd9fb4d24640c84183012b557e8470f4a01a89dd3a2bf61e127e56. 
Jan 17 00:41:02.420968 systemd-resolved[1373]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 17 00:41:02.732230 containerd[1450]: time="2026-01-17T00:41:02.731669702Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-pdmjc,Uid:dbef3b46-8e01-4606-ab86-1c1363b0cf16,Namespace:default,Attempt:1,} returns sandbox id \"cf55861151bd9fb4d24640c84183012b557e8470f4a01a89dd3a2bf61e127e56\"" Jan 17 00:41:02.745791 containerd[1450]: time="2026-01-17T00:41:02.745208389Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 17 00:41:03.159327 kubelet[1775]: E0117 00:41:03.154992 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:41:03.720787 systemd-networkd[1372]: calif946bac0474: Gained IPv6LL Jan 17 00:41:04.159453 kubelet[1775]: E0117 00:41:04.159177 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:41:05.164552 kubelet[1775]: E0117 00:41:05.164269 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:41:06.172184 kubelet[1775]: E0117 00:41:06.170446 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:41:07.176609 kubelet[1775]: E0117 00:41:07.174301 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:41:07.762174 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3531406568.mount: Deactivated successfully. Jan 17 00:41:08.178053 kubelet[1775]: E0117 00:41:08.177742 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:41:09.181519 kubelet[1775]: E0117 00:41:09.179091 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:41:10.185766 kubelet[1775]: E0117 00:41:10.185654 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:41:10.854304 containerd[1450]: time="2026-01-17T00:41:10.854227678Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:41:10.856918 containerd[1450]: time="2026-01-17T00:41:10.856796566Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=63840319" Jan 17 00:41:10.864198 containerd[1450]: time="2026-01-17T00:41:10.861334122Z" level=info msg="ImageCreate event name:\"sha256:66e7a6d326b670d959427bd421ca82232b0b81b18d85abaecb4ab9823d35056e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:41:10.869192 containerd[1450]: time="2026-01-17T00:41:10.868453643Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:892d1d54ab079b8cffa2317ccb45829886a0c3c3edbdf92bb286904b09797767\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:41:10.870493 containerd[1450]: time="2026-01-17T00:41:10.870427006Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:66e7a6d326b670d959427bd421ca82232b0b81b18d85abaecb4ab9823d35056e\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest 
\"ghcr.io/flatcar/nginx@sha256:892d1d54ab079b8cffa2317ccb45829886a0c3c3edbdf92bb286904b09797767\", size \"63840197\" in 8.125172299s" Jan 17 00:41:10.870493 containerd[1450]: time="2026-01-17T00:41:10.870481923Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:66e7a6d326b670d959427bd421ca82232b0b81b18d85abaecb4ab9823d35056e\"" Jan 17 00:41:10.874054 containerd[1450]: time="2026-01-17T00:41:10.873992044Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 17 00:41:10.892346 containerd[1450]: time="2026-01-17T00:41:10.892257099Z" level=info msg="CreateContainer within sandbox \"cf55861151bd9fb4d24640c84183012b557e8470f4a01a89dd3a2bf61e127e56\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Jan 17 00:41:10.958642 containerd[1450]: time="2026-01-17T00:41:10.955833975Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:41:10.958642 containerd[1450]: time="2026-01-17T00:41:10.955944093Z" level=info msg="CreateContainer within sandbox \"cf55861151bd9fb4d24640c84183012b557e8470f4a01a89dd3a2bf61e127e56\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"1b2bb3cd29d7af3349b0e7d15027500146c6c85b203cae0d2cf7f9d2b73ffd3b\"" Jan 17 00:41:10.958642 containerd[1450]: time="2026-01-17T00:41:10.956685345Z" level=info msg="StartContainer for \"1b2bb3cd29d7af3349b0e7d15027500146c6c85b203cae0d2cf7f9d2b73ffd3b\"" Jan 17 00:41:10.959694 containerd[1450]: time="2026-01-17T00:41:10.959622608Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 17 00:41:10.959829 containerd[1450]: time="2026-01-17T00:41:10.959705567Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 17 00:41:10.959946 kubelet[1775]: E0117 00:41:10.959846 1775 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:41:10.959946 kubelet[1775]: E0117 00:41:10.959900 1775 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:41:10.960952 kubelet[1775]: E0117 00:41:10.960051 1775 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v64sf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-t94cc_calico-system(8d6ae2bf-19ec-4641-84d8-c96ea5455451): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 17 00:41:10.966938 containerd[1450]: time="2026-01-17T00:41:10.966873914Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 17 00:41:11.042102 containerd[1450]: time="2026-01-17T00:41:11.042036025Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:41:11.063247 containerd[1450]: time="2026-01-17T00:41:11.062882826Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 17 00:41:11.063247 containerd[1450]: time="2026-01-17T00:41:11.063027152Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 17 00:41:11.064331 kubelet[1775]: E0117 00:41:11.063934 1775 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:41:11.064331 kubelet[1775]: E0117 00:41:11.064026 1775 
kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:41:11.064331 kubelet[1775]: E0117 00:41:11.064224 1775 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v64sf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-t94cc_calico-system(8d6ae2bf-19ec-4641-84d8-c96ea5455451): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 17 00:41:11.073004 kubelet[1775]: E0117 00:41:11.072930 1775 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: 
not found\"]" pod="calico-system/csi-node-driver-t94cc" podUID="8d6ae2bf-19ec-4641-84d8-c96ea5455451" Jan 17 00:41:11.206478 kubelet[1775]: E0117 00:41:11.201921 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:41:11.315515 systemd[1]: Started cri-containerd-1b2bb3cd29d7af3349b0e7d15027500146c6c85b203cae0d2cf7f9d2b73ffd3b.scope - libcontainer container 1b2bb3cd29d7af3349b0e7d15027500146c6c85b203cae0d2cf7f9d2b73ffd3b. Jan 17 00:41:11.546235 containerd[1450]: time="2026-01-17T00:41:11.544977036Z" level=info msg="StartContainer for \"1b2bb3cd29d7af3349b0e7d15027500146c6c85b203cae0d2cf7f9d2b73ffd3b\" returns successfully" Jan 17 00:41:12.295283 kubelet[1775]: E0117 00:41:12.232090 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:41:13.547468 kubelet[1775]: E0117 00:41:13.543613 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:41:14.222833 kubelet[1775]: I0117 00:41:14.222274 1775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-7fcdb87857-pdmjc" podStartSLOduration=47.090504955 podStartE2EDuration="55.222252159s" podCreationTimestamp="2026-01-17 00:40:19 +0000 UTC" firstStartedPulling="2026-01-17 00:41:02.740975687 +0000 UTC m=+66.075714460" lastFinishedPulling="2026-01-17 00:41:10.872722892 +0000 UTC m=+74.207461664" observedRunningTime="2026-01-17 00:41:14.221847992 +0000 UTC m=+77.556586775" watchObservedRunningTime="2026-01-17 00:41:14.222252159 +0000 UTC m=+77.556990932" Jan 17 00:41:14.554633 kubelet[1775]: E0117 00:41:14.552076 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:41:15.841656 kubelet[1775]: E0117 00:41:15.839476 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:41:16.913844 kubelet[1775]: E0117 00:41:16.913203 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:41:17.948292 kubelet[1775]: E0117 00:41:17.947257 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:41:18.393449 kubelet[1775]: E0117 00:41:18.389274 1775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:41:18.681322 kubelet[1775]: E0117 00:41:18.680773 1775 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:41:18.949704 kubelet[1775]: E0117 00:41:18.949412 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:41:19.950641 kubelet[1775]: E0117 00:41:19.950321 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:41:20.650289 kubelet[1775]: I0117 00:41:20.650113 1775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/5545c17a-a4e2-48e5-b3d6-f42f50e6ec66-data\") pod \"nfs-server-provisioner-0\" (UID: \"5545c17a-a4e2-48e5-b3d6-f42f50e6ec66\") " 
pod="default/nfs-server-provisioner-0" Jan 17 00:41:20.650289 kubelet[1775]: I0117 00:41:20.650166 1775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qg8g2\" (UniqueName: \"kubernetes.io/projected/5545c17a-a4e2-48e5-b3d6-f42f50e6ec66-kube-api-access-qg8g2\") pod \"nfs-server-provisioner-0\" (UID: \"5545c17a-a4e2-48e5-b3d6-f42f50e6ec66\") " pod="default/nfs-server-provisioner-0" Jan 17 00:41:20.664975 systemd[1]: Created slice kubepods-besteffort-pod5545c17a_a4e2_48e5_b3d6_f42f50e6ec66.slice - libcontainer container kubepods-besteffort-pod5545c17a_a4e2_48e5_b3d6_f42f50e6ec66.slice. Jan 17 00:41:20.953198 kubelet[1775]: E0117 00:41:20.952952 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:41:20.983142 containerd[1450]: time="2026-01-17T00:41:20.980138002Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:5545c17a-a4e2-48e5-b3d6-f42f50e6ec66,Namespace:default,Attempt:0,}" Jan 17 00:41:21.698721 systemd-networkd[1372]: cali60e51b789ff: Link UP Jan 17 00:41:21.698980 systemd-networkd[1372]: cali60e51b789ff: Gained carrier Jan 17 00:41:21.730141 containerd[1450]: 2026-01-17 00:41:21.221 [INFO][3252] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.113-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default 5545c17a-a4e2-48e5-b3d6-f42f50e6ec66 1715 0 2026-01-17 00:41:20 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 10.0.0.113 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] [] }} ContainerID="d28704c49e3dbcf68d95598f5dd9d91c8fe5a20fe9b8fd50daa1df33c2caaf67" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.113-k8s-nfs--server--provisioner--0-" Jan 17 00:41:21.730141 containerd[1450]: 2026-01-17 00:41:21.222 [INFO][3252] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d28704c49e3dbcf68d95598f5dd9d91c8fe5a20fe9b8fd50daa1df33c2caaf67" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.113-k8s-nfs--server--provisioner--0-eth0" Jan 17 00:41:21.730141 containerd[1450]: 2026-01-17 00:41:21.388 [INFO][3267] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d28704c49e3dbcf68d95598f5dd9d91c8fe5a20fe9b8fd50daa1df33c2caaf67" HandleID="k8s-pod-network.d28704c49e3dbcf68d95598f5dd9d91c8fe5a20fe9b8fd50daa1df33c2caaf67" Workload="10.0.0.113-k8s-nfs--server--provisioner--0-eth0" Jan 17 00:41:21.730141 containerd[1450]: 2026-01-17 00:41:21.390 [INFO][3267] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d28704c49e3dbcf68d95598f5dd9d91c8fe5a20fe9b8fd50daa1df33c2caaf67" 
HandleID="k8s-pod-network.d28704c49e3dbcf68d95598f5dd9d91c8fe5a20fe9b8fd50daa1df33c2caaf67" Workload="10.0.0.113-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f2c0), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.113", "pod":"nfs-server-provisioner-0", "timestamp":"2026-01-17 00:41:21.388684291 +0000 UTC"}, Hostname:"10.0.0.113", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:41:21.730141 containerd[1450]: 2026-01-17 00:41:21.390 [INFO][3267] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:41:21.730141 containerd[1450]: 2026-01-17 00:41:21.390 [INFO][3267] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:41:21.730141 containerd[1450]: 2026-01-17 00:41:21.390 [INFO][3267] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.113' Jan 17 00:41:21.730141 containerd[1450]: 2026-01-17 00:41:21.432 [INFO][3267] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d28704c49e3dbcf68d95598f5dd9d91c8fe5a20fe9b8fd50daa1df33c2caaf67" host="10.0.0.113" Jan 17 00:41:21.730141 containerd[1450]: 2026-01-17 00:41:21.486 [INFO][3267] ipam/ipam.go 394: Looking up existing affinities for host host="10.0.0.113" Jan 17 00:41:21.730141 containerd[1450]: 2026-01-17 00:41:21.522 [INFO][3267] ipam/ipam.go 511: Trying affinity for 192.168.6.192/26 host="10.0.0.113" Jan 17 00:41:21.730141 containerd[1450]: 2026-01-17 00:41:21.539 [INFO][3267] ipam/ipam.go 158: Attempting to load block cidr=192.168.6.192/26 host="10.0.0.113" Jan 17 00:41:21.730141 containerd[1450]: 2026-01-17 00:41:21.582 [INFO][3267] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.6.192/26 host="10.0.0.113" Jan 17 00:41:21.730141 containerd[1450]: 2026-01-17 00:41:21.582 [INFO][3267] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.6.192/26 handle="k8s-pod-network.d28704c49e3dbcf68d95598f5dd9d91c8fe5a20fe9b8fd50daa1df33c2caaf67" host="10.0.0.113" Jan 17 00:41:21.730141 containerd[1450]: 2026-01-17 00:41:21.604 [INFO][3267] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d28704c49e3dbcf68d95598f5dd9d91c8fe5a20fe9b8fd50daa1df33c2caaf67 Jan 17 00:41:21.730141 containerd[1450]: 2026-01-17 00:41:21.622 [INFO][3267] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.6.192/26 handle="k8s-pod-network.d28704c49e3dbcf68d95598f5dd9d91c8fe5a20fe9b8fd50daa1df33c2caaf67" host="10.0.0.113" Jan 17 00:41:21.730141 containerd[1450]: 2026-01-17 00:41:21.668 [INFO][3267] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.6.195/26] block=192.168.6.192/26 handle="k8s-pod-network.d28704c49e3dbcf68d95598f5dd9d91c8fe5a20fe9b8fd50daa1df33c2caaf67" host="10.0.0.113" Jan 17 00:41:21.730141 containerd[1450]: 2026-01-17 00:41:21.669 [INFO][3267] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.6.195/26] handle="k8s-pod-network.d28704c49e3dbcf68d95598f5dd9d91c8fe5a20fe9b8fd50daa1df33c2caaf67" host="10.0.0.113" Jan 17 00:41:21.730141 containerd[1450]: 2026-01-17 00:41:21.679 [INFO][3267] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 17 00:41:21.730141 containerd[1450]: 2026-01-17 00:41:21.679 [INFO][3267] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.6.195/26] IPv6=[] ContainerID="d28704c49e3dbcf68d95598f5dd9d91c8fe5a20fe9b8fd50daa1df33c2caaf67" HandleID="k8s-pod-network.d28704c49e3dbcf68d95598f5dd9d91c8fe5a20fe9b8fd50daa1df33c2caaf67" Workload="10.0.0.113-k8s-nfs--server--provisioner--0-eth0" Jan 17 00:41:21.731084 containerd[1450]: 2026-01-17 00:41:21.687 [INFO][3252] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d28704c49e3dbcf68d95598f5dd9d91c8fe5a20fe9b8fd50daa1df33c2caaf67" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.113-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.113-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"5545c17a-a4e2-48e5-b3d6-f42f50e6ec66", ResourceVersion:"1715", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 41, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.113", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.6.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:41:21.731084 containerd[1450]: 2026-01-17 00:41:21.687 [INFO][3252] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.6.195/32] ContainerID="d28704c49e3dbcf68d95598f5dd9d91c8fe5a20fe9b8fd50daa1df33c2caaf67" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.113-k8s-nfs--server--provisioner--0-eth0" Jan 17 00:41:21.731084 containerd[1450]: 2026-01-17 00:41:21.687 [INFO][3252] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="d28704c49e3dbcf68d95598f5dd9d91c8fe5a20fe9b8fd50daa1df33c2caaf67" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.113-k8s-nfs--server--provisioner--0-eth0" Jan 17 00:41:21.731084 containerd[1450]: 2026-01-17 00:41:21.699 [INFO][3252] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d28704c49e3dbcf68d95598f5dd9d91c8fe5a20fe9b8fd50daa1df33c2caaf67" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.113-k8s-nfs--server--provisioner--0-eth0" Jan 17 00:41:21.731285 containerd[1450]: 2026-01-17 00:41:21.699 [INFO][3252] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d28704c49e3dbcf68d95598f5dd9d91c8fe5a20fe9b8fd50daa1df33c2caaf67" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.113-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.113-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"5545c17a-a4e2-48e5-b3d6-f42f50e6ec66", ResourceVersion:"1715", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 41, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.113", ContainerID:"d28704c49e3dbcf68d95598f5dd9d91c8fe5a20fe9b8fd50daa1df33c2caaf67", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.6.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"26:91:8e:82:37:04", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:41:21.731285 containerd[1450]: 2026-01-17 00:41:21.722 [INFO][3252] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d28704c49e3dbcf68d95598f5dd9d91c8fe5a20fe9b8fd50daa1df33c2caaf67" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.113-k8s-nfs--server--provisioner--0-eth0" Jan 17 00:41:21.830649 containerd[1450]: time="2026-01-17T00:41:21.830340996Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:41:21.830649 containerd[1450]: time="2026-01-17T00:41:21.830506221Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:41:21.830649 containerd[1450]: time="2026-01-17T00:41:21.830530657Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:41:21.831781 containerd[1450]: time="2026-01-17T00:41:21.831094013Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:41:21.895028 systemd[1]: Started cri-containerd-d28704c49e3dbcf68d95598f5dd9d91c8fe5a20fe9b8fd50daa1df33c2caaf67.scope - libcontainer container d28704c49e3dbcf68d95598f5dd9d91c8fe5a20fe9b8fd50daa1df33c2caaf67. 
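The "Started cri-containerd-d28704….scope" entry is the pod sandbox (pause) container for nfs-server-provisioner-0 being launched by the containerd runc shim as a transient systemd scope, placed under the kubepods-besteffort slice created a few entries earlier. A quick way to correlate these scope names with pods from the node itself, assuming crictl is installed there:

    crictl pods --name nfs-server-provisioner-0   # shows the sandbox ID matching the cri-containerd-…scope unit
    crictl ps -p <sandbox-id>                     # lists the containers running inside that sandbox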
Jan 17 00:41:21.917044 systemd-resolved[1373]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 17 00:41:21.953290 kubelet[1775]: E0117 00:41:21.953121 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:41:21.980433 containerd[1450]: time="2026-01-17T00:41:21.980255676Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:5545c17a-a4e2-48e5-b3d6-f42f50e6ec66,Namespace:default,Attempt:0,} returns sandbox id \"d28704c49e3dbcf68d95598f5dd9d91c8fe5a20fe9b8fd50daa1df33c2caaf67\"" Jan 17 00:41:21.984923 containerd[1450]: time="2026-01-17T00:41:21.984856460Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Jan 17 00:41:22.433116 kubelet[1775]: E0117 00:41:22.431698 1775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:41:22.955450 kubelet[1775]: E0117 00:41:22.954919 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:41:23.243107 systemd-networkd[1372]: cali60e51b789ff: Gained IPv6LL Jan 17 00:41:24.001671 kubelet[1775]: E0117 00:41:24.001216 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:41:25.005910 kubelet[1775]: E0117 00:41:25.005285 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:41:25.384283 kubelet[1775]: E0117 00:41:25.383563 1775 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-t94cc" podUID="8d6ae2bf-19ec-4641-84d8-c96ea5455451" Jan 17 00:41:26.010399 kubelet[1775]: E0117 00:41:26.009854 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:41:27.011269 kubelet[1775]: E0117 00:41:27.010848 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:41:28.057886 kubelet[1775]: E0117 00:41:28.031311 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:41:28.552202 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3783647316.mount: Deactivated successfully. 
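Two kubelet warnings repeat throughout this window. "Unable to read config path" for /etc/kubernetes/manifests only means that the static-pod directory named by staticPodPath (or --pod-manifest-path) does not exist on this node, and kubelet re-logs it on every sync loop; "Nameserver limits exceeded" means the resolv.conf kubelet was given lists more than three nameservers, so only the first three (1.1.1.1 1.0.0.1 8.8.8.8) are applied to pod DNS. Assuming the stock paths shown in these logs, both can be quieted on the node itself:

    mkdir -p /etc/kubernetes/manifests   # give staticPodPath an existing (even empty) directory
    # then trim the file referenced by the kubelet's resolvConf setting to at most three nameserver lines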
Jan 17 00:41:29.064305 kubelet[1775]: E0117 00:41:29.063234 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:41:30.074811 kubelet[1775]: E0117 00:41:30.074321 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:41:31.077793 kubelet[1775]: E0117 00:41:31.076433 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:41:32.081658 kubelet[1775]: E0117 00:41:32.079966 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:41:33.083058 kubelet[1775]: E0117 00:41:33.082800 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:41:34.086161 kubelet[1775]: E0117 00:41:34.084800 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:41:35.101874 kubelet[1775]: E0117 00:41:35.101256 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:41:35.446124 containerd[1450]: time="2026-01-17T00:41:35.437223083Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:41:35.453768 containerd[1450]: time="2026-01-17T00:41:35.453651480Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039406" Jan 17 00:41:35.460517 containerd[1450]: time="2026-01-17T00:41:35.459815559Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:41:35.476610 containerd[1450]: time="2026-01-17T00:41:35.476127564Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:41:35.478538 containerd[1450]: time="2026-01-17T00:41:35.477180379Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 13.492262294s" Jan 17 00:41:35.480745 containerd[1450]: time="2026-01-17T00:41:35.477315481Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Jan 17 00:41:35.497212 containerd[1450]: time="2026-01-17T00:41:35.496922300Z" level=info msg="CreateContainer within sandbox \"d28704c49e3dbcf68d95598f5dd9d91c8fe5a20fe9b8fd50daa1df33c2caaf67\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Jan 17 00:41:35.553472 containerd[1450]: time="2026-01-17T00:41:35.553206325Z" level=info msg="CreateContainer within sandbox \"d28704c49e3dbcf68d95598f5dd9d91c8fe5a20fe9b8fd50daa1df33c2caaf67\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id 
\"fd24f894fc8c546b3a5f5499ecfd8ec460940bb3e455ef0f41ae8e9a037c66db\"" Jan 17 00:41:35.554999 containerd[1450]: time="2026-01-17T00:41:35.554906145Z" level=info msg="StartContainer for \"fd24f894fc8c546b3a5f5499ecfd8ec460940bb3e455ef0f41ae8e9a037c66db\"" Jan 17 00:41:35.653554 systemd[1]: Started cri-containerd-fd24f894fc8c546b3a5f5499ecfd8ec460940bb3e455ef0f41ae8e9a037c66db.scope - libcontainer container fd24f894fc8c546b3a5f5499ecfd8ec460940bb3e455ef0f41ae8e9a037c66db. Jan 17 00:41:35.712909 containerd[1450]: time="2026-01-17T00:41:35.712719622Z" level=info msg="StartContainer for \"fd24f894fc8c546b3a5f5499ecfd8ec460940bb3e455ef0f41ae8e9a037c66db\" returns successfully" Jan 17 00:41:36.106178 kubelet[1775]: E0117 00:41:36.103463 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:41:36.334531 kubelet[1775]: I0117 00:41:36.334293 1775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=2.835126099 podStartE2EDuration="16.334268301s" podCreationTimestamp="2026-01-17 00:41:20 +0000 UTC" firstStartedPulling="2026-01-17 00:41:21.984207893 +0000 UTC m=+85.318946666" lastFinishedPulling="2026-01-17 00:41:35.483350085 +0000 UTC m=+98.818088868" observedRunningTime="2026-01-17 00:41:36.326183842 +0000 UTC m=+99.660922625" watchObservedRunningTime="2026-01-17 00:41:36.334268301 +0000 UTC m=+99.669007094" Jan 17 00:41:37.111456 kubelet[1775]: E0117 00:41:37.110871 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:41:37.376323 containerd[1450]: time="2026-01-17T00:41:37.373783114Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 17 00:41:37.471759 containerd[1450]: time="2026-01-17T00:41:37.471524516Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:41:37.481332 containerd[1450]: time="2026-01-17T00:41:37.480013830Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 17 00:41:37.481332 containerd[1450]: time="2026-01-17T00:41:37.480151186Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 17 00:41:37.481603 kubelet[1775]: E0117 00:41:37.480346 1775 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:41:37.481603 kubelet[1775]: E0117 00:41:37.480489 1775 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:41:37.481603 kubelet[1775]: E0117 00:41:37.480738 1775 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v64sf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-t94cc_calico-system(8d6ae2bf-19ec-4641-84d8-c96ea5455451): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 17 00:41:37.490538 containerd[1450]: time="2026-01-17T00:41:37.490033783Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 17 00:41:37.596298 containerd[1450]: time="2026-01-17T00:41:37.594041784Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:41:37.602733 containerd[1450]: time="2026-01-17T00:41:37.602547712Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 17 00:41:37.602733 containerd[1450]: time="2026-01-17T00:41:37.602666554Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 17 00:41:37.602996 kubelet[1775]: E0117 00:41:37.602887 1775 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:41:37.602996 kubelet[1775]: E0117 00:41:37.602966 1775 
kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:41:37.603187 kubelet[1775]: E0117 00:41:37.603109 1775 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v64sf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-t94cc_calico-system(8d6ae2bf-19ec-4641-84d8-c96ea5455451): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 17 00:41:37.605007 kubelet[1775]: E0117 00:41:37.604814 1775 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: 
not found\"]" pod="calico-system/csi-node-driver-t94cc" podUID="8d6ae2bf-19ec-4641-84d8-c96ea5455451" Jan 17 00:41:38.116466 kubelet[1775]: E0117 00:41:38.113285 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:41:38.688808 kubelet[1775]: E0117 00:41:38.687689 1775 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:41:39.123943 kubelet[1775]: E0117 00:41:39.119133 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:41:40.123887 kubelet[1775]: E0117 00:41:40.123022 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:41:41.125215 kubelet[1775]: E0117 00:41:41.124491 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:41:41.412896 systemd[1]: Created slice kubepods-besteffort-pod5dabcbcc_c62c_4560_9840_5145108b184c.slice - libcontainer container kubepods-besteffort-pod5dabcbcc_c62c_4560_9840_5145108b184c.slice. Jan 17 00:41:41.478926 kubelet[1775]: I0117 00:41:41.478798 1775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-010a3c27-4a7e-4998-9f24-039808dbb563\" (UniqueName: \"kubernetes.io/nfs/5dabcbcc-c62c-4560-9840-5145108b184c-pvc-010a3c27-4a7e-4998-9f24-039808dbb563\") pod \"test-pod-1\" (UID: \"5dabcbcc-c62c-4560-9840-5145108b184c\") " pod="default/test-pod-1" Jan 17 00:41:41.478926 kubelet[1775]: I0117 00:41:41.478900 1775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mf28d\" (UniqueName: \"kubernetes.io/projected/5dabcbcc-c62c-4560-9840-5145108b184c-kube-api-access-mf28d\") pod \"test-pod-1\" (UID: \"5dabcbcc-c62c-4560-9840-5145108b184c\") " pod="default/test-pod-1" Jan 17 00:41:41.782149 kernel: FS-Cache: Loaded Jan 17 00:41:42.029013 kernel: RPC: Registered named UNIX socket transport module. Jan 17 00:41:42.029204 kernel: RPC: Registered udp transport module. Jan 17 00:41:42.029264 kernel: RPC: Registered tcp transport module. Jan 17 00:41:42.034793 kernel: RPC: Registered tcp-with-tls transport module. Jan 17 00:41:42.037560 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. 
Jan 17 00:41:42.125408 kubelet[1775]: E0117 00:41:42.125267 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:41:42.812005 kernel: NFS: Registering the id_resolver key type Jan 17 00:41:42.812146 kernel: Key type id_resolver registered Jan 17 00:41:42.816658 kernel: Key type id_legacy registered Jan 17 00:41:42.934826 nfsidmap[3484]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Jan 17 00:41:42.955739 nfsidmap[3487]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Jan 17 00:41:43.126756 kubelet[1775]: E0117 00:41:43.126324 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:41:43.228806 containerd[1450]: time="2026-01-17T00:41:43.227417146Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:5dabcbcc-c62c-4560-9840-5145108b184c,Namespace:default,Attempt:0,}" Jan 17 00:41:43.978133 systemd-networkd[1372]: cali5ec59c6bf6e: Link UP Jan 17 00:41:43.978568 systemd-networkd[1372]: cali5ec59c6bf6e: Gained carrier Jan 17 00:41:44.036845 containerd[1450]: 2026-01-17 00:41:43.513 [INFO][3490] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.113-k8s-test--pod--1-eth0 default 5dabcbcc-c62c-4560-9840-5145108b184c 1839 0 2026-01-17 00:41:21 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.0.0.113 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] [] }} ContainerID="8931ecfb04539903645d41d3362271996e13acfce7975a610da2af2dd347e0af" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.113-k8s-test--pod--1-" Jan 17 00:41:44.036845 containerd[1450]: 2026-01-17 00:41:43.514 [INFO][3490] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8931ecfb04539903645d41d3362271996e13acfce7975a610da2af2dd347e0af" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.113-k8s-test--pod--1-eth0" Jan 17 00:41:44.036845 containerd[1450]: 2026-01-17 00:41:43.721 [INFO][3504] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8931ecfb04539903645d41d3362271996e13acfce7975a610da2af2dd347e0af" HandleID="k8s-pod-network.8931ecfb04539903645d41d3362271996e13acfce7975a610da2af2dd347e0af" Workload="10.0.0.113-k8s-test--pod--1-eth0" Jan 17 00:41:44.036845 containerd[1450]: 2026-01-17 00:41:43.721 [INFO][3504] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="8931ecfb04539903645d41d3362271996e13acfce7975a610da2af2dd347e0af" HandleID="k8s-pod-network.8931ecfb04539903645d41d3362271996e13acfce7975a610da2af2dd347e0af" Workload="10.0.0.113-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c6770), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.113", "pod":"test-pod-1", "timestamp":"2026-01-17 00:41:43.721227486 +0000 UTC"}, Hostname:"10.0.0.113", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:41:44.036845 containerd[1450]: 2026-01-17 00:41:43.721 [INFO][3504] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Jan 17 00:41:44.036845 containerd[1450]: 2026-01-17 00:41:43.721 [INFO][3504] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:41:44.036845 containerd[1450]: 2026-01-17 00:41:43.721 [INFO][3504] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.113' Jan 17 00:41:44.036845 containerd[1450]: 2026-01-17 00:41:43.769 [INFO][3504] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8931ecfb04539903645d41d3362271996e13acfce7975a610da2af2dd347e0af" host="10.0.0.113" Jan 17 00:41:44.036845 containerd[1450]: 2026-01-17 00:41:43.799 [INFO][3504] ipam/ipam.go 394: Looking up existing affinities for host host="10.0.0.113" Jan 17 00:41:44.036845 containerd[1450]: 2026-01-17 00:41:43.826 [INFO][3504] ipam/ipam.go 511: Trying affinity for 192.168.6.192/26 host="10.0.0.113" Jan 17 00:41:44.036845 containerd[1450]: 2026-01-17 00:41:43.844 [INFO][3504] ipam/ipam.go 158: Attempting to load block cidr=192.168.6.192/26 host="10.0.0.113" Jan 17 00:41:44.036845 containerd[1450]: 2026-01-17 00:41:43.862 [INFO][3504] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.6.192/26 host="10.0.0.113" Jan 17 00:41:44.036845 containerd[1450]: 2026-01-17 00:41:43.863 [INFO][3504] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.6.192/26 handle="k8s-pod-network.8931ecfb04539903645d41d3362271996e13acfce7975a610da2af2dd347e0af" host="10.0.0.113" Jan 17 00:41:44.036845 containerd[1450]: 2026-01-17 00:41:43.880 [INFO][3504] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8931ecfb04539903645d41d3362271996e13acfce7975a610da2af2dd347e0af Jan 17 00:41:44.036845 containerd[1450]: 2026-01-17 00:41:43.913 [INFO][3504] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.6.192/26 handle="k8s-pod-network.8931ecfb04539903645d41d3362271996e13acfce7975a610da2af2dd347e0af" host="10.0.0.113" Jan 17 00:41:44.036845 containerd[1450]: 2026-01-17 00:41:43.951 [INFO][3504] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.6.196/26] block=192.168.6.192/26 handle="k8s-pod-network.8931ecfb04539903645d41d3362271996e13acfce7975a610da2af2dd347e0af" host="10.0.0.113" Jan 17 00:41:44.036845 containerd[1450]: 2026-01-17 00:41:43.951 [INFO][3504] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.6.196/26] handle="k8s-pod-network.8931ecfb04539903645d41d3362271996e13acfce7975a610da2af2dd347e0af" host="10.0.0.113" Jan 17 00:41:44.036845 containerd[1450]: 2026-01-17 00:41:43.951 [INFO][3504] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 17 00:41:44.036845 containerd[1450]: 2026-01-17 00:41:43.951 [INFO][3504] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.6.196/26] IPv6=[] ContainerID="8931ecfb04539903645d41d3362271996e13acfce7975a610da2af2dd347e0af" HandleID="k8s-pod-network.8931ecfb04539903645d41d3362271996e13acfce7975a610da2af2dd347e0af" Workload="10.0.0.113-k8s-test--pod--1-eth0" Jan 17 00:41:44.036845 containerd[1450]: 2026-01-17 00:41:43.960 [INFO][3490] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8931ecfb04539903645d41d3362271996e13acfce7975a610da2af2dd347e0af" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.113-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.113-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"5dabcbcc-c62c-4560-9840-5145108b184c", ResourceVersion:"1839", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 41, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.113", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.6.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:41:44.037874 containerd[1450]: 2026-01-17 00:41:43.960 [INFO][3490] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.6.196/32] ContainerID="8931ecfb04539903645d41d3362271996e13acfce7975a610da2af2dd347e0af" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.113-k8s-test--pod--1-eth0" Jan 17 00:41:44.037874 containerd[1450]: 2026-01-17 00:41:43.960 [INFO][3490] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="8931ecfb04539903645d41d3362271996e13acfce7975a610da2af2dd347e0af" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.113-k8s-test--pod--1-eth0" Jan 17 00:41:44.037874 containerd[1450]: 2026-01-17 00:41:43.966 [INFO][3490] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8931ecfb04539903645d41d3362271996e13acfce7975a610da2af2dd347e0af" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.113-k8s-test--pod--1-eth0" Jan 17 00:41:44.037874 containerd[1450]: 2026-01-17 00:41:43.974 [INFO][3490] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8931ecfb04539903645d41d3362271996e13acfce7975a610da2af2dd347e0af" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.113-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.113-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"5dabcbcc-c62c-4560-9840-5145108b184c", ResourceVersion:"1839", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 41, 21, 
0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.113", ContainerID:"8931ecfb04539903645d41d3362271996e13acfce7975a610da2af2dd347e0af", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.6.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"4e:1b:dc:41:68:dd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:41:44.037874 containerd[1450]: 2026-01-17 00:41:44.009 [INFO][3490] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8931ecfb04539903645d41d3362271996e13acfce7975a610da2af2dd347e0af" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.113-k8s-test--pod--1-eth0" Jan 17 00:41:44.128406 kubelet[1775]: E0117 00:41:44.127218 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:41:44.131931 containerd[1450]: time="2026-01-17T00:41:44.130862528Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:41:44.131931 containerd[1450]: time="2026-01-17T00:41:44.131485162Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:41:44.131931 containerd[1450]: time="2026-01-17T00:41:44.131529395Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:41:44.136856 containerd[1450]: time="2026-01-17T00:41:44.135549104Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:41:44.221105 systemd[1]: Started cri-containerd-8931ecfb04539903645d41d3362271996e13acfce7975a610da2af2dd347e0af.scope - libcontainer container 8931ecfb04539903645d41d3362271996e13acfce7975a610da2af2dd347e0af. 
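At this point test-pod-1 has its Calico endpoint (cali5ec59c6bf6e, 192.168.6.196/32) and its sandbox scope is running; the volume it mounts is the kubernetes.io/nfs volume pvc-010a3c27-4a7e-4998-9f24-039808dbb563 from the reconciler entries above, presumably provisioned by nfs-server-provisioner-0. To see which NFS export actually backs it, from a machine with cluster access (a sketch, not taken from these logs):

    kubectl get pvc -n default
    kubectl describe pod test-pod-1 -n default   # the Volumes section lists the NFS server address and exported path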
Jan 17 00:41:44.273882 systemd-resolved[1373]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 17 00:41:44.354188 containerd[1450]: time="2026-01-17T00:41:44.353914868Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:5dabcbcc-c62c-4560-9840-5145108b184c,Namespace:default,Attempt:0,} returns sandbox id \"8931ecfb04539903645d41d3362271996e13acfce7975a610da2af2dd347e0af\"" Jan 17 00:41:44.357070 containerd[1450]: time="2026-01-17T00:41:44.355950546Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 17 00:41:44.500317 containerd[1450]: time="2026-01-17T00:41:44.493136028Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:41:44.500317 containerd[1450]: time="2026-01-17T00:41:44.495484999Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Jan 17 00:41:44.565293 containerd[1450]: time="2026-01-17T00:41:44.550285351Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:66e7a6d326b670d959427bd421ca82232b0b81b18d85abaecb4ab9823d35056e\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:892d1d54ab079b8cffa2317ccb45829886a0c3c3edbdf92bb286904b09797767\", size \"63840197\" in 194.291846ms" Jan 17 00:41:44.565293 containerd[1450]: time="2026-01-17T00:41:44.550519340Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:66e7a6d326b670d959427bd421ca82232b0b81b18d85abaecb4ab9823d35056e\"" Jan 17 00:41:44.585974 containerd[1450]: time="2026-01-17T00:41:44.585849102Z" level=info msg="CreateContainer within sandbox \"8931ecfb04539903645d41d3362271996e13acfce7975a610da2af2dd347e0af\" for container &ContainerMetadata{Name:test,Attempt:0,}" Jan 17 00:41:44.679843 containerd[1450]: time="2026-01-17T00:41:44.679607393Z" level=info msg="CreateContainer within sandbox \"8931ecfb04539903645d41d3362271996e13acfce7975a610da2af2dd347e0af\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"7e985214ebd44571022f79679cd5f15c9d992ee64c50edda97f55b02b6e88ec4\"" Jan 17 00:41:44.683474 containerd[1450]: time="2026-01-17T00:41:44.681420293Z" level=info msg="StartContainer for \"7e985214ebd44571022f79679cd5f15c9d992ee64c50edda97f55b02b6e88ec4\"" Jan 17 00:41:44.774339 systemd[1]: Started cri-containerd-7e985214ebd44571022f79679cd5f15c9d992ee64c50edda97f55b02b6e88ec4.scope - libcontainer container 7e985214ebd44571022f79679cd5f15c9d992ee64c50edda97f55b02b6e88ec4. 
Jan 17 00:41:44.853063 containerd[1450]: time="2026-01-17T00:41:44.850003380Z" level=info msg="StartContainer for \"7e985214ebd44571022f79679cd5f15c9d992ee64c50edda97f55b02b6e88ec4\" returns successfully" Jan 17 00:41:45.131088 kubelet[1775]: E0117 00:41:45.129483 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:41:45.956074 systemd-networkd[1372]: cali5ec59c6bf6e: Gained IPv6LL Jan 17 00:41:46.168763 kubelet[1775]: E0117 00:41:46.150556 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:41:47.174010 kubelet[1775]: E0117 00:41:47.173525 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:41:48.176921 kubelet[1775]: E0117 00:41:48.175743 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:41:49.178150 kubelet[1775]: E0117 00:41:49.177746 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:41:49.390797 kubelet[1775]: E0117 00:41:49.390238 1775 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-t94cc" podUID="8d6ae2bf-19ec-4641-84d8-c96ea5455451" Jan 17 00:41:49.432674 kubelet[1775]: I0117 00:41:49.432205 1775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=28.224505989 podStartE2EDuration="28.432183422s" podCreationTimestamp="2026-01-17 00:41:21 +0000 UTC" firstStartedPulling="2026-01-17 00:41:44.355239922 +0000 UTC m=+107.689978716" lastFinishedPulling="2026-01-17 00:41:44.562917375 +0000 UTC m=+107.897656149" observedRunningTime="2026-01-17 00:41:45.595867356 +0000 UTC m=+108.930606169" watchObservedRunningTime="2026-01-17 00:41:49.432183422 +0000 UTC m=+112.766922195" Jan 17 00:41:50.195093 kubelet[1775]: E0117 00:41:50.194087 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:41:51.203052 kubelet[1775]: E0117 00:41:51.201506 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:41:52.203079 kubelet[1775]: E0117 00:41:52.203036 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
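The closing pod_workers entry shows the same two Calico images still failing: the registry answers NotFound for ghcr.io/flatcar/calico/csi:v3.30.4 and ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4, so this looks like a missing tag or wrong registry path rather than an authentication problem, and kubelet will keep retrying with a growing back-off until the image references on the workload that owns csi-node-driver-t94cc (a DaemonSet in standard Calico installs) are corrected or the tags are published. Two quick checks, assuming node and cluster access respectively:

    crictl pull ghcr.io/flatcar/calico/csi:v3.30.4                # reproduces the NotFound straight from the node's runtime
    kubectl -n calico-system describe pod csi-node-driver-t94cc   # Events shows the same ErrImagePull / back-off history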