Jan 23 18:57:40.993078 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 23 16:02:29 -00 2026 Jan 23 18:57:40.993104 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=e498a861432c458392bc8ae0919597d8f4554cdcc46b00c7f3d7a634c3492c81 Jan 23 18:57:40.993117 kernel: BIOS-provided physical RAM map: Jan 23 18:57:40.993123 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Jan 23 18:57:40.993130 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved Jan 23 18:57:40.993140 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000044fdfff] usable Jan 23 18:57:40.993153 kernel: BIOS-e820: [mem 0x00000000044fe000-0x00000000048fdfff] reserved Jan 23 18:57:40.993160 kernel: BIOS-e820: [mem 0x00000000048fe000-0x000000003ff1efff] usable Jan 23 18:57:40.993167 kernel: BIOS-e820: [mem 0x000000003ff1f000-0x000000003ffc8fff] reserved Jan 23 18:57:40.993175 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data Jan 23 18:57:40.993182 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS Jan 23 18:57:40.993189 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable Jan 23 18:57:40.993196 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable Jan 23 18:57:40.993203 kernel: printk: legacy bootconsole [earlyser0] enabled Jan 23 18:57:40.993211 kernel: NX (Execute Disable) protection: active Jan 23 18:57:40.993223 kernel: APIC: Static calls initialized Jan 23 18:57:40.993230 kernel: efi: EFI v2.7 by Microsoft Jan 23 18:57:40.993237 kernel: efi: ACPI=0x3fffa000 
ACPI 2.0=0x3fffa014 SMBIOS=0x3ff88000 SMBIOS 3.0=0x3ff86000 MEMATTR=0x3eac2318 RNG=0x3ffd2018 Jan 23 18:57:40.993244 kernel: random: crng init done Jan 23 18:57:40.993253 kernel: secureboot: Secure boot disabled Jan 23 18:57:40.993260 kernel: SMBIOS 3.1.0 present. Jan 23 18:57:40.993269 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 07/25/2025 Jan 23 18:57:40.993276 kernel: DMI: Memory slots populated: 2/2 Jan 23 18:57:40.993284 kernel: Hypervisor detected: Microsoft Hyper-V Jan 23 18:57:40.993292 kernel: Hyper-V: privilege flags low 0xae7f, high 0x3b8030, hints 0x9e4e24, misc 0xe0bed7b2 Jan 23 18:57:40.993299 kernel: Hyper-V: Nested features: 0x3e0101 Jan 23 18:57:40.993309 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40 Jan 23 18:57:40.993316 kernel: Hyper-V: Using hypercall for remote TLB flush Jan 23 18:57:40.993324 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Jan 23 18:57:40.993332 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Jan 23 18:57:40.993340 kernel: tsc: Detected 2300.000 MHz processor Jan 23 18:57:40.993348 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 23 18:57:40.993357 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 23 18:57:40.993365 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x10000000000 Jan 23 18:57:40.993373 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Jan 23 18:57:40.993382 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 23 18:57:40.993392 kernel: e820: update [mem 0x48000000-0xffffffff] usable ==> reserved Jan 23 18:57:40.993399 kernel: last_pfn = 0x40000 max_arch_pfn = 0x10000000000 Jan 23 18:57:40.993407 kernel: Using GB pages for direct mapping Jan 23 18:57:40.993415 kernel: ACPI: Early table checksum verification disabled Jan 23 
18:57:40.993426 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL) Jan 23 18:57:40.993435 kernel: ACPI: XSDT 0x000000003FFF90E8 00005C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 23 18:57:40.993445 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 23 18:57:40.993453 kernel: ACPI: DSDT 0x000000003FFD6000 01E22B (v02 MSFTVM DSDT01 00000001 INTL 20230628) Jan 23 18:57:40.993461 kernel: ACPI: FACS 0x000000003FFFE000 000040 Jan 23 18:57:40.993469 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 23 18:57:40.993477 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 23 18:57:40.993485 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 23 18:57:40.993493 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v05 HVLITE HVLITETB 00000000 MSHV 00000000) Jan 23 18:57:40.993504 kernel: ACPI: SRAT 0x000000003FFD4000 0000A0 (v03 HVLITE HVLITETB 00000000 MSHV 00000000) Jan 23 18:57:40.993512 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 23 18:57:40.993520 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113] Jan 23 18:57:40.993528 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff422a] Jan 23 18:57:40.993536 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f] Jan 23 18:57:40.993545 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063] Jan 23 18:57:40.993553 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f] Jan 23 18:57:40.993561 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027] Jan 23 18:57:40.993569 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057] Jan 23 18:57:40.993579 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd409f] Jan 23 18:57:40.993587 kernel: ACPI: Reserving BGRT table memory at [mem 
0x3ffd3000-0x3ffd3037] Jan 23 18:57:40.993596 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] Jan 23 18:57:40.993604 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] Jan 23 18:57:40.993612 kernel: NUMA: Node 0 [mem 0x00001000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00001000-0x2bfffffff] Jan 23 18:57:40.993620 kernel: NODE_DATA(0) allocated [mem 0x2bfff8dc0-0x2bfffffff] Jan 23 18:57:40.993629 kernel: Zone ranges: Jan 23 18:57:40.993637 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 23 18:57:40.993645 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Jan 23 18:57:40.993655 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff] Jan 23 18:57:40.993663 kernel: Device empty Jan 23 18:57:40.993671 kernel: Movable zone start for each node Jan 23 18:57:40.993679 kernel: Early memory node ranges Jan 23 18:57:40.993688 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Jan 23 18:57:40.993696 kernel: node 0: [mem 0x0000000000100000-0x00000000044fdfff] Jan 23 18:57:40.993704 kernel: node 0: [mem 0x00000000048fe000-0x000000003ff1efff] Jan 23 18:57:40.993712 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff] Jan 23 18:57:40.993720 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff] Jan 23 18:57:40.993730 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff] Jan 23 18:57:40.993738 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 23 18:57:40.993746 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Jan 23 18:57:40.993754 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges Jan 23 18:57:40.993763 kernel: On node 0, zone DMA32: 224 pages in unavailable ranges Jan 23 18:57:40.993771 kernel: ACPI: PM-Timer IO Port: 0x408 Jan 23 18:57:40.993779 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1]) Jan 23 18:57:40.993788 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 23 18:57:40.993796 kernel: ACPI: INT_SRC_OVR 
(bus 0 bus_irq 9 global_irq 9 high level) Jan 23 18:57:40.993806 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 23 18:57:40.993814 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200 Jan 23 18:57:40.993822 kernel: TSC deadline timer available Jan 23 18:57:40.993831 kernel: CPU topo: Max. logical packages: 1 Jan 23 18:57:40.993839 kernel: CPU topo: Max. logical dies: 1 Jan 23 18:57:40.993847 kernel: CPU topo: Max. dies per package: 1 Jan 23 18:57:40.993855 kernel: CPU topo: Max. threads per core: 2 Jan 23 18:57:40.993863 kernel: CPU topo: Num. cores per package: 1 Jan 23 18:57:40.993871 kernel: CPU topo: Num. threads per package: 2 Jan 23 18:57:40.993879 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs Jan 23 18:57:40.993889 kernel: [mem 0x40000000-0xffffffff] available for PCI devices Jan 23 18:57:40.993897 kernel: Booting paravirtualized kernel on Hyper-V Jan 23 18:57:40.993905 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 23 18:57:40.993925 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jan 23 18:57:40.993942 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576 Jan 23 18:57:40.993950 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152 Jan 23 18:57:40.993958 kernel: pcpu-alloc: [0] 0 1 Jan 23 18:57:40.993966 kernel: Hyper-V: PV spinlocks enabled Jan 23 18:57:40.993977 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 23 18:57:40.993987 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=e498a861432c458392bc8ae0919597d8f4554cdcc46b00c7f3d7a634c3492c81 
Jan 23 18:57:40.993995 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Jan 23 18:57:40.994003 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 23 18:57:40.994012 kernel: Fallback order for Node 0: 0 Jan 23 18:57:40.994020 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2095807 Jan 23 18:57:40.994028 kernel: Policy zone: Normal Jan 23 18:57:40.994036 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 23 18:57:40.994044 kernel: software IO TLB: area num 2. Jan 23 18:57:40.994055 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 23 18:57:40.994063 kernel: ftrace: allocating 40097 entries in 157 pages Jan 23 18:57:40.994070 kernel: ftrace: allocated 157 pages with 5 groups Jan 23 18:57:40.994078 kernel: Dynamic Preempt: voluntary Jan 23 18:57:40.994086 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 23 18:57:40.994098 kernel: rcu: RCU event tracing is enabled. Jan 23 18:57:40.994116 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 23 18:57:40.994127 kernel: Trampoline variant of Tasks RCU enabled. Jan 23 18:57:40.994136 kernel: Rude variant of Tasks RCU enabled. Jan 23 18:57:40.994147 kernel: Tracing variant of Tasks RCU enabled. Jan 23 18:57:40.994156 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 23 18:57:40.994167 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 23 18:57:40.994176 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 23 18:57:40.994186 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 23 18:57:40.994195 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
Jan 23 18:57:40.994205 kernel: Using NULL legacy PIC Jan 23 18:57:40.994215 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 Jan 23 18:57:40.994225 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 23 18:57:40.994234 kernel: Console: colour dummy device 80x25 Jan 23 18:57:40.994243 kernel: printk: legacy console [tty1] enabled Jan 23 18:57:40.994252 kernel: printk: legacy console [ttyS0] enabled Jan 23 18:57:40.994262 kernel: printk: legacy bootconsole [earlyser0] disabled Jan 23 18:57:40.994271 kernel: ACPI: Core revision 20240827 Jan 23 18:57:40.994280 kernel: Failed to register legacy timer interrupt Jan 23 18:57:40.994289 kernel: APIC: Switch to symmetric I/O mode setup Jan 23 18:57:40.994300 kernel: x2apic enabled Jan 23 18:57:40.994309 kernel: APIC: Switched APIC routing to: physical x2apic Jan 23 18:57:40.994318 kernel: Hyper-V: Host Build 10.0.26100.1448-1-0 Jan 23 18:57:40.994327 kernel: Hyper-V: enabling crash_kexec_post_notifiers Jan 23 18:57:40.994337 kernel: Hyper-V: Disabling IBT because of Hyper-V bug Jan 23 18:57:40.994346 kernel: Hyper-V: Using IPI hypercalls Jan 23 18:57:40.994356 kernel: APIC: send_IPI() replaced with hv_send_ipi() Jan 23 18:57:40.994365 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask() Jan 23 18:57:40.994374 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself() Jan 23 18:57:40.994385 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself() Jan 23 18:57:40.994395 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all() Jan 23 18:57:40.994404 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self() Jan 23 18:57:40.994413 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212735223b2, max_idle_ns: 440795277976 ns Jan 23 18:57:40.994423 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 
4600.00 BogoMIPS (lpj=2300000) Jan 23 18:57:40.994433 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jan 23 18:57:40.994442 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Jan 23 18:57:40.994451 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Jan 23 18:57:40.994461 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 23 18:57:40.994471 kernel: Spectre V2 : Mitigation: Retpolines Jan 23 18:57:40.994480 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jan 23 18:57:40.994489 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! Jan 23 18:57:40.994498 kernel: RETBleed: Vulnerable Jan 23 18:57:40.994507 kernel: Speculative Store Bypass: Vulnerable Jan 23 18:57:40.994516 kernel: active return thunk: its_return_thunk Jan 23 18:57:40.994525 kernel: ITS: Mitigation: Aligned branch/return thunks Jan 23 18:57:40.994534 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 23 18:57:40.994543 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 23 18:57:40.994552 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 23 18:57:40.994561 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Jan 23 18:57:40.994572 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Jan 23 18:57:40.994581 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Jan 23 18:57:40.994590 kernel: x86/fpu: Supporting XSAVE feature 0x800: 'Control-flow User registers' Jan 23 18:57:40.994599 kernel: x86/fpu: Supporting XSAVE feature 0x20000: 'AMX Tile config' Jan 23 18:57:40.994608 kernel: x86/fpu: Supporting XSAVE feature 0x40000: 'AMX Tile data' Jan 23 18:57:40.994617 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 23 18:57:40.994627 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Jan 23 18:57:40.994635 
kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Jan 23 18:57:40.994645 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Jan 23 18:57:40.994653 kernel: x86/fpu: xstate_offset[11]: 2432, xstate_sizes[11]: 16 Jan 23 18:57:40.994663 kernel: x86/fpu: xstate_offset[17]: 2496, xstate_sizes[17]: 64 Jan 23 18:57:40.994674 kernel: x86/fpu: xstate_offset[18]: 2560, xstate_sizes[18]: 8192 Jan 23 18:57:40.994683 kernel: x86/fpu: Enabled xstate features 0x608e7, context size is 10752 bytes, using 'compacted' format. Jan 23 18:57:40.994692 kernel: Freeing SMP alternatives memory: 32K Jan 23 18:57:40.994701 kernel: pid_max: default: 32768 minimum: 301 Jan 23 18:57:40.994709 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jan 23 18:57:40.994718 kernel: landlock: Up and running. Jan 23 18:57:40.994727 kernel: SELinux: Initializing. Jan 23 18:57:40.994736 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 23 18:57:40.994745 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 23 18:57:40.994754 kernel: smpboot: CPU0: Intel INTEL(R) XEON(R) PLATINUM 8573C (family: 0x6, model: 0xcf, stepping: 0x2) Jan 23 18:57:40.994763 kernel: Performance Events: unsupported p6 CPU model 207 no PMU driver, software events only. Jan 23 18:57:40.994772 kernel: signal: max sigframe size: 11952 Jan 23 18:57:40.994783 kernel: rcu: Hierarchical SRCU implementation. Jan 23 18:57:40.994793 kernel: rcu: Max phase no-delay instances is 400. Jan 23 18:57:40.994802 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jan 23 18:57:40.994812 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 23 18:57:40.994821 kernel: smp: Bringing up secondary CPUs ... Jan 23 18:57:40.994830 kernel: smpboot: x86: Booting SMP configuration: Jan 23 18:57:40.994839 kernel: .... 
node #0, CPUs: #1 Jan 23 18:57:40.994848 kernel: smp: Brought up 1 node, 2 CPUs Jan 23 18:57:40.994857 kernel: smpboot: Total of 2 processors activated (9200.00 BogoMIPS) Jan 23 18:57:40.994869 kernel: Memory: 8068828K/8383228K available (14336K kernel code, 2445K rwdata, 26064K rodata, 46200K init, 2560K bss, 308184K reserved, 0K cma-reserved) Jan 23 18:57:40.994878 kernel: devtmpfs: initialized Jan 23 18:57:40.994887 kernel: x86/mm: Memory block size: 128MB Jan 23 18:57:40.994897 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes) Jan 23 18:57:40.994906 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 23 18:57:40.994931 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 23 18:57:40.994941 kernel: pinctrl core: initialized pinctrl subsystem Jan 23 18:57:40.994951 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 23 18:57:40.994960 kernel: audit: initializing netlink subsys (disabled) Jan 23 18:57:40.994971 kernel: audit: type=2000 audit(1769194657.072:1): state=initialized audit_enabled=0 res=1 Jan 23 18:57:40.994980 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 23 18:57:40.994989 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 23 18:57:40.994998 kernel: cpuidle: using governor menu Jan 23 18:57:40.995007 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 23 18:57:40.995017 kernel: dca service started, version 1.12.1 Jan 23 18:57:40.995025 kernel: e820: reserve RAM buffer [mem 0x044fe000-0x07ffffff] Jan 23 18:57:40.995035 kernel: e820: reserve RAM buffer [mem 0x3ff1f000-0x3fffffff] Jan 23 18:57:40.995045 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 23 18:57:40.995055 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 23 18:57:40.995064 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 23 18:57:40.995073 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 23 18:57:40.995083 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 23 18:57:40.995092 kernel: ACPI: Added _OSI(Module Device) Jan 23 18:57:40.995101 kernel: ACPI: Added _OSI(Processor Device) Jan 23 18:57:40.995110 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 23 18:57:40.995119 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 23 18:57:40.995130 kernel: ACPI: Interpreter enabled Jan 23 18:57:40.995139 kernel: ACPI: PM: (supports S0 S5) Jan 23 18:57:40.995148 kernel: ACPI: Using IOAPIC for interrupt routing Jan 23 18:57:40.995157 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 23 18:57:40.995166 kernel: PCI: Ignoring E820 reservations for host bridge windows Jan 23 18:57:40.995176 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F Jan 23 18:57:40.995185 kernel: iommu: Default domain type: Translated Jan 23 18:57:40.995194 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 23 18:57:40.995203 kernel: efivars: Registered efivars operations Jan 23 18:57:40.995214 kernel: PCI: Using ACPI for IRQ routing Jan 23 18:57:40.995223 kernel: PCI: System does not support PCI Jan 23 18:57:40.995232 kernel: vgaarb: loaded Jan 23 18:57:40.995241 kernel: clocksource: Switched to clocksource tsc-early Jan 23 18:57:40.995250 kernel: VFS: Disk quotas dquot_6.6.0 Jan 23 18:57:40.995259 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 23 18:57:40.995268 kernel: pnp: PnP ACPI init Jan 23 18:57:40.995277 kernel: pnp: PnP ACPI: found 3 devices Jan 23 18:57:40.995286 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 23 
18:57:40.995295 kernel: NET: Registered PF_INET protocol family Jan 23 18:57:40.995307 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jan 23 18:57:40.995316 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Jan 23 18:57:40.995325 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 23 18:57:40.995334 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 23 18:57:40.995343 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Jan 23 18:57:40.995353 kernel: TCP: Hash tables configured (established 65536 bind 65536) Jan 23 18:57:40.995362 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Jan 23 18:57:40.995371 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Jan 23 18:57:40.995382 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 23 18:57:40.995391 kernel: NET: Registered PF_XDP protocol family Jan 23 18:57:40.995400 kernel: PCI: CLS 0 bytes, default 64 Jan 23 18:57:40.995410 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan 23 18:57:40.995419 kernel: software IO TLB: mapped [mem 0x000000003a9a9000-0x000000003e9a9000] (64MB) Jan 23 18:57:40.995428 kernel: RAPL PMU: API unit is 2^-32 Joules, 1 fixed counters, 10737418240 ms ovfl timer Jan 23 18:57:40.995437 kernel: RAPL PMU: hw unit of domain psys 2^-0 Joules Jan 23 18:57:40.995447 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212735223b2, max_idle_ns: 440795277976 ns Jan 23 18:57:40.995456 kernel: clocksource: Switched to clocksource tsc Jan 23 18:57:40.995467 kernel: Initialise system trusted keyrings Jan 23 18:57:40.995476 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Jan 23 18:57:40.995485 kernel: Key type asymmetric registered Jan 23 18:57:40.995494 kernel: Asymmetric key parser 'x509' registered Jan 23 18:57:40.995503 kernel: Block layer SCSI 
generic (bsg) driver version 0.4 loaded (major 250) Jan 23 18:57:40.995512 kernel: io scheduler mq-deadline registered Jan 23 18:57:40.995522 kernel: io scheduler kyber registered Jan 23 18:57:40.995531 kernel: io scheduler bfq registered Jan 23 18:57:40.995540 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 23 18:57:40.995551 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 23 18:57:40.995560 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 23 18:57:40.995570 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Jan 23 18:57:40.995579 kernel: serial8250: ttyS2 at I/O 0x3e8 (irq = 4, base_baud = 115200) is a 16550A Jan 23 18:57:40.995588 kernel: i8042: PNP: No PS/2 controller found. Jan 23 18:57:40.995727 kernel: rtc_cmos 00:02: registered as rtc0 Jan 23 18:57:40.995842 kernel: rtc_cmos 00:02: setting system clock to 2026-01-23T18:57:40 UTC (1769194660) Jan 23 18:57:40.995944 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram Jan 23 18:57:40.995958 kernel: intel_pstate: Intel P-state driver initializing Jan 23 18:57:40.995968 kernel: efifb: probing for efifb Jan 23 18:57:40.995977 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Jan 23 18:57:40.995987 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Jan 23 18:57:40.995996 kernel: efifb: scrolling: redraw Jan 23 18:57:40.996005 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jan 23 18:57:40.996014 kernel: Console: switching to colour frame buffer device 128x48 Jan 23 18:57:40.996023 kernel: fb0: EFI VGA frame buffer device Jan 23 18:57:40.996032 kernel: pstore: Using crash dump compression: deflate Jan 23 18:57:40.996043 kernel: pstore: Registered efi_pstore as persistent store backend Jan 23 18:57:40.996052 kernel: NET: Registered PF_INET6 protocol family Jan 23 18:57:40.996061 kernel: Segment Routing with IPv6 Jan 23 18:57:40.996071 kernel: In-situ OAM (IOAM) with IPv6 Jan 23 
18:57:40.996080 kernel: NET: Registered PF_PACKET protocol family Jan 23 18:57:40.996089 kernel: Key type dns_resolver registered Jan 23 18:57:40.996098 kernel: IPI shorthand broadcast: enabled Jan 23 18:57:40.996107 kernel: sched_clock: Marking stable (3134005386, 97250507)->(3556812550, -325556657) Jan 23 18:57:40.996117 kernel: registered taskstats version 1 Jan 23 18:57:40.996128 kernel: Loading compiled-in X.509 certificates Jan 23 18:57:40.996137 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: 2aec04a968f0111235eb989789145bc2b989f0c6' Jan 23 18:57:40.996147 kernel: Demotion targets for Node 0: null Jan 23 18:57:40.996156 kernel: Key type .fscrypt registered Jan 23 18:57:40.996165 kernel: Key type fscrypt-provisioning registered Jan 23 18:57:40.996174 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 23 18:57:40.996183 kernel: ima: Allocated hash algorithm: sha1 Jan 23 18:57:40.996192 kernel: ima: No architecture policies found Jan 23 18:57:40.996201 kernel: clk: Disabling unused clocks Jan 23 18:57:40.996212 kernel: Warning: unable to open an initial console. Jan 23 18:57:40.996222 kernel: Freeing unused kernel image (initmem) memory: 46200K Jan 23 18:57:40.996231 kernel: Write protecting the kernel read-only data: 40960k Jan 23 18:57:40.996240 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K Jan 23 18:57:40.996249 kernel: Run /init as init process Jan 23 18:57:40.996259 kernel: with arguments: Jan 23 18:57:40.996267 kernel: /init Jan 23 18:57:40.996276 kernel: with environment: Jan 23 18:57:40.996285 kernel: HOME=/ Jan 23 18:57:40.996296 kernel: TERM=linux Jan 23 18:57:40.996306 systemd[1]: Successfully made /usr/ read-only. 
Jan 23 18:57:40.996319 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 23 18:57:40.996330 systemd[1]: Detected virtualization microsoft. Jan 23 18:57:40.996339 systemd[1]: Detected architecture x86-64. Jan 23 18:57:40.996349 systemd[1]: Running in initrd. Jan 23 18:57:40.996358 systemd[1]: No hostname configured, using default hostname. Jan 23 18:57:40.996370 systemd[1]: Hostname set to . Jan 23 18:57:40.996380 systemd[1]: Initializing machine ID from random generator. Jan 23 18:57:40.996389 systemd[1]: Queued start job for default target initrd.target. Jan 23 18:57:40.996399 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 18:57:40.996409 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 18:57:40.996419 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 23 18:57:40.996429 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 23 18:57:40.996439 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 23 18:57:40.996452 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 23 18:57:40.996463 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 23 18:57:40.996473 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... 
Jan 23 18:57:40.996483 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 18:57:40.996492 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 23 18:57:40.996502 systemd[1]: Reached target paths.target - Path Units. Jan 23 18:57:40.996511 systemd[1]: Reached target slices.target - Slice Units. Jan 23 18:57:40.996523 systemd[1]: Reached target swap.target - Swaps. Jan 23 18:57:40.996533 systemd[1]: Reached target timers.target - Timer Units. Jan 23 18:57:40.996542 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 23 18:57:40.996552 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 23 18:57:40.996561 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 23 18:57:40.996571 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jan 23 18:57:40.996581 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 23 18:57:40.996591 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 23 18:57:40.996600 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 18:57:40.996612 systemd[1]: Reached target sockets.target - Socket Units. Jan 23 18:57:40.996622 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 23 18:57:40.996632 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 23 18:57:40.996641 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 23 18:57:40.996652 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jan 23 18:57:40.996662 systemd[1]: Starting systemd-fsck-usr.service... Jan 23 18:57:40.996671 systemd[1]: Starting systemd-journald.service - Journal Service... 
Jan 23 18:57:40.996681 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 23 18:57:40.996701 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 18:57:40.996728 systemd-journald[187]: Collecting audit messages is disabled. Jan 23 18:57:40.996756 systemd-journald[187]: Journal started Jan 23 18:57:40.996781 systemd-journald[187]: Runtime Journal (/run/log/journal/4403514dd314483ba64da86a737982c3) is 8M, max 158.6M, 150.6M free. Jan 23 18:57:41.000561 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 23 18:57:40.991973 systemd-modules-load[188]: Inserted module 'overlay' Jan 23 18:57:41.009585 systemd[1]: Started systemd-journald.service - Journal Service. Jan 23 18:57:41.011278 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 18:57:41.016879 systemd[1]: Finished systemd-fsck-usr.service. Jan 23 18:57:41.023892 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 23 18:57:41.024101 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 23 18:57:41.031051 kernel: Bridge firewalling registered Jan 23 18:57:41.028511 systemd-modules-load[188]: Inserted module 'br_netfilter' Jan 23 18:57:41.033814 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 23 18:57:41.041395 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 23 18:57:41.047965 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 18:57:41.051208 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 23 18:57:41.057271 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Jan 23 18:57:41.067036 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 23 18:57:41.068404 systemd-tmpfiles[197]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jan 23 18:57:41.071450 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 23 18:57:41.075015 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 23 18:57:41.091203 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 23 18:57:41.095027 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 23 18:57:41.095411 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 23 18:57:41.098878 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 23 18:57:41.106036 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 23 18:57:41.122415 dracut-cmdline[227]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=e498a861432c458392bc8ae0919597d8f4554cdcc46b00c7f3d7a634c3492c81
Jan 23 18:57:41.154161 systemd-resolved[223]: Positive Trust Anchors:
Jan 23 18:57:41.155864 systemd-resolved[223]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 23 18:57:41.155902 systemd-resolved[223]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 23 18:57:41.175862 systemd-resolved[223]: Defaulting to hostname 'linux'.
Jan 23 18:57:41.178956 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 23 18:57:41.181126 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 23 18:57:41.201934 kernel: SCSI subsystem initialized
Jan 23 18:57:41.210931 kernel: Loading iSCSI transport class v2.0-870.
Jan 23 18:57:41.219940 kernel: iscsi: registered transport (tcp)
Jan 23 18:57:41.238342 kernel: iscsi: registered transport (qla4xxx)
Jan 23 18:57:41.238396 kernel: QLogic iSCSI HBA Driver
Jan 23 18:57:41.253410 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 23 18:57:41.272859 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 23 18:57:41.275414 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 23 18:57:41.310726 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 23 18:57:41.314117 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 23 18:57:41.357944 kernel: raid6: avx512x4 gen() 44197 MB/s
Jan 23 18:57:41.375928 kernel: raid6: avx512x2 gen() 43582 MB/s
Jan 23 18:57:41.392926 kernel: raid6: avx512x1 gen() 25141 MB/s
Jan 23 18:57:41.410930 kernel: raid6: avx2x4 gen() 35425 MB/s
Jan 23 18:57:41.428926 kernel: raid6: avx2x2 gen() 36531 MB/s
Jan 23 18:57:41.446746 kernel: raid6: avx2x1 gen() 27698 MB/s
Jan 23 18:57:41.446766 kernel: raid6: using algorithm avx512x4 gen() 44197 MB/s
Jan 23 18:57:41.465575 kernel: raid6: .... xor() 7076 MB/s, rmw enabled
Jan 23 18:57:41.465598 kernel: raid6: using avx512x2 recovery algorithm
Jan 23 18:57:41.484945 kernel: xor: automatically using best checksumming function avx
Jan 23 18:57:41.612948 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 23 18:57:41.618062 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 23 18:57:41.621061 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 23 18:57:41.640779 systemd-udevd[436]: Using default interface naming scheme 'v255'.
Jan 23 18:57:41.644990 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 23 18:57:41.648314 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 23 18:57:41.672955 dracut-pre-trigger[438]: rd.md=0: removing MD RAID activation
Jan 23 18:57:41.693039 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 23 18:57:41.696040 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 23 18:57:41.731777 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 23 18:57:41.739053 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 23 18:57:41.786935 kernel: cryptd: max_cpu_qlen set to 1000
Jan 23 18:57:41.808248 kernel: AES CTR mode by8 optimization enabled
Jan 23 18:57:41.813048 kernel: hv_vmbus: Vmbus version:5.3
Jan 23 18:57:41.812561 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 23 18:57:41.812682 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 18:57:41.819712 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 18:57:41.837642 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 18:57:41.857033 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 23 18:57:41.857247 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 18:57:41.868943 kernel: hv_vmbus: registering driver hyperv_keyboard
Jan 23 18:57:41.869103 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 18:57:41.877156 kernel: hv_vmbus: registering driver hv_netvsc
Jan 23 18:57:41.885941 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 23 18:57:41.895598 kernel: pps_core: LinuxPPS API ver. 1 registered
Jan 23 18:57:41.895683 kernel: hv_netvsc f8615163-0000-1000-2000-6045bdc1c9f3 (unnamed net_device) (uninitialized): VF slot 1 added
Jan 23 18:57:41.895937 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jan 23 18:57:41.900067 kernel: hv_vmbus: registering driver hv_storvsc
Jan 23 18:57:41.900099 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Jan 23 18:57:41.903985 kernel: scsi host0: storvsc_host_t
Jan 23 18:57:41.904172 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5
Jan 23 18:57:41.920130 kernel: hv_vmbus: registering driver hid_hyperv
Jan 23 18:57:41.927960 kernel: PTP clock support registered
Jan 23 18:57:41.934125 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Jan 23 18:57:41.934160 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Jan 23 18:57:41.935393 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 18:57:41.947944 kernel: hv_vmbus: registering driver hv_pci
Jan 23 18:57:41.947979 kernel: hv_utils: Registering HyperV Utility Driver
Jan 23 18:57:41.951631 kernel: hv_vmbus: registering driver hv_utils
Jan 23 18:57:41.953972 kernel: hv_utils: Shutdown IC version 3.2
Jan 23 18:57:41.957029 kernel: hv_utils: TimeSync IC version 4.0
Jan 23 18:57:41.957066 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI VMBus probing: Using version 0x10004
Jan 23 18:57:41.957218 kernel: hv_utils: Heartbeat IC version 3.0
Jan 23 18:57:42.337569 systemd-resolved[223]: Clock change detected. Flushing caches.
Jan 23 18:57:42.343974 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Jan 23 18:57:42.344152 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 23 18:57:42.344165 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI host bridge to bus c05b:00
Jan 23 18:57:42.347103 kernel: pci_bus c05b:00: root bus resource [mem 0xfc0000000-0xfc007ffff window]
Jan 23 18:57:42.347931 kernel: pci_bus c05b:00: No busn resource found for root bus, will use [bus 00-ff]
Jan 23 18:57:42.348532 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Jan 23 18:57:42.352606 kernel: pci c05b:00:00.0: [1414:00a9] type 00 class 0x010802 PCIe Endpoint
Jan 23 18:57:42.355508 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit]
Jan 23 18:57:42.370590 kernel: pci_bus c05b:00: busn_res: [bus 00-ff] end is updated to 00
Jan 23 18:57:42.370841 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit]: assigned
Jan 23 18:57:42.388322 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#33 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Jan 23 18:57:42.388525 kernel: nvme nvme0: pci function c05b:00:00.0
Jan 23 18:57:42.390393 kernel: nvme c05b:00:00.0: enabling device (0000 -> 0002)
Jan 23 18:57:42.411648 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#70 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Jan 23 18:57:42.549580 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Jan 23 18:57:42.555584 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 23 18:57:42.799510 kernel: nvme nvme0: using unchecked data buffer
Jan 23 18:57:42.988888 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - MSFT NVMe Accelerator v1.0 EFI-SYSTEM.
Jan 23 18:57:43.025971 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM.
Jan 23 18:57:43.040896 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - MSFT NVMe Accelerator v1.0 ROOT.
Jan 23 18:57:43.051442 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - MSFT NVMe Accelerator v1.0 USR-A.
Jan 23 18:57:43.051777 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - MSFT NVMe Accelerator v1.0 USR-A.
Jan 23 18:57:43.055445 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 23 18:57:43.311353 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI VMBus probing: Using version 0x10004
Jan 23 18:57:43.311583 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI host bridge to bus 7870:00
Jan 23 18:57:43.314191 kernel: pci_bus 7870:00: root bus resource [mem 0xfc2000000-0xfc4007fff window]
Jan 23 18:57:43.315908 kernel: pci_bus 7870:00: No busn resource found for root bus, will use [bus 00-ff]
Jan 23 18:57:43.320564 kernel: pci 7870:00:00.0: [1414:00ba] type 00 class 0x020000 PCIe Endpoint
Jan 23 18:57:43.324630 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref]
Jan 23 18:57:43.329593 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref]
Jan 23 18:57:43.331637 kernel: pci 7870:00:00.0: enabling Extended Tags
Jan 23 18:57:43.350517 kernel: pci_bus 7870:00: busn_res: [bus 00-ff] end is updated to 00
Jan 23 18:57:43.350722 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref]: assigned
Jan 23 18:57:43.350881 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref]: assigned
Jan 23 18:57:43.359061 kernel: mana 7870:00:00.0: enabling device (0000 -> 0002)
Jan 23 18:57:43.367502 kernel: mana 7870:00:00.0: Microsoft Azure Network Adapter protocol version: 0.1.1
Jan 23 18:57:43.371212 kernel: hv_netvsc f8615163-0000-1000-2000-6045bdc1c9f3 eth0: VF registering: eth1
Jan 23 18:57:43.371393 kernel: mana 7870:00:00.0 eth1: joined to eth0
Jan 23 18:57:43.375501 kernel: mana 7870:00:00.0 enP30832s1: renamed from eth1
Jan 23 18:57:43.524444 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 23 18:57:43.528093 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 23 18:57:43.531454 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 23 18:57:43.534801 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 23 18:57:43.538272 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 23 18:57:43.551508 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 23 18:57:44.089504 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 23 18:57:44.089625 disk-uuid[645]: The operation has completed successfully.
Jan 23 18:57:44.153418 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 23 18:57:44.153536 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 23 18:57:44.188906 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 23 18:57:44.206808 sh[696]: Success
Jan 23 18:57:44.236782 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 23 18:57:44.236859 kernel: device-mapper: uevent: version 1.0.3
Jan 23 18:57:44.238722 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jan 23 18:57:44.248539 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Jan 23 18:57:44.469294 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 23 18:57:44.479566 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 23 18:57:44.495798 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 23 18:57:44.510504 kernel: BTRFS: device fsid 4711e7dc-9586-49d4-8dcc-466f082e7841 devid 1 transid 37 /dev/mapper/usr (254:0) scanned by mount (709)
Jan 23 18:57:44.510549 kernel: BTRFS info (device dm-0): first mount of filesystem 4711e7dc-9586-49d4-8dcc-466f082e7841
Jan 23 18:57:44.512503 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 23 18:57:44.822707 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jan 23 18:57:44.822808 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 23 18:57:44.823746 kernel: BTRFS info (device dm-0): enabling free space tree
Jan 23 18:57:44.842374 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 23 18:57:44.845535 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Jan 23 18:57:44.848928 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 23 18:57:44.849629 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 23 18:57:44.858051 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 23 18:57:44.887559 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (742)
Jan 23 18:57:44.887614 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem a15cc984-6718-480b-8520-c0d724ebf6fe
Jan 23 18:57:44.891549 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Jan 23 18:57:44.911983 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 23 18:57:44.912039 kernel: BTRFS info (device nvme0n1p6): turning on async discard
Jan 23 18:57:44.913670 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Jan 23 18:57:44.920503 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem a15cc984-6718-480b-8520-c0d724ebf6fe
Jan 23 18:57:44.921395 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 23 18:57:44.928625 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 23 18:57:44.952473 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 23 18:57:44.959059 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 23 18:57:44.990548 systemd-networkd[878]: lo: Link UP
Jan 23 18:57:44.990556 systemd-networkd[878]: lo: Gained carrier
Jan 23 18:57:45.000152 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16
Jan 23 18:57:45.000392 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64
Jan 23 18:57:44.992005 systemd-networkd[878]: Enumeration completed
Jan 23 18:57:44.992534 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 23 18:57:45.006578 kernel: hv_netvsc f8615163-0000-1000-2000-6045bdc1c9f3 eth0: Data path switched to VF: enP30832s1
Jan 23 18:57:44.993189 systemd[1]: Reached target network.target - Network.
Jan 23 18:57:44.993791 systemd-networkd[878]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 23 18:57:44.993795 systemd-networkd[878]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 23 18:57:45.004691 systemd-networkd[878]: enP30832s1: Link UP
Jan 23 18:57:45.004947 systemd-networkd[878]: eth0: Link UP
Jan 23 18:57:45.005044 systemd-networkd[878]: eth0: Gained carrier
Jan 23 18:57:45.005056 systemd-networkd[878]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 23 18:57:45.011136 systemd-networkd[878]: enP30832s1: Gained carrier
Jan 23 18:57:45.020534 systemd-networkd[878]: eth0: DHCPv4 address 10.200.4.14/24, gateway 10.200.4.1 acquired from 168.63.129.16
Jan 23 18:57:45.947964 ignition[837]: Ignition 2.22.0
Jan 23 18:57:45.947978 ignition[837]: Stage: fetch-offline
Jan 23 18:57:45.948089 ignition[837]: no configs at "/usr/lib/ignition/base.d"
Jan 23 18:57:45.950234 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 23 18:57:45.948097 ignition[837]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 23 18:57:45.954614 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 23 18:57:45.948189 ignition[837]: parsed url from cmdline: ""
Jan 23 18:57:45.948192 ignition[837]: no config URL provided
Jan 23 18:57:45.948197 ignition[837]: reading system config file "/usr/lib/ignition/user.ign"
Jan 23 18:57:45.948203 ignition[837]: no config at "/usr/lib/ignition/user.ign"
Jan 23 18:57:45.948207 ignition[837]: failed to fetch config: resource requires networking
Jan 23 18:57:45.948457 ignition[837]: Ignition finished successfully
Jan 23 18:57:45.979849 ignition[888]: Ignition 2.22.0
Jan 23 18:57:45.979860 ignition[888]: Stage: fetch
Jan 23 18:57:45.980101 ignition[888]: no configs at "/usr/lib/ignition/base.d"
Jan 23 18:57:45.980110 ignition[888]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 23 18:57:45.980206 ignition[888]: parsed url from cmdline: ""
Jan 23 18:57:45.980209 ignition[888]: no config URL provided
Jan 23 18:57:45.980219 ignition[888]: reading system config file "/usr/lib/ignition/user.ign"
Jan 23 18:57:45.980226 ignition[888]: no config at "/usr/lib/ignition/user.ign"
Jan 23 18:57:45.980247 ignition[888]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Jan 23 18:57:46.091859 ignition[888]: GET result: OK
Jan 23 18:57:46.091930 ignition[888]: config has been read from IMDS userdata
Jan 23 18:57:46.091945 ignition[888]: parsing config with SHA512: 1009a09c7a41910ed0903775ab2dd4472049b67e9bf9f14b0ad9247d68210bb0a393dc93bd9029bd02b1500ac26ddb750dabf0cfc1c28a540aa8580fb3eb456d
Jan 23 18:57:46.095272 unknown[888]: fetched base config from "system"
Jan 23 18:57:46.095283 unknown[888]: fetched base config from "system"
Jan 23 18:57:46.095574 ignition[888]: fetch: fetch complete
Jan 23 18:57:46.095288 unknown[888]: fetched user config from "azure"
Jan 23 18:57:46.095578 ignition[888]: fetch: fetch passed
Jan 23 18:57:46.098719 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 23 18:57:46.095625 ignition[888]: Ignition finished successfully
Jan 23 18:57:46.101321 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 23 18:57:46.135444 ignition[895]: Ignition 2.22.0
Jan 23 18:57:46.135456 ignition[895]: Stage: kargs
Jan 23 18:57:46.138004 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 23 18:57:46.135697 ignition[895]: no configs at "/usr/lib/ignition/base.d"
Jan 23 18:57:46.142653 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 23 18:57:46.135705 ignition[895]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 23 18:57:46.136288 ignition[895]: kargs: kargs passed
Jan 23 18:57:46.136328 ignition[895]: Ignition finished successfully
Jan 23 18:57:46.169413 ignition[901]: Ignition 2.22.0
Jan 23 18:57:46.169424 ignition[901]: Stage: disks
Jan 23 18:57:46.171895 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 23 18:57:46.169675 ignition[901]: no configs at "/usr/lib/ignition/base.d"
Jan 23 18:57:46.176106 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 23 18:57:46.169684 ignition[901]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 23 18:57:46.180258 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 23 18:57:46.170700 ignition[901]: disks: disks passed
Jan 23 18:57:46.185146 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 23 18:57:46.170735 ignition[901]: Ignition finished successfully
Jan 23 18:57:46.188503 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 23 18:57:46.195093 systemd[1]: Reached target basic.target - Basic System.
Jan 23 18:57:46.199973 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 23 18:57:46.260628 systemd-fsck[909]: ROOT: clean, 15/7326000 files, 477845/7359488 blocks
Jan 23 18:57:46.270230 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 23 18:57:46.274560 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 23 18:57:46.553499 kernel: EXT4-fs (nvme0n1p9): mounted filesystem dcb97a38-a4f5-43e7-bcb0-85a5c1e2a446 r/w with ordered data mode. Quota mode: none.
Jan 23 18:57:46.553988 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 23 18:57:46.555377 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 23 18:57:46.570599 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 23 18:57:46.573389 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 23 18:57:46.589615 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jan 23 18:57:46.592797 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 23 18:57:46.593091 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 23 18:57:46.596262 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 23 18:57:46.598648 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 23 18:57:46.613532 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (918)
Jan 23 18:57:46.616798 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem a15cc984-6718-480b-8520-c0d724ebf6fe
Jan 23 18:57:46.616830 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Jan 23 18:57:46.622501 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 23 18:57:46.622536 kernel: BTRFS info (device nvme0n1p6): turning on async discard
Jan 23 18:57:46.622548 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Jan 23 18:57:46.624822 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 23 18:57:46.868635 systemd-networkd[878]: eth0: Gained IPv6LL
Jan 23 18:57:47.069605 coreos-metadata[920]: Jan 23 18:57:47.069 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jan 23 18:57:47.074603 coreos-metadata[920]: Jan 23 18:57:47.074 INFO Fetch successful
Jan 23 18:57:47.074603 coreos-metadata[920]: Jan 23 18:57:47.074 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Jan 23 18:57:47.081643 coreos-metadata[920]: Jan 23 18:57:47.081 INFO Fetch successful
Jan 23 18:57:47.096325 coreos-metadata[920]: Jan 23 18:57:47.096 INFO wrote hostname ci-4459.2.3-a-cd2f1addba to /sysroot/etc/hostname
Jan 23 18:57:47.099942 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 23 18:57:47.272135 initrd-setup-root[950]: cut: /sysroot/etc/passwd: No such file or directory
Jan 23 18:57:47.319854 initrd-setup-root[957]: cut: /sysroot/etc/group: No such file or directory
Jan 23 18:57:47.325293 initrd-setup-root[964]: cut: /sysroot/etc/shadow: No such file or directory
Jan 23 18:57:47.330276 initrd-setup-root[971]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 23 18:57:48.203261 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 23 18:57:48.207753 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 23 18:57:48.221624 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 23 18:57:48.229982 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 23 18:57:48.232825 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem a15cc984-6718-480b-8520-c0d724ebf6fe
Jan 23 18:57:48.255320 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 23 18:57:48.264433 ignition[1039]: INFO : Ignition 2.22.0
Jan 23 18:57:48.267833 ignition[1039]: INFO : Stage: mount
Jan 23 18:57:48.267833 ignition[1039]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 23 18:57:48.267833 ignition[1039]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 23 18:57:48.267833 ignition[1039]: INFO : mount: mount passed
Jan 23 18:57:48.267833 ignition[1039]: INFO : Ignition finished successfully
Jan 23 18:57:48.267918 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 23 18:57:48.277320 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 23 18:57:48.296012 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 23 18:57:48.318958 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (1050)
Jan 23 18:57:48.319007 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem a15cc984-6718-480b-8520-c0d724ebf6fe
Jan 23 18:57:48.320162 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Jan 23 18:57:48.325216 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 23 18:57:48.325254 kernel: BTRFS info (device nvme0n1p6): turning on async discard
Jan 23 18:57:48.326765 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Jan 23 18:57:48.328809 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 23 18:57:48.356969 ignition[1067]: INFO : Ignition 2.22.0
Jan 23 18:57:48.356969 ignition[1067]: INFO : Stage: files
Jan 23 18:57:48.359518 ignition[1067]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 23 18:57:48.359518 ignition[1067]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 23 18:57:48.359518 ignition[1067]: DEBUG : files: compiled without relabeling support, skipping
Jan 23 18:57:48.371893 ignition[1067]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 23 18:57:48.371893 ignition[1067]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 23 18:57:48.402795 ignition[1067]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 23 18:57:48.406590 ignition[1067]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 23 18:57:48.406590 ignition[1067]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 23 18:57:48.403166 unknown[1067]: wrote ssh authorized keys file for user: core
Jan 23 18:57:48.433857 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh"
Jan 23 18:57:48.437579 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh"
Jan 23 18:57:48.446868 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 23 18:57:48.446868 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 23 18:57:48.446868 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jan 23 18:57:48.457369 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jan 23 18:57:48.457369 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jan 23 18:57:48.457369 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Jan 23 18:57:48.930203 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Jan 23 18:57:49.368409 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jan 23 18:57:49.371917 ignition[1067]: INFO : files: createResultFile: createFiles: op(7): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 23 18:57:49.371917 ignition[1067]: INFO : files: createResultFile: createFiles: op(7): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 23 18:57:49.371917 ignition[1067]: INFO : files: files passed
Jan 23 18:57:49.371917 ignition[1067]: INFO : Ignition finished successfully
Jan 23 18:57:49.373460 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 23 18:57:49.380970 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 23 18:57:49.393668 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 23 18:57:49.397794 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 23 18:57:49.397904 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 23 18:57:49.425514 initrd-setup-root-after-ignition[1097]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 23 18:57:49.425514 initrd-setup-root-after-ignition[1097]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 23 18:57:49.431260 initrd-setup-root-after-ignition[1101]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 23 18:57:49.434925 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 23 18:57:49.440720 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 23 18:57:49.444607 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 23 18:57:49.478156 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 23 18:57:49.479526 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 23 18:57:49.484261 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 23 18:57:49.488842 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 23 18:57:49.490462 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 23 18:57:49.492613 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 23 18:57:49.509989 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 23 18:57:49.514879 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 23 18:57:49.535055 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 23 18:57:49.540669 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 23 18:57:49.542516 systemd[1]: Stopped target timers.target - Timer Units.
Jan 23 18:57:49.545792 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 23 18:57:49.545908 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 23 18:57:49.549813 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 23 18:57:49.553634 systemd[1]: Stopped target basic.target - Basic System. Jan 23 18:57:49.557651 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 23 18:57:49.560314 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 23 18:57:49.564655 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 23 18:57:49.569637 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jan 23 18:57:49.574661 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 23 18:57:49.578639 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 23 18:57:49.582385 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 23 18:57:49.587663 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 23 18:57:49.592810 systemd[1]: Stopped target swap.target - Swaps. Jan 23 18:57:49.603738 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 23 18:57:49.603915 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 23 18:57:49.604690 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 23 18:57:49.605273 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 18:57:49.605544 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 23 18:57:49.605851 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 18:57:49.605918 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 23 18:57:49.606014 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 23 18:57:49.606804 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. 
Jan 23 18:57:49.606903 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 23 18:57:49.607160 systemd[1]: ignition-files.service: Deactivated successfully. Jan 23 18:57:49.607249 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 23 18:57:49.618713 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 23 18:57:49.618838 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 23 18:57:49.675575 ignition[1121]: INFO : Ignition 2.22.0 Jan 23 18:57:49.675575 ignition[1121]: INFO : Stage: umount Jan 23 18:57:49.675575 ignition[1121]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 18:57:49.675575 ignition[1121]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 23 18:57:49.675575 ignition[1121]: INFO : umount: umount passed Jan 23 18:57:49.675575 ignition[1121]: INFO : Ignition finished successfully Jan 23 18:57:49.621699 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 23 18:57:49.624543 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 23 18:57:49.624741 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 18:57:49.627723 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 23 18:57:49.628980 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 23 18:57:49.629147 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 18:57:49.636035 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 23 18:57:49.636162 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 23 18:57:49.659340 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 23 18:57:49.659827 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 23 18:57:49.673198 systemd[1]: ignition-mount.service: Deactivated successfully. 
Jan 23 18:57:49.673290 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 23 18:57:49.678863 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 23 18:57:49.678942 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 23 18:57:49.682615 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 23 18:57:49.682661 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 23 18:57:49.683768 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 23 18:57:49.683799 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 23 18:57:49.684060 systemd[1]: Stopped target network.target - Network. Jan 23 18:57:49.684086 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 23 18:57:49.684118 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 23 18:57:49.684552 systemd[1]: Stopped target paths.target - Path Units. Jan 23 18:57:49.692371 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 23 18:57:49.692431 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 18:57:49.700528 systemd[1]: Stopped target slices.target - Slice Units. Jan 23 18:57:49.717022 systemd[1]: Stopped target sockets.target - Socket Units. Jan 23 18:57:49.722047 systemd[1]: iscsid.socket: Deactivated successfully. Jan 23 18:57:49.722083 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 23 18:57:49.729592 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 23 18:57:49.729629 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 23 18:57:49.733555 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 23 18:57:49.733614 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 23 18:57:49.737590 systemd[1]: ignition-setup-pre.service: Deactivated successfully. 
Jan 23 18:57:49.737634 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 23 18:57:49.739405 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 23 18:57:49.742920 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 23 18:57:49.748750 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 23 18:57:49.753175 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 23 18:57:49.753290 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 23 18:57:49.759205 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jan 23 18:57:49.759406 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 23 18:57:49.759523 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 23 18:57:49.766232 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jan 23 18:57:49.766783 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 23 18:57:49.770584 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 23 18:57:49.770623 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 23 18:57:49.775276 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 23 18:57:49.788591 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 23 18:57:49.788646 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 23 18:57:49.788966 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 23 18:57:49.788999 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 23 18:57:49.793455 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 23 18:57:49.793510 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 23 18:57:49.832591 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. 
Jan 23 18:57:49.832660 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 18:57:49.834146 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 18:57:49.835761 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jan 23 18:57:49.835822 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jan 23 18:57:49.866561 kernel: hv_netvsc f8615163-0000-1000-2000-6045bdc1c9f3 eth0: Data path switched from VF: enP30832s1 Jan 23 18:57:49.867268 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64 Jan 23 18:57:49.869810 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 23 18:57:49.871299 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 18:57:49.876814 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 23 18:57:49.876909 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 23 18:57:49.882079 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 23 18:57:49.882151 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 23 18:57:49.886579 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 23 18:57:49.886609 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 18:57:49.890657 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 23 18:57:49.890716 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 23 18:57:49.903578 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 23 18:57:49.903645 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 23 18:57:49.906602 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 23 18:57:49.906646 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 23 18:57:49.914300 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 23 18:57:49.916584 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 23 18:57:49.916639 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 23 18:57:49.921181 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 23 18:57:49.921235 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 18:57:49.927118 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 18:57:49.927172 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 18:57:49.936057 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Jan 23 18:57:49.936117 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jan 23 18:57:49.936158 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jan 23 18:57:49.942756 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 23 18:57:49.942844 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 23 18:57:49.949066 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 23 18:57:49.949152 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 23 18:57:49.954394 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 23 18:57:49.955442 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 23 18:57:49.955887 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 23 18:57:49.959054 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 23 18:57:49.989549 systemd[1]: Switching root. 
Jan 23 18:57:50.108926 systemd-journald[187]: Journal stopped Jan 23 18:57:54.214926 systemd-journald[187]: Received SIGTERM from PID 1 (systemd). Jan 23 18:57:54.214967 kernel: SELinux: policy capability network_peer_controls=1 Jan 23 18:57:54.214984 kernel: SELinux: policy capability open_perms=1 Jan 23 18:57:54.214993 kernel: SELinux: policy capability extended_socket_class=1 Jan 23 18:57:54.215001 kernel: SELinux: policy capability always_check_network=0 Jan 23 18:57:54.215009 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 23 18:57:54.215019 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 23 18:57:54.215028 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 23 18:57:54.215038 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 23 18:57:54.215046 kernel: SELinux: policy capability userspace_initial_context=0 Jan 23 18:57:54.215055 kernel: audit: type=1403 audit(1769194671.122:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 23 18:57:54.215065 systemd[1]: Successfully loaded SELinux policy in 155.684ms. Jan 23 18:57:54.215075 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 4.355ms. Jan 23 18:57:54.215086 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 23 18:57:54.215099 systemd[1]: Detected virtualization microsoft. Jan 23 18:57:54.215108 systemd[1]: Detected architecture x86-64. Jan 23 18:57:54.215117 systemd[1]: Detected first boot. Jan 23 18:57:54.215127 systemd[1]: Hostname set to . Jan 23 18:57:54.215137 systemd[1]: Initializing machine ID from random generator. Jan 23 18:57:54.215147 zram_generator::config[1165]: No configuration found. 
Jan 23 18:57:54.215160 kernel: Guest personality initialized and is inactive Jan 23 18:57:54.215169 kernel: VMCI host device registered (name=vmci, major=10, minor=259) Jan 23 18:57:54.215179 kernel: Initialized host personality Jan 23 18:57:54.215189 kernel: NET: Registered PF_VSOCK protocol family Jan 23 18:57:54.215198 systemd[1]: Populated /etc with preset unit settings. Jan 23 18:57:54.215209 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jan 23 18:57:54.215220 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 23 18:57:54.215229 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 23 18:57:54.215241 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 23 18:57:54.215251 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 23 18:57:54.215262 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 23 18:57:54.215271 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 23 18:57:54.215281 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 23 18:57:54.215291 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 23 18:57:54.215302 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 23 18:57:54.215315 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 23 18:57:54.215325 systemd[1]: Created slice user.slice - User and Session Slice. Jan 23 18:57:54.215335 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 18:57:54.215346 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 18:57:54.215356 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. 
Jan 23 18:57:54.215369 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 23 18:57:54.215379 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 23 18:57:54.215389 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 23 18:57:54.215401 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 23 18:57:54.215411 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 18:57:54.215421 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 23 18:57:54.215431 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 23 18:57:54.215442 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 23 18:57:54.215453 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 23 18:57:54.215463 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 23 18:57:54.215476 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 18:57:54.215503 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 23 18:57:54.215513 systemd[1]: Reached target slices.target - Slice Units. Jan 23 18:57:54.215523 systemd[1]: Reached target swap.target - Swaps. Jan 23 18:57:54.215537 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 23 18:57:54.215551 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 23 18:57:54.215565 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 23 18:57:54.215576 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 23 18:57:54.215587 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 23 18:57:54.215597 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Jan 23 18:57:54.215608 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 23 18:57:54.215619 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 23 18:57:54.215630 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 23 18:57:54.215643 systemd[1]: Mounting media.mount - External Media Directory... Jan 23 18:57:54.215654 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 18:57:54.215665 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 23 18:57:54.215675 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 23 18:57:54.215686 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 23 18:57:54.215699 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 23 18:57:54.215708 systemd[1]: Reached target machines.target - Containers. Jan 23 18:57:54.215718 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 23 18:57:54.215727 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 18:57:54.215739 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 23 18:57:54.215748 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 23 18:57:54.215757 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 23 18:57:54.215767 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 23 18:57:54.215777 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 23 18:57:54.215786 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... 
Jan 23 18:57:54.215795 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 23 18:57:54.215806 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 23 18:57:54.215818 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 23 18:57:54.215829 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 23 18:57:54.215839 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 23 18:57:54.215848 systemd[1]: Stopped systemd-fsck-usr.service. Jan 23 18:57:54.215860 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 23 18:57:54.215872 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 23 18:57:54.215883 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 23 18:57:54.215893 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 23 18:57:54.215905 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 23 18:57:54.215916 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 23 18:57:54.215926 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 23 18:57:54.215936 systemd[1]: verity-setup.service: Deactivated successfully. Jan 23 18:57:54.215947 kernel: loop: module loaded Jan 23 18:57:54.215958 systemd[1]: Stopped verity-setup.service. Jan 23 18:57:54.215995 systemd-journald[1248]: Collecting audit messages is disabled. 
Jan 23 18:57:54.216023 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 18:57:54.216034 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 23 18:57:54.216046 systemd-journald[1248]: Journal started Jan 23 18:57:54.216071 systemd-journald[1248]: Runtime Journal (/run/log/journal/7e91af381dcb4146923ecda71da86f6d) is 8M, max 158.6M, 150.6M free. Jan 23 18:57:53.776324 systemd[1]: Queued start job for default target multi-user.target. Jan 23 18:57:53.784471 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jan 23 18:57:53.784988 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 23 18:57:54.222169 systemd[1]: Started systemd-journald.service - Journal Service. Jan 23 18:57:54.228696 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 23 18:57:54.231343 systemd[1]: Mounted media.mount - External Media Directory. Jan 23 18:57:54.233980 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 23 18:57:54.236820 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 23 18:57:54.239721 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 23 18:57:54.242420 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 18:57:54.245993 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 23 18:57:54.246262 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 23 18:57:54.249673 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 23 18:57:54.249895 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 23 18:57:54.253382 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Jan 23 18:57:54.255496 kernel: fuse: init (API version 7.41) Jan 23 18:57:54.253641 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 23 18:57:54.256819 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 23 18:57:54.257065 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 23 18:57:54.260804 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 23 18:57:54.263765 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 23 18:57:54.263939 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 23 18:57:54.267114 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 23 18:57:54.270365 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 23 18:57:54.273649 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 23 18:57:54.286783 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 23 18:57:54.294139 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 23 18:57:54.302012 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 23 18:57:54.305563 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 23 18:57:54.305598 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 23 18:57:54.308695 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 23 18:57:54.319901 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 23 18:57:54.322063 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 18:57:54.324823 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... 
Jan 23 18:57:54.331603 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 23 18:57:54.334152 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 23 18:57:54.335142 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 23 18:57:54.338609 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 23 18:57:54.339782 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 23 18:57:54.345661 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 23 18:57:54.349681 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 23 18:57:54.352096 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 23 18:57:54.361589 kernel: ACPI: bus type drm_connector registered Jan 23 18:57:54.368199 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 23 18:57:54.368397 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 23 18:57:54.374923 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 23 18:57:54.383614 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 23 18:57:54.391657 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 18:57:54.401638 systemd-journald[1248]: Time spent on flushing to /var/log/journal/7e91af381dcb4146923ecda71da86f6d is 11.617ms for 973 entries. Jan 23 18:57:54.401638 systemd-journald[1248]: System Journal (/var/log/journal/7e91af381dcb4146923ecda71da86f6d) is 8M, max 2.6G, 2.6G free. Jan 23 18:57:54.430110 systemd-journald[1248]: Received client request to flush runtime journal. 
Jan 23 18:57:54.430147 kernel: loop0: detected capacity change from 0 to 229808 Jan 23 18:57:54.431218 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 23 18:57:54.461505 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 23 18:57:54.469659 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 23 18:57:54.475552 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 23 18:57:54.478452 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 23 18:57:54.484618 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 23 18:57:54.501510 kernel: loop1: detected capacity change from 0 to 128560 Jan 23 18:57:54.535231 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 23 18:57:54.555163 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 23 18:57:54.560616 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 23 18:57:54.619213 systemd-tmpfiles[1326]: ACLs are not supported, ignoring. Jan 23 18:57:54.619231 systemd-tmpfiles[1326]: ACLs are not supported, ignoring. Jan 23 18:57:54.622452 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 18:57:54.786331 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 23 18:57:54.862526 kernel: loop2: detected capacity change from 0 to 110984 Jan 23 18:57:54.989792 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 23 18:57:54.993880 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 18:57:55.021631 systemd-udevd[1331]: Using default interface naming scheme 'v255'. 
Jan 23 18:57:55.177509 kernel: loop3: detected capacity change from 0 to 27936 Jan 23 18:57:55.208878 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 18:57:55.215649 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 23 18:57:55.270744 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 23 18:57:55.296533 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 23 18:57:55.347533 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#19 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jan 23 18:57:55.396510 kernel: mousedev: PS/2 mouse device common for all mice Jan 23 18:57:55.410305 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 23 18:57:55.426513 kernel: hv_vmbus: registering driver hyperv_fb Jan 23 18:57:55.437505 kernel: hv_vmbus: registering driver hv_balloon Jan 23 18:57:55.442513 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Jan 23 18:57:55.447562 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Jan 23 18:57:55.447627 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Jan 23 18:57:55.449984 kernel: Console: switching to colour dummy device 80x25 Jan 23 18:57:55.455963 kernel: Console: switching to colour frame buffer device 128x48 Jan 23 18:57:55.566498 kernel: loop4: detected capacity change from 0 to 229808 Jan 23 18:57:55.583504 kernel: loop5: detected capacity change from 0 to 128560 Jan 23 18:57:55.591406 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 18:57:55.596333 systemd-networkd[1339]: lo: Link UP Jan 23 18:57:55.596345 systemd-networkd[1339]: lo: Gained carrier Jan 23 18:57:55.597936 systemd-networkd[1339]: Enumeration completed Jan 23 18:57:55.598023 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Jan 23 18:57:55.599106 systemd-networkd[1339]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 18:57:55.599110 systemd-networkd[1339]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 23 18:57:55.601585 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16 Jan 23 18:57:55.610367 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64 Jan 23 18:57:55.607765 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 23 18:57:55.611348 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 23 18:57:55.616528 kernel: loop6: detected capacity change from 0 to 110984 Jan 23 18:57:55.622437 systemd-networkd[1339]: enP30832s1: Link UP Jan 23 18:57:55.622535 kernel: hv_netvsc f8615163-0000-1000-2000-6045bdc1c9f3 eth0: Data path switched to VF: enP30832s1 Jan 23 18:57:55.622549 systemd-networkd[1339]: eth0: Link UP Jan 23 18:57:55.622552 systemd-networkd[1339]: eth0: Gained carrier Jan 23 18:57:55.622592 systemd-networkd[1339]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 18:57:55.627207 systemd-networkd[1339]: enP30832s1: Gained carrier Jan 23 18:57:55.639632 kernel: loop7: detected capacity change from 0 to 27936 Jan 23 18:57:55.645575 systemd-networkd[1339]: eth0: DHCPv4 address 10.200.4.14/24, gateway 10.200.4.1 acquired from 168.63.129.16 Jan 23 18:57:55.652453 (sd-merge)[1407]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Jan 23 18:57:55.654552 (sd-merge)[1407]: Merged extensions into '/usr'. Jan 23 18:57:55.665246 systemd[1]: Reload requested from client PID 1302 ('systemd-sysext') (unit systemd-sysext.service)... Jan 23 18:57:55.665265 systemd[1]: Reloading... 
Jan 23 18:57:55.771577 zram_generator::config[1449]: No configuration found.
Jan 23 18:57:55.842093 kernel: kvm_intel: Using Hyper-V Enlightened VMCS
Jan 23 18:57:56.036934 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM.
Jan 23 18:57:56.039310 systemd[1]: Reloading finished in 373 ms.
Jan 23 18:57:56.061017 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 23 18:57:56.063049 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 18:57:56.064770 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Jan 23 18:57:56.100814 systemd[1]: Starting ensure-sysext.service...
Jan 23 18:57:56.115926 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 23 18:57:56.130717 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 23 18:57:56.133312 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 23 18:57:56.133475 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 18:57:56.135937 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 18:57:56.140693 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 18:57:56.157704 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jan 23 18:57:56.159845 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 23 18:57:56.161115 systemd-tmpfiles[1518]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Jan 23 18:57:56.174719 systemd[1]: Reload requested from client PID 1516 ('systemctl') (unit ensure-sysext.service)...
Jan 23 18:57:56.174820 systemd[1]: Reloading...
Jan 23 18:57:56.176590 systemd-tmpfiles[1518]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Jan 23 18:57:56.176859 systemd-tmpfiles[1518]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 23 18:57:56.177091 systemd-tmpfiles[1518]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 23 18:57:56.177799 systemd-tmpfiles[1518]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 23 18:57:56.178057 systemd-tmpfiles[1518]: ACLs are not supported, ignoring.
Jan 23 18:57:56.178111 systemd-tmpfiles[1518]: ACLs are not supported, ignoring.
Jan 23 18:57:56.187552 systemd-tmpfiles[1518]: Detected autofs mount point /boot during canonicalization of boot.
Jan 23 18:57:56.188461 systemd-tmpfiles[1518]: Skipping /boot
Jan 23 18:57:56.197387 systemd-tmpfiles[1518]: Detected autofs mount point /boot during canonicalization of boot.
Jan 23 18:57:56.197471 systemd-tmpfiles[1518]: Skipping /boot
Jan 23 18:57:56.252509 zram_generator::config[1557]: No configuration found.
Jan 23 18:57:56.438449 systemd[1]: Reloading finished in 263 ms.
Jan 23 18:57:56.463289 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 23 18:57:56.470166 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 18:57:56.478783 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 23 18:57:56.489310 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 23 18:57:56.495719 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 23 18:57:56.500780 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 23 18:57:56.504080 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 23 18:57:56.510468 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 23 18:57:56.510718 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 23 18:57:56.512514 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 23 18:57:56.516942 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 23 18:57:56.520776 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 23 18:57:56.522976 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 23 18:57:56.523109 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 23 18:57:56.523215 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 23 18:57:56.527075 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 23 18:57:56.527237 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 23 18:57:56.527390 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 23 18:57:56.527864 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 23 18:57:56.527983 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 23 18:57:56.536316 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 23 18:57:56.536541 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 23 18:57:56.539193 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 23 18:57:56.540036 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 23 18:57:56.543180 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 23 18:57:56.543340 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 23 18:57:56.559114 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 23 18:57:56.559367 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 23 18:57:56.561113 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 23 18:57:56.563247 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 23 18:57:56.563289 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 23 18:57:56.563328 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 23 18:57:56.563372 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 23 18:57:56.563410 systemd[1]: Reached target time-set.target - System Time Set.
Jan 23 18:57:56.567613 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 23 18:57:56.568404 systemd[1]: Finished ensure-sysext.service.
Jan 23 18:57:56.571106 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 23 18:57:56.582960 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 23 18:57:56.583132 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 23 18:57:56.644711 augenrules[1653]: No rules
Jan 23 18:57:56.645755 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 23 18:57:56.646806 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 23 18:57:56.654770 systemd-resolved[1621]: Positive Trust Anchors:
Jan 23 18:57:56.654790 systemd-resolved[1621]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 23 18:57:56.654831 systemd-resolved[1621]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 23 18:57:56.656671 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 23 18:57:56.660420 systemd-resolved[1621]: Using system hostname 'ci-4459.2.3-a-cd2f1addba'.
Jan 23 18:57:56.661936 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 23 18:57:56.665804 systemd[1]: Reached target network.target - Network.
Jan 23 18:57:56.667686 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 23 18:57:56.916715 systemd-networkd[1339]: eth0: Gained IPv6LL
Jan 23 18:57:56.920154 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 23 18:57:56.923157 systemd[1]: Reached target network-online.target - Network is Online.
Jan 23 18:57:56.980037 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 23 18:57:56.985939 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 23 18:57:59.451089 ldconfig[1293]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 23 18:57:59.470698 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 23 18:57:59.474103 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 23 18:57:59.507359 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 23 18:57:59.509922 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 23 18:57:59.511851 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 23 18:57:59.516643 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 23 18:57:59.520033 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Jan 23 18:57:59.522715 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 23 18:57:59.525606 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 23 18:57:59.529675 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 23 18:57:59.533805 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 23 18:57:59.533835 systemd[1]: Reached target paths.target - Path Units.
Jan 23 18:57:59.535998 systemd[1]: Reached target timers.target - Timer Units.
Jan 23 18:57:59.553943 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 23 18:57:59.558379 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 23 18:57:59.565435 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Jan 23 18:57:59.569703 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Jan 23 18:57:59.572099 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Jan 23 18:57:59.586040 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 23 18:57:59.589834 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Jan 23 18:57:59.594213 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 23 18:57:59.598326 systemd[1]: Reached target sockets.target - Socket Units.
Jan 23 18:57:59.600579 systemd[1]: Reached target basic.target - Basic System.
Jan 23 18:57:59.603104 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 23 18:57:59.603130 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 23 18:57:59.605538 systemd[1]: Starting chronyd.service - NTP client/server...
Jan 23 18:57:59.611585 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 23 18:57:59.618742 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jan 23 18:57:59.625610 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 23 18:57:59.632653 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 23 18:57:59.638636 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 23 18:57:59.643638 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 23 18:57:59.645995 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 23 18:57:59.647170 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Jan 23 18:57:59.651780 systemd[1]: hv_fcopy_uio_daemon.service - Hyper-V FCOPY UIO daemon was skipped because of an unmet condition check (ConditionPathExists=/sys/bus/vmbus/devices/eb765408-105f-49b6-b4aa-c123b64d17d4/uio).
Jan 23 18:57:59.654669 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon.
Jan 23 18:57:59.657468 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss).
Jan 23 18:57:59.660882 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 23 18:57:59.664680 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 23 18:57:59.669833 jq[1673]: false
Jan 23 18:57:59.670431 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 23 18:57:59.675770 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 23 18:57:59.679232 KVP[1678]: KVP starting; pid is:1678
Jan 23 18:57:59.686024 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 23 18:57:59.688934 KVP[1678]: KVP LIC Version: 3.1
Jan 23 18:57:59.690018 kernel: hv_utils: KVP IC version 4.0
Jan 23 18:57:59.694076 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 23 18:57:59.697255 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 23 18:57:59.699833 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 23 18:57:59.700972 systemd[1]: Starting update-engine.service - Update Engine...
Jan 23 18:57:59.707353 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 23 18:57:59.713894 google_oslogin_nss_cache[1677]: oslogin_cache_refresh[1677]: Refreshing passwd entry cache
Jan 23 18:57:59.710254 oslogin_cache_refresh[1677]: Refreshing passwd entry cache
Jan 23 18:57:59.720085 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 23 18:57:59.725094 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 23 18:57:59.725790 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 23 18:57:59.726728 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 23 18:57:59.726922 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 23 18:57:59.735749 extend-filesystems[1674]: Found /dev/nvme0n1p6
Jan 23 18:57:59.738861 chronyd[1667]: chronyd version 4.7 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +NTS +SECHASH +IPV6 -DEBUG)
Jan 23 18:57:59.747990 jq[1690]: true
Jan 23 18:57:59.756057 chronyd[1667]: Timezone right/UTC failed leap second check, ignoring
Jan 23 18:57:59.756224 chronyd[1667]: Loaded seccomp filter (level 2)
Jan 23 18:57:59.757339 systemd[1]: Started chronyd.service - NTP client/server.
Jan 23 18:57:59.763191 extend-filesystems[1674]: Found /dev/nvme0n1p9
Jan 23 18:57:59.767034 google_oslogin_nss_cache[1677]: oslogin_cache_refresh[1677]: Failure getting users, quitting
Jan 23 18:57:59.767034 google_oslogin_nss_cache[1677]: oslogin_cache_refresh[1677]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jan 23 18:57:59.767034 google_oslogin_nss_cache[1677]: oslogin_cache_refresh[1677]: Refreshing group entry cache
Jan 23 18:57:59.766909 oslogin_cache_refresh[1677]: Failure getting users, quitting
Jan 23 18:57:59.766929 oslogin_cache_refresh[1677]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jan 23 18:57:59.766979 oslogin_cache_refresh[1677]: Refreshing group entry cache
Jan 23 18:57:59.776514 extend-filesystems[1674]: Checking size of /dev/nvme0n1p9
Jan 23 18:57:59.781109 jq[1706]: true
Jan 23 18:57:59.782162 update_engine[1689]: I20260123 18:57:59.782088 1689 main.cc:92] Flatcar Update Engine starting
Jan 23 18:57:59.782754 (ntainerd)[1710]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 23 18:57:59.784539 systemd[1]: motdgen.service: Deactivated successfully.
Jan 23 18:57:59.784762 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 23 18:57:59.789157 google_oslogin_nss_cache[1677]: oslogin_cache_refresh[1677]: Failure getting groups, quitting
Jan 23 18:57:59.789157 google_oslogin_nss_cache[1677]: oslogin_cache_refresh[1677]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jan 23 18:57:59.789149 oslogin_cache_refresh[1677]: Failure getting groups, quitting
Jan 23 18:57:59.789162 oslogin_cache_refresh[1677]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jan 23 18:57:59.790949 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Jan 23 18:57:59.791247 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Jan 23 18:57:59.806502 extend-filesystems[1674]: Old size kept for /dev/nvme0n1p9
Jan 23 18:57:59.809835 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 23 18:57:59.810082 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 23 18:57:59.821244 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 23 18:57:59.871160 dbus-daemon[1670]: [system] SELinux support is enabled
Jan 23 18:57:59.873457 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 23 18:57:59.876303 systemd-logind[1687]: New seat seat0.
Jan 23 18:57:59.881874 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 23 18:57:59.881919 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 23 18:57:59.885806 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 23 18:57:59.885832 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 23 18:57:59.892951 systemd-logind[1687]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jan 23 18:57:59.893090 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 23 18:57:59.902958 systemd[1]: Started update-engine.service - Update Engine.
Jan 23 18:57:59.905577 update_engine[1689]: I20260123 18:57:59.905393 1689 update_check_scheduler.cc:74] Next update check in 6m19s
Jan 23 18:57:59.919552 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 23 18:57:59.944199 bash[1742]: Updated "/home/core/.ssh/authorized_keys"
Jan 23 18:57:59.939392 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 23 18:57:59.943744 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jan 23 18:57:59.972550 coreos-metadata[1669]: Jan 23 18:57:59.971 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jan 23 18:57:59.979948 coreos-metadata[1669]: Jan 23 18:57:59.979 INFO Fetch successful
Jan 23 18:57:59.979948 coreos-metadata[1669]: Jan 23 18:57:59.979 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1
Jan 23 18:57:59.986057 coreos-metadata[1669]: Jan 23 18:57:59.985 INFO Fetch successful
Jan 23 18:57:59.986057 coreos-metadata[1669]: Jan 23 18:57:59.986 INFO Fetching http://168.63.129.16/machine/d0308b45-46c7-46f8-8c91-7962091ea1c9/9c0398a8%2D4bda%2D42ba%2Da09e%2D6b5fe60d39e6.%5Fci%2D4459.2.3%2Da%2Dcd2f1addba?comp=config&type=sharedConfig&incarnation=1: Attempt #1
Jan 23 18:57:59.991241 sshd_keygen[1712]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 23 18:58:00.029991 coreos-metadata[1669]: Jan 23 18:58:00.029 INFO Fetch successful
Jan 23 18:58:00.029991 coreos-metadata[1669]: Jan 23 18:58:00.029 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1
Jan 23 18:58:00.039662 coreos-metadata[1669]: Jan 23 18:58:00.039 INFO Fetch successful
Jan 23 18:58:00.045456 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 23 18:58:00.064714 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 23 18:58:00.076173 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent...
Jan 23 18:58:00.112428 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jan 23 18:58:00.116824 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jan 23 18:58:00.132703 systemd[1]: issuegen.service: Deactivated successfully.
Jan 23 18:58:00.132939 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 23 18:58:00.138379 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 23 18:58:00.145982 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent.
Jan 23 18:58:00.162239 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 23 18:58:00.162885 locksmithd[1756]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 23 18:58:00.167115 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jan 23 18:58:00.173954 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jan 23 18:58:00.177713 systemd[1]: Reached target getty.target - Login Prompts.
Jan 23 18:58:00.863114 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 18:58:00.870804 (kubelet)[1812]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 23 18:58:01.011128 containerd[1710]: time="2026-01-23T18:58:01Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Jan 23 18:58:01.011128 containerd[1710]: time="2026-01-23T18:58:01.010140102Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7
Jan 23 18:58:01.022344 containerd[1710]: time="2026-01-23T18:58:01.021733649Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="10.791µs"
Jan 23 18:58:01.022344 containerd[1710]: time="2026-01-23T18:58:01.021772942Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Jan 23 18:58:01.022344 containerd[1710]: time="2026-01-23T18:58:01.021793753Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Jan 23 18:58:01.022344 containerd[1710]: time="2026-01-23T18:58:01.021939983Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Jan 23 18:58:01.022344 containerd[1710]: time="2026-01-23T18:58:01.021952991Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Jan 23 18:58:01.022344 containerd[1710]: time="2026-01-23T18:58:01.021974625Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jan 23 18:58:01.022344 containerd[1710]: time="2026-01-23T18:58:01.022029153Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jan 23 18:58:01.022344 containerd[1710]: time="2026-01-23T18:58:01.022039386Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jan 23 18:58:01.022344 containerd[1710]: time="2026-01-23T18:58:01.022276648Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jan 23 18:58:01.022344 containerd[1710]: time="2026-01-23T18:58:01.022289109Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jan 23 18:58:01.022344 containerd[1710]: time="2026-01-23T18:58:01.022310089Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jan 23 18:58:01.022344 containerd[1710]: time="2026-01-23T18:58:01.022319659Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Jan 23 18:58:01.022830 containerd[1710]: time="2026-01-23T18:58:01.022379918Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Jan 23 18:58:01.022830 containerd[1710]: time="2026-01-23T18:58:01.022573616Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jan 23 18:58:01.022830 containerd[1710]: time="2026-01-23T18:58:01.022597712Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jan 23 18:58:01.022830 containerd[1710]: time="2026-01-23T18:58:01.022609450Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Jan 23 18:58:01.022830 containerd[1710]: time="2026-01-23T18:58:01.022651962Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Jan 23 18:58:01.023586 containerd[1710]: time="2026-01-23T18:58:01.022927645Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Jan 23 18:58:01.023586 containerd[1710]: time="2026-01-23T18:58:01.023009579Z" level=info msg="metadata content store policy set" policy=shared
Jan 23 18:58:01.037627 containerd[1710]: time="2026-01-23T18:58:01.037593002Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Jan 23 18:58:01.037810 containerd[1710]: time="2026-01-23T18:58:01.037761781Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Jan 23 18:58:01.037810 containerd[1710]: time="2026-01-23T18:58:01.037782649Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Jan 23 18:58:01.037939 containerd[1710]: time="2026-01-23T18:58:01.037920040Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Jan 23 18:58:01.037993 containerd[1710]: time="2026-01-23T18:58:01.037983609Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Jan 23 18:58:01.038033 containerd[1710]: time="2026-01-23T18:58:01.038025480Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Jan 23 18:58:01.038146 containerd[1710]: time="2026-01-23T18:58:01.038082875Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Jan 23 18:58:01.038146 containerd[1710]: time="2026-01-23T18:58:01.038097998Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Jan 23 18:58:01.038146 containerd[1710]: time="2026-01-23T18:58:01.038109751Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Jan 23 18:58:01.038146 containerd[1710]: time="2026-01-23T18:58:01.038120378Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Jan 23 18:58:01.038146 containerd[1710]: time="2026-01-23T18:58:01.038130058Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Jan 23 18:58:01.038273 containerd[1710]: time="2026-01-23T18:58:01.038263900Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Jan 23 18:58:01.038519 containerd[1710]: time="2026-01-23T18:58:01.038423587Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Jan 23 18:58:01.038519 containerd[1710]: time="2026-01-23T18:58:01.038446328Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Jan 23 18:58:01.038519 containerd[1710]: time="2026-01-23T18:58:01.038461400Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Jan 23 18:58:01.038519 containerd[1710]: time="2026-01-23T18:58:01.038473460Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Jan 23 18:58:01.038609 containerd[1710]: time="2026-01-23T18:58:01.038538947Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Jan 23 18:58:01.038609 containerd[1710]: time="2026-01-23T18:58:01.038562434Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Jan 23 18:58:01.038609 containerd[1710]: time="2026-01-23T18:58:01.038575140Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Jan 23 18:58:01.038609 containerd[1710]: time="2026-01-23T18:58:01.038587219Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Jan 23 18:58:01.038609 containerd[1710]: time="2026-01-23T18:58:01.038599531Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Jan 23 18:58:01.038710 containerd[1710]: time="2026-01-23T18:58:01.038611002Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Jan 23 18:58:01.038710 containerd[1710]: time="2026-01-23T18:58:01.038622279Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Jan 23 18:58:01.038710 containerd[1710]: time="2026-01-23T18:58:01.038682880Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Jan 23 18:58:01.038710 containerd[1710]: time="2026-01-23T18:58:01.038697120Z" level=info msg="Start snapshots syncer"
Jan 23 18:58:01.038856 containerd[1710]: time="2026-01-23T18:58:01.038722974Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Jan 23 18:58:01.039010 containerd[1710]: time="2026-01-23T18:58:01.038970835Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Jan 23 18:58:01.039187 containerd[1710]: time="2026-01-23T18:58:01.039023954Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Jan 23 18:58:01.039187 containerd[1710]: time="2026-01-23T18:58:01.039077840Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Jan 23 18:58:01.039187 containerd[1710]: time="2026-01-23T18:58:01.039178907Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Jan 23 18:58:01.039253 containerd[1710]: time="2026-01-23T18:58:01.039200041Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Jan 23 18:58:01.039253 containerd[1710]: time="2026-01-23T18:58:01.039211882Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Jan 23 18:58:01.039253 containerd[1710]: time="2026-01-23T18:58:01.039224164Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Jan 23 18:58:01.039314 containerd[1710]: time="2026-01-23T18:58:01.039250587Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Jan 23 18:58:01.039314 containerd[1710]: time="2026-01-23T18:58:01.039262705Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Jan 23 18:58:01.039314 containerd[1710]: time="2026-01-23T18:58:01.039273966Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Jan 23 18:58:01.039314 containerd[1710]: time="2026-01-23T18:58:01.039296771Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Jan 23 18:58:01.039314 containerd[1710]: time="2026-01-23T18:58:01.039308696Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Jan 23 18:58:01.039469 containerd[1710]: time="2026-01-23T18:58:01.039337048Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Jan 23 18:58:01.039469 containerd[1710]: time="2026-01-23T18:58:01.039366390Z" level=info msg="loading plugin"
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 23 18:58:01.039469 containerd[1710]: time="2026-01-23T18:58:01.039380694Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 23 18:58:01.039469 containerd[1710]: time="2026-01-23T18:58:01.039390259Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 23 18:58:01.039469 containerd[1710]: time="2026-01-23T18:58:01.039399774Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 23 18:58:01.039469 containerd[1710]: time="2026-01-23T18:58:01.039408380Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 23 18:58:01.039469 containerd[1710]: time="2026-01-23T18:58:01.039418646Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 23 18:58:01.039469 containerd[1710]: time="2026-01-23T18:58:01.039435219Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 23 18:58:01.039469 containerd[1710]: time="2026-01-23T18:58:01.039451160Z" level=info msg="runtime interface created" Jan 23 18:58:01.039469 containerd[1710]: time="2026-01-23T18:58:01.039456338Z" level=info msg="created NRI interface" Jan 23 18:58:01.039469 containerd[1710]: time="2026-01-23T18:58:01.039464918Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 23 18:58:01.039831 containerd[1710]: time="2026-01-23T18:58:01.039477160Z" level=info msg="Connect containerd service" Jan 23 18:58:01.039896 containerd[1710]: time="2026-01-23T18:58:01.039878568Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 23 18:58:01.040963 
containerd[1710]: time="2026-01-23T18:58:01.040722004Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 23 18:58:01.404942 kubelet[1812]: E0123 18:58:01.404872 1812 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 18:58:01.408723 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 18:58:01.408861 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 18:58:01.409176 systemd[1]: kubelet.service: Consumed 973ms CPU time, 267.2M memory peak. Jan 23 18:58:01.603514 containerd[1710]: time="2026-01-23T18:58:01.601951588Z" level=info msg="Start subscribing containerd event" Jan 23 18:58:01.603514 containerd[1710]: time="2026-01-23T18:58:01.602025688Z" level=info msg="Start recovering state" Jan 23 18:58:01.603514 containerd[1710]: time="2026-01-23T18:58:01.602049684Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 23 18:58:01.603514 containerd[1710]: time="2026-01-23T18:58:01.602087013Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Jan 23 18:58:01.603514 containerd[1710]: time="2026-01-23T18:58:01.602146582Z" level=info msg="Start event monitor" Jan 23 18:58:01.603514 containerd[1710]: time="2026-01-23T18:58:01.602164631Z" level=info msg="Start cni network conf syncer for default" Jan 23 18:58:01.603514 containerd[1710]: time="2026-01-23T18:58:01.602174509Z" level=info msg="Start streaming server" Jan 23 18:58:01.603514 containerd[1710]: time="2026-01-23T18:58:01.602187893Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 23 18:58:01.603514 containerd[1710]: time="2026-01-23T18:58:01.602196910Z" level=info msg="runtime interface starting up..." Jan 23 18:58:01.603514 containerd[1710]: time="2026-01-23T18:58:01.602203151Z" level=info msg="starting plugins..." Jan 23 18:58:01.603514 containerd[1710]: time="2026-01-23T18:58:01.602216178Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 23 18:58:01.603514 containerd[1710]: time="2026-01-23T18:58:01.602524045Z" level=info msg="containerd successfully booted in 0.594260s" Jan 23 18:58:01.602729 systemd[1]: Started containerd.service - containerd container runtime. Jan 23 18:58:01.605257 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 23 18:58:01.608953 systemd[1]: Startup finished in 3.253s (kernel) + 9.882s (initrd) + 10.639s (userspace) = 23.774s. Jan 23 18:58:01.953873 login[1802]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 23 18:58:01.956595 login[1803]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 23 18:58:01.967068 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 23 18:58:01.968740 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 23 18:58:01.982380 systemd-logind[1687]: New session 1 of user core. Jan 23 18:58:01.987308 systemd-logind[1687]: New session 2 of user core. 
Jan 23 18:58:02.003009 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jan 23 18:58:02.005242 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jan 23 18:58:02.033410 (systemd)[1842]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jan 23 18:58:02.036161 systemd-logind[1687]: New session c1 of user core.
Jan 23 18:58:02.039854 waagent[1796]: 2026-01-23T18:58:02.039785Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4
Jan 23 18:58:02.042258 waagent[1796]: 2026-01-23T18:58:02.042192Z INFO Daemon Daemon OS: flatcar 4459.2.3
Jan 23 18:58:02.045502 waagent[1796]: 2026-01-23T18:58:02.044597Z INFO Daemon Daemon Python: 3.11.13
Jan 23 18:58:02.046437 waagent[1796]: 2026-01-23T18:58:02.046385Z INFO Daemon Daemon Run daemon
Jan 23 18:58:02.047999 waagent[1796]: 2026-01-23T18:58:02.047953Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4459.2.3'
Jan 23 18:58:02.050062 waagent[1796]: 2026-01-23T18:58:02.049991Z INFO Daemon Daemon Using waagent for provisioning
Jan 23 18:58:02.051845 waagent[1796]: 2026-01-23T18:58:02.051767Z INFO Daemon Daemon Activate resource disk
Jan 23 18:58:02.053275 waagent[1796]: 2026-01-23T18:58:02.052717Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb
Jan 23 18:58:02.056364 waagent[1796]: 2026-01-23T18:58:02.056324Z INFO Daemon Daemon Found device: None
Jan 23 18:58:02.056609 waagent[1796]: 2026-01-23T18:58:02.056580Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology
Jan 23 18:58:02.056798 waagent[1796]: 2026-01-23T18:58:02.056778Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0
Jan 23 18:58:02.057542 waagent[1796]: 2026-01-23T18:58:02.057509Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Jan 23 18:58:02.057955 waagent[1796]: 2026-01-23T18:58:02.057931Z INFO Daemon Daemon Running default provisioning handler
Jan 23 18:58:02.065342 waagent[1796]: 2026-01-23T18:58:02.065301Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4.
Jan 23 18:58:02.071452 waagent[1796]: 2026-01-23T18:58:02.071211Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service'
Jan 23 18:58:02.074509 waagent[1796]: 2026-01-23T18:58:02.073627Z INFO Daemon Daemon cloud-init is enabled: False
Jan 23 18:58:02.074509 waagent[1796]: 2026-01-23T18:58:02.073752Z INFO Daemon Daemon Copying ovf-env.xml
Jan 23 18:58:02.138758 waagent[1796]: 2026-01-23T18:58:02.138676Z INFO Daemon Daemon Successfully mounted dvd
Jan 23 18:58:02.165191 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully.
Jan 23 18:58:02.167797 waagent[1796]: 2026-01-23T18:58:02.166533Z INFO Daemon Daemon Detect protocol endpoint
Jan 23 18:58:02.168112 waagent[1796]: 2026-01-23T18:58:02.168062Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Jan 23 18:58:02.169523 waagent[1796]: 2026-01-23T18:58:02.169489Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler
Jan 23 18:58:02.172233 waagent[1796]: 2026-01-23T18:58:02.170982Z INFO Daemon Daemon Test for route to 168.63.129.16
Jan 23 18:58:02.172233 waagent[1796]: 2026-01-23T18:58:02.171343Z INFO Daemon Daemon Route to 168.63.129.16 exists
Jan 23 18:58:02.172233 waagent[1796]: 2026-01-23T18:58:02.171555Z INFO Daemon Daemon Wire server endpoint:168.63.129.16
Jan 23 18:58:02.194083 waagent[1796]: 2026-01-23T18:58:02.194043Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05
Jan 23 18:58:02.194999 waagent[1796]: 2026-01-23T18:58:02.194385Z INFO Daemon Daemon Wire protocol version:2012-11-30
Jan 23 18:58:02.194999 waagent[1796]: 2026-01-23T18:58:02.194531Z INFO Daemon Daemon Server preferred version:2015-04-05
Jan 23 18:58:02.279616 waagent[1796]: 2026-01-23T18:58:02.279447Z INFO Daemon Daemon Initializing goal state during protocol detection
Jan 23 18:58:02.282284 waagent[1796]: 2026-01-23T18:58:02.282175Z INFO Daemon Daemon Forcing an update of the goal state.
Jan 23 18:58:02.288599 waagent[1796]: 2026-01-23T18:58:02.288555Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1]
Jan 23 18:58:02.318608 waagent[1796]: 2026-01-23T18:58:02.318570Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.181
Jan 23 18:58:02.321045 waagent[1796]: 2026-01-23T18:58:02.321009Z INFO Daemon
Jan 23 18:58:02.322408 waagent[1796]: 2026-01-23T18:58:02.322322Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: b6f83f4f-5ef4-4997-a883-63e249b683c5 eTag: 3081814676165879169 source: Fabric]
Jan 23 18:58:02.325929 waagent[1796]: 2026-01-23T18:58:02.325895Z INFO Daemon The vmSettings originated via Fabric; will ignore them.
Jan 23 18:58:02.330497 waagent[1796]: 2026-01-23T18:58:02.329582Z INFO Daemon
Jan 23 18:58:02.330918 waagent[1796]: 2026-01-23T18:58:02.330835Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1]
Jan 23 18:58:02.340188 waagent[1796]: 2026-01-23T18:58:02.340156Z INFO Daemon Daemon Downloading artifacts profile blob
Jan 23 18:58:02.348010 systemd[1842]: Queued start job for default target default.target.
Jan 23 18:58:02.355156 systemd[1842]: Created slice app.slice - User Application Slice.
Jan 23 18:58:02.355183 systemd[1842]: Reached target paths.target - Paths.
Jan 23 18:58:02.355299 systemd[1842]: Reached target timers.target - Timers.
Jan 23 18:58:02.356287 systemd[1842]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jan 23 18:58:02.365238 systemd[1842]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jan 23 18:58:02.365292 systemd[1842]: Reached target sockets.target - Sockets.
Jan 23 18:58:02.365325 systemd[1842]: Reached target basic.target - Basic System.
Jan 23 18:58:02.365553 systemd[1]: Started user@500.service - User Manager for UID 500.
Jan 23 18:58:02.365802 systemd[1842]: Reached target default.target - Main User Target.
Jan 23 18:58:02.365840 systemd[1842]: Startup finished in 322ms.
Jan 23 18:58:02.372624 systemd[1]: Started session-1.scope - Session 1 of User core.
Jan 23 18:58:02.373477 systemd[1]: Started session-2.scope - Session 2 of User core.
Jan 23 18:58:02.545550 waagent[1796]: 2026-01-23T18:58:02.545451Z INFO Daemon Downloaded certificate {'thumbprint': '51C6AF07E9D5D09059156D3EE347F6C1EAEB1622', 'hasPrivateKey': True}
Jan 23 18:58:02.548356 waagent[1796]: 2026-01-23T18:58:02.548319Z INFO Daemon Fetch goal state completed
Jan 23 18:58:02.608158 waagent[1796]: 2026-01-23T18:58:02.608063Z INFO Daemon Daemon Starting provisioning
Jan 23 18:58:02.609421 waagent[1796]: 2026-01-23T18:58:02.609381Z INFO Daemon Daemon Handle ovf-env.xml.
Jan 23 18:58:02.610563 waagent[1796]: 2026-01-23T18:58:02.610530Z INFO Daemon Daemon Set hostname [ci-4459.2.3-a-cd2f1addba]
Jan 23 18:58:02.613766 waagent[1796]: 2026-01-23T18:58:02.613712Z INFO Daemon Daemon Publish hostname [ci-4459.2.3-a-cd2f1addba]
Jan 23 18:58:02.617016 waagent[1796]: 2026-01-23T18:58:02.614054Z INFO Daemon Daemon Examine /proc/net/route for primary interface
Jan 23 18:58:02.617016 waagent[1796]: 2026-01-23T18:58:02.614371Z INFO Daemon Daemon Primary interface is [eth0]
Jan 23 18:58:02.621786 systemd-networkd[1339]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 23 18:58:02.622035 systemd-networkd[1339]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 23 18:58:02.622068 systemd-networkd[1339]: eth0: DHCP lease lost
Jan 23 18:58:02.622787 waagent[1796]: 2026-01-23T18:58:02.622738Z INFO Daemon Daemon Create user account if not exists
Jan 23 18:58:02.622993 waagent[1796]: 2026-01-23T18:58:02.622961Z INFO Daemon Daemon User core already exists, skip useradd
Jan 23 18:58:02.623071 waagent[1796]: 2026-01-23T18:58:02.623034Z INFO Daemon Daemon Configure sudoer
Jan 23 18:58:02.628509 waagent[1796]: 2026-01-23T18:58:02.627563Z INFO Daemon Daemon Configure sshd
Jan 23 18:58:02.631082 waagent[1796]: 2026-01-23T18:58:02.630291Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive.
Jan 23 18:58:02.631082 waagent[1796]: 2026-01-23T18:58:02.630408Z INFO Daemon Daemon Deploy ssh public key.
Jan 23 18:58:02.645524 systemd-networkd[1339]: eth0: DHCPv4 address 10.200.4.14/24, gateway 10.200.4.1 acquired from 168.63.129.16
Jan 23 18:58:03.736756 waagent[1796]: 2026-01-23T18:58:03.736673Z INFO Daemon Daemon Provisioning complete
Jan 23 18:58:03.748661 waagent[1796]: 2026-01-23T18:58:03.748602Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping
Jan 23 18:58:03.751229 waagent[1796]: 2026-01-23T18:58:03.748858Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions.
Jan 23 18:58:03.751229 waagent[1796]: 2026-01-23T18:58:03.749001Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent
Jan 23 18:58:03.857031 waagent[1890]: 2026-01-23T18:58:03.856950Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4)
Jan 23 18:58:03.857354 waagent[1890]: 2026-01-23T18:58:03.857075Z INFO ExtHandler ExtHandler OS: flatcar 4459.2.3
Jan 23 18:58:03.857354 waagent[1890]: 2026-01-23T18:58:03.857117Z INFO ExtHandler ExtHandler Python: 3.11.13
Jan 23 18:58:03.857354 waagent[1890]: 2026-01-23T18:58:03.857157Z INFO ExtHandler ExtHandler CPU Arch: x86_64
Jan 23 18:58:03.877631 waagent[1890]: 2026-01-23T18:58:03.877569Z INFO ExtHandler ExtHandler Distro: flatcar-4459.2.3; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.13; Arch: x86_64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0;
Jan 23 18:58:03.877773 waagent[1890]: 2026-01-23T18:58:03.877743Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Jan 23 18:58:03.877821 waagent[1890]: 2026-01-23T18:58:03.877801Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16
Jan 23 18:58:03.886552 waagent[1890]: 2026-01-23T18:58:03.886473Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1]
Jan 23 18:58:03.892350 waagent[1890]: 2026-01-23T18:58:03.892312Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.181
Jan 23 18:58:03.892725 waagent[1890]: 2026-01-23T18:58:03.892692Z INFO ExtHandler
Jan 23 18:58:03.892773 waagent[1890]: 2026-01-23T18:58:03.892748Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: fdaf06c0-c811-49fb-b6be-d9ac79dfeff0 eTag: 3081814676165879169 source: Fabric]
Jan 23 18:58:03.892965 waagent[1890]: 2026-01-23T18:58:03.892938Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them.
Jan 23 18:58:03.893314 waagent[1890]: 2026-01-23T18:58:03.893286Z INFO ExtHandler
Jan 23 18:58:03.893351 waagent[1890]: 2026-01-23T18:58:03.893327Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1]
Jan 23 18:58:03.898397 waagent[1890]: 2026-01-23T18:58:03.898347Z INFO ExtHandler ExtHandler Downloading artifacts profile blob
Jan 23 18:58:03.980425 waagent[1890]: 2026-01-23T18:58:03.980355Z INFO ExtHandler Downloaded certificate {'thumbprint': '51C6AF07E9D5D09059156D3EE347F6C1EAEB1622', 'hasPrivateKey': True}
Jan 23 18:58:03.980873 waagent[1890]: 2026-01-23T18:58:03.980838Z INFO ExtHandler Fetch goal state completed
Jan 23 18:58:03.992783 waagent[1890]: 2026-01-23T18:58:03.992682Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.4.4-dev (Library: OpenSSL 3.4.4-dev )
Jan 23 18:58:03.997023 waagent[1890]: 2026-01-23T18:58:03.996972Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 1890
Jan 23 18:58:03.997134 waagent[1890]: 2026-01-23T18:58:03.997105Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ********
Jan 23 18:58:03.997377 waagent[1890]: 2026-01-23T18:58:03.997355Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ********
Jan 23 18:58:03.998469 waagent[1890]: 2026-01-23T18:58:03.998433Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4459.2.3', '', 'Flatcar Container Linux by Kinvolk']
Jan 23 18:58:03.998795 waagent[1890]: 2026-01-23T18:58:03.998764Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4459.2.3', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported
Jan 23 18:58:03.998911 waagent[1890]: 2026-01-23T18:58:03.998886Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False
Jan 23 18:58:03.999308 waagent[1890]: 2026-01-23T18:58:03.999281Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules
Jan 23 18:58:04.033799 waagent[1890]: 2026-01-23T18:58:04.033760Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service
Jan 23 18:58:04.033960 waagent[1890]: 2026-01-23T18:58:04.033937Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup
Jan 23 18:58:04.039805 waagent[1890]: 2026-01-23T18:58:04.039774Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now
Jan 23 18:58:04.045791 systemd[1]: Reload requested from client PID 1905 ('systemctl') (unit waagent.service)...
Jan 23 18:58:04.046014 systemd[1]: Reloading...
Jan 23 18:58:04.130511 zram_generator::config[1950]: No configuration found.
Jan 23 18:58:04.215509 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#4 cmd 0x4a status: scsi 0x0 srb 0x20 hv 0xc0000001
Jan 23 18:58:04.315952 systemd[1]: Reloading finished in 269 ms.
Jan 23 18:58:04.334507 waagent[1890]: 2026-01-23T18:58:04.332716Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service
Jan 23 18:58:04.334507 waagent[1890]: 2026-01-23T18:58:04.332866Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully
Jan 23 18:58:04.587241 waagent[1890]: 2026-01-23T18:58:04.587117Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up.
Jan 23 18:58:04.587496 waagent[1890]: 2026-01-23T18:58:04.587454Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True]
Jan 23 18:58:04.588275 waagent[1890]: 2026-01-23T18:58:04.588111Z INFO ExtHandler ExtHandler Starting env monitor service.
Jan 23 18:58:04.588275 waagent[1890]: 2026-01-23T18:58:04.588226Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Jan 23 18:58:04.588466 waagent[1890]: 2026-01-23T18:58:04.588436Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16
Jan 23 18:58:04.588678 waagent[1890]: 2026-01-23T18:58:04.588655Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled.
Jan 23 18:58:04.588935 waagent[1890]: 2026-01-23T18:58:04.588887Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service.
Jan 23 18:58:04.588993 waagent[1890]: 2026-01-23T18:58:04.588968Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route:
Jan 23 18:58:04.588993 waagent[1890]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT
Jan 23 18:58:04.588993 waagent[1890]: eth0 00000000 0104C80A 0003 0 0 1024 00000000 0 0 0
Jan 23 18:58:04.588993 waagent[1890]: eth0 0004C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0
Jan 23 18:58:04.588993 waagent[1890]: eth0 0104C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0
Jan 23 18:58:04.588993 waagent[1890]: eth0 10813FA8 0104C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Jan 23 18:58:04.588993 waagent[1890]: eth0 FEA9FEA9 0104C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Jan 23 18:58:04.589447 waagent[1890]: 2026-01-23T18:58:04.589405Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread
Jan 23 18:58:04.589573 waagent[1890]: 2026-01-23T18:58:04.589542Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Jan 23 18:58:04.589636 waagent[1890]: 2026-01-23T18:58:04.589615Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16
Jan 23 18:58:04.589752 waagent[1890]: 2026-01-23T18:58:04.589730Z INFO EnvHandler ExtHandler Configure routes
Jan 23 18:58:04.589809 waagent[1890]: 2026-01-23T18:58:04.589778Z INFO EnvHandler ExtHandler Gateway:None
Jan 23 18:58:04.589848 waagent[1890]: 2026-01-23T18:58:04.589830Z INFO EnvHandler ExtHandler Routes:None
Jan 23 18:58:04.590131 waagent[1890]: 2026-01-23T18:58:04.590112Z INFO ExtHandler ExtHandler Start Extension Telemetry service.
Jan 23 18:58:04.590367 waagent[1890]: 2026-01-23T18:58:04.590329Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True
Jan 23 18:58:04.590446 waagent[1890]: 2026-01-23T18:58:04.590413Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status.
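The routing table above is the raw /proc/net/route dump, in which the Destination, Gateway, and Mask columns are little-endian hexadecimal IPv4 values. As an aside, a minimal sketch (a hypothetical `decode` helper, not part of waagent) that turns those fields back into dotted-quad notation:

```python
import socket
import struct

def decode(hex_addr: str) -> str:
    """Convert a little-endian /proc/net/route hex field to dotted-quad IPv4."""
    return socket.inet_ntoa(struct.pack("<I", int(hex_addr, 16)))

# Default route row above: destination 00000000 via gateway 0104C80A.
print(decode("00000000"))  # 0.0.0.0
print(decode("0104C80A"))  # 10.200.4.1
print(decode("0004C80A"))  # 10.200.4.0 (the on-link subnet route)
```

The decoded gateway 10.200.4.1 matches the DHCPv4 lease recorded elsewhere in this log (`gateway 10.200.4.1 acquired from 168.63.129.16`).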
Jan 23 18:58:04.590909 waagent[1890]: 2026-01-23T18:58:04.590878Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread
Jan 23 18:58:04.608830 waagent[1890]: 2026-01-23T18:58:04.608415Z INFO ExtHandler ExtHandler
Jan 23 18:58:04.608830 waagent[1890]: 2026-01-23T18:58:04.608500Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 6c4b29b9-b42e-4366-a928-f91036a831ea correlation 27b48efb-acae-42d2-8cf4-aee64217b2ed created: 2026-01-23T18:57:11.608604Z]
Jan 23 18:58:04.610574 waagent[1890]: 2026-01-23T18:58:04.609538Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything.
Jan 23 18:58:04.610574 waagent[1890]: 2026-01-23T18:58:04.610114Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms]
Jan 23 18:58:04.637386 waagent[1890]: 2026-01-23T18:58:04.637339Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command
Jan 23 18:58:04.637386 waagent[1890]: Try `iptables -h' or 'iptables --help' for more information.)
Jan 23 18:58:04.637735 waagent[1890]: 2026-01-23T18:58:04.637705Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 87F4FAAF-CC2A-4256-B898-4D95E5AF7FA8;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;]
Jan 23 18:58:04.654606 waagent[1890]: 2026-01-23T18:58:04.654559Z INFO MonitorHandler ExtHandler Network interfaces:
Jan 23 18:58:04.654606 waagent[1890]: Executing ['ip', '-a', '-o', 'link']:
Jan 23 18:58:04.654606 waagent[1890]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
Jan 23 18:58:04.654606 waagent[1890]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 60:45:bd:c1:c9:f3 brd ff:ff:ff:ff:ff:ff\ alias Network Device
Jan 23 18:58:04.654606 waagent[1890]: 3: enP30832s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 60:45:bd:c1:c9:f3 brd ff:ff:ff:ff:ff:ff\ altname enP30832p0s0
Jan 23 18:58:04.654606 waagent[1890]: Executing ['ip', '-4', '-a', '-o', 'address']:
Jan 23 18:58:04.654606 waagent[1890]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
Jan 23 18:58:04.654606 waagent[1890]: 2: eth0 inet 10.200.4.14/24 metric 1024 brd 10.200.4.255 scope global eth0\ valid_lft forever preferred_lft forever
Jan 23 18:58:04.654606 waagent[1890]: Executing ['ip', '-6', '-a', '-o', 'address']:
Jan 23 18:58:04.654606 waagent[1890]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever
Jan 23 18:58:04.654606 waagent[1890]: 2: eth0 inet6 fe80::6245:bdff:fec1:c9f3/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever
Jan 23 18:58:04.690227 waagent[1890]: 2026-01-23T18:58:04.690174Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric:
Jan 23 18:58:04.690227 waagent[1890]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Jan 23 18:58:04.690227 waagent[1890]: pkts bytes target prot opt in out source destination
Jan 23 18:58:04.690227 waagent[1890]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Jan 23 18:58:04.690227 waagent[1890]: pkts bytes target prot opt in out source destination
Jan 23 18:58:04.690227 waagent[1890]: Chain OUTPUT (policy ACCEPT 2 packets, 482 bytes)
Jan 23 18:58:04.690227 waagent[1890]: pkts bytes target prot opt in out source destination
Jan 23 18:58:04.690227 waagent[1890]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Jan 23 18:58:04.690227 waagent[1890]: 2 112 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Jan 23 18:58:04.690227 waagent[1890]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Jan 23 18:58:04.692943 waagent[1890]: 2026-01-23T18:58:04.692894Z INFO EnvHandler ExtHandler Current Firewall rules:
Jan 23 18:58:04.692943 waagent[1890]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Jan 23 18:58:04.692943 waagent[1890]: pkts bytes target prot opt in out source destination
Jan 23 18:58:04.692943 waagent[1890]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Jan 23 18:58:04.692943 waagent[1890]: pkts bytes target prot opt in out source destination
Jan 23 18:58:04.692943 waagent[1890]: Chain OUTPUT (policy ACCEPT 2 packets, 482 bytes)
Jan 23 18:58:04.692943 waagent[1890]: pkts bytes target prot opt in out source destination
Jan 23 18:58:04.692943 waagent[1890]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Jan 23 18:58:04.692943 waagent[1890]: 2 112 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Jan 23 18:58:04.692943 waagent[1890]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Jan 23 18:58:11.542696 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jan 23 18:58:11.544152 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
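The interface dump earlier in the log shows eth0 with MAC 60:45:bd:c1:c9:f3 and the link-local address fe80::6245:bdff:fec1:c9f3: the address is the modified EUI-64 form of the MAC (flip the universal/local bit of the first octet and insert ff:fe in the middle). A minimal sketch of that derivation (hypothetical `mac_to_link_local` helper, not part of any tool in this log):

```python
def mac_to_link_local(mac: str) -> str:
    """Derive an IPv6 link-local address from a MAC via modified EUI-64
    (RFC 4291 Appendix A): flip bit 0x02 of the first octet, insert ff:fe."""
    octets = [int(x, 16) for x in mac.split(":")]
    octets[0] ^= 0x02                       # flip the universal/local bit
    eui = octets[:3] + [0xFF, 0xFE] + octets[3:]
    groups = ["%02x%02x" % (eui[i], eui[i + 1]) for i in range(0, 8, 2)]
    # fe80::/64 prefix plus the interface identifier, with leading zeros trimmed
    return "fe80::" + ":".join(g.lstrip("0") or "0" for g in groups)

print(mac_to_link_local("60:45:bd:c1:c9:f3"))  # fe80::6245:bdff:fec1:c9f3
```

This reproduces exactly the `proto kernel_ll` address the kernel assigned to eth0 above.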
Jan 23 18:58:12.086276 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 18:58:12.095745 (kubelet)[2042]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 23 18:58:12.134696 kubelet[2042]: E0123 18:58:12.134636 2042 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 23 18:58:12.139290 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 23 18:58:12.139436 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 23 18:58:12.139807 systemd[1]: kubelet.service: Consumed 148ms CPU time, 110.7M memory peak.
Jan 23 18:58:22.292760 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jan 23 18:58:22.294268 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 23 18:58:22.732624 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 18:58:22.738713 (kubelet)[2057]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 23 18:58:22.773786 kubelet[2057]: E0123 18:58:22.773623 2057 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 23 18:58:22.775894 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 23 18:58:22.776039 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 23 18:58:22.776349 systemd[1]: kubelet.service: Consumed 136ms CPU time, 107.8M memory peak.
Jan 23 18:58:23.540343 chronyd[1667]: Selected source PHC0
Jan 23 18:58:27.863394 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jan 23 18:58:27.864739 systemd[1]: Started sshd@0-10.200.4.14:22-10.200.16.10:35284.service - OpenSSH per-connection server daemon (10.200.16.10:35284).
Jan 23 18:58:28.590682 sshd[2065]: Accepted publickey for core from 10.200.16.10 port 35284 ssh2: RSA SHA256:zaCd/K7VMGze04A9395FnuTfftxugHl3wO01qWoSs+w
Jan 23 18:58:28.591894 sshd-session[2065]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 18:58:28.596457 systemd-logind[1687]: New session 3 of user core.
Jan 23 18:58:28.603654 systemd[1]: Started session-3.scope - Session 3 of User core.
Jan 23 18:58:29.135370 systemd[1]: Started sshd@1-10.200.4.14:22-10.200.16.10:35294.service - OpenSSH per-connection server daemon (10.200.16.10:35294).
Jan 23 18:58:29.757996 sshd[2071]: Accepted publickey for core from 10.200.16.10 port 35294 ssh2: RSA SHA256:zaCd/K7VMGze04A9395FnuTfftxugHl3wO01qWoSs+w
Jan 23 18:58:29.759238 sshd-session[2071]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 18:58:29.763637 systemd-logind[1687]: New session 4 of user core.
Jan 23 18:58:29.769694 systemd[1]: Started session-4.scope - Session 4 of User core.
Jan 23 18:58:30.194389 sshd[2074]: Connection closed by 10.200.16.10 port 35294
Jan 23 18:58:30.195691 sshd-session[2071]: pam_unix(sshd:session): session closed for user core
Jan 23 18:58:30.199268 systemd-logind[1687]: Session 4 logged out. Waiting for processes to exit.
Jan 23 18:58:30.199729 systemd[1]: sshd@1-10.200.4.14:22-10.200.16.10:35294.service: Deactivated successfully.
Jan 23 18:58:30.201638 systemd[1]: session-4.scope: Deactivated successfully.
Jan 23 18:58:30.203341 systemd-logind[1687]: Removed session 4.
Jan 23 18:58:30.303690 systemd[1]: Started sshd@2-10.200.4.14:22-10.200.16.10:35062.service - OpenSSH per-connection server daemon (10.200.16.10:35062).
Jan 23 18:58:30.926095 sshd[2080]: Accepted publickey for core from 10.200.16.10 port 35062 ssh2: RSA SHA256:zaCd/K7VMGze04A9395FnuTfftxugHl3wO01qWoSs+w
Jan 23 18:58:30.926807 sshd-session[2080]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 18:58:30.931574 systemd-logind[1687]: New session 5 of user core.
Jan 23 18:58:30.940656 systemd[1]: Started session-5.scope - Session 5 of User core.
Jan 23 18:58:31.359569 sshd[2083]: Connection closed by 10.200.16.10 port 35062
Jan 23 18:58:31.361113 sshd-session[2080]: pam_unix(sshd:session): session closed for user core
Jan 23 18:58:31.363985 systemd[1]: sshd@2-10.200.4.14:22-10.200.16.10:35062.service: Deactivated successfully.
Jan 23 18:58:31.365682 systemd[1]: session-5.scope: Deactivated successfully.
Jan 23 18:58:31.367388 systemd-logind[1687]: Session 5 logged out. Waiting for processes to exit.
Jan 23 18:58:31.368607 systemd-logind[1687]: Removed session 5.
Jan 23 18:58:31.474456 systemd[1]: Started sshd@3-10.200.4.14:22-10.200.16.10:35078.service - OpenSSH per-connection server daemon (10.200.16.10:35078).
Jan 23 18:58:32.095740 sshd[2089]: Accepted publickey for core from 10.200.16.10 port 35078 ssh2: RSA SHA256:zaCd/K7VMGze04A9395FnuTfftxugHl3wO01qWoSs+w
Jan 23 18:58:32.097290 sshd-session[2089]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 18:58:32.101633 systemd-logind[1687]: New session 6 of user core.
Jan 23 18:58:32.109655 systemd[1]: Started session-6.scope - Session 6 of User core.
Jan 23 18:58:32.532212 sshd[2092]: Connection closed by 10.200.16.10 port 35078
Jan 23 18:58:32.532767 sshd-session[2089]: pam_unix(sshd:session): session closed for user core
Jan 23 18:58:32.536798 systemd-logind[1687]: Session 6 logged out. Waiting for processes to exit.
Jan 23 18:58:32.537357 systemd[1]: sshd@3-10.200.4.14:22-10.200.16.10:35078.service: Deactivated successfully.
Jan 23 18:58:32.539072 systemd[1]: session-6.scope: Deactivated successfully.
Jan 23 18:58:32.540475 systemd-logind[1687]: Removed session 6.
Jan 23 18:58:32.643070 systemd[1]: Started sshd@4-10.200.4.14:22-10.200.16.10:35094.service - OpenSSH per-connection server daemon (10.200.16.10:35094).
Jan 23 18:58:32.792899 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Jan 23 18:58:32.794516 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 23 18:58:33.265572 sshd[2098]: Accepted publickey for core from 10.200.16.10 port 35094 ssh2: RSA SHA256:zaCd/K7VMGze04A9395FnuTfftxugHl3wO01qWoSs+w
Jan 23 18:58:33.266595 sshd-session[2098]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 18:58:33.266779 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 18:58:33.271790 systemd-logind[1687]: New session 7 of user core.
Jan 23 18:58:33.276776 (kubelet)[2109]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 23 18:58:33.277193 systemd[1]: Started session-7.scope - Session 7 of User core.
Jan 23 18:58:33.315057 kubelet[2109]: E0123 18:58:33.315007 2109 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 23 18:58:33.317103 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 23 18:58:33.317239 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 23 18:58:33.317583 systemd[1]: kubelet.service: Consumed 137ms CPU time, 108.7M memory peak.
Jan 23 18:58:33.743834 sudo[2117]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jan 23 18:58:33.744072 sudo[2117]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 23 18:58:33.774432 sudo[2117]: pam_unix(sudo:session): session closed for user root
Jan 23 18:58:33.873134 sshd[2111]: Connection closed by 10.200.16.10 port 35094
Jan 23 18:58:33.874732 sshd-session[2098]: pam_unix(sshd:session): session closed for user core
Jan 23 18:58:33.877655 systemd[1]: sshd@4-10.200.4.14:22-10.200.16.10:35094.service: Deactivated successfully.
Jan 23 18:58:33.879586 systemd[1]: session-7.scope: Deactivated successfully.
Jan 23 18:58:33.880919 systemd-logind[1687]: Session 7 logged out. Waiting for processes to exit.
Jan 23 18:58:33.882523 systemd-logind[1687]: Removed session 7.
Jan 23 18:58:33.983479 systemd[1]: Started sshd@5-10.200.4.14:22-10.200.16.10:35098.service - OpenSSH per-connection server daemon (10.200.16.10:35098).
Jan 23 18:58:34.608119 sshd[2123]: Accepted publickey for core from 10.200.16.10 port 35098 ssh2: RSA SHA256:zaCd/K7VMGze04A9395FnuTfftxugHl3wO01qWoSs+w
Jan 23 18:58:34.609359 sshd-session[2123]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 18:58:34.613538 systemd-logind[1687]: New session 8 of user core.
Jan 23 18:58:34.620632 systemd[1]: Started session-8.scope - Session 8 of User core.
Jan 23 18:58:34.947115 sudo[2128]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jan 23 18:58:34.947341 sudo[2128]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 23 18:58:34.955040 sudo[2128]: pam_unix(sudo:session): session closed for user root
Jan 23 18:58:34.959150 sudo[2127]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jan 23 18:58:34.959370 sudo[2127]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 23 18:58:34.967382 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 23 18:58:34.996542 augenrules[2150]: No rules
Jan 23 18:58:34.997558 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 23 18:58:34.997779 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 23 18:58:34.998693 sudo[2127]: pam_unix(sudo:session): session closed for user root
Jan 23 18:58:35.096861 sshd[2126]: Connection closed by 10.200.16.10 port 35098
Jan 23 18:58:35.097690 sshd-session[2123]: pam_unix(sshd:session): session closed for user core
Jan 23 18:58:35.100946 systemd[1]: sshd@5-10.200.4.14:22-10.200.16.10:35098.service: Deactivated successfully.
Jan 23 18:58:35.103024 systemd[1]: session-8.scope: Deactivated successfully.
Jan 23 18:58:35.103532 systemd-logind[1687]: Session 8 logged out. Waiting for processes to exit.
Jan 23 18:58:35.105321 systemd-logind[1687]: Removed session 8.
Jan 23 18:58:35.210431 systemd[1]: Started sshd@6-10.200.4.14:22-10.200.16.10:35110.service - OpenSSH per-connection server daemon (10.200.16.10:35110).
Jan 23 18:58:35.829738 sshd[2159]: Accepted publickey for core from 10.200.16.10 port 35110 ssh2: RSA SHA256:zaCd/K7VMGze04A9395FnuTfftxugHl3wO01qWoSs+w
Jan 23 18:58:35.830908 sshd-session[2159]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 18:58:35.835578 systemd-logind[1687]: New session 9 of user core.
Jan 23 18:58:35.841675 systemd[1]: Started session-9.scope - Session 9 of User core.
Jan 23 18:58:36.168419 sudo[2163]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jan 23 18:58:36.168665 sudo[2163]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 23 18:58:36.667398 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 18:58:36.667572 systemd[1]: kubelet.service: Consumed 137ms CPU time, 108.7M memory peak.
Jan 23 18:58:36.669933 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 23 18:58:36.693192 systemd[1]: Reload requested from client PID 2198 ('systemctl') (unit session-9.scope)...
Jan 23 18:58:36.693316 systemd[1]: Reloading...
Jan 23 18:58:36.795563 zram_generator::config[2244]: No configuration found.
Jan 23 18:58:36.988647 systemd[1]: Reloading finished in 294 ms.
Jan 23 18:58:37.094691 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jan 23 18:58:37.094771 systemd[1]: kubelet.service: Failed with result 'signal'.
Jan 23 18:58:37.095041 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 18:58:37.095385 systemd[1]: kubelet.service: Consumed 85ms CPU time, 83.2M memory peak.
Jan 23 18:58:37.097058 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 23 18:58:37.630703 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 18:58:37.639326 (kubelet)[2311]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 23 18:58:37.673555 kubelet[2311]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 23 18:58:37.673810 kubelet[2311]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jan 23 18:58:37.673810 kubelet[2311]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 23 18:58:37.673882 kubelet[2311]: I0123 18:58:37.673863 2311 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 23 18:58:38.123520 kubelet[2311]: I0123 18:58:38.122753 2311 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Jan 23 18:58:38.123520 kubelet[2311]: I0123 18:58:38.122782 2311 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 23 18:58:38.123520 kubelet[2311]: I0123 18:58:38.123155 2311 server.go:956] "Client rotation is on, will bootstrap in background"
Jan 23 18:58:38.150440 kubelet[2311]: I0123 18:58:38.150409 2311 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 23 18:58:38.156194 kubelet[2311]: I0123 18:58:38.156171 2311 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jan 23 18:58:38.159403 kubelet[2311]: I0123 18:58:38.159377 2311 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 23 18:58:38.160368 kubelet[2311]: I0123 18:58:38.159592 2311 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 23 18:58:38.160368 kubelet[2311]: I0123 18:58:38.159629 2311 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.200.4.14","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 23 18:58:38.160368 kubelet[2311]: I0123 18:58:38.159814 2311 topology_manager.go:138] "Creating topology manager with none policy"
Jan 23 18:58:38.160368 kubelet[2311]: I0123 18:58:38.159825 2311 container_manager_linux.go:303] "Creating device plugin manager"
Jan 23 18:58:38.161304 kubelet[2311]: I0123 18:58:38.161281 2311 state_mem.go:36] "Initialized new in-memory state store"
Jan 23 18:58:38.164421 kubelet[2311]: I0123 18:58:38.164227 2311 kubelet.go:480] "Attempting to sync node with API server"
Jan 23 18:58:38.164421 kubelet[2311]: I0123 18:58:38.164250 2311 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 23 18:58:38.164421 kubelet[2311]: I0123 18:58:38.164275 2311 kubelet.go:386] "Adding apiserver pod source"
Jan 23 18:58:38.164421 kubelet[2311]: I0123 18:58:38.164289 2311 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 23 18:58:38.168267 kubelet[2311]: E0123 18:58:38.168245 2311 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 18:58:38.168393 kubelet[2311]: E0123 18:58:38.168383 2311 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 18:58:38.172894 kubelet[2311]: I0123 18:58:38.172857 2311 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Jan 23 18:58:38.173263 kubelet[2311]: I0123 18:58:38.173248 2311 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Jan 23 18:58:38.173850 kubelet[2311]: W0123 18:58:38.173836 2311 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 23 18:58:38.176266 kubelet[2311]: I0123 18:58:38.176244 2311 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jan 23 18:58:38.176343 kubelet[2311]: I0123 18:58:38.176293 2311 server.go:1289] "Started kubelet"
Jan 23 18:58:38.176583 kubelet[2311]: I0123 18:58:38.176560 2311 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Jan 23 18:58:38.177459 kubelet[2311]: I0123 18:58:38.177403 2311 server.go:317] "Adding debug handlers to kubelet server"
Jan 23 18:58:38.180631 kubelet[2311]: I0123 18:58:38.180580 2311 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 23 18:58:38.181002 kubelet[2311]: I0123 18:58:38.180987 2311 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 23 18:58:38.183597 kubelet[2311]: I0123 18:58:38.183160 2311 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 23 18:58:38.184391 kubelet[2311]: I0123 18:58:38.184327 2311 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jan 23 18:58:38.189444 kubelet[2311]: I0123 18:58:38.189427 2311 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jan 23 18:58:38.189856 kubelet[2311]: E0123 18:58:38.189841 2311 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.200.4.14\" not found"
Jan 23 18:58:38.190436 kubelet[2311]: I0123 18:58:38.190422 2311 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jan 23 18:58:38.191734 kubelet[2311]: I0123 18:58:38.191720 2311 reconciler.go:26] "Reconciler: start to sync state"
Jan 23 18:58:38.193655 kubelet[2311]: E0123 18:58:38.193630 2311 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 23 18:58:38.194310 kubelet[2311]: E0123 18:58:38.194283 2311 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.200.4.14\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms"
Jan 23 18:58:38.194562 kubelet[2311]: I0123 18:58:38.194552 2311 factory.go:223] Registration of the containerd container factory successfully
Jan 23 18:58:38.194622 kubelet[2311]: I0123 18:58:38.194616 2311 factory.go:223] Registration of the systemd container factory successfully
Jan 23 18:58:38.194752 kubelet[2311]: I0123 18:58:38.194735 2311 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 23 18:58:38.196357 kubelet[2311]: E0123 18:58:38.195067 2311 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.200.4.14.188d7138fb64ba8d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.200.4.14,UID:10.200.4.14,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:10.200.4.14,},FirstTimestamp:2026-01-23 18:58:38.176262797 +0000 UTC m=+0.533248702,LastTimestamp:2026-01-23 18:58:38.176262797 +0000 UTC m=+0.533248702,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.200.4.14,}"
Jan 23 18:58:38.196839 kubelet[2311]: E0123 18:58:38.196812 2311 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"10.200.4.14\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Jan 23 18:58:38.196922 kubelet[2311]: E0123 18:58:38.196890 2311 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Jan 23 18:58:38.205056 kubelet[2311]: I0123 18:58:38.205045 2311 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jan 23 18:58:38.205331 kubelet[2311]: I0123 18:58:38.205154 2311 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jan 23 18:58:38.205331 kubelet[2311]: I0123 18:58:38.205176 2311 state_mem.go:36] "Initialized new in-memory state store"
Jan 23 18:58:38.210466 kubelet[2311]: I0123 18:58:38.210449 2311 policy_none.go:49] "None policy: Start"
Jan 23 18:58:38.210573 kubelet[2311]: I0123 18:58:38.210527 2311 memory_manager.go:186] "Starting memorymanager" policy="None"
Jan 23 18:58:38.210573 kubelet[2311]: I0123 18:58:38.210539 2311 state_mem.go:35] "Initializing new in-memory state store"
Jan 23 18:58:38.218049 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jan 23 18:58:38.229565 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jan 23 18:58:38.232494 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jan 23 18:58:38.240038 kubelet[2311]: E0123 18:58:38.240018 2311 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Jan 23 18:58:38.240196 kubelet[2311]: I0123 18:58:38.240175 2311 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 23 18:58:38.240228 kubelet[2311]: I0123 18:58:38.240187 2311 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 23 18:58:38.240728 kubelet[2311]: I0123 18:58:38.240713 2311 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 23 18:58:38.244374 kubelet[2311]: E0123 18:58:38.244267 2311 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jan 23 18:58:38.244374 kubelet[2311]: E0123 18:58:38.244306 2311 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.200.4.14\" not found"
Jan 23 18:58:38.247077 kubelet[2311]: I0123 18:58:38.247045 2311 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Jan 23 18:58:38.248405 kubelet[2311]: I0123 18:58:38.248378 2311 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Jan 23 18:58:38.248469 kubelet[2311]: I0123 18:58:38.248411 2311 status_manager.go:230] "Starting to sync pod status with apiserver"
Jan 23 18:58:38.248469 kubelet[2311]: I0123 18:58:38.248429 2311 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jan 23 18:58:38.248469 kubelet[2311]: I0123 18:58:38.248436 2311 kubelet.go:2436] "Starting kubelet main sync loop"
Jan 23 18:58:38.248606 kubelet[2311]: E0123 18:58:38.248585 2311 kubelet.go:2460] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Jan 23 18:58:38.342612 kubelet[2311]: I0123 18:58:38.342095 2311 kubelet_node_status.go:75] "Attempting to register node" node="10.200.4.14"
Jan 23 18:58:38.353063 kubelet[2311]: I0123 18:58:38.353037 2311 kubelet_node_status.go:78] "Successfully registered node" node="10.200.4.14"
Jan 23 18:58:38.353063 kubelet[2311]: E0123 18:58:38.353067 2311 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"10.200.4.14\": node \"10.200.4.14\" not found"
Jan 23 18:58:38.379413 kubelet[2311]: I0123 18:58:38.379291 2311 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24"
Jan 23 18:58:38.379944 containerd[1710]: time="2026-01-23T18:58:38.379717714Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jan 23 18:58:38.380678 kubelet[2311]: I0123 18:58:38.380584 2311 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24"
Jan 23 18:58:38.397385 kubelet[2311]: E0123 18:58:38.397358 2311 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.200.4.14\" not found"
Jan 23 18:58:38.498147 kubelet[2311]: E0123 18:58:38.498102 2311 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.200.4.14\" not found"
Jan 23 18:58:38.598836 kubelet[2311]: E0123 18:58:38.598771 2311 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.200.4.14\" not found"
Jan 23 18:58:38.663281 sudo[2163]: pam_unix(sudo:session): session closed for user root
Jan 23 18:58:38.699121 kubelet[2311]: E0123 18:58:38.699070 2311 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.200.4.14\" not found"
Jan 23 18:58:38.761362 sshd[2162]: Connection closed by 10.200.16.10 port 35110
Jan 23 18:58:38.763195 sshd-session[2159]: pam_unix(sshd:session): session closed for user core
Jan 23 18:58:38.766352 systemd[1]: sshd@6-10.200.4.14:22-10.200.16.10:35110.service: Deactivated successfully.
Jan 23 18:58:38.768220 systemd[1]: session-9.scope: Deactivated successfully.
Jan 23 18:58:38.768463 systemd[1]: session-9.scope: Consumed 430ms CPU time, 74M memory peak.
Jan 23 18:58:38.770720 systemd-logind[1687]: Session 9 logged out. Waiting for processes to exit.
Jan 23 18:58:38.771817 systemd-logind[1687]: Removed session 9.
Jan 23 18:58:38.799781 kubelet[2311]: E0123 18:58:38.799742 2311 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.200.4.14\" not found"
Jan 23 18:58:38.900805 kubelet[2311]: E0123 18:58:38.900754 2311 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.200.4.14\" not found"
Jan 23 18:58:39.001494 kubelet[2311]: E0123 18:58:39.001343 2311 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.200.4.14\" not found"
Jan 23 18:58:39.101978 kubelet[2311]: E0123 18:58:39.101919 2311 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.200.4.14\" not found"
Jan 23 18:58:39.125116 kubelet[2311]: I0123 18:58:39.125086 2311 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Jan 23 18:58:39.125386 kubelet[2311]: I0123 18:58:39.125317 2311 reflector.go:556] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" err="very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received"
Jan 23 18:58:39.125386 kubelet[2311]: I0123 18:58:39.125374 2311 reflector.go:556] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" err="very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received"
Jan 23 18:58:39.169727 kubelet[2311]: E0123 18:58:39.169656 2311 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 18:58:39.202535 kubelet[2311]: E0123 18:58:39.202473 2311 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.200.4.14\" not found"
Jan 23 18:58:39.302656 kubelet[2311]: E0123 18:58:39.302628 2311 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.200.4.14\" not found"
Jan 23 18:58:39.403402 kubelet[2311]: E0123 18:58:39.403353 2311 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.200.4.14\" not found"
Jan 23 18:58:39.504048 kubelet[2311]: E0123 18:58:39.504002 2311 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.200.4.14\" not found"
Jan 23 18:58:39.604648 kubelet[2311]: E0123 18:58:39.604523 2311 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.200.4.14\" not found"
Jan 23 18:58:40.170684 kubelet[2311]: I0123 18:58:40.170631 2311 apiserver.go:52] "Watching apiserver"
Jan 23 18:58:40.170684 kubelet[2311]: E0123 18:58:40.170649 2311 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 18:58:40.184756 kubelet[2311]: E0123 18:58:40.184652 2311 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-j8n9m" podUID="2b33e19f-e45d-4033-97dc-1891d2a04f54"
Jan 23 18:58:40.190666 systemd[1]: Created slice kubepods-besteffort-pod08a7b0f0_0afa_4a0a_bf4d_a2b352b00169.slice - libcontainer container kubepods-besteffort-pod08a7b0f0_0afa_4a0a_bf4d_a2b352b00169.slice.
Jan 23 18:58:40.198285 kubelet[2311]: I0123 18:58:40.197567 2311 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Jan 23 18:58:40.202293 systemd[1]: Created slice kubepods-besteffort-podb7eace1d_4548_4946_a18a_616580bbd434.slice - libcontainer container kubepods-besteffort-podb7eace1d_4548_4946_a18a_616580bbd434.slice.
Jan 23 18:58:40.202915 kubelet[2311]: I0123 18:58:40.202887 2311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/2b33e19f-e45d-4033-97dc-1891d2a04f54-registration-dir\") pod \"csi-node-driver-j8n9m\" (UID: \"2b33e19f-e45d-4033-97dc-1891d2a04f54\") " pod="calico-system/csi-node-driver-j8n9m" Jan 23 18:58:40.202976 kubelet[2311]: I0123 18:58:40.202916 2311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/08a7b0f0-0afa-4a0a-bf4d-a2b352b00169-cni-net-dir\") pod \"calico-node-kskcv\" (UID: \"08a7b0f0-0afa-4a0a-bf4d-a2b352b00169\") " pod="calico-system/calico-node-kskcv" Jan 23 18:58:40.202976 kubelet[2311]: I0123 18:58:40.202937 2311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/08a7b0f0-0afa-4a0a-bf4d-a2b352b00169-policysync\") pod \"calico-node-kskcv\" (UID: \"08a7b0f0-0afa-4a0a-bf4d-a2b352b00169\") " pod="calico-system/calico-node-kskcv" Jan 23 18:58:40.202976 kubelet[2311]: I0123 18:58:40.202953 2311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/08a7b0f0-0afa-4a0a-bf4d-a2b352b00169-var-lib-calico\") pod \"calico-node-kskcv\" (UID: \"08a7b0f0-0afa-4a0a-bf4d-a2b352b00169\") " pod="calico-system/calico-node-kskcv" Jan 23 18:58:40.202976 kubelet[2311]: I0123 18:58:40.202972 2311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2b33e19f-e45d-4033-97dc-1891d2a04f54-kubelet-dir\") pod \"csi-node-driver-j8n9m\" (UID: \"2b33e19f-e45d-4033-97dc-1891d2a04f54\") " pod="calico-system/csi-node-driver-j8n9m" Jan 23 18:58:40.203072 kubelet[2311]: I0123 18:58:40.202988 2311 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/2b33e19f-e45d-4033-97dc-1891d2a04f54-socket-dir\") pod \"csi-node-driver-j8n9m\" (UID: \"2b33e19f-e45d-4033-97dc-1891d2a04f54\") " pod="calico-system/csi-node-driver-j8n9m" Jan 23 18:58:40.203072 kubelet[2311]: I0123 18:58:40.203006 2311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/08a7b0f0-0afa-4a0a-bf4d-a2b352b00169-cni-bin-dir\") pod \"calico-node-kskcv\" (UID: \"08a7b0f0-0afa-4a0a-bf4d-a2b352b00169\") " pod="calico-system/calico-node-kskcv" Jan 23 18:58:40.203072 kubelet[2311]: I0123 18:58:40.203022 2311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/08a7b0f0-0afa-4a0a-bf4d-a2b352b00169-tigera-ca-bundle\") pod \"calico-node-kskcv\" (UID: \"08a7b0f0-0afa-4a0a-bf4d-a2b352b00169\") " pod="calico-system/calico-node-kskcv" Jan 23 18:58:40.203072 kubelet[2311]: I0123 18:58:40.203054 2311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/08a7b0f0-0afa-4a0a-bf4d-a2b352b00169-xtables-lock\") pod \"calico-node-kskcv\" (UID: \"08a7b0f0-0afa-4a0a-bf4d-a2b352b00169\") " pod="calico-system/calico-node-kskcv" Jan 23 18:58:40.203179 kubelet[2311]: I0123 18:58:40.203072 2311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b7eace1d-4548-4946-a18a-616580bbd434-kube-proxy\") pod \"kube-proxy-kwcmx\" (UID: \"b7eace1d-4548-4946-a18a-616580bbd434\") " pod="kube-system/kube-proxy-kwcmx" Jan 23 18:58:40.203179 kubelet[2311]: I0123 18:58:40.203090 2311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/08a7b0f0-0afa-4a0a-bf4d-a2b352b00169-cni-log-dir\") pod \"calico-node-kskcv\" (UID: \"08a7b0f0-0afa-4a0a-bf4d-a2b352b00169\") " pod="calico-system/calico-node-kskcv" Jan 23 18:58:40.203179 kubelet[2311]: I0123 18:58:40.203107 2311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/08a7b0f0-0afa-4a0a-bf4d-a2b352b00169-lib-modules\") pod \"calico-node-kskcv\" (UID: \"08a7b0f0-0afa-4a0a-bf4d-a2b352b00169\") " pod="calico-system/calico-node-kskcv" Jan 23 18:58:40.203179 kubelet[2311]: I0123 18:58:40.203122 2311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/08a7b0f0-0afa-4a0a-bf4d-a2b352b00169-node-certs\") pod \"calico-node-kskcv\" (UID: \"08a7b0f0-0afa-4a0a-bf4d-a2b352b00169\") " pod="calico-system/calico-node-kskcv" Jan 23 18:58:40.203179 kubelet[2311]: I0123 18:58:40.203138 2311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vpq9\" (UniqueName: \"kubernetes.io/projected/08a7b0f0-0afa-4a0a-bf4d-a2b352b00169-kube-api-access-6vpq9\") pod \"calico-node-kskcv\" (UID: \"08a7b0f0-0afa-4a0a-bf4d-a2b352b00169\") " pod="calico-system/calico-node-kskcv" Jan 23 18:58:40.203290 kubelet[2311]: I0123 18:58:40.203157 2311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/2b33e19f-e45d-4033-97dc-1891d2a04f54-varrun\") pod \"csi-node-driver-j8n9m\" (UID: \"2b33e19f-e45d-4033-97dc-1891d2a04f54\") " pod="calico-system/csi-node-driver-j8n9m" Jan 23 18:58:40.203290 kubelet[2311]: I0123 18:58:40.203173 2311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: 
\"kubernetes.io/host-path/08a7b0f0-0afa-4a0a-bf4d-a2b352b00169-flexvol-driver-host\") pod \"calico-node-kskcv\" (UID: \"08a7b0f0-0afa-4a0a-bf4d-a2b352b00169\") " pod="calico-system/calico-node-kskcv" Jan 23 18:58:40.203290 kubelet[2311]: I0123 18:58:40.203192 2311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/08a7b0f0-0afa-4a0a-bf4d-a2b352b00169-var-run-calico\") pod \"calico-node-kskcv\" (UID: \"08a7b0f0-0afa-4a0a-bf4d-a2b352b00169\") " pod="calico-system/calico-node-kskcv" Jan 23 18:58:40.203290 kubelet[2311]: I0123 18:58:40.203212 2311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56gf5\" (UniqueName: \"kubernetes.io/projected/2b33e19f-e45d-4033-97dc-1891d2a04f54-kube-api-access-56gf5\") pod \"csi-node-driver-j8n9m\" (UID: \"2b33e19f-e45d-4033-97dc-1891d2a04f54\") " pod="calico-system/csi-node-driver-j8n9m" Jan 23 18:58:40.203290 kubelet[2311]: I0123 18:58:40.203240 2311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b7eace1d-4548-4946-a18a-616580bbd434-xtables-lock\") pod \"kube-proxy-kwcmx\" (UID: \"b7eace1d-4548-4946-a18a-616580bbd434\") " pod="kube-system/kube-proxy-kwcmx" Jan 23 18:58:40.203392 kubelet[2311]: I0123 18:58:40.203256 2311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b7eace1d-4548-4946-a18a-616580bbd434-lib-modules\") pod \"kube-proxy-kwcmx\" (UID: \"b7eace1d-4548-4946-a18a-616580bbd434\") " pod="kube-system/kube-proxy-kwcmx" Jan 23 18:58:40.203392 kubelet[2311]: I0123 18:58:40.203275 2311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96jdp\" (UniqueName: 
\"kubernetes.io/projected/b7eace1d-4548-4946-a18a-616580bbd434-kube-api-access-96jdp\") pod \"kube-proxy-kwcmx\" (UID: \"b7eace1d-4548-4946-a18a-616580bbd434\") " pod="kube-system/kube-proxy-kwcmx" Jan 23 18:58:40.308082 kubelet[2311]: E0123 18:58:40.307987 2311 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:40.308082 kubelet[2311]: W0123 18:58:40.308010 2311 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:40.308082 kubelet[2311]: E0123 18:58:40.308036 2311 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:40.318663 kubelet[2311]: E0123 18:58:40.318612 2311 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:40.318663 kubelet[2311]: W0123 18:58:40.318632 2311 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:40.318883 kubelet[2311]: E0123 18:58:40.318815 2311 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:58:40.319646 kubelet[2311]: E0123 18:58:40.319605 2311 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:40.319646 kubelet[2311]: W0123 18:58:40.319618 2311 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:40.321452 kubelet[2311]: E0123 18:58:40.321397 2311 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:40.327162 kubelet[2311]: E0123 18:58:40.327112 2311 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:40.327162 kubelet[2311]: W0123 18:58:40.327142 2311 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:40.327273 kubelet[2311]: E0123 18:58:40.327204 2311 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:40.500925 containerd[1710]: time="2026-01-23T18:58:40.500246991Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-kskcv,Uid:08a7b0f0-0afa-4a0a-bf4d-a2b352b00169,Namespace:calico-system,Attempt:0,}" Jan 23 18:58:40.505429 containerd[1710]: time="2026-01-23T18:58:40.505396738Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kwcmx,Uid:b7eace1d-4548-4946-a18a-616580bbd434,Namespace:kube-system,Attempt:0,}" Jan 23 18:58:41.089739 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2290029227.mount: Deactivated successfully. 
Jan 23 18:58:41.115841 containerd[1710]: time="2026-01-23T18:58:41.115792465Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 18:58:41.121008 containerd[1710]: time="2026-01-23T18:58:41.120924898Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" Jan 23 18:58:41.123317 containerd[1710]: time="2026-01-23T18:58:41.123287881Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 18:58:41.125913 containerd[1710]: time="2026-01-23T18:58:41.125879365Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 18:58:41.131079 containerd[1710]: time="2026-01-23T18:58:41.130746135Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 23 18:58:41.133684 containerd[1710]: time="2026-01-23T18:58:41.133660948Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 18:58:41.134182 containerd[1710]: time="2026-01-23T18:58:41.134160430Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 620.816807ms" Jan 23 18:58:41.135272 containerd[1710]: 
time="2026-01-23T18:58:41.135236928Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 625.982043ms" Jan 23 18:58:41.170990 kubelet[2311]: E0123 18:58:41.170934 2311 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:58:41.176752 containerd[1710]: time="2026-01-23T18:58:41.176677515Z" level=info msg="connecting to shim 805d5ebf7bb8fe6e144ab070bb7378b3630d6d5d141112969e46deeabf6abf91" address="unix:///run/containerd/s/1c66087e0b1d4e72255bde732f1637591321b56636699985beb411df0ebd5e4f" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:58:41.184793 containerd[1710]: time="2026-01-23T18:58:41.184751015Z" level=info msg="connecting to shim 0a3ea78cd468bd52dc5194e7020968a903aa576a22657525288865219ff02bef" address="unix:///run/containerd/s/93bddb7da15774ebf7f6c60ee9fbd05fb8d3bb4be28764de083fe64ef2c7c3f6" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:58:41.204693 systemd[1]: Started cri-containerd-805d5ebf7bb8fe6e144ab070bb7378b3630d6d5d141112969e46deeabf6abf91.scope - libcontainer container 805d5ebf7bb8fe6e144ab070bb7378b3630d6d5d141112969e46deeabf6abf91. Jan 23 18:58:41.219658 systemd[1]: Started cri-containerd-0a3ea78cd468bd52dc5194e7020968a903aa576a22657525288865219ff02bef.scope - libcontainer container 0a3ea78cd468bd52dc5194e7020968a903aa576a22657525288865219ff02bef. 
Jan 23 18:58:41.245014 containerd[1710]: time="2026-01-23T18:58:41.244927448Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kwcmx,Uid:b7eace1d-4548-4946-a18a-616580bbd434,Namespace:kube-system,Attempt:0,} returns sandbox id \"805d5ebf7bb8fe6e144ab070bb7378b3630d6d5d141112969e46deeabf6abf91\"" Jan 23 18:58:41.247453 containerd[1710]: time="2026-01-23T18:58:41.247367752Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\"" Jan 23 18:58:41.259961 containerd[1710]: time="2026-01-23T18:58:41.259931696Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-kskcv,Uid:08a7b0f0-0afa-4a0a-bf4d-a2b352b00169,Namespace:calico-system,Attempt:0,} returns sandbox id \"0a3ea78cd468bd52dc5194e7020968a903aa576a22657525288865219ff02bef\"" Jan 23 18:58:42.171064 kubelet[2311]: E0123 18:58:42.171027 2311 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:58:42.249793 kubelet[2311]: E0123 18:58:42.249359 2311 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-j8n9m" podUID="2b33e19f-e45d-4033-97dc-1891d2a04f54" Jan 23 18:58:43.172246 kubelet[2311]: E0123 18:58:43.172184 2311 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:58:43.554467 kernel: hv_balloon: Max. 
dynamic memory size: 8192 MB Jan 23 18:58:44.172929 kubelet[2311]: E0123 18:58:44.172871 2311 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:58:44.250519 kubelet[2311]: E0123 18:58:44.250440 2311 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-j8n9m" podUID="2b33e19f-e45d-4033-97dc-1891d2a04f54" Jan 23 18:58:44.837806 update_engine[1689]: I20260123 18:58:44.837740 1689 update_attempter.cc:509] Updating boot flags... Jan 23 18:58:45.174115 kubelet[2311]: E0123 18:58:45.173973 2311 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:58:46.174183 kubelet[2311]: E0123 18:58:46.174106 2311 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:58:46.250036 kubelet[2311]: E0123 18:58:46.249989 2311 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-j8n9m" podUID="2b33e19f-e45d-4033-97dc-1891d2a04f54" Jan 23 18:58:47.174386 kubelet[2311]: E0123 18:58:47.174346 2311 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:58:48.175013 kubelet[2311]: E0123 18:58:48.174976 2311 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:58:48.249975 kubelet[2311]: E0123 18:58:48.249440 2311 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not 
ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-j8n9m" podUID="2b33e19f-e45d-4033-97dc-1891d2a04f54" Jan 23 18:58:48.968313 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1378593025.mount: Deactivated successfully. Jan 23 18:58:49.175890 kubelet[2311]: E0123 18:58:49.175851 2311 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:58:49.356817 containerd[1710]: time="2026-01-23T18:58:49.356766531Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:58:49.360533 containerd[1710]: time="2026-01-23T18:58:49.360507447Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=31930104" Jan 23 18:58:49.363731 containerd[1710]: time="2026-01-23T18:58:49.363574518Z" level=info msg="ImageCreate event name:\"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:58:49.366764 containerd[1710]: time="2026-01-23T18:58:49.366735623Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:58:49.367127 containerd[1710]: time="2026-01-23T18:58:49.367101027Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"31929115\" in 8.119701832s" Jan 23 18:58:49.367168 containerd[1710]: time="2026-01-23T18:58:49.367136994Z" level=info 
msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\"" Jan 23 18:58:49.368254 containerd[1710]: time="2026-01-23T18:58:49.368086368Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 23 18:58:49.373987 containerd[1710]: time="2026-01-23T18:58:49.373955871Z" level=info msg="CreateContainer within sandbox \"805d5ebf7bb8fe6e144ab070bb7378b3630d6d5d141112969e46deeabf6abf91\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 23 18:58:49.391938 containerd[1710]: time="2026-01-23T18:58:49.391903426Z" level=info msg="Container 7bc6270f82bbf93ecab8e404e5e3f3634204b3a2a4bf54f72e33f2e3337fe205: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:58:49.409433 containerd[1710]: time="2026-01-23T18:58:49.409398221Z" level=info msg="CreateContainer within sandbox \"805d5ebf7bb8fe6e144ab070bb7378b3630d6d5d141112969e46deeabf6abf91\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"7bc6270f82bbf93ecab8e404e5e3f3634204b3a2a4bf54f72e33f2e3337fe205\"" Jan 23 18:58:49.411569 containerd[1710]: time="2026-01-23T18:58:49.410725550Z" level=info msg="StartContainer for \"7bc6270f82bbf93ecab8e404e5e3f3634204b3a2a4bf54f72e33f2e3337fe205\"" Jan 23 18:58:49.413680 containerd[1710]: time="2026-01-23T18:58:49.413646305Z" level=info msg="connecting to shim 7bc6270f82bbf93ecab8e404e5e3f3634204b3a2a4bf54f72e33f2e3337fe205" address="unix:///run/containerd/s/1c66087e0b1d4e72255bde732f1637591321b56636699985beb411df0ebd5e4f" protocol=ttrpc version=3 Jan 23 18:58:49.437633 systemd[1]: Started cri-containerd-7bc6270f82bbf93ecab8e404e5e3f3634204b3a2a4bf54f72e33f2e3337fe205.scope - libcontainer container 7bc6270f82bbf93ecab8e404e5e3f3634204b3a2a4bf54f72e33f2e3337fe205. 
Jan 23 18:58:49.511688 containerd[1710]: time="2026-01-23T18:58:49.511633451Z" level=info msg="StartContainer for \"7bc6270f82bbf93ecab8e404e5e3f3634204b3a2a4bf54f72e33f2e3337fe205\" returns successfully" Jan 23 18:58:50.176376 kubelet[2311]: E0123 18:58:50.176283 2311 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:58:50.249691 kubelet[2311]: E0123 18:58:50.249646 2311 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-j8n9m" podUID="2b33e19f-e45d-4033-97dc-1891d2a04f54" Jan 23 18:58:50.281681 kubelet[2311]: I0123 18:58:50.281591 2311 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-kwcmx" podStartSLOduration=4.160266734 podStartE2EDuration="12.281578948s" podCreationTimestamp="2026-01-23 18:58:38 +0000 UTC" firstStartedPulling="2026-01-23 18:58:41.246692766 +0000 UTC m=+3.603678666" lastFinishedPulling="2026-01-23 18:58:49.368004974 +0000 UTC m=+11.724990880" observedRunningTime="2026-01-23 18:58:50.281325141 +0000 UTC m=+12.638311054" watchObservedRunningTime="2026-01-23 18:58:50.281578948 +0000 UTC m=+12.638564853" Jan 23 18:58:50.365961 kubelet[2311]: E0123 18:58:50.365917 2311 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:50.365961 kubelet[2311]: W0123 18:58:50.365952 2311 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:50.366155 kubelet[2311]: E0123 18:58:50.365976 2311 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory 
nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:50.366155 kubelet[2311]: E0123 18:58:50.366101 2311 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:50.366155 kubelet[2311]: W0123 18:58:50.366107 2311 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:50.366155 kubelet[2311]: E0123 18:58:50.366114 2311 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:50.366266 kubelet[2311]: E0123 18:58:50.366207 2311 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:50.366266 kubelet[2311]: W0123 18:58:50.366213 2311 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:50.366266 kubelet[2311]: E0123 18:58:50.366219 2311 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:58:50.366359 kubelet[2311]: E0123 18:58:50.366345 2311 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:50.366359 kubelet[2311]: W0123 18:58:50.366357 2311 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:50.366410 kubelet[2311]: E0123 18:58:50.366364 2311 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:50.366518 kubelet[2311]: E0123 18:58:50.366502 2311 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:50.366518 kubelet[2311]: W0123 18:58:50.366513 2311 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:50.366603 kubelet[2311]: E0123 18:58:50.366522 2311 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:58:50.366639 kubelet[2311]: E0123 18:58:50.366633 2311 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:50.366639 kubelet[2311]: W0123 18:58:50.366638 2311 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:50.366706 kubelet[2311]: E0123 18:58:50.366645 2311 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:50.366754 kubelet[2311]: E0123 18:58:50.366734 2311 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:50.366754 kubelet[2311]: W0123 18:58:50.366742 2311 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:50.366754 kubelet[2311]: E0123 18:58:50.366748 2311 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:58:50.366851 kubelet[2311]: E0123 18:58:50.366833 2311 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:50.366851 kubelet[2311]: W0123 18:58:50.366840 2311 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:50.366851 kubelet[2311]: E0123 18:58:50.366846 2311 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:50.366944 kubelet[2311]: E0123 18:58:50.366937 2311 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:50.366944 kubelet[2311]: W0123 18:58:50.366942 2311 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:50.367008 kubelet[2311]: E0123 18:58:50.366948 2311 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:58:50.367041 kubelet[2311]: E0123 18:58:50.367032 2311 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:50.367041 kubelet[2311]: W0123 18:58:50.367037 2311 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:50.367108 kubelet[2311]: E0123 18:58:50.367043 2311 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:50.367149 kubelet[2311]: E0123 18:58:50.367122 2311 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:50.367149 kubelet[2311]: W0123 18:58:50.367127 2311 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:50.367149 kubelet[2311]: E0123 18:58:50.367133 2311 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:58:50.367244 kubelet[2311]: E0123 18:58:50.367221 2311 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:50.367244 kubelet[2311]: W0123 18:58:50.367226 2311 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:50.367244 kubelet[2311]: E0123 18:58:50.367232 2311 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:50.367338 kubelet[2311]: E0123 18:58:50.367325 2311 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:50.367338 kubelet[2311]: W0123 18:58:50.367330 2311 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:50.367338 kubelet[2311]: E0123 18:58:50.367335 2311 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:58:50.367431 kubelet[2311]: E0123 18:58:50.367414 2311 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:50.367431 kubelet[2311]: W0123 18:58:50.367419 2311 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:50.367431 kubelet[2311]: E0123 18:58:50.367425 2311 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:50.367540 kubelet[2311]: E0123 18:58:50.367518 2311 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:50.367540 kubelet[2311]: W0123 18:58:50.367523 2311 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:50.367540 kubelet[2311]: E0123 18:58:50.367529 2311 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:58:50.367634 kubelet[2311]: E0123 18:58:50.367619 2311 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:50.367634 kubelet[2311]: W0123 18:58:50.367624 2311 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:50.367634 kubelet[2311]: E0123 18:58:50.367631 2311 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:50.367728 kubelet[2311]: E0123 18:58:50.367720 2311 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:50.367728 kubelet[2311]: W0123 18:58:50.367724 2311 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:50.367784 kubelet[2311]: E0123 18:58:50.367735 2311 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:58:50.367828 kubelet[2311]: E0123 18:58:50.367815 2311 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:50.367828 kubelet[2311]: W0123 18:58:50.367819 2311 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:50.367828 kubelet[2311]: E0123 18:58:50.367825 2311 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:50.367928 kubelet[2311]: E0123 18:58:50.367904 2311 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:50.367928 kubelet[2311]: W0123 18:58:50.367909 2311 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:50.367928 kubelet[2311]: E0123 18:58:50.367915 2311 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:58:50.368032 kubelet[2311]: E0123 18:58:50.367994 2311 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:50.368032 kubelet[2311]: W0123 18:58:50.367999 2311 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:50.368032 kubelet[2311]: E0123 18:58:50.368005 2311 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:50.373413 kubelet[2311]: E0123 18:58:50.373387 2311 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:50.373413 kubelet[2311]: W0123 18:58:50.373404 2311 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:50.373413 kubelet[2311]: E0123 18:58:50.373421 2311 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:58:50.373639 kubelet[2311]: E0123 18:58:50.373608 2311 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:50.373639 kubelet[2311]: W0123 18:58:50.373615 2311 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:50.373639 kubelet[2311]: E0123 18:58:50.373624 2311 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:50.373802 kubelet[2311]: E0123 18:58:50.373789 2311 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:50.373802 kubelet[2311]: W0123 18:58:50.373799 2311 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:50.373878 kubelet[2311]: E0123 18:58:50.373808 2311 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:58:50.373925 kubelet[2311]: E0123 18:58:50.373911 2311 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:50.373925 kubelet[2311]: W0123 18:58:50.373916 2311 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:50.373925 kubelet[2311]: E0123 18:58:50.373922 2311 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:50.374035 kubelet[2311]: E0123 18:58:50.374026 2311 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:50.374035 kubelet[2311]: W0123 18:58:50.374033 2311 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:50.374083 kubelet[2311]: E0123 18:58:50.374041 2311 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:58:50.374164 kubelet[2311]: E0123 18:58:50.374154 2311 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:50.374164 kubelet[2311]: W0123 18:58:50.374161 2311 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:50.374214 kubelet[2311]: E0123 18:58:50.374174 2311 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:50.374385 kubelet[2311]: E0123 18:58:50.374351 2311 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:50.374385 kubelet[2311]: W0123 18:58:50.374363 2311 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:50.374385 kubelet[2311]: E0123 18:58:50.374372 2311 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:58:50.374543 kubelet[2311]: E0123 18:58:50.374476 2311 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:50.374543 kubelet[2311]: W0123 18:58:50.374496 2311 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:50.374543 kubelet[2311]: E0123 18:58:50.374503 2311 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:50.374657 kubelet[2311]: E0123 18:58:50.374593 2311 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:50.374657 kubelet[2311]: W0123 18:58:50.374598 2311 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:50.374657 kubelet[2311]: E0123 18:58:50.374604 2311 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:58:50.374762 kubelet[2311]: E0123 18:58:50.374707 2311 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:50.374762 kubelet[2311]: W0123 18:58:50.374712 2311 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:50.374762 kubelet[2311]: E0123 18:58:50.374718 2311 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:50.374979 kubelet[2311]: E0123 18:58:50.374955 2311 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:50.374979 kubelet[2311]: W0123 18:58:50.374975 2311 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:50.375035 kubelet[2311]: E0123 18:58:50.374983 2311 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:58:50.375126 kubelet[2311]: E0123 18:58:50.375109 2311 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:50.375126 kubelet[2311]: W0123 18:58:50.375122 2311 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:50.375167 kubelet[2311]: E0123 18:58:50.375129 2311 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:50.496901 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1531033787.mount: Deactivated successfully. Jan 23 18:58:50.601760 containerd[1710]: time="2026-01-23T18:58:50.601703826Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:58:50.607368 containerd[1710]: time="2026-01-23T18:58:50.607326269Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=5941492" Jan 23 18:58:50.611521 containerd[1710]: time="2026-01-23T18:58:50.611316937Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:58:50.620054 containerd[1710]: time="2026-01-23T18:58:50.620022450Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:58:50.620631 containerd[1710]: time="2026-01-23T18:58:50.620605346Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with 
image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.252491422s" Jan 23 18:58:50.620687 containerd[1710]: time="2026-01-23T18:58:50.620638357Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Jan 23 18:58:50.627209 containerd[1710]: time="2026-01-23T18:58:50.627183426Z" level=info msg="CreateContainer within sandbox \"0a3ea78cd468bd52dc5194e7020968a903aa576a22657525288865219ff02bef\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 23 18:58:50.641506 containerd[1710]: time="2026-01-23T18:58:50.641379981Z" level=info msg="Container 8ac5980e287bf3106ead8c209e9df63830b8bd14913be52c4c06984620d73e51: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:58:50.658660 containerd[1710]: time="2026-01-23T18:58:50.658632851Z" level=info msg="CreateContainer within sandbox \"0a3ea78cd468bd52dc5194e7020968a903aa576a22657525288865219ff02bef\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"8ac5980e287bf3106ead8c209e9df63830b8bd14913be52c4c06984620d73e51\"" Jan 23 18:58:50.659131 containerd[1710]: time="2026-01-23T18:58:50.659109460Z" level=info msg="StartContainer for \"8ac5980e287bf3106ead8c209e9df63830b8bd14913be52c4c06984620d73e51\"" Jan 23 18:58:50.660390 containerd[1710]: time="2026-01-23T18:58:50.660361960Z" level=info msg="connecting to shim 8ac5980e287bf3106ead8c209e9df63830b8bd14913be52c4c06984620d73e51" address="unix:///run/containerd/s/93bddb7da15774ebf7f6c60ee9fbd05fb8d3bb4be28764de083fe64ef2c7c3f6" protocol=ttrpc version=3 Jan 23 18:58:50.685649 systemd[1]: Started 
cri-containerd-8ac5980e287bf3106ead8c209e9df63830b8bd14913be52c4c06984620d73e51.scope - libcontainer container 8ac5980e287bf3106ead8c209e9df63830b8bd14913be52c4c06984620d73e51. Jan 23 18:58:50.744420 containerd[1710]: time="2026-01-23T18:58:50.744320484Z" level=info msg="StartContainer for \"8ac5980e287bf3106ead8c209e9df63830b8bd14913be52c4c06984620d73e51\" returns successfully" Jan 23 18:58:50.748336 systemd[1]: cri-containerd-8ac5980e287bf3106ead8c209e9df63830b8bd14913be52c4c06984620d73e51.scope: Deactivated successfully. Jan 23 18:58:50.752329 containerd[1710]: time="2026-01-23T18:58:50.752233368Z" level=info msg="received container exit event container_id:\"8ac5980e287bf3106ead8c209e9df63830b8bd14913be52c4c06984620d73e51\" id:\"8ac5980e287bf3106ead8c209e9df63830b8bd14913be52c4c06984620d73e51\" pid:2723 exited_at:{seconds:1769194730 nanos:751885409}" Jan 23 18:58:51.176836 kubelet[2311]: E0123 18:58:51.176789 2311 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:58:51.469169 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8ac5980e287bf3106ead8c209e9df63830b8bd14913be52c4c06984620d73e51-rootfs.mount: Deactivated successfully. 
Jan 23 18:58:52.177606 kubelet[2311]: E0123 18:58:52.177552 2311 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:58:52.250096 kubelet[2311]: E0123 18:58:52.249632 2311 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-j8n9m" podUID="2b33e19f-e45d-4033-97dc-1891d2a04f54" Jan 23 18:58:52.278041 containerd[1710]: time="2026-01-23T18:58:52.278001483Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 23 18:58:52.917500 waagent[1890]: 2026-01-23T18:58:52.917414Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 2] Jan 23 18:58:52.928283 waagent[1890]: 2026-01-23T18:58:52.928240Z INFO ExtHandler Jan 23 18:58:52.928402 waagent[1890]: 2026-01-23T18:58:52.928339Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: bb54722c-6047-483f-8fbb-985ddb9cff62 eTag: 16649341121362395438 source: Fabric] Jan 23 18:58:52.928688 waagent[1890]: 2026-01-23T18:58:52.928658Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Jan 23 18:58:52.929269 waagent[1890]: 2026-01-23T18:58:52.929076Z INFO ExtHandler Jan 23 18:58:52.929349 waagent[1890]: 2026-01-23T18:58:52.929315Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 2] Jan 23 18:58:52.996287 waagent[1890]: 2026-01-23T18:58:52.996240Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jan 23 18:58:53.053346 waagent[1890]: 2026-01-23T18:58:53.053281Z INFO ExtHandler Downloaded certificate {'thumbprint': '51C6AF07E9D5D09059156D3EE347F6C1EAEB1622', 'hasPrivateKey': True} Jan 23 18:58:53.053759 waagent[1890]: 2026-01-23T18:58:53.053728Z INFO ExtHandler Fetch goal state completed Jan 23 18:58:53.054022 waagent[1890]: 2026-01-23T18:58:53.053995Z INFO ExtHandler ExtHandler Jan 23 18:58:53.054062 waagent[1890]: 2026-01-23T18:58:53.054048Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_2 channel: WireServer source: Fabric activity: 13e4785e-3d92-4234-a21b-1450b4f4a63e correlation 27b48efb-acae-42d2-8cf4-aee64217b2ed created: 2026-01-23T18:58:47.115196Z] Jan 23 18:58:53.054298 waagent[1890]: 2026-01-23T18:58:53.054271Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
Jan 23 18:58:53.054692 waagent[1890]: 2026-01-23T18:58:53.054667Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_2 0 ms] Jan 23 18:58:53.178881 kubelet[2311]: E0123 18:58:53.178765 2311 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:58:54.179862 kubelet[2311]: E0123 18:58:54.179796 2311 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:58:54.249651 kubelet[2311]: E0123 18:58:54.249604 2311 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-j8n9m" podUID="2b33e19f-e45d-4033-97dc-1891d2a04f54" Jan 23 18:58:54.706817 containerd[1710]: time="2026-01-23T18:58:54.706765162Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:58:54.709052 containerd[1710]: time="2026-01-23T18:58:54.708955655Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Jan 23 18:58:54.714807 containerd[1710]: time="2026-01-23T18:58:54.714781987Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:58:54.719101 containerd[1710]: time="2026-01-23T18:58:54.719053936Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:58:54.719614 containerd[1710]: time="2026-01-23T18:58:54.719424098Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id 
\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 2.441384055s" Jan 23 18:58:54.719614 containerd[1710]: time="2026-01-23T18:58:54.719455345Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Jan 23 18:58:54.724758 containerd[1710]: time="2026-01-23T18:58:54.724725898Z" level=info msg="CreateContainer within sandbox \"0a3ea78cd468bd52dc5194e7020968a903aa576a22657525288865219ff02bef\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 23 18:58:54.742280 containerd[1710]: time="2026-01-23T18:58:54.741108265Z" level=info msg="Container e6aaad08260ca1b7bd6048b76ee78d1735333203c942a9998b8c899dc4755723: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:58:54.756815 containerd[1710]: time="2026-01-23T18:58:54.756781240Z" level=info msg="CreateContainer within sandbox \"0a3ea78cd468bd52dc5194e7020968a903aa576a22657525288865219ff02bef\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"e6aaad08260ca1b7bd6048b76ee78d1735333203c942a9998b8c899dc4755723\"" Jan 23 18:58:54.757278 containerd[1710]: time="2026-01-23T18:58:54.757255426Z" level=info msg="StartContainer for \"e6aaad08260ca1b7bd6048b76ee78d1735333203c942a9998b8c899dc4755723\"" Jan 23 18:58:54.758501 containerd[1710]: time="2026-01-23T18:58:54.758455466Z" level=info msg="connecting to shim e6aaad08260ca1b7bd6048b76ee78d1735333203c942a9998b8c899dc4755723" address="unix:///run/containerd/s/93bddb7da15774ebf7f6c60ee9fbd05fb8d3bb4be28764de083fe64ef2c7c3f6" protocol=ttrpc version=3 Jan 23 18:58:54.781658 systemd[1]: Started cri-containerd-e6aaad08260ca1b7bd6048b76ee78d1735333203c942a9998b8c899dc4755723.scope - libcontainer container 
e6aaad08260ca1b7bd6048b76ee78d1735333203c942a9998b8c899dc4755723. Jan 23 18:58:54.850271 containerd[1710]: time="2026-01-23T18:58:54.850224164Z" level=info msg="StartContainer for \"e6aaad08260ca1b7bd6048b76ee78d1735333203c942a9998b8c899dc4755723\" returns successfully" Jan 23 18:58:55.180594 kubelet[2311]: E0123 18:58:55.180541 2311 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:58:55.978886 containerd[1710]: time="2026-01-23T18:58:55.978829078Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: failed to load CNI config list file /etc/cni/net.d/10-calico.conflist: error parsing configuration list: unexpected end of JSON input: invalid cni config: failed to load cni config" Jan 23 18:58:55.980379 systemd[1]: cri-containerd-e6aaad08260ca1b7bd6048b76ee78d1735333203c942a9998b8c899dc4755723.scope: Deactivated successfully. Jan 23 18:58:55.980833 systemd[1]: cri-containerd-e6aaad08260ca1b7bd6048b76ee78d1735333203c942a9998b8c899dc4755723.scope: Consumed 422ms CPU time, 191.1M memory peak, 171.3M written to disk. Jan 23 18:58:55.982505 containerd[1710]: time="2026-01-23T18:58:55.982384655Z" level=info msg="received container exit event container_id:\"e6aaad08260ca1b7bd6048b76ee78d1735333203c942a9998b8c899dc4755723\" id:\"e6aaad08260ca1b7bd6048b76ee78d1735333203c942a9998b8c899dc4755723\" pid:2780 exited_at:{seconds:1769194735 nanos:982166128}" Jan 23 18:58:56.002890 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e6aaad08260ca1b7bd6048b76ee78d1735333203c942a9998b8c899dc4755723-rootfs.mount: Deactivated successfully. 
Jan 23 18:58:56.015247 kubelet[2311]: I0123 18:58:56.015202 2311 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 23 18:58:56.181060 kubelet[2311]: E0123 18:58:56.181018 2311 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:58:56.255789 systemd[1]: Created slice kubepods-besteffort-pod2b33e19f_e45d_4033_97dc_1891d2a04f54.slice - libcontainer container kubepods-besteffort-pod2b33e19f_e45d_4033_97dc_1891d2a04f54.slice. Jan 23 18:58:56.258231 containerd[1710]: time="2026-01-23T18:58:56.258185149Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-j8n9m,Uid:2b33e19f-e45d-4033-97dc-1891d2a04f54,Namespace:calico-system,Attempt:0,}" Jan 23 18:58:56.877121 containerd[1710]: time="2026-01-23T18:58:56.877066938Z" level=error msg="Failed to destroy network for sandbox \"f032a88cd991186323a17e9edc057c138a92babed84d80db6c8b102434b20b11\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:58:56.878939 systemd[1]: run-netns-cni\x2dac12439f\x2dd206\x2dc80f\x2d671a\x2dd88733d87092.mount: Deactivated successfully. 
Jan 23 18:58:56.881028 containerd[1710]: time="2026-01-23T18:58:56.880982293Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-j8n9m,Uid:2b33e19f-e45d-4033-97dc-1891d2a04f54,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f032a88cd991186323a17e9edc057c138a92babed84d80db6c8b102434b20b11\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:58:56.881306 kubelet[2311]: E0123 18:58:56.881259 2311 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f032a88cd991186323a17e9edc057c138a92babed84d80db6c8b102434b20b11\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:58:56.881397 kubelet[2311]: E0123 18:58:56.881340 2311 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f032a88cd991186323a17e9edc057c138a92babed84d80db6c8b102434b20b11\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-j8n9m" Jan 23 18:58:56.881397 kubelet[2311]: E0123 18:58:56.881368 2311 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f032a88cd991186323a17e9edc057c138a92babed84d80db6c8b102434b20b11\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-j8n9m" 
Jan 23 18:58:56.881461 kubelet[2311]: E0123 18:58:56.881439 2311 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-j8n9m_calico-system(2b33e19f-e45d-4033-97dc-1891d2a04f54)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-j8n9m_calico-system(2b33e19f-e45d-4033-97dc-1891d2a04f54)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f032a88cd991186323a17e9edc057c138a92babed84d80db6c8b102434b20b11\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-j8n9m" podUID="2b33e19f-e45d-4033-97dc-1891d2a04f54" Jan 23 18:58:57.181918 kubelet[2311]: E0123 18:58:57.181731 2311 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:58:57.291389 containerd[1710]: time="2026-01-23T18:58:57.291350060Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 23 18:58:58.165106 kubelet[2311]: E0123 18:58:58.165069 2311 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:58:58.182600 kubelet[2311]: E0123 18:58:58.182556 2311 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:58:59.183417 kubelet[2311]: E0123 18:58:59.183362 2311 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:59:00.023441 systemd[1]: Created slice kubepods-besteffort-pod3562feb3_06d5_4a12_a69a_d9897ebd300a.slice - libcontainer container kubepods-besteffort-pod3562feb3_06d5_4a12_a69a_d9897ebd300a.slice. 
Jan 23 18:59:00.038424 kubelet[2311]: I0123 18:59:00.038382 2311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6sjpv\" (UniqueName: \"kubernetes.io/projected/3562feb3-06d5-4a12-a69a-d9897ebd300a-kube-api-access-6sjpv\") pod \"nginx-deployment-7fcdb87857-7bhx8\" (UID: \"3562feb3-06d5-4a12-a69a-d9897ebd300a\") " pod="default/nginx-deployment-7fcdb87857-7bhx8" Jan 23 18:59:00.184498 kubelet[2311]: E0123 18:59:00.184431 2311 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:59:00.328594 containerd[1710]: time="2026-01-23T18:59:00.328543449Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-7bhx8,Uid:3562feb3-06d5-4a12-a69a-d9897ebd300a,Namespace:default,Attempt:0,}" Jan 23 18:59:00.399802 containerd[1710]: time="2026-01-23T18:59:00.399742898Z" level=error msg="Failed to destroy network for sandbox \"99d1683b6221ddaa333d57d72b44b31fa7d64798c80ba685481169ab437d7d44\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:59:00.402830 systemd[1]: run-netns-cni\x2dc382955b\x2d64a5\x2d7e61\x2d5495\x2df474025e09fd.mount: Deactivated successfully. 
Jan 23 18:59:00.405614 containerd[1710]: time="2026-01-23T18:59:00.405570259Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-7bhx8,Uid:3562feb3-06d5-4a12-a69a-d9897ebd300a,Namespace:default,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"99d1683b6221ddaa333d57d72b44b31fa7d64798c80ba685481169ab437d7d44\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 18:59:00.405968 kubelet[2311]: E0123 18:59:00.405815 2311 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"99d1683b6221ddaa333d57d72b44b31fa7d64798c80ba685481169ab437d7d44\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 18:59:00.406504 kubelet[2311]: E0123 18:59:00.406260 2311 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"99d1683b6221ddaa333d57d72b44b31fa7d64798c80ba685481169ab437d7d44\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-7bhx8"
Jan 23 18:59:00.406504 kubelet[2311]: E0123 18:59:00.406294 2311 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"99d1683b6221ddaa333d57d72b44b31fa7d64798c80ba685481169ab437d7d44\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-7bhx8"
Jan 23 18:59:00.406504 kubelet[2311]: E0123 18:59:00.406356 2311 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-7fcdb87857-7bhx8_default(3562feb3-06d5-4a12-a69a-d9897ebd300a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-7fcdb87857-7bhx8_default(3562feb3-06d5-4a12-a69a-d9897ebd300a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"99d1683b6221ddaa333d57d72b44b31fa7d64798c80ba685481169ab437d7d44\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-7fcdb87857-7bhx8" podUID="3562feb3-06d5-4a12-a69a-d9897ebd300a"
Jan 23 18:59:01.185537 kubelet[2311]: E0123 18:59:01.185499 2311 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 18:59:01.498147 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3553834816.mount: Deactivated successfully.
Jan 23 18:59:01.533417 containerd[1710]: time="2026-01-23T18:59:01.533371168Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 18:59:01.535505 containerd[1710]: time="2026-01-23T18:59:01.535424990Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675"
Jan 23 18:59:01.538919 containerd[1710]: time="2026-01-23T18:59:01.538875591Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 18:59:01.542095 containerd[1710]: time="2026-01-23T18:59:01.542050300Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 18:59:01.542729 containerd[1710]: time="2026-01-23T18:59:01.542448293Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 4.25105959s"
Jan 23 18:59:01.542729 containerd[1710]: time="2026-01-23T18:59:01.542494992Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\""
Jan 23 18:59:01.557971 containerd[1710]: time="2026-01-23T18:59:01.557942067Z" level=info msg="CreateContainer within sandbox \"0a3ea78cd468bd52dc5194e7020968a903aa576a22657525288865219ff02bef\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Jan 23 18:59:01.576503 containerd[1710]: time="2026-01-23T18:59:01.576022897Z" level=info msg="Container 1117aed1ceb78864fa3a4e0648e8d02f6d2ec441e581877867875e8dbcaf12ce: CDI devices from CRI Config.CDIDevices: []"
Jan 23 18:59:01.591262 containerd[1710]: time="2026-01-23T18:59:01.591230848Z" level=info msg="CreateContainer within sandbox \"0a3ea78cd468bd52dc5194e7020968a903aa576a22657525288865219ff02bef\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"1117aed1ceb78864fa3a4e0648e8d02f6d2ec441e581877867875e8dbcaf12ce\""
Jan 23 18:59:01.591873 containerd[1710]: time="2026-01-23T18:59:01.591808676Z" level=info msg="StartContainer for \"1117aed1ceb78864fa3a4e0648e8d02f6d2ec441e581877867875e8dbcaf12ce\""
Jan 23 18:59:01.593044 containerd[1710]: time="2026-01-23T18:59:01.593013621Z" level=info msg="connecting to shim 1117aed1ceb78864fa3a4e0648e8d02f6d2ec441e581877867875e8dbcaf12ce" address="unix:///run/containerd/s/93bddb7da15774ebf7f6c60ee9fbd05fb8d3bb4be28764de083fe64ef2c7c3f6" protocol=ttrpc version=3
Jan 23 18:59:01.611643 systemd[1]: Started cri-containerd-1117aed1ceb78864fa3a4e0648e8d02f6d2ec441e581877867875e8dbcaf12ce.scope - libcontainer container 1117aed1ceb78864fa3a4e0648e8d02f6d2ec441e581877867875e8dbcaf12ce.
Jan 23 18:59:01.676635 containerd[1710]: time="2026-01-23T18:59:01.676588705Z" level=info msg="StartContainer for \"1117aed1ceb78864fa3a4e0648e8d02f6d2ec441e581877867875e8dbcaf12ce\" returns successfully"
Jan 23 18:59:01.896920 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information.
Jan 23 18:59:01.897050 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved.
Jan 23 18:59:02.185998 kubelet[2311]: E0123 18:59:02.185812 2311 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 18:59:02.318624 kubelet[2311]: I0123 18:59:02.318574 2311 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-kskcv" podStartSLOduration=4.036192599 podStartE2EDuration="24.318560814s" podCreationTimestamp="2026-01-23 18:58:38 +0000 UTC" firstStartedPulling="2026-01-23 18:58:41.26071152 +0000 UTC m=+3.617697418" lastFinishedPulling="2026-01-23 18:59:01.543079729 +0000 UTC m=+23.900065633" observedRunningTime="2026-01-23 18:59:02.317594291 +0000 UTC m=+24.674580195" watchObservedRunningTime="2026-01-23 18:59:02.318560814 +0000 UTC m=+24.675546715"
Jan 23 18:59:03.186620 kubelet[2311]: E0123 18:59:03.186565 2311 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 18:59:03.658659 systemd-networkd[1339]: vxlan.calico: Link UP
Jan 23 18:59:03.658669 systemd-networkd[1339]: vxlan.calico: Gained carrier
Jan 23 18:59:04.186990 kubelet[2311]: E0123 18:59:04.186940 2311 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 18:59:05.187967 kubelet[2311]: E0123 18:59:05.187931 2311 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 18:59:05.204661 systemd-networkd[1339]: vxlan.calico: Gained IPv6LL
Jan 23 18:59:06.188212 kubelet[2311]: E0123 18:59:06.188171 2311 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 18:59:07.188776 kubelet[2311]: E0123 18:59:07.188708 2311 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 18:59:08.189476 kubelet[2311]: E0123 18:59:08.189442 2311 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 18:59:08.250336 containerd[1710]: time="2026-01-23T18:59:08.250236468Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-j8n9m,Uid:2b33e19f-e45d-4033-97dc-1891d2a04f54,Namespace:calico-system,Attempt:0,}"
Jan 23 18:59:08.371574 systemd-networkd[1339]: cali2ddf67ca82d: Link UP
Jan 23 18:59:08.372285 systemd-networkd[1339]: cali2ddf67ca82d: Gained carrier
Jan 23 18:59:08.389565 containerd[1710]: 2026-01-23 18:59:08.299 [INFO][3174] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.200.4.14-k8s-csi--node--driver--j8n9m-eth0 csi-node-driver- calico-system 2b33e19f-e45d-4033-97dc-1891d2a04f54 1313 0 2026-01-23 18:58:38 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 10.200.4.14 csi-node-driver-j8n9m eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali2ddf67ca82d [] [] }} ContainerID="c60139e3167a2edb1b46b4240d7a2c28acb9874dadf36a39aa15c03230a575de" Namespace="calico-system" Pod="csi-node-driver-j8n9m" WorkloadEndpoint="10.200.4.14-k8s-csi--node--driver--j8n9m-"
Jan 23 18:59:08.389565 containerd[1710]: 2026-01-23 18:59:08.299 [INFO][3174] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c60139e3167a2edb1b46b4240d7a2c28acb9874dadf36a39aa15c03230a575de" Namespace="calico-system" Pod="csi-node-driver-j8n9m" WorkloadEndpoint="10.200.4.14-k8s-csi--node--driver--j8n9m-eth0"
Jan 23 18:59:08.389565 containerd[1710]: 2026-01-23 18:59:08.322 [INFO][3185] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c60139e3167a2edb1b46b4240d7a2c28acb9874dadf36a39aa15c03230a575de" HandleID="k8s-pod-network.c60139e3167a2edb1b46b4240d7a2c28acb9874dadf36a39aa15c03230a575de" Workload="10.200.4.14-k8s-csi--node--driver--j8n9m-eth0"
Jan 23 18:59:08.389800 containerd[1710]: 2026-01-23 18:59:08.322 [INFO][3185] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c60139e3167a2edb1b46b4240d7a2c28acb9874dadf36a39aa15c03230a575de" HandleID="k8s-pod-network.c60139e3167a2edb1b46b4240d7a2c28acb9874dadf36a39aa15c03230a575de" Workload="10.200.4.14-k8s-csi--node--driver--j8n9m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024efe0), Attrs:map[string]string{"namespace":"calico-system", "node":"10.200.4.14", "pod":"csi-node-driver-j8n9m", "timestamp":"2026-01-23 18:59:08.322620357 +0000 UTC"}, Hostname:"10.200.4.14", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jan 23 18:59:08.389800 containerd[1710]: 2026-01-23 18:59:08.322 [INFO][3185] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Jan 23 18:59:08.389800 containerd[1710]: 2026-01-23 18:59:08.322 [INFO][3185] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Jan 23 18:59:08.389800 containerd[1710]: 2026-01-23 18:59:08.322 [INFO][3185] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.200.4.14'
Jan 23 18:59:08.389800 containerd[1710]: 2026-01-23 18:59:08.338 [INFO][3185] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c60139e3167a2edb1b46b4240d7a2c28acb9874dadf36a39aa15c03230a575de" host="10.200.4.14"
Jan 23 18:59:08.389800 containerd[1710]: 2026-01-23 18:59:08.340 [INFO][3185] ipam/ipam.go 394: Looking up existing affinities for host host="10.200.4.14"
Jan 23 18:59:08.389800 containerd[1710]: 2026-01-23 18:59:08.344 [INFO][3185] ipam/ipam.go 511: Trying affinity for 192.168.13.64/26 host="10.200.4.14"
Jan 23 18:59:08.389800 containerd[1710]: 2026-01-23 18:59:08.351 [INFO][3185] ipam/ipam.go 158: Attempting to load block cidr=192.168.13.64/26 host="10.200.4.14"
Jan 23 18:59:08.389800 containerd[1710]: 2026-01-23 18:59:08.353 [INFO][3185] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.13.64/26 host="10.200.4.14"
Jan 23 18:59:08.389800 containerd[1710]: 2026-01-23 18:59:08.353 [INFO][3185] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.13.64/26 handle="k8s-pod-network.c60139e3167a2edb1b46b4240d7a2c28acb9874dadf36a39aa15c03230a575de" host="10.200.4.14"
Jan 23 18:59:08.390125 containerd[1710]: 2026-01-23 18:59:08.354 [INFO][3185] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c60139e3167a2edb1b46b4240d7a2c28acb9874dadf36a39aa15c03230a575de
Jan 23 18:59:08.390125 containerd[1710]: 2026-01-23 18:59:08.358 [INFO][3185] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.13.64/26 handle="k8s-pod-network.c60139e3167a2edb1b46b4240d7a2c28acb9874dadf36a39aa15c03230a575de" host="10.200.4.14"
Jan 23 18:59:08.390125 containerd[1710]: 2026-01-23 18:59:08.365 [INFO][3185] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.13.65/26] block=192.168.13.64/26 handle="k8s-pod-network.c60139e3167a2edb1b46b4240d7a2c28acb9874dadf36a39aa15c03230a575de" host="10.200.4.14"
Jan 23 18:59:08.390125 containerd[1710]: 2026-01-23 18:59:08.365 [INFO][3185] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.13.65/26] handle="k8s-pod-network.c60139e3167a2edb1b46b4240d7a2c28acb9874dadf36a39aa15c03230a575de" host="10.200.4.14"
Jan 23 18:59:08.390125 containerd[1710]: 2026-01-23 18:59:08.365 [INFO][3185] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Jan 23 18:59:08.390125 containerd[1710]: 2026-01-23 18:59:08.365 [INFO][3185] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.13.65/26] IPv6=[] ContainerID="c60139e3167a2edb1b46b4240d7a2c28acb9874dadf36a39aa15c03230a575de" HandleID="k8s-pod-network.c60139e3167a2edb1b46b4240d7a2c28acb9874dadf36a39aa15c03230a575de" Workload="10.200.4.14-k8s-csi--node--driver--j8n9m-eth0"
Jan 23 18:59:08.390284 containerd[1710]: 2026-01-23 18:59:08.368 [INFO][3174] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c60139e3167a2edb1b46b4240d7a2c28acb9874dadf36a39aa15c03230a575de" Namespace="calico-system" Pod="csi-node-driver-j8n9m" WorkloadEndpoint="10.200.4.14-k8s-csi--node--driver--j8n9m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.200.4.14-k8s-csi--node--driver--j8n9m-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2b33e19f-e45d-4033-97dc-1891d2a04f54", ResourceVersion:"1313", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 58, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.200.4.14", ContainerID:"", Pod:"csi-node-driver-j8n9m", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.13.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2ddf67ca82d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jan 23 18:59:08.390362 containerd[1710]: 2026-01-23 18:59:08.369 [INFO][3174] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.13.65/32] ContainerID="c60139e3167a2edb1b46b4240d7a2c28acb9874dadf36a39aa15c03230a575de" Namespace="calico-system" Pod="csi-node-driver-j8n9m" WorkloadEndpoint="10.200.4.14-k8s-csi--node--driver--j8n9m-eth0"
Jan 23 18:59:08.390362 containerd[1710]: 2026-01-23 18:59:08.369 [INFO][3174] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2ddf67ca82d ContainerID="c60139e3167a2edb1b46b4240d7a2c28acb9874dadf36a39aa15c03230a575de" Namespace="calico-system" Pod="csi-node-driver-j8n9m" WorkloadEndpoint="10.200.4.14-k8s-csi--node--driver--j8n9m-eth0"
Jan 23 18:59:08.390362 containerd[1710]: 2026-01-23 18:59:08.373 [INFO][3174] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c60139e3167a2edb1b46b4240d7a2c28acb9874dadf36a39aa15c03230a575de" Namespace="calico-system" Pod="csi-node-driver-j8n9m" WorkloadEndpoint="10.200.4.14-k8s-csi--node--driver--j8n9m-eth0"
Jan 23 18:59:08.390516 containerd[1710]: 2026-01-23 18:59:08.373 [INFO][3174] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c60139e3167a2edb1b46b4240d7a2c28acb9874dadf36a39aa15c03230a575de" Namespace="calico-system" Pod="csi-node-driver-j8n9m" WorkloadEndpoint="10.200.4.14-k8s-csi--node--driver--j8n9m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.200.4.14-k8s-csi--node--driver--j8n9m-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2b33e19f-e45d-4033-97dc-1891d2a04f54", ResourceVersion:"1313", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 58, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.200.4.14", ContainerID:"c60139e3167a2edb1b46b4240d7a2c28acb9874dadf36a39aa15c03230a575de", Pod:"csi-node-driver-j8n9m", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.13.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2ddf67ca82d", MAC:"0a:16:9f:8d:1a:b4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jan 23 18:59:08.390661 containerd[1710]: 2026-01-23 18:59:08.386 [INFO][3174] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c60139e3167a2edb1b46b4240d7a2c28acb9874dadf36a39aa15c03230a575de" Namespace="calico-system" Pod="csi-node-driver-j8n9m" WorkloadEndpoint="10.200.4.14-k8s-csi--node--driver--j8n9m-eth0"
Jan 23 18:59:08.432263 containerd[1710]: time="2026-01-23T18:59:08.432217811Z" level=info msg="connecting to shim c60139e3167a2edb1b46b4240d7a2c28acb9874dadf36a39aa15c03230a575de" address="unix:///run/containerd/s/4c12beda0616e97de0c8360e3a650c924200a46ac2cbc7b068679b47f189258c" namespace=k8s.io protocol=ttrpc version=3
Jan 23 18:59:08.461832 systemd[1]: Started cri-containerd-c60139e3167a2edb1b46b4240d7a2c28acb9874dadf36a39aa15c03230a575de.scope - libcontainer container c60139e3167a2edb1b46b4240d7a2c28acb9874dadf36a39aa15c03230a575de.
Jan 23 18:59:08.492652 containerd[1710]: time="2026-01-23T18:59:08.492605494Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-j8n9m,Uid:2b33e19f-e45d-4033-97dc-1891d2a04f54,Namespace:calico-system,Attempt:0,} returns sandbox id \"c60139e3167a2edb1b46b4240d7a2c28acb9874dadf36a39aa15c03230a575de\""
Jan 23 18:59:08.494078 containerd[1710]: time="2026-01-23T18:59:08.494023221Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Jan 23 18:59:08.774145 containerd[1710]: time="2026-01-23T18:59:08.774010861Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 23 18:59:08.776176 containerd[1710]: time="2026-01-23T18:59:08.776130578Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Jan 23 18:59:08.776249 containerd[1710]: time="2026-01-23T18:59:08.776221771Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69"
Jan 23 18:59:08.776438 kubelet[2311]: E0123 18:59:08.776401 2311 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Jan 23 18:59:08.776514 kubelet[2311]: E0123 18:59:08.776454 2311 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Jan 23 18:59:08.776766 kubelet[2311]: E0123 18:59:08.776626 2311 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-56gf5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-j8n9m_calico-system(2b33e19f-e45d-4033-97dc-1891d2a04f54): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Jan 23 18:59:08.778558 containerd[1710]: time="2026-01-23T18:59:08.778528460Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Jan 23 18:59:09.043292 containerd[1710]: time="2026-01-23T18:59:09.043248030Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 23 18:59:09.047992 containerd[1710]: time="2026-01-23T18:59:09.047952092Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Jan 23 18:59:09.048060 containerd[1710]: time="2026-01-23T18:59:09.048047674Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93"
Jan 23 18:59:09.048239 kubelet[2311]: E0123 18:59:09.048200 2311 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Jan 23 18:59:09.048302 kubelet[2311]: E0123 18:59:09.048254 2311 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Jan 23 18:59:09.048432 kubelet[2311]: E0123 18:59:09.048397 2311 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-56gf5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-j8n9m_calico-system(2b33e19f-e45d-4033-97dc-1891d2a04f54): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Jan 23 18:59:09.050309 kubelet[2311]: E0123 18:59:09.050255 2311 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-j8n9m" podUID="2b33e19f-e45d-4033-97dc-1891d2a04f54"
Jan 23 18:59:09.190452 kubelet[2311]: E0123 18:59:09.190410 2311 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 18:59:09.315902 kubelet[2311]: E0123 18:59:09.315727 2311 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-j8n9m" podUID="2b33e19f-e45d-4033-97dc-1891d2a04f54"
Jan 23 18:59:10.004706 systemd-networkd[1339]: cali2ddf67ca82d: Gained IPv6LL
Jan 23 18:59:10.190795 kubelet[2311]: E0123 18:59:10.190729 2311 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 18:59:10.316849 kubelet[2311]: E0123 18:59:10.316809 2311 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-j8n9m" podUID="2b33e19f-e45d-4033-97dc-1891d2a04f54"
Jan 23 18:59:11.191465 kubelet[2311]: E0123 18:59:11.191419 2311 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 18:59:12.192500 kubelet[2311]: E0123 18:59:12.192437 2311 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 18:59:12.249886 containerd[1710]: time="2026-01-23T18:59:12.249628957Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-7bhx8,Uid:3562feb3-06d5-4a12-a69a-d9897ebd300a,Namespace:default,Attempt:0,}" Jan 23 18:59:12.350259 systemd-networkd[1339]: caliaa55e260ab2: Link UP Jan 23 18:59:12.350403 systemd-networkd[1339]: caliaa55e260ab2: Gained carrier Jan 23 18:59:12.366514 containerd[1710]: 2026-01-23 18:59:12.288 [INFO][3258] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.200.4.14-k8s-nginx--deployment--7fcdb87857--7bhx8-eth0 nginx-deployment-7fcdb87857- default 3562feb3-06d5-4a12-a69a-d9897ebd300a 1427 0 2026-01-23 18:58:59 +0000 UTC map[app:nginx pod-template-hash:7fcdb87857 projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.200.4.14 nginx-deployment-7fcdb87857-7bhx8 eth0 default [] [] [kns.default ksa.default.default] caliaa55e260ab2 [] [] }} ContainerID="cebe1ba37a700ff5fbf0fec4b73f9df09359745c6eba108ac665a655926a7b76" Namespace="default" Pod="nginx-deployment-7fcdb87857-7bhx8" WorkloadEndpoint="10.200.4.14-k8s-nginx--deployment--7fcdb87857--7bhx8-" Jan 23 18:59:12.366514 containerd[1710]: 2026-01-23 18:59:12.288 [INFO][3258] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cebe1ba37a700ff5fbf0fec4b73f9df09359745c6eba108ac665a655926a7b76" Namespace="default" Pod="nginx-deployment-7fcdb87857-7bhx8" WorkloadEndpoint="10.200.4.14-k8s-nginx--deployment--7fcdb87857--7bhx8-eth0" Jan 23 18:59:12.366514 containerd[1710]: 2026-01-23 18:59:12.313 [INFO][3269] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cebe1ba37a700ff5fbf0fec4b73f9df09359745c6eba108ac665a655926a7b76" HandleID="k8s-pod-network.cebe1ba37a700ff5fbf0fec4b73f9df09359745c6eba108ac665a655926a7b76" Workload="10.200.4.14-k8s-nginx--deployment--7fcdb87857--7bhx8-eth0" 
Jan 23 18:59:12.366730 containerd[1710]: 2026-01-23 18:59:12.313 [INFO][3269] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="cebe1ba37a700ff5fbf0fec4b73f9df09359745c6eba108ac665a655926a7b76" HandleID="k8s-pod-network.cebe1ba37a700ff5fbf0fec4b73f9df09359745c6eba108ac665a655926a7b76" Workload="10.200.4.14-k8s-nginx--deployment--7fcdb87857--7bhx8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f0f0), Attrs:map[string]string{"namespace":"default", "node":"10.200.4.14", "pod":"nginx-deployment-7fcdb87857-7bhx8", "timestamp":"2026-01-23 18:59:12.313336744 +0000 UTC"}, Hostname:"10.200.4.14", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 18:59:12.366730 containerd[1710]: 2026-01-23 18:59:12.313 [INFO][3269] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 18:59:12.366730 containerd[1710]: 2026-01-23 18:59:12.313 [INFO][3269] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 18:59:12.366730 containerd[1710]: 2026-01-23 18:59:12.313 [INFO][3269] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.200.4.14' Jan 23 18:59:12.366730 containerd[1710]: 2026-01-23 18:59:12.322 [INFO][3269] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cebe1ba37a700ff5fbf0fec4b73f9df09359745c6eba108ac665a655926a7b76" host="10.200.4.14" Jan 23 18:59:12.366730 containerd[1710]: 2026-01-23 18:59:12.325 [INFO][3269] ipam/ipam.go 394: Looking up existing affinities for host host="10.200.4.14" Jan 23 18:59:12.366730 containerd[1710]: 2026-01-23 18:59:12.330 [INFO][3269] ipam/ipam.go 511: Trying affinity for 192.168.13.64/26 host="10.200.4.14" Jan 23 18:59:12.366730 containerd[1710]: 2026-01-23 18:59:12.332 [INFO][3269] ipam/ipam.go 158: Attempting to load block cidr=192.168.13.64/26 host="10.200.4.14" Jan 23 18:59:12.366730 containerd[1710]: 2026-01-23 18:59:12.333 [INFO][3269] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.13.64/26 host="10.200.4.14" Jan 23 18:59:12.366730 containerd[1710]: 2026-01-23 18:59:12.333 [INFO][3269] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.13.64/26 handle="k8s-pod-network.cebe1ba37a700ff5fbf0fec4b73f9df09359745c6eba108ac665a655926a7b76" host="10.200.4.14" Jan 23 18:59:12.366974 containerd[1710]: 2026-01-23 18:59:12.334 [INFO][3269] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.cebe1ba37a700ff5fbf0fec4b73f9df09359745c6eba108ac665a655926a7b76 Jan 23 18:59:12.366974 containerd[1710]: 2026-01-23 18:59:12.337 [INFO][3269] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.13.64/26 handle="k8s-pod-network.cebe1ba37a700ff5fbf0fec4b73f9df09359745c6eba108ac665a655926a7b76" host="10.200.4.14" Jan 23 18:59:12.366974 containerd[1710]: 2026-01-23 18:59:12.343 [INFO][3269] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.13.66/26] block=192.168.13.64/26 
handle="k8s-pod-network.cebe1ba37a700ff5fbf0fec4b73f9df09359745c6eba108ac665a655926a7b76" host="10.200.4.14" Jan 23 18:59:12.366974 containerd[1710]: 2026-01-23 18:59:12.344 [INFO][3269] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.13.66/26] handle="k8s-pod-network.cebe1ba37a700ff5fbf0fec4b73f9df09359745c6eba108ac665a655926a7b76" host="10.200.4.14" Jan 23 18:59:12.366974 containerd[1710]: 2026-01-23 18:59:12.344 [INFO][3269] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 18:59:12.366974 containerd[1710]: 2026-01-23 18:59:12.344 [INFO][3269] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.13.66/26] IPv6=[] ContainerID="cebe1ba37a700ff5fbf0fec4b73f9df09359745c6eba108ac665a655926a7b76" HandleID="k8s-pod-network.cebe1ba37a700ff5fbf0fec4b73f9df09359745c6eba108ac665a655926a7b76" Workload="10.200.4.14-k8s-nginx--deployment--7fcdb87857--7bhx8-eth0" Jan 23 18:59:12.367103 containerd[1710]: 2026-01-23 18:59:12.346 [INFO][3258] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cebe1ba37a700ff5fbf0fec4b73f9df09359745c6eba108ac665a655926a7b76" Namespace="default" Pod="nginx-deployment-7fcdb87857-7bhx8" WorkloadEndpoint="10.200.4.14-k8s-nginx--deployment--7fcdb87857--7bhx8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.200.4.14-k8s-nginx--deployment--7fcdb87857--7bhx8-eth0", GenerateName:"nginx-deployment-7fcdb87857-", Namespace:"default", SelfLink:"", UID:"3562feb3-06d5-4a12-a69a-d9897ebd300a", ResourceVersion:"1427", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 58, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"7fcdb87857", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.200.4.14", ContainerID:"", Pod:"nginx-deployment-7fcdb87857-7bhx8", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.13.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"caliaa55e260ab2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:59:12.367103 containerd[1710]: 2026-01-23 18:59:12.346 [INFO][3258] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.13.66/32] ContainerID="cebe1ba37a700ff5fbf0fec4b73f9df09359745c6eba108ac665a655926a7b76" Namespace="default" Pod="nginx-deployment-7fcdb87857-7bhx8" WorkloadEndpoint="10.200.4.14-k8s-nginx--deployment--7fcdb87857--7bhx8-eth0" Jan 23 18:59:12.367191 containerd[1710]: 2026-01-23 18:59:12.346 [INFO][3258] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliaa55e260ab2 ContainerID="cebe1ba37a700ff5fbf0fec4b73f9df09359745c6eba108ac665a655926a7b76" Namespace="default" Pod="nginx-deployment-7fcdb87857-7bhx8" WorkloadEndpoint="10.200.4.14-k8s-nginx--deployment--7fcdb87857--7bhx8-eth0" Jan 23 18:59:12.367191 containerd[1710]: 2026-01-23 18:59:12.349 [INFO][3258] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cebe1ba37a700ff5fbf0fec4b73f9df09359745c6eba108ac665a655926a7b76" Namespace="default" Pod="nginx-deployment-7fcdb87857-7bhx8" WorkloadEndpoint="10.200.4.14-k8s-nginx--deployment--7fcdb87857--7bhx8-eth0" Jan 23 18:59:12.367234 containerd[1710]: 2026-01-23 18:59:12.350 [INFO][3258] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="cebe1ba37a700ff5fbf0fec4b73f9df09359745c6eba108ac665a655926a7b76" Namespace="default" 
Pod="nginx-deployment-7fcdb87857-7bhx8" WorkloadEndpoint="10.200.4.14-k8s-nginx--deployment--7fcdb87857--7bhx8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.200.4.14-k8s-nginx--deployment--7fcdb87857--7bhx8-eth0", GenerateName:"nginx-deployment-7fcdb87857-", Namespace:"default", SelfLink:"", UID:"3562feb3-06d5-4a12-a69a-d9897ebd300a", ResourceVersion:"1427", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 58, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"7fcdb87857", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.200.4.14", ContainerID:"cebe1ba37a700ff5fbf0fec4b73f9df09359745c6eba108ac665a655926a7b76", Pod:"nginx-deployment-7fcdb87857-7bhx8", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.13.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"caliaa55e260ab2", MAC:"d6:bc:8c:b8:7d:e2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:59:12.367294 containerd[1710]: 2026-01-23 18:59:12.363 [INFO][3258] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cebe1ba37a700ff5fbf0fec4b73f9df09359745c6eba108ac665a655926a7b76" Namespace="default" Pod="nginx-deployment-7fcdb87857-7bhx8" WorkloadEndpoint="10.200.4.14-k8s-nginx--deployment--7fcdb87857--7bhx8-eth0" Jan 23 18:59:12.420012 containerd[1710]: time="2026-01-23T18:59:12.419967134Z" level=info 
msg="connecting to shim cebe1ba37a700ff5fbf0fec4b73f9df09359745c6eba108ac665a655926a7b76" address="unix:///run/containerd/s/3f0bf729ecc83ffd80e4ca854782e58b9ab5dbaa94334939585667f9ac59473d" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:59:12.446731 systemd[1]: Started cri-containerd-cebe1ba37a700ff5fbf0fec4b73f9df09359745c6eba108ac665a655926a7b76.scope - libcontainer container cebe1ba37a700ff5fbf0fec4b73f9df09359745c6eba108ac665a655926a7b76. Jan 23 18:59:12.492632 containerd[1710]: time="2026-01-23T18:59:12.492567398Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-7bhx8,Uid:3562feb3-06d5-4a12-a69a-d9897ebd300a,Namespace:default,Attempt:0,} returns sandbox id \"cebe1ba37a700ff5fbf0fec4b73f9df09359745c6eba108ac665a655926a7b76\"" Jan 23 18:59:12.493663 containerd[1710]: time="2026-01-23T18:59:12.493635191Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 23 18:59:13.193497 kubelet[2311]: E0123 18:59:13.193426 2311 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:59:13.909541 systemd-networkd[1339]: caliaa55e260ab2: Gained IPv6LL Jan 23 18:59:14.194617 kubelet[2311]: E0123 18:59:14.194496 2311 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:59:14.818236 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4025019046.mount: Deactivated successfully. 
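Editor's note: the Calico IPAM sequence above loads the node-affine block `192.168.13.64/26` and claims `192.168.13.66` for the nginx pod, which is then written into the WorkloadEndpoint as a `/32`. The block arithmetic can be checked with the standard library; a small sketch:

```python
import ipaddress

# Values taken from the ipam/ipam.go log lines above.
block = ipaddress.ip_network("192.168.13.64/26")
claimed = ipaddress.ip_address("192.168.13.66")

# A /26 spans 64 addresses; Calico hands out node-affine blocks of this
# size from the pool and assigns single addresses out of them.
assert block.num_addresses == 64
assert claimed in block

# The pod endpoint is recorded as a host route (/32), matching
# IPNetworks:["192.168.13.66/32"] in the WorkloadEndpoint spec.
endpoint = ipaddress.ip_network("192.168.13.66/32")
assert endpoint.subnet_of(block)
print("claimed IP is inside the affine block")
```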
Jan 23 18:59:15.195381 kubelet[2311]: E0123 18:59:15.195227 2311 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:59:15.591499 containerd[1710]: time="2026-01-23T18:59:15.591435933Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:59:15.593852 containerd[1710]: time="2026-01-23T18:59:15.593756029Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=63836480" Jan 23 18:59:15.601220 containerd[1710]: time="2026-01-23T18:59:15.601191839Z" level=info msg="ImageCreate event name:\"sha256:94a30ed18e45363486a55379bd1fbb8c479f36524ae816779f3553afe6b787ed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:59:15.608027 containerd[1710]: time="2026-01-23T18:59:15.607980063Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:32c5137cb8c7cf61e75836f150e983b9be21fecc642ada89fd936c8cd6c0faa0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:59:15.608740 containerd[1710]: time="2026-01-23T18:59:15.608624484Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:94a30ed18e45363486a55379bd1fbb8c479f36524ae816779f3553afe6b787ed\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:32c5137cb8c7cf61e75836f150e983b9be21fecc642ada89fd936c8cd6c0faa0\", size \"63836358\" in 3.114859627s" Jan 23 18:59:15.608740 containerd[1710]: time="2026-01-23T18:59:15.608655876Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:94a30ed18e45363486a55379bd1fbb8c479f36524ae816779f3553afe6b787ed\"" Jan 23 18:59:15.614197 containerd[1710]: time="2026-01-23T18:59:15.614169644Z" level=info msg="CreateContainer within sandbox \"cebe1ba37a700ff5fbf0fec4b73f9df09359745c6eba108ac665a655926a7b76\" for container 
&ContainerMetadata{Name:nginx,Attempt:0,}" Jan 23 18:59:15.636117 containerd[1710]: time="2026-01-23T18:59:15.635540357Z" level=info msg="Container 062af1e44c2ec0cb6fd02697689dc452b9e577d18b49270121eb8c58cfeacf2a: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:59:15.653259 containerd[1710]: time="2026-01-23T18:59:15.653229625Z" level=info msg="CreateContainer within sandbox \"cebe1ba37a700ff5fbf0fec4b73f9df09359745c6eba108ac665a655926a7b76\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"062af1e44c2ec0cb6fd02697689dc452b9e577d18b49270121eb8c58cfeacf2a\"" Jan 23 18:59:15.653808 containerd[1710]: time="2026-01-23T18:59:15.653747907Z" level=info msg="StartContainer for \"062af1e44c2ec0cb6fd02697689dc452b9e577d18b49270121eb8c58cfeacf2a\"" Jan 23 18:59:15.654660 containerd[1710]: time="2026-01-23T18:59:15.654627372Z" level=info msg="connecting to shim 062af1e44c2ec0cb6fd02697689dc452b9e577d18b49270121eb8c58cfeacf2a" address="unix:///run/containerd/s/3f0bf729ecc83ffd80e4ca854782e58b9ab5dbaa94334939585667f9ac59473d" protocol=ttrpc version=3 Jan 23 18:59:15.674632 systemd[1]: Started cri-containerd-062af1e44c2ec0cb6fd02697689dc452b9e577d18b49270121eb8c58cfeacf2a.scope - libcontainer container 062af1e44c2ec0cb6fd02697689dc452b9e577d18b49270121eb8c58cfeacf2a. 
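Editor's note: containerd reports the nginx pull above as taking `3.114859627s`. That figure can be cross-checked against the wall-clock delta between the `PullImage` and `Pulled` journal entries; a minimal sketch (nanosecond timestamps truncated to microseconds):

```python
from datetime import datetime, timezone

# PullImage / Pulled timestamps for ghcr.io/flatcar/nginx:latest, copied
# from the containerd log entries above.
started = datetime(2026, 1, 23, 18, 59, 12, 493635, tzinfo=timezone.utc)
finished = datetime(2026, 1, 23, 18, 59, 15, 608624, tzinfo=timezone.utc)

elapsed = (finished - started).total_seconds()
# Agrees with containerd's own "3.114859627s" to within a millisecond.
print(f"pull took {elapsed:.3f}s")  # → pull took 3.115s
```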
Jan 23 18:59:15.706091 containerd[1710]: time="2026-01-23T18:59:15.706061186Z" level=info msg="StartContainer for \"062af1e44c2ec0cb6fd02697689dc452b9e577d18b49270121eb8c58cfeacf2a\" returns successfully" Jan 23 18:59:16.196045 kubelet[2311]: E0123 18:59:16.195976 2311 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:59:16.338771 kubelet[2311]: I0123 18:59:16.338583 2311 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-7fcdb87857-7bhx8" podStartSLOduration=14.222279547 podStartE2EDuration="17.338555855s" podCreationTimestamp="2026-01-23 18:58:59 +0000 UTC" firstStartedPulling="2026-01-23 18:59:12.493383024 +0000 UTC m=+34.850368921" lastFinishedPulling="2026-01-23 18:59:15.609659319 +0000 UTC m=+37.966645229" observedRunningTime="2026-01-23 18:59:16.338536446 +0000 UTC m=+38.695522350" watchObservedRunningTime="2026-01-23 18:59:16.338555855 +0000 UTC m=+38.695541757" Jan 23 18:59:17.197049 kubelet[2311]: E0123 18:59:17.196991 2311 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:59:18.164807 kubelet[2311]: E0123 18:59:18.164754 2311 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:59:18.197362 kubelet[2311]: E0123 18:59:18.197303 2311 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:59:19.197647 kubelet[2311]: E0123 18:59:19.197586 2311 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:59:20.198710 kubelet[2311]: E0123 18:59:20.198649 2311 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:59:21.199602 kubelet[2311]: E0123 18:59:21.199550 2311 file_linux.go:61] "Unable to 
read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:59:22.199926 kubelet[2311]: E0123 18:59:22.199882 2311 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:59:23.142084 systemd[1]: Created slice kubepods-besteffort-pod00fd8acd_e484_453a_af75_38de28fba358.slice - libcontainer container kubepods-besteffort-pod00fd8acd_e484_453a_af75_38de28fba358.slice. Jan 23 18:59:23.172078 kubelet[2311]: I0123 18:59:23.172046 2311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/00fd8acd-e484-453a-af75-38de28fba358-data\") pod \"nfs-server-provisioner-0\" (UID: \"00fd8acd-e484-453a-af75-38de28fba358\") " pod="default/nfs-server-provisioner-0" Jan 23 18:59:23.172385 kubelet[2311]: I0123 18:59:23.172361 2311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjlvj\" (UniqueName: \"kubernetes.io/projected/00fd8acd-e484-453a-af75-38de28fba358-kube-api-access-wjlvj\") pod \"nfs-server-provisioner-0\" (UID: \"00fd8acd-e484-453a-af75-38de28fba358\") " pod="default/nfs-server-provisioner-0" Jan 23 18:59:23.200080 kubelet[2311]: E0123 18:59:23.200046 2311 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:59:23.445226 containerd[1710]: time="2026-01-23T18:59:23.445106794Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:00fd8acd-e484-453a-af75-38de28fba358,Namespace:default,Attempt:0,}" Jan 23 18:59:23.580757 systemd-networkd[1339]: cali60e51b789ff: Link UP Jan 23 18:59:23.581796 systemd-networkd[1339]: cali60e51b789ff: Gained carrier Jan 23 18:59:23.595739 containerd[1710]: 2026-01-23 18:59:23.498 [INFO][3432] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint 
projectcalico.org/v3} {10.200.4.14-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default 00fd8acd-e484-453a-af75-38de28fba358 1577 0 2026-01-23 18:59:23 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 10.200.4.14 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] [] }} ContainerID="b9e8910190e16687f97d207bb9d2be30502883bfded5346038a95f0643180c27" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.200.4.14-k8s-nfs--server--provisioner--0-" Jan 23 18:59:23.595739 containerd[1710]: 2026-01-23 18:59:23.499 [INFO][3432] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b9e8910190e16687f97d207bb9d2be30502883bfded5346038a95f0643180c27" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.200.4.14-k8s-nfs--server--provisioner--0-eth0" Jan 23 18:59:23.595739 containerd[1710]: 2026-01-23 18:59:23.532 [INFO][3443] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b9e8910190e16687f97d207bb9d2be30502883bfded5346038a95f0643180c27" HandleID="k8s-pod-network.b9e8910190e16687f97d207bb9d2be30502883bfded5346038a95f0643180c27" Workload="10.200.4.14-k8s-nfs--server--provisioner--0-eth0" Jan 23 18:59:23.597580 containerd[1710]: 2026-01-23 18:59:23.532 [INFO][3443] 
ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b9e8910190e16687f97d207bb9d2be30502883bfded5346038a95f0643180c27" HandleID="k8s-pod-network.b9e8910190e16687f97d207bb9d2be30502883bfded5346038a95f0643180c27" Workload="10.200.4.14-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f6b0), Attrs:map[string]string{"namespace":"default", "node":"10.200.4.14", "pod":"nfs-server-provisioner-0", "timestamp":"2026-01-23 18:59:23.532434224 +0000 UTC"}, Hostname:"10.200.4.14", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 18:59:23.597580 containerd[1710]: 2026-01-23 18:59:23.532 [INFO][3443] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 18:59:23.597580 containerd[1710]: 2026-01-23 18:59:23.532 [INFO][3443] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 18:59:23.597580 containerd[1710]: 2026-01-23 18:59:23.532 [INFO][3443] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.200.4.14' Jan 23 18:59:23.597580 containerd[1710]: 2026-01-23 18:59:23.547 [INFO][3443] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b9e8910190e16687f97d207bb9d2be30502883bfded5346038a95f0643180c27" host="10.200.4.14" Jan 23 18:59:23.597580 containerd[1710]: 2026-01-23 18:59:23.551 [INFO][3443] ipam/ipam.go 394: Looking up existing affinities for host host="10.200.4.14" Jan 23 18:59:23.597580 containerd[1710]: 2026-01-23 18:59:23.553 [INFO][3443] ipam/ipam.go 511: Trying affinity for 192.168.13.64/26 host="10.200.4.14" Jan 23 18:59:23.597580 containerd[1710]: 2026-01-23 18:59:23.557 [INFO][3443] ipam/ipam.go 158: Attempting to load block cidr=192.168.13.64/26 host="10.200.4.14" Jan 23 18:59:23.597580 containerd[1710]: 2026-01-23 18:59:23.561 [INFO][3443] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.13.64/26 host="10.200.4.14" Jan 23 18:59:23.597580 containerd[1710]: 2026-01-23 18:59:23.561 [INFO][3443] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.13.64/26 handle="k8s-pod-network.b9e8910190e16687f97d207bb9d2be30502883bfded5346038a95f0643180c27" host="10.200.4.14" Jan 23 18:59:23.597768 containerd[1710]: 2026-01-23 18:59:23.562 [INFO][3443] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b9e8910190e16687f97d207bb9d2be30502883bfded5346038a95f0643180c27 Jan 23 18:59:23.597768 containerd[1710]: 2026-01-23 18:59:23.566 [INFO][3443] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.13.64/26 handle="k8s-pod-network.b9e8910190e16687f97d207bb9d2be30502883bfded5346038a95f0643180c27" host="10.200.4.14" Jan 23 18:59:23.597768 containerd[1710]: 2026-01-23 18:59:23.574 [INFO][3443] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.13.67/26] block=192.168.13.64/26 
handle="k8s-pod-network.b9e8910190e16687f97d207bb9d2be30502883bfded5346038a95f0643180c27" host="10.200.4.14" Jan 23 18:59:23.597768 containerd[1710]: 2026-01-23 18:59:23.574 [INFO][3443] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.13.67/26] handle="k8s-pod-network.b9e8910190e16687f97d207bb9d2be30502883bfded5346038a95f0643180c27" host="10.200.4.14" Jan 23 18:59:23.597768 containerd[1710]: 2026-01-23 18:59:23.574 [INFO][3443] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 18:59:23.597768 containerd[1710]: 2026-01-23 18:59:23.574 [INFO][3443] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.13.67/26] IPv6=[] ContainerID="b9e8910190e16687f97d207bb9d2be30502883bfded5346038a95f0643180c27" HandleID="k8s-pod-network.b9e8910190e16687f97d207bb9d2be30502883bfded5346038a95f0643180c27" Workload="10.200.4.14-k8s-nfs--server--provisioner--0-eth0" Jan 23 18:59:23.597861 containerd[1710]: 2026-01-23 18:59:23.575 [INFO][3432] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b9e8910190e16687f97d207bb9d2be30502883bfded5346038a95f0643180c27" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.200.4.14-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.200.4.14-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"00fd8acd-e484-453a-af75-38de28fba358", ResourceVersion:"1577", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 59, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.200.4.14", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.13.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:59:23.597861 containerd[1710]: 2026-01-23 18:59:23.576 [INFO][3432] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.13.67/32] ContainerID="b9e8910190e16687f97d207bb9d2be30502883bfded5346038a95f0643180c27" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.200.4.14-k8s-nfs--server--provisioner--0-eth0" Jan 23 18:59:23.597861 containerd[1710]: 2026-01-23 18:59:23.576 [INFO][3432] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="b9e8910190e16687f97d207bb9d2be30502883bfded5346038a95f0643180c27" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.200.4.14-k8s-nfs--server--provisioner--0-eth0" Jan 23 18:59:23.597861 containerd[1710]: 2026-01-23 18:59:23.580 [INFO][3432] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b9e8910190e16687f97d207bb9d2be30502883bfded5346038a95f0643180c27" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.200.4.14-k8s-nfs--server--provisioner--0-eth0" Jan 23 18:59:23.597971 containerd[1710]: 2026-01-23 18:59:23.580 [INFO][3432] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b9e8910190e16687f97d207bb9d2be30502883bfded5346038a95f0643180c27" Namespace="default" Pod="nfs-server-provisioner-0" 
WorkloadEndpoint="10.200.4.14-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.200.4.14-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"00fd8acd-e484-453a-af75-38de28fba358", ResourceVersion:"1577", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 59, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.200.4.14", ContainerID:"b9e8910190e16687f97d207bb9d2be30502883bfded5346038a95f0643180c27", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.13.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"6e:bf:35:ef:10:3a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:59:23.597971 containerd[1710]: 2026-01-23 18:59:23.590 [INFO][3432] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b9e8910190e16687f97d207bb9d2be30502883bfded5346038a95f0643180c27" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.200.4.14-k8s-nfs--server--provisioner--0-eth0" Jan 23 18:59:23.639858 containerd[1710]: time="2026-01-23T18:59:23.639776844Z" level=info msg="connecting to shim 
b9e8910190e16687f97d207bb9d2be30502883bfded5346038a95f0643180c27" address="unix:///run/containerd/s/970132a468f2d9613179dbbb4928590a4785b8a9f51d11cb5d4fc81230e17a82" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:59:23.661863 systemd[1]: Started cri-containerd-b9e8910190e16687f97d207bb9d2be30502883bfded5346038a95f0643180c27.scope - libcontainer container b9e8910190e16687f97d207bb9d2be30502883bfded5346038a95f0643180c27. Jan 23 18:59:23.712616 containerd[1710]: time="2026-01-23T18:59:23.712464914Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:00fd8acd-e484-453a-af75-38de28fba358,Namespace:default,Attempt:0,} returns sandbox id \"b9e8910190e16687f97d207bb9d2be30502883bfded5346038a95f0643180c27\"" Jan 23 18:59:23.714658 containerd[1710]: time="2026-01-23T18:59:23.714586968Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Jan 23 18:59:24.200988 kubelet[2311]: E0123 18:59:24.200949 2311 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:59:24.852642 systemd-networkd[1339]: cali60e51b789ff: Gained IPv6LL Jan 23 18:59:25.201662 kubelet[2311]: E0123 18:59:25.201548 2311 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:59:25.649543 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3173288243.mount: Deactivated successfully. 
Jan 23 18:59:26.202661 kubelet[2311]: E0123 18:59:26.202603 2311 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:59:27.203619 kubelet[2311]: E0123 18:59:27.203574 2311 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:59:28.204506 kubelet[2311]: E0123 18:59:28.204435 2311 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:59:29.204743 kubelet[2311]: E0123 18:59:29.204695 2311 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:59:30.205381 kubelet[2311]: E0123 18:59:30.205318 2311 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:59:31.206214 kubelet[2311]: E0123 18:59:31.206174 2311 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:59:32.206348 kubelet[2311]: E0123 18:59:32.206301 2311 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:59:33.206651 kubelet[2311]: E0123 18:59:33.206601 2311 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:59:34.207779 kubelet[2311]: E0123 18:59:34.207718 2311 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:59:35.207894 kubelet[2311]: E0123 18:59:35.207835 2311 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:59:36.208773 kubelet[2311]: E0123 18:59:36.208727 2311 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 
18:59:37.209660 kubelet[2311]: E0123 18:59:37.209606 2311 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:59:38.164898 kubelet[2311]: E0123 18:59:38.164844 2311 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:59:38.210361 kubelet[2311]: E0123 18:59:38.210318 2311 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:59:39.210687 kubelet[2311]: E0123 18:59:39.210647 2311 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:59:40.211573 kubelet[2311]: E0123 18:59:40.211534 2311 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:59:41.211930 kubelet[2311]: E0123 18:59:41.211887 2311 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:59:42.212791 kubelet[2311]: E0123 18:59:42.212741 2311 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:59:43.212988 kubelet[2311]: E0123 18:59:43.212932 2311 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:59:44.213394 kubelet[2311]: E0123 18:59:44.213333 2311 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:59:45.214193 kubelet[2311]: E0123 18:59:45.214146 2311 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:59:46.214978 kubelet[2311]: E0123 18:59:46.214922 2311 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 
18:59:46.459596 containerd[1710]: time="2026-01-23T18:59:46.459541340Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:59:46.466032 containerd[1710]: time="2026-01-23T18:59:46.465903328Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039414" Jan 23 18:59:46.467511 containerd[1710]: time="2026-01-23T18:59:46.467396947Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:59:46.471285 containerd[1710]: time="2026-01-23T18:59:46.471020222Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:59:46.471749 containerd[1710]: time="2026-01-23T18:59:46.471724798Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 22.757105952s" Jan 23 18:59:46.471794 containerd[1710]: time="2026-01-23T18:59:46.471759138Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Jan 23 18:59:46.473244 containerd[1710]: time="2026-01-23T18:59:46.473223207Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 18:59:46.478886 containerd[1710]: time="2026-01-23T18:59:46.478852065Z" level=info msg="CreateContainer within 
sandbox \"b9e8910190e16687f97d207bb9d2be30502883bfded5346038a95f0643180c27\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Jan 23 18:59:46.505949 containerd[1710]: time="2026-01-23T18:59:46.505920754Z" level=info msg="Container 37fcdf418b33e38858ab7869056c42485a1d29536b391df5dc6967ae6660ecf9: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:59:46.510449 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3327428180.mount: Deactivated successfully. Jan 23 18:59:46.527689 containerd[1710]: time="2026-01-23T18:59:46.527654121Z" level=info msg="CreateContainer within sandbox \"b9e8910190e16687f97d207bb9d2be30502883bfded5346038a95f0643180c27\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"37fcdf418b33e38858ab7869056c42485a1d29536b391df5dc6967ae6660ecf9\"" Jan 23 18:59:46.528508 containerd[1710]: time="2026-01-23T18:59:46.528111293Z" level=info msg="StartContainer for \"37fcdf418b33e38858ab7869056c42485a1d29536b391df5dc6967ae6660ecf9\"" Jan 23 18:59:46.528954 containerd[1710]: time="2026-01-23T18:59:46.528930309Z" level=info msg="connecting to shim 37fcdf418b33e38858ab7869056c42485a1d29536b391df5dc6967ae6660ecf9" address="unix:///run/containerd/s/970132a468f2d9613179dbbb4928590a4785b8a9f51d11cb5d4fc81230e17a82" protocol=ttrpc version=3 Jan 23 18:59:46.552640 systemd[1]: Started cri-containerd-37fcdf418b33e38858ab7869056c42485a1d29536b391df5dc6967ae6660ecf9.scope - libcontainer container 37fcdf418b33e38858ab7869056c42485a1d29536b391df5dc6967ae6660ecf9. 
Jan 23 18:59:46.580366 containerd[1710]: time="2026-01-23T18:59:46.580328576Z" level=info msg="StartContainer for \"37fcdf418b33e38858ab7869056c42485a1d29536b391df5dc6967ae6660ecf9\" returns successfully" Jan 23 18:59:46.751764 containerd[1710]: time="2026-01-23T18:59:46.751624596Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:59:46.754435 containerd[1710]: time="2026-01-23T18:59:46.754386192Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 18:59:46.754612 containerd[1710]: time="2026-01-23T18:59:46.754396465Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 23 18:59:46.754676 kubelet[2311]: E0123 18:59:46.754635 2311 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 18:59:46.754738 kubelet[2311]: E0123 18:59:46.754690 2311 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 18:59:46.755153 kubelet[2311]: E0123 18:59:46.754860 2311 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-56gf5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-j8n9m_calico-system(2b33e19f-e45d-4033-97dc-1891d2a04f54): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 18:59:46.757282 containerd[1710]: time="2026-01-23T18:59:46.757247289Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 18:59:47.033111 containerd[1710]: time="2026-01-23T18:59:47.032974545Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:59:47.035249 containerd[1710]: time="2026-01-23T18:59:47.035194240Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 18:59:47.035370 containerd[1710]: time="2026-01-23T18:59:47.035287660Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 23 18:59:47.035589 kubelet[2311]: E0123 18:59:47.035552 2311 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 18:59:47.035638 kubelet[2311]: E0123 18:59:47.035604 2311 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 18:59:47.035782 kubelet[2311]: E0123 
18:59:47.035751 2311 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-56gf5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-j8n9m_calico-system(2b33e19f-e45d-4033-97dc-1891d2a04f54): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 18:59:47.037266 kubelet[2311]: E0123 18:59:47.037226 2311 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-j8n9m" podUID="2b33e19f-e45d-4033-97dc-1891d2a04f54" Jan 23 18:59:47.215838 kubelet[2311]: E0123 18:59:47.215782 2311 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:59:47.402156 kubelet[2311]: I0123 18:59:47.402098 2311 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.6436146969999998 podStartE2EDuration="24.402085701s" podCreationTimestamp="2026-01-23 18:59:23 +0000 UTC" firstStartedPulling="2026-01-23 18:59:23.714287797 +0000 UTC m=+46.071273706" lastFinishedPulling="2026-01-23 18:59:46.472758809 +0000 UTC m=+68.829744710" observedRunningTime="2026-01-23 18:59:47.401338761 +0000 UTC m=+69.758324667" 
watchObservedRunningTime="2026-01-23 18:59:47.402085701 +0000 UTC m=+69.759071606" Jan 23 18:59:48.216372 kubelet[2311]: E0123 18:59:48.216302 2311 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:59:49.216806 kubelet[2311]: E0123 18:59:49.216744 2311 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:59:50.217049 kubelet[2311]: E0123 18:59:50.216995 2311 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:59:51.217781 kubelet[2311]: E0123 18:59:51.217722 2311 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:59:52.218176 kubelet[2311]: E0123 18:59:52.218126 2311 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:59:53.218458 kubelet[2311]: E0123 18:59:53.218404 2311 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:59:54.219544 kubelet[2311]: E0123 18:59:54.219466 2311 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:59:55.220210 kubelet[2311]: E0123 18:59:55.220152 2311 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:59:56.220944 kubelet[2311]: E0123 18:59:56.220886 2311 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:59:57.221789 kubelet[2311]: E0123 18:59:57.221735 2311 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:59:58.164633 kubelet[2311]: E0123 18:59:58.164590 2311 file.go:104] "Unable to read config path" 
err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:59:58.222217 kubelet[2311]: E0123 18:59:58.222180 2311 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 18:59:59.222643 kubelet[2311]: E0123 18:59:59.222589 2311 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:00:00.223464 kubelet[2311]: E0123 19:00:00.223394 2311 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:00:01.223825 kubelet[2311]: E0123 19:00:01.223768 2311 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:00:01.250331 kubelet[2311]: E0123 19:00:01.250273 2311 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-j8n9m" podUID="2b33e19f-e45d-4033-97dc-1891d2a04f54" Jan 23 19:00:02.224734 kubelet[2311]: E0123 19:00:02.224689 2311 file_linux.go:61] "Unable to read config path" err="path does not exist, 
ignoring" path="/etc/kubernetes/manifests" Jan 23 19:00:03.225309 kubelet[2311]: E0123 19:00:03.225268 2311 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:00:04.225565 kubelet[2311]: E0123 19:00:04.225517 2311 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:00:05.226855 kubelet[2311]: E0123 19:00:05.226798 2311 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:00:06.227468 kubelet[2311]: E0123 19:00:06.227421 2311 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:00:06.820793 systemd[1]: Created slice kubepods-besteffort-podebce6dcd_7f7d_4297_82e8_2dd1734f429c.slice - libcontainer container kubepods-besteffort-podebce6dcd_7f7d_4297_82e8_2dd1734f429c.slice. Jan 23 19:00:06.915749 kubelet[2311]: I0123 19:00:06.915650 2311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-59b7e7ab-3904-4e7f-a359-000ac9fb7ad9\" (UniqueName: \"kubernetes.io/nfs/ebce6dcd-7f7d-4297-82e8-2dd1734f429c-pvc-59b7e7ab-3904-4e7f-a359-000ac9fb7ad9\") pod \"test-pod-1\" (UID: \"ebce6dcd-7f7d-4297-82e8-2dd1734f429c\") " pod="default/test-pod-1" Jan 23 19:00:06.915749 kubelet[2311]: I0123 19:00:06.915696 2311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tndx6\" (UniqueName: \"kubernetes.io/projected/ebce6dcd-7f7d-4297-82e8-2dd1734f429c-kube-api-access-tndx6\") pod \"test-pod-1\" (UID: \"ebce6dcd-7f7d-4297-82e8-2dd1734f429c\") " pod="default/test-pod-1" Jan 23 19:00:07.130518 kernel: netfs: FS-Cache loaded Jan 23 19:00:07.224033 kernel: RPC: Registered named UNIX socket transport module. Jan 23 19:00:07.224173 kernel: RPC: Registered udp transport module. 
Jan 23 19:00:07.224193 kernel: RPC: Registered tcp transport module. Jan 23 19:00:07.224219 kernel: RPC: Registered tcp-with-tls transport module. Jan 23 19:00:07.224234 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Jan 23 19:00:07.228290 kubelet[2311]: E0123 19:00:07.228225 2311 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:00:07.486673 kernel: NFS: Registering the id_resolver key type Jan 23 19:00:07.486816 kernel: Key type id_resolver registered Jan 23 19:00:07.486831 kernel: Key type id_legacy registered Jan 23 19:00:07.610449 nfsidmap[3695]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '2.3-a-cd2f1addba' Jan 23 19:00:07.624835 nfsidmap[3696]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '2.3-a-cd2f1addba' Jan 23 19:00:07.639931 nfsrahead[3698]: setting /var/lib/kubelet/pods/ebce6dcd-7f7d-4297-82e8-2dd1734f429c/volumes/kubernetes.io~nfs/pvc-59b7e7ab-3904-4e7f-a359-000ac9fb7ad9 readahead to 128 Jan 23 19:00:07.723609 containerd[1710]: time="2026-01-23T19:00:07.723564496Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:ebce6dcd-7f7d-4297-82e8-2dd1734f429c,Namespace:default,Attempt:0,}" Jan 23 19:00:07.826846 systemd-networkd[1339]: cali5ec59c6bf6e: Link UP Jan 23 19:00:07.827732 systemd-networkd[1339]: cali5ec59c6bf6e: Gained carrier Jan 23 19:00:07.842817 containerd[1710]: 2026-01-23 19:00:07.765 [INFO][3699] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.200.4.14-k8s-test--pod--1-eth0 default ebce6dcd-7f7d-4297-82e8-2dd1734f429c 1755 0 2026-01-23 18:59:24 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.200.4.14 test-pod-1 eth0 default [] [] [kns.default 
ksa.default.default] cali5ec59c6bf6e [] [] }} ContainerID="9a9570ef3873055895341044b0eb18501a15a7af7903ef99282748b52a5a022e" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.200.4.14-k8s-test--pod--1-" Jan 23 19:00:07.842817 containerd[1710]: 2026-01-23 19:00:07.765 [INFO][3699] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9a9570ef3873055895341044b0eb18501a15a7af7903ef99282748b52a5a022e" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.200.4.14-k8s-test--pod--1-eth0" Jan 23 19:00:07.842817 containerd[1710]: 2026-01-23 19:00:07.786 [INFO][3711] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9a9570ef3873055895341044b0eb18501a15a7af7903ef99282748b52a5a022e" HandleID="k8s-pod-network.9a9570ef3873055895341044b0eb18501a15a7af7903ef99282748b52a5a022e" Workload="10.200.4.14-k8s-test--pod--1-eth0" Jan 23 19:00:07.842817 containerd[1710]: 2026-01-23 19:00:07.786 [INFO][3711] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="9a9570ef3873055895341044b0eb18501a15a7af7903ef99282748b52a5a022e" HandleID="k8s-pod-network.9a9570ef3873055895341044b0eb18501a15a7af7903ef99282748b52a5a022e" Workload="10.200.4.14-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5010), Attrs:map[string]string{"namespace":"default", "node":"10.200.4.14", "pod":"test-pod-1", "timestamp":"2026-01-23 19:00:07.786278063 +0000 UTC"}, Hostname:"10.200.4.14", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 19:00:07.842817 containerd[1710]: 2026-01-23 19:00:07.786 [INFO][3711] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 19:00:07.842817 containerd[1710]: 2026-01-23 19:00:07.786 [INFO][3711] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 19:00:07.842817 containerd[1710]: 2026-01-23 19:00:07.786 [INFO][3711] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.200.4.14' Jan 23 19:00:07.842817 containerd[1710]: 2026-01-23 19:00:07.798 [INFO][3711] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9a9570ef3873055895341044b0eb18501a15a7af7903ef99282748b52a5a022e" host="10.200.4.14" Jan 23 19:00:07.842817 containerd[1710]: 2026-01-23 19:00:07.801 [INFO][3711] ipam/ipam.go 394: Looking up existing affinities for host host="10.200.4.14" Jan 23 19:00:07.842817 containerd[1710]: 2026-01-23 19:00:07.804 [INFO][3711] ipam/ipam.go 511: Trying affinity for 192.168.13.64/26 host="10.200.4.14" Jan 23 19:00:07.842817 containerd[1710]: 2026-01-23 19:00:07.805 [INFO][3711] ipam/ipam.go 158: Attempting to load block cidr=192.168.13.64/26 host="10.200.4.14" Jan 23 19:00:07.842817 containerd[1710]: 2026-01-23 19:00:07.806 [INFO][3711] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.13.64/26 host="10.200.4.14" Jan 23 19:00:07.842817 containerd[1710]: 2026-01-23 19:00:07.806 [INFO][3711] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.13.64/26 handle="k8s-pod-network.9a9570ef3873055895341044b0eb18501a15a7af7903ef99282748b52a5a022e" host="10.200.4.14" Jan 23 19:00:07.842817 containerd[1710]: 2026-01-23 19:00:07.807 [INFO][3711] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.9a9570ef3873055895341044b0eb18501a15a7af7903ef99282748b52a5a022e Jan 23 19:00:07.842817 containerd[1710]: 2026-01-23 19:00:07.814 [INFO][3711] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.13.64/26 handle="k8s-pod-network.9a9570ef3873055895341044b0eb18501a15a7af7903ef99282748b52a5a022e" host="10.200.4.14" Jan 23 19:00:07.842817 containerd[1710]: 2026-01-23 19:00:07.820 [INFO][3711] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.13.68/26] block=192.168.13.64/26 
handle="k8s-pod-network.9a9570ef3873055895341044b0eb18501a15a7af7903ef99282748b52a5a022e" host="10.200.4.14" Jan 23 19:00:07.842817 containerd[1710]: 2026-01-23 19:00:07.820 [INFO][3711] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.13.68/26] handle="k8s-pod-network.9a9570ef3873055895341044b0eb18501a15a7af7903ef99282748b52a5a022e" host="10.200.4.14" Jan 23 19:00:07.842817 containerd[1710]: 2026-01-23 19:00:07.820 [INFO][3711] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 19:00:07.842817 containerd[1710]: 2026-01-23 19:00:07.820 [INFO][3711] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.13.68/26] IPv6=[] ContainerID="9a9570ef3873055895341044b0eb18501a15a7af7903ef99282748b52a5a022e" HandleID="k8s-pod-network.9a9570ef3873055895341044b0eb18501a15a7af7903ef99282748b52a5a022e" Workload="10.200.4.14-k8s-test--pod--1-eth0" Jan 23 19:00:07.842817 containerd[1710]: 2026-01-23 19:00:07.822 [INFO][3699] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9a9570ef3873055895341044b0eb18501a15a7af7903ef99282748b52a5a022e" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.200.4.14-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.200.4.14-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"ebce6dcd-7f7d-4297-82e8-2dd1734f429c", ResourceVersion:"1755", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 59, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", 
Node:"10.200.4.14", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.13.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 19:00:07.844190 containerd[1710]: 2026-01-23 19:00:07.822 [INFO][3699] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.13.68/32] ContainerID="9a9570ef3873055895341044b0eb18501a15a7af7903ef99282748b52a5a022e" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.200.4.14-k8s-test--pod--1-eth0" Jan 23 19:00:07.844190 containerd[1710]: 2026-01-23 19:00:07.822 [INFO][3699] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="9a9570ef3873055895341044b0eb18501a15a7af7903ef99282748b52a5a022e" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.200.4.14-k8s-test--pod--1-eth0" Jan 23 19:00:07.844190 containerd[1710]: 2026-01-23 19:00:07.828 [INFO][3699] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9a9570ef3873055895341044b0eb18501a15a7af7903ef99282748b52a5a022e" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.200.4.14-k8s-test--pod--1-eth0" Jan 23 19:00:07.844190 containerd[1710]: 2026-01-23 19:00:07.828 [INFO][3699] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9a9570ef3873055895341044b0eb18501a15a7af7903ef99282748b52a5a022e" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.200.4.14-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.200.4.14-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"ebce6dcd-7f7d-4297-82e8-2dd1734f429c", ResourceVersion:"1755", 
Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 59, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.200.4.14", ContainerID:"9a9570ef3873055895341044b0eb18501a15a7af7903ef99282748b52a5a022e", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.13.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"26:e4:0f:12:8b:a9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 19:00:07.844190 containerd[1710]: 2026-01-23 19:00:07.841 [INFO][3699] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9a9570ef3873055895341044b0eb18501a15a7af7903ef99282748b52a5a022e" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.200.4.14-k8s-test--pod--1-eth0" Jan 23 19:00:07.879638 containerd[1710]: time="2026-01-23T19:00:07.879348411Z" level=info msg="connecting to shim 9a9570ef3873055895341044b0eb18501a15a7af7903ef99282748b52a5a022e" address="unix:///run/containerd/s/e98e084f953511888b2ac08f855adb0834e6a89234f0832ae7e1f1e788b38c92" namespace=k8s.io protocol=ttrpc version=3 Jan 23 19:00:07.896679 systemd[1]: Started cri-containerd-9a9570ef3873055895341044b0eb18501a15a7af7903ef99282748b52a5a022e.scope - libcontainer container 9a9570ef3873055895341044b0eb18501a15a7af7903ef99282748b52a5a022e. 
Jan 23 19:00:07.943402 containerd[1710]: time="2026-01-23T19:00:07.943285240Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:ebce6dcd-7f7d-4297-82e8-2dd1734f429c,Namespace:default,Attempt:0,} returns sandbox id \"9a9570ef3873055895341044b0eb18501a15a7af7903ef99282748b52a5a022e\"" Jan 23 19:00:07.944976 containerd[1710]: time="2026-01-23T19:00:07.944955868Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 23 19:00:08.228776 kubelet[2311]: E0123 19:00:08.228643 2311 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:00:09.068263 containerd[1710]: time="2026-01-23T19:00:09.068205742Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:00:09.070693 containerd[1710]: time="2026-01-23T19:00:09.070656763Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Jan 23 19:00:09.072706 containerd[1710]: time="2026-01-23T19:00:09.072677664Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:94a30ed18e45363486a55379bd1fbb8c479f36524ae816779f3553afe6b787ed\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:32c5137cb8c7cf61e75836f150e983b9be21fecc642ada89fd936c8cd6c0faa0\", size \"63836358\" in 1.127599542s" Jan 23 19:00:09.072820 containerd[1710]: time="2026-01-23T19:00:09.072709846Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:94a30ed18e45363486a55379bd1fbb8c479f36524ae816779f3553afe6b787ed\"" Jan 23 19:00:09.078958 containerd[1710]: time="2026-01-23T19:00:09.078927887Z" level=info msg="CreateContainer within sandbox \"9a9570ef3873055895341044b0eb18501a15a7af7903ef99282748b52a5a022e\" for container &ContainerMetadata{Name:test,Attempt:0,}" Jan 23 19:00:09.099509 containerd[1710]: 
time="2026-01-23T19:00:09.096743809Z" level=info msg="Container b9126066ef4a4a08a75396c07135909dfcac80412d69ca4930630150712fb6cb: CDI devices from CRI Config.CDIDevices: []" Jan 23 19:00:09.114259 containerd[1710]: time="2026-01-23T19:00:09.114214016Z" level=info msg="CreateContainer within sandbox \"9a9570ef3873055895341044b0eb18501a15a7af7903ef99282748b52a5a022e\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"b9126066ef4a4a08a75396c07135909dfcac80412d69ca4930630150712fb6cb\"" Jan 23 19:00:09.114810 containerd[1710]: time="2026-01-23T19:00:09.114734587Z" level=info msg="StartContainer for \"b9126066ef4a4a08a75396c07135909dfcac80412d69ca4930630150712fb6cb\"" Jan 23 19:00:09.116162 containerd[1710]: time="2026-01-23T19:00:09.116125758Z" level=info msg="connecting to shim b9126066ef4a4a08a75396c07135909dfcac80412d69ca4930630150712fb6cb" address="unix:///run/containerd/s/e98e084f953511888b2ac08f855adb0834e6a89234f0832ae7e1f1e788b38c92" protocol=ttrpc version=3 Jan 23 19:00:09.138654 systemd[1]: Started cri-containerd-b9126066ef4a4a08a75396c07135909dfcac80412d69ca4930630150712fb6cb.scope - libcontainer container b9126066ef4a4a08a75396c07135909dfcac80412d69ca4930630150712fb6cb. 
Jan 23 19:00:09.164682 containerd[1710]: time="2026-01-23T19:00:09.164593066Z" level=info msg="StartContainer for \"b9126066ef4a4a08a75396c07135909dfcac80412d69ca4930630150712fb6cb\" returns successfully" Jan 23 19:00:09.228809 kubelet[2311]: E0123 19:00:09.228752 2311 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:00:09.588906 systemd-networkd[1339]: cali5ec59c6bf6e: Gained IPv6LL Jan 23 19:00:10.229416 kubelet[2311]: E0123 19:00:10.229373 2311 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:00:11.230266 kubelet[2311]: E0123 19:00:11.230209 2311 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:00:12.231305 kubelet[2311]: E0123 19:00:12.231262 2311 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:00:13.232091 kubelet[2311]: E0123 19:00:13.232023 2311 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:00:14.233145 kubelet[2311]: E0123 19:00:14.233056 2311 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:00:15.233593 kubelet[2311]: E0123 19:00:15.233551 2311 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:00:16.233860 kubelet[2311]: E0123 19:00:16.233811 2311 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:00:16.251586 containerd[1710]: time="2026-01-23T19:00:16.251512209Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 19:00:16.261530 kubelet[2311]: I0123 19:00:16.261458 2311 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="default/test-pod-1" podStartSLOduration=51.1325383 podStartE2EDuration="52.261444912s" podCreationTimestamp="2026-01-23 18:59:24 +0000 UTC" firstStartedPulling="2026-01-23 19:00:07.944479912 +0000 UTC m=+90.301521530" lastFinishedPulling="2026-01-23 19:00:09.073442236 +0000 UTC m=+91.430428142" observedRunningTime="2026-01-23 19:00:09.447858503 +0000 UTC m=+91.804844407" watchObservedRunningTime="2026-01-23 19:00:16.261444912 +0000 UTC m=+98.618430818" Jan 23 19:00:16.520235 containerd[1710]: time="2026-01-23T19:00:16.520091368Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 19:00:16.522531 containerd[1710]: time="2026-01-23T19:00:16.522453468Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 19:00:16.522531 containerd[1710]: time="2026-01-23T19:00:16.522500243Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 23 19:00:16.522786 kubelet[2311]: E0123 19:00:16.522740 2311 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 19:00:16.522832 kubelet[2311]: E0123 19:00:16.522811 2311 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 19:00:16.523234 kubelet[2311]: E0123 
19:00:16.522969 2311 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-56gf5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-j8n9m_calico-system(2b33e19f-e45d-4033-97dc-1891d2a04f54): 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 19:00:16.525197 containerd[1710]: time="2026-01-23T19:00:16.525152476Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 19:00:17.234421 kubelet[2311]: E0123 19:00:17.234367 2311 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:00:17.493883 containerd[1710]: time="2026-01-23T19:00:17.493742555Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 19:00:17.497531 containerd[1710]: time="2026-01-23T19:00:17.497422495Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 19:00:17.497531 containerd[1710]: time="2026-01-23T19:00:17.497450492Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 23 19:00:17.497736 kubelet[2311]: E0123 19:00:17.497699 2311 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 19:00:17.497782 kubelet[2311]: E0123 19:00:17.497749 2311 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 19:00:17.497912 kubelet[2311]: E0123 19:00:17.497878 2311 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-56gf5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,S
eccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-j8n9m_calico-system(2b33e19f-e45d-4033-97dc-1891d2a04f54): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 19:00:17.499251 kubelet[2311]: E0123 19:00:17.499224 2311 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-j8n9m" podUID="2b33e19f-e45d-4033-97dc-1891d2a04f54" Jan 23 19:00:18.164447 kubelet[2311]: E0123 19:00:18.164381 2311 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:00:18.234994 kubelet[2311]: E0123 19:00:18.234954 2311 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 19:00:19.235503 kubelet[2311]: E0123 
19:00:19.235445 2311 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"