Jan 23 01:10:24.893413 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Thu Jan 22 22:22:03 -00 2026
Jan 23 01:10:24.893453 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=e8d7116310bea9a494780b8becdce41e7cc03ed509d8e2363e08981a47b3edc6
Jan 23 01:10:24.893472 kernel: BIOS-provided physical RAM map:
Jan 23 01:10:24.895546 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 23 01:10:24.895562 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable
Jan 23 01:10:24.895573 kernel: BIOS-e820: [mem 0x00000000786ce000-0x000000007894dfff] reserved
Jan 23 01:10:24.895588 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
Jan 23 01:10:24.895601 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
Jan 23 01:10:24.895614 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable
Jan 23 01:10:24.895626 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
Jan 23 01:10:24.895639 kernel: NX (Execute Disable) protection: active
Jan 23 01:10:24.895656 kernel: APIC: Static calls initialized
Jan 23 01:10:24.895668 kernel: e820: update [mem 0x768c0018-0x768c8e57] usable ==> usable
Jan 23 01:10:24.895680 kernel: extended physical RAM map:
Jan 23 01:10:24.895696 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 23 01:10:24.895710 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000768c0017] usable
Jan 23 01:10:24.895727 kernel: reserve setup_data: [mem 0x00000000768c0018-0x00000000768c8e57] usable
Jan 23 01:10:24.895740 kernel: reserve setup_data: [mem 0x00000000768c8e58-0x00000000786cdfff] usable
Jan 23 01:10:24.895754 kernel: reserve setup_data: [mem 0x00000000786ce000-0x000000007894dfff] reserved
Jan 23 01:10:24.895768 kernel: reserve setup_data: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
Jan 23 01:10:24.895783 kernel: reserve setup_data: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
Jan 23 01:10:24.895797 kernel: reserve setup_data: [mem 0x00000000789de000-0x000000007c97bfff] usable
Jan 23 01:10:24.895812 kernel: reserve setup_data: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
Jan 23 01:10:24.895827 kernel: efi: EFI v2.7 by EDK II
Jan 23 01:10:24.895841 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x77015518
Jan 23 01:10:24.895855 kernel: secureboot: Secure boot disabled
Jan 23 01:10:24.895868 kernel: SMBIOS 2.7 present.
Jan 23 01:10:24.895885 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Jan 23 01:10:24.895899 kernel: DMI: Memory slots populated: 1/1
Jan 23 01:10:24.895911 kernel: Hypervisor detected: KVM
Jan 23 01:10:24.895924 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000
Jan 23 01:10:24.895938 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 23 01:10:24.895952 kernel: kvm-clock: using sched offset of 5279501356 cycles
Jan 23 01:10:24.895968 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 23 01:10:24.895983 kernel: tsc: Detected 2499.998 MHz processor
Jan 23 01:10:24.895999 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 23 01:10:24.896014 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 23 01:10:24.896032 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000
Jan 23 01:10:24.896047 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jan 23 01:10:24.896062 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 23 01:10:24.896083 kernel: Using GB pages for direct mapping
Jan 23 01:10:24.896099 kernel: ACPI: Early table checksum verification disabled
Jan 23 01:10:24.896115 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON)
Jan 23 01:10:24.896131 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013)
Jan 23 01:10:24.896150 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Jan 23 01:10:24.896166 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Jan 23 01:10:24.896182 kernel: ACPI: FACS 0x00000000789D0000 000040
Jan 23 01:10:24.896198 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Jan 23 01:10:24.896213 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Jan 23 01:10:24.896229 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Jan 23 01:10:24.896245 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Jan 23 01:10:24.896262 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Jan 23 01:10:24.896280 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Jan 23 01:10:24.896296 kernel: ACPI: SSDT 0x0000000078952000 00007F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Jan 23 01:10:24.896312 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013)
Jan 23 01:10:24.896327 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113]
Jan 23 01:10:24.896343 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159]
Jan 23 01:10:24.896359 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f]
Jan 23 01:10:24.896373 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027]
Jan 23 01:10:24.896388 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b]
Jan 23 01:10:24.896406 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075]
Jan 23 01:10:24.896421 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f]
Jan 23 01:10:24.896436 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037]
Jan 23 01:10:24.896451 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758]
Jan 23 01:10:24.896466 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x7895207e]
Jan 23 01:10:24.896520 kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037]
Jan 23 01:10:24.896533 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Jan 23 01:10:24.896546 kernel: NUMA: Initialized distance table, cnt=1
Jan 23 01:10:24.896558 kernel: NODE_DATA(0) allocated [mem 0x7a8eedc0-0x7a8f5fff]
Jan 23 01:10:24.896569 kernel: Zone ranges:
Jan 23 01:10:24.896584 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 23 01:10:24.896596 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff]
Jan 23 01:10:24.896608 kernel: Normal empty
Jan 23 01:10:24.896619 kernel: Device empty
Jan 23 01:10:24.896633 kernel: Movable zone start for each node
Jan 23 01:10:24.896648 kernel: Early memory node ranges
Jan 23 01:10:24.896661 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jan 23 01:10:24.896675 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff]
Jan 23 01:10:24.896689 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff]
Jan 23 01:10:24.896706 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff]
Jan 23 01:10:24.896720 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 23 01:10:24.896734 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jan 23 01:10:24.896749 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Jan 23 01:10:24.896763 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges
Jan 23 01:10:24.896778 kernel: ACPI: PM-Timer IO Port: 0xb008
Jan 23 01:10:24.896792 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 23 01:10:24.896806 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Jan 23 01:10:24.896819 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 23 01:10:24.896836 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 23 01:10:24.896850 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 23 01:10:24.896864 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 23 01:10:24.896878 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 23 01:10:24.896892 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 23 01:10:24.896906 kernel: TSC deadline timer available
Jan 23 01:10:24.896920 kernel: CPU topo: Max. logical packages: 1
Jan 23 01:10:24.896935 kernel: CPU topo: Max. logical dies: 1
Jan 23 01:10:24.896949 kernel: CPU topo: Max. dies per package: 1
Jan 23 01:10:24.896963 kernel: CPU topo: Max. threads per core: 2
Jan 23 01:10:24.896980 kernel: CPU topo: Num. cores per package: 1
Jan 23 01:10:24.896994 kernel: CPU topo: Num. threads per package: 2
Jan 23 01:10:24.897008 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Jan 23 01:10:24.897022 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 23 01:10:24.897036 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices
Jan 23 01:10:24.897050 kernel: Booting paravirtualized kernel on KVM
Jan 23 01:10:24.897064 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 23 01:10:24.897078 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 23 01:10:24.897093 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Jan 23 01:10:24.897110 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Jan 23 01:10:24.897124 kernel: pcpu-alloc: [0] 0 1
Jan 23 01:10:24.897138 kernel: kvm-guest: PV spinlocks enabled
Jan 23 01:10:24.897152 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 23 01:10:24.897170 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=e8d7116310bea9a494780b8becdce41e7cc03ed509d8e2363e08981a47b3edc6
Jan 23 01:10:24.897184 kernel: random: crng init done
Jan 23 01:10:24.897198 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 23 01:10:24.897213 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 23 01:10:24.897229 kernel: Fallback order for Node 0: 0
Jan 23 01:10:24.897241 kernel: Built 1 zonelists, mobility grouping on. Total pages: 509451
Jan 23 01:10:24.897253 kernel: Policy zone: DMA32
Jan 23 01:10:24.897275 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 23 01:10:24.897290 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 23 01:10:24.897303 kernel: Kernel/User page tables isolation: enabled
Jan 23 01:10:24.897317 kernel: ftrace: allocating 40097 entries in 157 pages
Jan 23 01:10:24.897331 kernel: ftrace: allocated 157 pages with 5 groups
Jan 23 01:10:24.897344 kernel: Dynamic Preempt: voluntary
Jan 23 01:10:24.897357 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 23 01:10:24.897371 kernel: rcu: RCU event tracing is enabled.
Jan 23 01:10:24.897387 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 23 01:10:24.897406 kernel: Trampoline variant of Tasks RCU enabled.
Jan 23 01:10:24.897421 kernel: Rude variant of Tasks RCU enabled.
Jan 23 01:10:24.897436 kernel: Tracing variant of Tasks RCU enabled.
Jan 23 01:10:24.897449 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 23 01:10:24.897462 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 23 01:10:24.899520 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 23 01:10:24.899545 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 23 01:10:24.899562 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 23 01:10:24.899579 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jan 23 01:10:24.899595 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 23 01:10:24.899611 kernel: Console: colour dummy device 80x25
Jan 23 01:10:24.899626 kernel: printk: legacy console [tty0] enabled
Jan 23 01:10:24.899642 kernel: printk: legacy console [ttyS0] enabled
Jan 23 01:10:24.899658 kernel: ACPI: Core revision 20240827
Jan 23 01:10:24.899679 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Jan 23 01:10:24.899696 kernel: APIC: Switch to symmetric I/O mode setup
Jan 23 01:10:24.899711 kernel: x2apic enabled
Jan 23 01:10:24.899727 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 23 01:10:24.899744 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Jan 23 01:10:24.899759 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998)
Jan 23 01:10:24.899775 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Jan 23 01:10:24.899791 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Jan 23 01:10:24.899807 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 23 01:10:24.899825 kernel: Spectre V2 : Mitigation: Retpolines
Jan 23 01:10:24.899840 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 23 01:10:24.899856 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Jan 23 01:10:24.899872 kernel: RETBleed: Vulnerable
Jan 23 01:10:24.899887 kernel: Speculative Store Bypass: Vulnerable
Jan 23 01:10:24.899903 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 23 01:10:24.899918 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 23 01:10:24.899934 kernel: GDS: Unknown: Dependent on hypervisor status
Jan 23 01:10:24.899949 kernel: active return thunk: its_return_thunk
Jan 23 01:10:24.899965 kernel: ITS: Mitigation: Aligned branch/return thunks
Jan 23 01:10:24.899980 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 23 01:10:24.899999 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 23 01:10:24.900014 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 23 01:10:24.900030 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Jan 23 01:10:24.900046 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Jan 23 01:10:24.900061 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Jan 23 01:10:24.900077 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Jan 23 01:10:24.900093 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Jan 23 01:10:24.900108 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Jan 23 01:10:24.900124 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 23 01:10:24.900139 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Jan 23 01:10:24.900155 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Jan 23 01:10:24.900173 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Jan 23 01:10:24.900189 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Jan 23 01:10:24.900203 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Jan 23 01:10:24.900216 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Jan 23 01:10:24.900230 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Jan 23 01:10:24.900245 kernel: Freeing SMP alternatives memory: 32K
Jan 23 01:10:24.900260 kernel: pid_max: default: 32768 minimum: 301
Jan 23 01:10:24.900276 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jan 23 01:10:24.900293 kernel: landlock: Up and running.
Jan 23 01:10:24.900309 kernel: SELinux: Initializing.
Jan 23 01:10:24.900326 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 23 01:10:24.900345 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 23 01:10:24.900362 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8175M CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x4)
Jan 23 01:10:24.900378 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Jan 23 01:10:24.900395 kernel: signal: max sigframe size: 3632
Jan 23 01:10:24.900412 kernel: rcu: Hierarchical SRCU implementation.
Jan 23 01:10:24.900430 kernel: rcu: Max phase no-delay instances is 400.
Jan 23 01:10:24.900447 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jan 23 01:10:24.900464 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 23 01:10:24.900500 kernel: smp: Bringing up secondary CPUs ...
Jan 23 01:10:24.900517 kernel: smpboot: x86: Booting SMP configuration:
Jan 23 01:10:24.900538 kernel: .... node #0, CPUs: #1
Jan 23 01:10:24.900555 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Jan 23 01:10:24.900573 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Jan 23 01:10:24.900589 kernel: smp: Brought up 1 node, 2 CPUs
Jan 23 01:10:24.900606 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS)
Jan 23 01:10:24.900622 kernel: Memory: 1899856K/2037804K available (14336K kernel code, 2445K rwdata, 26064K rodata, 46196K init, 2564K bss, 133384K reserved, 0K cma-reserved)
Jan 23 01:10:24.900636 kernel: devtmpfs: initialized
Jan 23 01:10:24.900649 kernel: x86/mm: Memory block size: 128MB
Jan 23 01:10:24.900665 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes)
Jan 23 01:10:24.900680 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 23 01:10:24.900693 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 23 01:10:24.900707 kernel: pinctrl core: initialized pinctrl subsystem
Jan 23 01:10:24.900721 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 23 01:10:24.900736 kernel: audit: initializing netlink subsys (disabled)
Jan 23 01:10:24.900750 kernel: audit: type=2000 audit(1769130622.618:1): state=initialized audit_enabled=0 res=1
Jan 23 01:10:24.900764 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 23 01:10:24.900779 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 23 01:10:24.900798 kernel: cpuidle: using governor menu
Jan 23 01:10:24.900813 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 23 01:10:24.900828 kernel: dca service started, version 1.12.1
Jan 23 01:10:24.900891 kernel: PCI: Using configuration type 1 for base access
Jan 23 01:10:24.900989 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 23 01:10:24.901042 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 23 01:10:24.901057 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 23 01:10:24.901072 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 23 01:10:24.901087 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 23 01:10:24.901107 kernel: ACPI: Added _OSI(Module Device)
Jan 23 01:10:24.901122 kernel: ACPI: Added _OSI(Processor Device)
Jan 23 01:10:24.901139 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 23 01:10:24.901155 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Jan 23 01:10:24.901171 kernel: ACPI: Interpreter enabled
Jan 23 01:10:24.901187 kernel: ACPI: PM: (supports S0 S5)
Jan 23 01:10:24.901203 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 23 01:10:24.901219 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 23 01:10:24.901236 kernel: PCI: Using E820 reservations for host bridge windows
Jan 23 01:10:24.901255 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jan 23 01:10:24.901273 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 23 01:10:24.905614 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jan 23 01:10:24.905857 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Jan 23 01:10:24.906001 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Jan 23 01:10:24.906020 kernel: acpiphp: Slot [3] registered
Jan 23 01:10:24.906036 kernel: acpiphp: Slot [4] registered
Jan 23 01:10:24.906056 kernel: acpiphp: Slot [5] registered
Jan 23 01:10:24.906081 kernel: acpiphp: Slot [6] registered
Jan 23 01:10:24.906094 kernel: acpiphp: Slot [7] registered
Jan 23 01:10:24.906108 kernel: acpiphp: Slot [8] registered
Jan 23 01:10:24.906121 kernel: acpiphp: Slot [9] registered
Jan 23 01:10:24.906134 kernel: acpiphp: Slot [10] registered
Jan 23 01:10:24.906147 kernel: acpiphp: Slot [11] registered
Jan 23 01:10:24.906160 kernel: acpiphp: Slot [12] registered
Jan 23 01:10:24.906175 kernel: acpiphp: Slot [13] registered
Jan 23 01:10:24.906189 kernel: acpiphp: Slot [14] registered
Jan 23 01:10:24.906207 kernel: acpiphp: Slot [15] registered
Jan 23 01:10:24.906220 kernel: acpiphp: Slot [16] registered
Jan 23 01:10:24.906234 kernel: acpiphp: Slot [17] registered
Jan 23 01:10:24.906248 kernel: acpiphp: Slot [18] registered
Jan 23 01:10:24.906262 kernel: acpiphp: Slot [19] registered
Jan 23 01:10:24.906275 kernel: acpiphp: Slot [20] registered
Jan 23 01:10:24.906288 kernel: acpiphp: Slot [21] registered
Jan 23 01:10:24.906304 kernel: acpiphp: Slot [22] registered
Jan 23 01:10:24.906319 kernel: acpiphp: Slot [23] registered
Jan 23 01:10:24.906337 kernel: acpiphp: Slot [24] registered
Jan 23 01:10:24.906353 kernel: acpiphp: Slot [25] registered
Jan 23 01:10:24.906368 kernel: acpiphp: Slot [26] registered
Jan 23 01:10:24.906383 kernel: acpiphp: Slot [27] registered
Jan 23 01:10:24.906399 kernel: acpiphp: Slot [28] registered
Jan 23 01:10:24.906415 kernel: acpiphp: Slot [29] registered
Jan 23 01:10:24.906431 kernel: acpiphp: Slot [30] registered
Jan 23 01:10:24.906446 kernel: acpiphp: Slot [31] registered
Jan 23 01:10:24.906460 kernel: PCI host bridge to bus 0000:00
Jan 23 01:10:24.906637 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 23 01:10:24.906763 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 23 01:10:24.906882 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 23 01:10:24.909769 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Jan 23 01:10:24.909928 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window]
Jan 23 01:10:24.910051 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 23 01:10:24.910330 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Jan 23 01:10:24.915333 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Jan 23 01:10:24.916392 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000 conventional PCI endpoint
Jan 23 01:10:24.916593 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Jan 23 01:10:24.916746 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Jan 23 01:10:24.916887 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Jan 23 01:10:24.917019 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Jan 23 01:10:24.917157 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Jan 23 01:10:24.917300 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Jan 23 01:10:24.917446 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Jan 23 01:10:24.917634 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000 conventional PCI endpoint
Jan 23 01:10:24.917770 kernel: pci 0000:00:03.0: BAR 0 [mem 0x80000000-0x803fffff pref]
Jan 23 01:10:24.917901 kernel: pci 0000:00:03.0: ROM [mem 0xffff0000-0xffffffff pref]
Jan 23 01:10:24.918039 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 23 01:10:24.918189 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 PCIe Endpoint
Jan 23 01:10:24.918310 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80404000-0x80407fff]
Jan 23 01:10:24.918440 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 PCIe Endpoint
Jan 23 01:10:24.923687 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80400000-0x80403fff]
Jan 23 01:10:24.923731 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 23 01:10:24.923748 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 23 01:10:24.923764 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 23 01:10:24.923787 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 23 01:10:24.923803 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 23 01:10:24.923819 kernel: iommu: Default domain type: Translated
Jan 23 01:10:24.923834 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 23 01:10:24.923850 kernel: efivars: Registered efivars operations
Jan 23 01:10:24.923866 kernel: PCI: Using ACPI for IRQ routing
Jan 23 01:10:24.923881 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 23 01:10:24.923897 kernel: e820: reserve RAM buffer [mem 0x768c0018-0x77ffffff]
Jan 23 01:10:24.923912 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff]
Jan 23 01:10:24.923930 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff]
Jan 23 01:10:24.924078 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Jan 23 01:10:24.924215 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Jan 23 01:10:24.924350 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 23 01:10:24.924371 kernel: vgaarb: loaded
Jan 23 01:10:24.924387 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Jan 23 01:10:24.924402 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Jan 23 01:10:24.924418 kernel: clocksource: Switched to clocksource kvm-clock
Jan 23 01:10:24.924437 kernel: VFS: Disk quotas dquot_6.6.0
Jan 23 01:10:24.924453 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 23 01:10:24.924469 kernel: pnp: PnP ACPI init
Jan 23 01:10:24.924500 kernel: pnp: PnP ACPI: found 5 devices
Jan 23 01:10:24.924516 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 23 01:10:24.924532 kernel: NET: Registered PF_INET protocol family
Jan 23 01:10:24.924547 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 23 01:10:24.924563 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jan 23 01:10:24.924580 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 23 01:10:24.924599 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 23 01:10:24.924613 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jan 23 01:10:24.924627 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jan 23 01:10:24.924641 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 23 01:10:24.924655 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 23 01:10:24.924670 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 23 01:10:24.924685 kernel: NET: Registered PF_XDP protocol family
Jan 23 01:10:24.924809 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 23 01:10:24.924923 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 23 01:10:24.925038 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 23 01:10:24.925149 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Jan 23 01:10:24.925276 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window]
Jan 23 01:10:24.925405 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 23 01:10:24.925423 kernel: PCI: CLS 0 bytes, default 64
Jan 23 01:10:24.925439 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jan 23 01:10:24.925455 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Jan 23 01:10:24.925470 kernel: clocksource: Switched to clocksource tsc
Jan 23 01:10:24.925527 kernel: Initialise system trusted keyrings
Jan 23 01:10:24.925540 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Jan 23 01:10:24.925553 kernel: Key type asymmetric registered
Jan 23 01:10:24.925565 kernel: Asymmetric key parser 'x509' registered
Jan 23 01:10:24.925578 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 23 01:10:24.925592 kernel: io scheduler mq-deadline registered
Jan 23 01:10:24.925606 kernel: io scheduler kyber registered
Jan 23 01:10:24.925620 kernel: io scheduler bfq registered
Jan 23 01:10:24.925636 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 23 01:10:24.925653 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 23 01:10:24.925666 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 23 01:10:24.925679 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 23 01:10:24.925693 kernel: i8042: Warning: Keylock active
Jan 23 01:10:24.925707 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 23 01:10:24.925720 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 23 01:10:24.925877 kernel: rtc_cmos 00:00: RTC can wake from S4
Jan 23 01:10:24.926000 kernel: rtc_cmos 00:00: registered as rtc0
Jan 23 01:10:24.926226 kernel: rtc_cmos 00:00: setting system clock to 2026-01-23T01:10:24 UTC (1769130624)
Jan 23 01:10:24.926509 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Jan 23 01:10:24.926557 kernel: intel_pstate: CPU model not supported
Jan 23 01:10:24.926576 kernel: efifb: probing for efifb
Jan 23 01:10:24.926591 kernel: efifb: framebuffer at 0x80000000, using 1876k, total 1875k
Jan 23 01:10:24.926605 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1
Jan 23 01:10:24.926620 kernel: efifb: scrolling: redraw
Jan 23 01:10:24.926636 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jan 23 01:10:24.926653 kernel: Console: switching to colour frame buffer device 100x37
Jan 23 01:10:24.926671 kernel: fb0: EFI VGA frame buffer device
Jan 23 01:10:24.929537 kernel: pstore: Using crash dump compression: deflate
Jan 23 01:10:24.929566 kernel: pstore: Registered efi_pstore as persistent store backend
Jan 23 01:10:24.929582 kernel: NET: Registered PF_INET6 protocol family
Jan 23 01:10:24.929598 kernel: Segment Routing with IPv6
Jan 23 01:10:24.929614 kernel: In-situ OAM (IOAM) with IPv6
Jan 23 01:10:24.929630 kernel: NET: Registered PF_PACKET protocol family
Jan 23 01:10:24.929645 kernel: Key type dns_resolver registered
Jan 23 01:10:24.929660 kernel: IPI shorthand broadcast: enabled
Jan 23 01:10:24.929681 kernel: sched_clock: Marking stable (2673002856, 213304334)->(3043398668, -157091478)
Jan 23 01:10:24.929696 kernel: registered taskstats version 1
Jan 23 01:10:24.929711 kernel: Loading compiled-in X.509 certificates
Jan 23 01:10:24.929727 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: ed54f39d0282729985c39b8ffa9938cacff38d8a'
Jan 23 01:10:24.929742 kernel: Demotion targets for Node 0: null
Jan 23 01:10:24.929757 kernel: Key type .fscrypt registered
Jan 23 01:10:24.929772 kernel: Key type fscrypt-provisioning registered
Jan 23 01:10:24.929786 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 23 01:10:24.929802 kernel: ima: Allocated hash algorithm: sha1
Jan 23 01:10:24.929819 kernel: ima: No architecture policies found
Jan 23 01:10:24.929834 kernel: clk: Disabling unused clocks
Jan 23 01:10:24.929850 kernel: Warning: unable to open an initial console.
Jan 23 01:10:24.929865 kernel: Freeing unused kernel image (initmem) memory: 46196K
Jan 23 01:10:24.929880 kernel: Write protecting the kernel read-only data: 40960k
Jan 23 01:10:24.929898 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K
Jan 23 01:10:24.929916 kernel: Run /init as init process
Jan 23 01:10:24.929931 kernel: with arguments:
Jan 23 01:10:24.929946 kernel: /init
Jan 23 01:10:24.929961 kernel: with environment:
Jan 23 01:10:24.929979 kernel: HOME=/
Jan 23 01:10:24.929994 kernel: TERM=linux
Jan 23 01:10:24.930011 systemd[1]: Successfully made /usr/ read-only.
Jan 23 01:10:24.930032 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 23 01:10:24.930051 systemd[1]: Detected virtualization amazon.
Jan 23 01:10:24.930078 systemd[1]: Detected architecture x86-64.
Jan 23 01:10:24.930094 systemd[1]: Running in initrd.
Jan 23 01:10:24.930109 systemd[1]: No hostname configured, using default hostname.
Jan 23 01:10:24.930125 systemd[1]: Hostname set to .
Jan 23 01:10:24.930141 systemd[1]: Initializing machine ID from VM UUID.
Jan 23 01:10:24.930156 systemd[1]: Queued start job for default target initrd.target.
Jan 23 01:10:24.930174 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 23 01:10:24.930190 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 23 01:10:24.930208 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 23 01:10:24.930224 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 23 01:10:24.930239 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 23 01:10:24.930256 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 23 01:10:24.930274 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 23 01:10:24.930293 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 23 01:10:24.930309 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 23 01:10:24.930324 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 23 01:10:24.930340 systemd[1]: Reached target paths.target - Path Units.
Jan 23 01:10:24.930357 systemd[1]: Reached target slices.target - Slice Units.
Jan 23 01:10:24.930372 systemd[1]: Reached target swap.target - Swaps.
Jan 23 01:10:24.930388 systemd[1]: Reached target timers.target - Timer Units.
Jan 23 01:10:24.930405 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 23 01:10:24.930421 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 23 01:10:24.930440 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 23 01:10:24.930456 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jan 23 01:10:24.930472 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 23 01:10:24.930592 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 23 01:10:24.930608 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 23 01:10:24.930624 systemd[1]: Reached target sockets.target - Socket Units.
Jan 23 01:10:24.930640 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 23 01:10:24.930655 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 23 01:10:24.930676 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 23 01:10:24.930693 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jan 23 01:10:24.930710 systemd[1]: Starting systemd-fsck-usr.service...
Jan 23 01:10:24.930727 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 23 01:10:24.930745 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 23 01:10:24.930763 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 01:10:24.930781 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 23 01:10:24.930804 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 23 01:10:24.930822 systemd[1]: Finished systemd-fsck-usr.service.
Jan 23 01:10:24.930883 systemd-journald[187]: Collecting audit messages is disabled.
Jan 23 01:10:24.930929 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 23 01:10:24.930948 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 01:10:24.930967 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 23 01:10:24.930988 systemd-journald[187]: Journal started
Jan 23 01:10:24.931024 systemd-journald[187]: Runtime Journal (/run/log/journal/ec299804e1c7c31b773be9b957ddf576) is 4.7M, max 38.1M, 33.3M free.
Jan 23 01:10:24.935527 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 23 01:10:24.939012 systemd-modules-load[189]: Inserted module 'overlay'
Jan 23 01:10:24.942674 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 23 01:10:24.946593 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 23 01:10:24.954850 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 23 01:10:24.963553 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 23 01:10:24.966693 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 23 01:10:24.998526 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 23 01:10:24.998819 systemd-tmpfiles[207]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jan 23 01:10:25.006589 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 23 01:10:25.001845 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 23 01:10:25.011810 systemd-modules-load[189]: Inserted module 'br_netfilter'
Jan 23 01:10:25.013596 kernel: Bridge firewalling registered
Jan 23 01:10:25.013890 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 23 01:10:25.019365 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 23 01:10:25.021410 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 23 01:10:25.029311 dracut-cmdline[217]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=e8d7116310bea9a494780b8becdce41e7cc03ed509d8e2363e08981a47b3edc6
Jan 23 01:10:25.044150 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 23 01:10:25.049788 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 23 01:10:25.113886 systemd-resolved[255]: Positive Trust Anchors:
Jan 23 01:10:25.113902 systemd-resolved[255]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 23 01:10:25.113965 systemd-resolved[255]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 23 01:10:25.122381 systemd-resolved[255]: Defaulting to hostname 'linux'.
Jan 23 01:10:25.125617 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 23 01:10:25.127077 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 23 01:10:25.142515 kernel: SCSI subsystem initialized
Jan 23 01:10:25.153519 kernel: Loading iSCSI transport class v2.0-870.
Jan 23 01:10:25.164510 kernel: iscsi: registered transport (tcp)
Jan 23 01:10:25.187747 kernel: iscsi: registered transport (qla4xxx)
Jan 23 01:10:25.187830 kernel: QLogic iSCSI HBA Driver
Jan 23 01:10:25.208608 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 23 01:10:25.226205 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 23 01:10:25.229964 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 23 01:10:25.278408 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 23 01:10:25.280939 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 23 01:10:25.337519 kernel: raid6: avx512x4 gen() 17764 MB/s
Jan 23 01:10:25.355509 kernel: raid6: avx512x2 gen() 17712 MB/s
Jan 23 01:10:25.373511 kernel: raid6: avx512x1 gen() 17668 MB/s
Jan 23 01:10:25.391507 kernel: raid6: avx2x4 gen() 17608 MB/s
Jan 23 01:10:25.409519 kernel: raid6: avx2x2 gen() 17570 MB/s
Jan 23 01:10:25.428760 kernel: raid6: avx2x1 gen() 13382 MB/s
Jan 23 01:10:25.428833 kernel: raid6: using algorithm avx512x4 gen() 17764 MB/s
Jan 23 01:10:25.448752 kernel: raid6: .... xor() 7331 MB/s, rmw enabled
Jan 23 01:10:25.448847 kernel: raid6: using avx512x2 recovery algorithm
Jan 23 01:10:25.471523 kernel: xor: automatically using best checksumming function avx
Jan 23 01:10:25.643519 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 23 01:10:25.650458 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 23 01:10:25.652858 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 23 01:10:25.683896 systemd-udevd[436]: Using default interface naming scheme 'v255'.
Jan 23 01:10:25.690656 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 23 01:10:25.694292 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 23 01:10:25.719555 dracut-pre-trigger[443]: rd.md=0: removing MD RAID activation
Jan 23 01:10:25.749334 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 23 01:10:25.751816 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 23 01:10:25.812802 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 23 01:10:25.815997 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 23 01:10:25.932504 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input2
Jan 23 01:10:25.940504 kernel: cryptd: max_cpu_qlen set to 1000
Jan 23 01:10:25.944874 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 23 01:10:25.946001 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 01:10:25.949398 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 01:10:25.953117 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 01:10:25.965670 kernel: nvme nvme0: pci function 0000:00:04.0
Jan 23 01:10:25.965910 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jan 23 01:10:25.972621 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jan 23 01:10:26.015710 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Jan 23 01:10:26.015917 kernel: ena 0000:00:05.0: ENA device version: 0.10
Jan 23 01:10:26.016087 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Jan 23 01:10:26.016235 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Jan 23 01:10:26.016389 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:74:66:d9:72:25
Jan 23 01:10:26.017644 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 23 01:10:26.017671 kernel: GPT:9289727 != 33554431
Jan 23 01:10:26.017690 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 23 01:10:26.017711 kernel: GPT:9289727 != 33554431
Jan 23 01:10:26.017730 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 23 01:10:26.017747 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 23 01:10:26.013063 (udev-worker)[493]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 01:10:26.024354 kernel: AES CTR mode by8 optimization enabled
Jan 23 01:10:26.015317 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 23 01:10:26.015446 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 01:10:26.019325 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jan 23 01:10:26.025394 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 01:10:26.084709 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 01:10:26.085569 kernel: nvme nvme0: using unchecked data buffer
Jan 23 01:10:26.207015 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Jan 23 01:10:26.237417 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Jan 23 01:10:26.238610 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 23 01:10:26.259104 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Jan 23 01:10:26.268628 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Jan 23 01:10:26.269325 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Jan 23 01:10:26.270886 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 23 01:10:26.272038 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 23 01:10:26.273194 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 23 01:10:26.275207 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 23 01:10:26.279716 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 23 01:10:26.295781 disk-uuid[669]: Primary Header is updated.
Jan 23 01:10:26.295781 disk-uuid[669]: Secondary Entries is updated.
Jan 23 01:10:26.295781 disk-uuid[669]: Secondary Header is updated.
Jan 23 01:10:26.304614 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 23 01:10:26.304371 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 23 01:10:26.318506 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 23 01:10:27.327148 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 23 01:10:27.329004 disk-uuid[674]: The operation has completed successfully.
Jan 23 01:10:27.453867 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 23 01:10:27.453971 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 23 01:10:27.481798 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 23 01:10:27.495418 sh[935]: Success
Jan 23 01:10:27.515841 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 23 01:10:27.515931 kernel: device-mapper: uevent: version 1.0.3
Jan 23 01:10:27.519182 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jan 23 01:10:27.531518 kernel: device-mapper: verity: sha256 using shash "sha256-avx2"
Jan 23 01:10:27.638191 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 23 01:10:27.642572 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 23 01:10:27.655078 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 23 01:10:27.678557 kernel: BTRFS: device fsid f8eb2396-46b8-49a3-a8e7-cd8ad10a3ce4 devid 1 transid 37 /dev/mapper/usr (254:0) scanned by mount (958)
Jan 23 01:10:27.682720 kernel: BTRFS info (device dm-0): first mount of filesystem f8eb2396-46b8-49a3-a8e7-cd8ad10a3ce4
Jan 23 01:10:27.684571 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 23 01:10:27.792014 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jan 23 01:10:27.792086 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 23 01:10:27.792099 kernel: BTRFS info (device dm-0): enabling free space tree
Jan 23 01:10:27.795919 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 23 01:10:27.796819 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Jan 23 01:10:27.797326 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 23 01:10:27.798629 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 23 01:10:27.799993 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 23 01:10:27.840519 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (989)
Jan 23 01:10:27.848683 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem a3ccc207-e674-4ba2-b6d8-404b4581ba01
Jan 23 01:10:27.848754 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Jan 23 01:10:27.869787 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 23 01:10:27.869867 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Jan 23 01:10:27.879508 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem a3ccc207-e674-4ba2-b6d8-404b4581ba01
Jan 23 01:10:27.881192 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 23 01:10:27.885642 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 23 01:10:27.929498 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 23 01:10:27.932850 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 23 01:10:27.974353 systemd-networkd[1127]: lo: Link UP
Jan 23 01:10:27.974366 systemd-networkd[1127]: lo: Gained carrier
Jan 23 01:10:27.975548 systemd-networkd[1127]: Enumeration completed
Jan 23 01:10:27.975870 systemd-networkd[1127]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 23 01:10:27.975874 systemd-networkd[1127]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 23 01:10:27.976792 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 23 01:10:27.978005 systemd[1]: Reached target network.target - Network.
Jan 23 01:10:27.978927 systemd-networkd[1127]: eth0: Link UP
Jan 23 01:10:27.978932 systemd-networkd[1127]: eth0: Gained carrier
Jan 23 01:10:27.978945 systemd-networkd[1127]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 23 01:10:27.992594 systemd-networkd[1127]: eth0: DHCPv4 address 172.31.20.240/20, gateway 172.31.16.1 acquired from 172.31.16.1
Jan 23 01:10:28.172596 ignition[1074]: Ignition 2.22.0
Jan 23 01:10:28.172610 ignition[1074]: Stage: fetch-offline
Jan 23 01:10:28.172805 ignition[1074]: no configs at "/usr/lib/ignition/base.d"
Jan 23 01:10:28.172813 ignition[1074]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 23 01:10:28.173110 ignition[1074]: Ignition finished successfully
Jan 23 01:10:28.174987 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 23 01:10:28.176599 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 23 01:10:28.208592 ignition[1137]: Ignition 2.22.0
Jan 23 01:10:28.208616 ignition[1137]: Stage: fetch
Jan 23 01:10:28.209003 ignition[1137]: no configs at "/usr/lib/ignition/base.d"
Jan 23 01:10:28.209015 ignition[1137]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 23 01:10:28.209121 ignition[1137]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 23 01:10:28.221952 ignition[1137]: PUT result: OK
Jan 23 01:10:28.224505 ignition[1137]: parsed url from cmdline: ""
Jan 23 01:10:28.224513 ignition[1137]: no config URL provided
Jan 23 01:10:28.224521 ignition[1137]: reading system config file "/usr/lib/ignition/user.ign"
Jan 23 01:10:28.224533 ignition[1137]: no config at "/usr/lib/ignition/user.ign"
Jan 23 01:10:28.224551 ignition[1137]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 23 01:10:28.225545 ignition[1137]: PUT result: OK
Jan 23 01:10:28.225602 ignition[1137]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Jan 23 01:10:28.226793 ignition[1137]: GET result: OK
Jan 23 01:10:28.226844 ignition[1137]: parsing config with SHA512: b03e47505618b9145be147af926841298f19e6a149c4f1c2872e58e41ac460b2c4dd58783773ef680c24662e6c2af5d997ed2bafc1d878329d7cfae4bd9f5639
Jan 23 01:10:28.229382 unknown[1137]: fetched base config from "system"
Jan 23 01:10:28.229395 unknown[1137]: fetched base config from "system"
Jan 23 01:10:28.229652 ignition[1137]: fetch: fetch complete
Jan 23 01:10:28.229400 unknown[1137]: fetched user config from "aws"
Jan 23 01:10:28.229657 ignition[1137]: fetch: fetch passed
Jan 23 01:10:28.229695 ignition[1137]: Ignition finished successfully
Jan 23 01:10:28.231630 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 23 01:10:28.233497 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 23 01:10:28.265392 ignition[1143]: Ignition 2.22.0
Jan 23 01:10:28.265406 ignition[1143]: Stage: kargs
Jan 23 01:10:28.265718 ignition[1143]: no configs at "/usr/lib/ignition/base.d"
Jan 23 01:10:28.265729 ignition[1143]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 23 01:10:28.265803 ignition[1143]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 23 01:10:28.266996 ignition[1143]: PUT result: OK
Jan 23 01:10:28.269538 ignition[1143]: kargs: kargs passed
Jan 23 01:10:28.269599 ignition[1143]: Ignition finished successfully
Jan 23 01:10:28.271313 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 23 01:10:28.272697 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 23 01:10:28.307934 ignition[1149]: Ignition 2.22.0
Jan 23 01:10:28.307957 ignition[1149]: Stage: disks
Jan 23 01:10:28.308351 ignition[1149]: no configs at "/usr/lib/ignition/base.d"
Jan 23 01:10:28.308363 ignition[1149]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 23 01:10:28.308511 ignition[1149]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 23 01:10:28.309374 ignition[1149]: PUT result: OK
Jan 23 01:10:28.311854 ignition[1149]: disks: disks passed
Jan 23 01:10:28.311928 ignition[1149]: Ignition finished successfully
Jan 23 01:10:28.313745 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 23 01:10:28.314794 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 23 01:10:28.315420 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 23 01:10:28.315799 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 23 01:10:28.316344 systemd[1]: Reached target sysinit.target - System Initialization. Jan 23 01:10:28.316930 systemd[1]: Reached target basic.target - Basic System. Jan 23 01:10:28.318799 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 23 01:10:28.360844 systemd-fsck[1157]: ROOT: clean, 15/553520 files, 52789/553472 blocks Jan 23 01:10:28.364279 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 23 01:10:28.366561 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 23 01:10:28.549523 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 2036722e-4586-420e-8dc7-a3b65e840c36 r/w with ordered data mode. Quota mode: none. Jan 23 01:10:28.549817 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 23 01:10:28.550928 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 23 01:10:28.554385 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 23 01:10:28.557571 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 23 01:10:28.560423 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 23 01:10:28.561718 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 23 01:10:28.562566 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 23 01:10:28.567442 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 23 01:10:28.569964 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 23 01:10:28.587526 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1176) Jan 23 01:10:28.596894 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem a3ccc207-e674-4ba2-b6d8-404b4581ba01 Jan 23 01:10:28.596971 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jan 23 01:10:28.606600 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 23 01:10:28.606668 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Jan 23 01:10:28.609419 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 23 01:10:28.777202 initrd-setup-root[1200]: cut: /sysroot/etc/passwd: No such file or directory Jan 23 01:10:28.796893 initrd-setup-root[1207]: cut: /sysroot/etc/group: No such file or directory Jan 23 01:10:28.804069 initrd-setup-root[1214]: cut: /sysroot/etc/shadow: No such file or directory Jan 23 01:10:28.809722 initrd-setup-root[1221]: cut: /sysroot/etc/gshadow: No such file or directory Jan 23 01:10:29.017640 systemd-networkd[1127]: eth0: Gained IPv6LL Jan 23 01:10:29.071327 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 23 01:10:29.073719 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 23 01:10:29.076646 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 23 01:10:29.092672 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 23 01:10:29.096652 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem a3ccc207-e674-4ba2-b6d8-404b4581ba01 Jan 23 01:10:29.127248 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Jan 23 01:10:29.137390 ignition[1289]: INFO : Ignition 2.22.0 Jan 23 01:10:29.137390 ignition[1289]: INFO : Stage: mount Jan 23 01:10:29.139043 ignition[1289]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 01:10:29.139043 ignition[1289]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 23 01:10:29.139043 ignition[1289]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 23 01:10:29.141733 ignition[1289]: INFO : PUT result: OK Jan 23 01:10:29.144383 ignition[1289]: INFO : mount: mount passed Jan 23 01:10:29.145842 ignition[1289]: INFO : Ignition finished successfully Jan 23 01:10:29.147097 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 23 01:10:29.148737 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 23 01:10:29.171992 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 23 01:10:29.207518 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1300) Jan 23 01:10:29.213030 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem a3ccc207-e674-4ba2-b6d8-404b4581ba01 Jan 23 01:10:29.213109 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jan 23 01:10:29.220712 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 23 01:10:29.220790 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Jan 23 01:10:29.224528 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 23 01:10:29.263937 ignition[1316]: INFO : Ignition 2.22.0 Jan 23 01:10:29.263937 ignition[1316]: INFO : Stage: files Jan 23 01:10:29.265701 ignition[1316]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 01:10:29.265701 ignition[1316]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 23 01:10:29.265701 ignition[1316]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 23 01:10:29.267403 ignition[1316]: INFO : PUT result: OK Jan 23 01:10:29.268213 ignition[1316]: DEBUG : files: compiled without relabeling support, skipping Jan 23 01:10:29.269234 ignition[1316]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 23 01:10:29.269234 ignition[1316]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 23 01:10:29.283172 ignition[1316]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 23 01:10:29.284061 ignition[1316]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 23 01:10:29.284868 ignition[1316]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 23 01:10:29.284525 unknown[1316]: wrote ssh authorized keys file for user: core Jan 23 01:10:29.289290 ignition[1316]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Jan 23 01:10:29.290471 ignition[1316]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Jan 23 01:10:29.294570 ignition[1316]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 23 01:10:29.295628 ignition[1316]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 23 01:10:29.295628 ignition[1316]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> 
"/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 23 01:10:29.297556 ignition[1316]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 23 01:10:29.297556 ignition[1316]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 23 01:10:29.297556 ignition[1316]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Jan 23 01:10:29.743728 ignition[1316]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Jan 23 01:10:30.151828 ignition[1316]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 23 01:10:30.153600 ignition[1316]: INFO : files: createResultFile: createFiles: op(7): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 23 01:10:30.153600 ignition[1316]: INFO : files: createResultFile: createFiles: op(7): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 23 01:10:30.153600 ignition[1316]: INFO : files: files passed Jan 23 01:10:30.153600 ignition[1316]: INFO : Ignition finished successfully Jan 23 01:10:30.154204 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 23 01:10:30.157619 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 23 01:10:30.159612 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 23 01:10:30.170249 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 23 01:10:30.170367 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 23 01:10:30.180239 initrd-setup-root-after-ignition[1351]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 23 01:10:30.181903 initrd-setup-root-after-ignition[1347]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 23 01:10:30.181903 initrd-setup-root-after-ignition[1347]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 23 01:10:30.182758 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 23 01:10:30.183719 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 23 01:10:30.185612 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 23 01:10:30.233080 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 23 01:10:30.233205 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 23 01:10:30.234279 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 23 01:10:30.235468 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 23 01:10:30.235927 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 23 01:10:30.236788 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 23 01:10:30.270836 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 23 01:10:30.273196 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... 
Jan 23 01:10:30.298175 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 23 01:10:30.298995 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 01:10:30.300090 systemd[1]: Stopped target timers.target - Timer Units. Jan 23 01:10:30.301063 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 23 01:10:30.301292 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 23 01:10:30.302540 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 23 01:10:30.303451 systemd[1]: Stopped target basic.target - Basic System. Jan 23 01:10:30.304326 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 23 01:10:30.305153 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 23 01:10:30.305974 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 23 01:10:30.306814 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jan 23 01:10:30.307682 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 23 01:10:30.308460 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 23 01:10:30.309287 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 23 01:10:30.310615 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 23 01:10:30.311440 systemd[1]: Stopped target swap.target - Swaps. Jan 23 01:10:30.312185 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 23 01:10:30.312409 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 23 01:10:30.313522 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 23 01:10:30.314445 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 01:10:30.315176 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 23 01:10:30.315313 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 01:10:30.316006 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 23 01:10:30.316221 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 23 01:10:30.317603 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 23 01:10:30.317853 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 23 01:10:30.318652 systemd[1]: ignition-files.service: Deactivated successfully. Jan 23 01:10:30.318809 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 23 01:10:30.321618 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 23 01:10:30.322290 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 23 01:10:30.322632 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 01:10:30.327779 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 23 01:10:30.329104 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 23 01:10:30.329942 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 01:10:30.332226 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 23 01:10:30.332384 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 23 01:10:30.342764 systemd[1]: initrd-cleanup.service: Deactivated successfully. 
Jan 23 01:10:30.343617 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 23 01:10:30.364442 ignition[1371]: INFO : Ignition 2.22.0 Jan 23 01:10:30.364442 ignition[1371]: INFO : Stage: umount Jan 23 01:10:30.367017 ignition[1371]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 01:10:30.367017 ignition[1371]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 23 01:10:30.367017 ignition[1371]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 23 01:10:30.368760 ignition[1371]: INFO : PUT result: OK Jan 23 01:10:30.370120 ignition[1371]: INFO : umount: umount passed Jan 23 01:10:30.372103 ignition[1371]: INFO : Ignition finished successfully Jan 23 01:10:30.372864 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 23 01:10:30.373533 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 23 01:10:30.374727 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 23 01:10:30.374808 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 23 01:10:30.375792 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 23 01:10:30.375861 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 23 01:10:30.377841 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 23 01:10:30.377912 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 23 01:10:30.378504 systemd[1]: Stopped target network.target - Network. Jan 23 01:10:30.378948 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 23 01:10:30.379026 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 23 01:10:30.379687 systemd[1]: Stopped target paths.target - Path Units. Jan 23 01:10:30.380315 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 23 01:10:30.384587 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 01:10:30.385638 systemd[1]: Stopped target slices.target - Slice Units. Jan 23 01:10:30.385963 systemd[1]: Stopped target sockets.target - Socket Units. Jan 23 01:10:30.386998 systemd[1]: iscsid.socket: Deactivated successfully. Jan 23 01:10:30.387044 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 23 01:10:30.387597 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 23 01:10:30.387632 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 23 01:10:30.388140 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 23 01:10:30.388198 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 23 01:10:30.390611 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 23 01:10:30.390665 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 23 01:10:30.391610 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 23 01:10:30.392742 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 23 01:10:30.394538 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 23 01:10:30.398796 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 23 01:10:30.398903 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 23 01:10:30.402148 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jan 23 01:10:30.402461 systemd[1]: systemd-networkd.service: Deactivated successfully. 
Jan 23 01:10:30.402576 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 23 01:10:30.404982 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jan 23 01:10:30.406851 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 23 01:10:30.407265 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 23 01:10:30.407305 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 23 01:10:30.408753 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 23 01:10:30.409060 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 23 01:10:30.409107 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 23 01:10:30.409471 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 23 01:10:30.409525 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 23 01:10:30.411607 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 23 01:10:30.411652 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 23 01:10:30.412018 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 23 01:10:30.412063 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 01:10:30.414795 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 01:10:30.416777 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jan 23 01:10:30.416848 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jan 23 01:10:30.428163 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 23 01:10:30.429677 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 01:10:30.431208 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 23 01:10:30.431251 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 23 01:10:30.431675 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 23 01:10:30.431706 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 01:10:30.432112 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 23 01:10:30.432159 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 23 01:10:30.433992 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 23 01:10:30.434119 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 23 01:10:30.435024 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 23 01:10:30.435090 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 23 01:10:30.440055 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 23 01:10:30.441931 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 23 01:10:30.442013 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 23 01:10:30.444027 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 23 01:10:30.444085 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 01:10:30.445282 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Jan 23 01:10:30.445341 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 01:10:30.448309 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Jan 23 01:10:30.448398 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jan 23 01:10:30.448452 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jan 23 01:10:30.448908 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 23 01:10:30.452741 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 23 01:10:30.462009 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 23 01:10:30.462167 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 23 01:10:30.647987 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 23 01:10:30.648097 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 23 01:10:30.649972 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 23 01:10:30.650718 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 23 01:10:30.650801 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 23 01:10:30.652713 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 23 01:10:30.671793 systemd[1]: Switching root. Jan 23 01:10:30.719559 systemd-journald[187]: Received SIGTERM from PID 1 (systemd). Jan 23 01:10:30.719650 systemd-journald[187]: Journal stopped Jan 23 01:10:32.550315 kernel: SELinux: policy capability network_peer_controls=1 Jan 23 01:10:32.550410 kernel: SELinux: policy capability open_perms=1 Jan 23 01:10:32.550437 kernel: SELinux: policy capability extended_socket_class=1 Jan 23 01:10:32.550462 kernel: SELinux: policy capability always_check_network=0 Jan 23 01:10:32.551503 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 23 01:10:32.551522 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 23 01:10:32.551539 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 23 01:10:32.551561 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 23 01:10:32.551583 kernel: SELinux: policy capability userspace_initial_context=0 Jan 23 01:10:32.551602 kernel: audit: type=1403 audit(1769130631.262:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 23 01:10:32.551623 systemd[1]: Successfully loaded SELinux policy in 76.185ms. Jan 23 01:10:32.551648 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 6.262ms. Jan 23 01:10:32.551667 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 23 01:10:32.551688 systemd[1]: Detected virtualization amazon. Jan 23 01:10:32.551706 systemd[1]: Detected architecture x86-64. Jan 23 01:10:32.551725 systemd[1]: Detected first boot. Jan 23 01:10:32.551744 systemd[1]: Initializing machine ID from VM UUID. Jan 23 01:10:32.551765 zram_generator::config[1414]: No configuration found. 
Jan 23 01:10:32.551785 kernel: Guest personality initialized and is inactive Jan 23 01:10:32.551803 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Jan 23 01:10:32.551820 kernel: Initialized host personality Jan 23 01:10:32.551839 kernel: NET: Registered PF_VSOCK protocol family Jan 23 01:10:32.551856 systemd[1]: Populated /etc with preset unit settings. Jan 23 01:10:32.551875 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jan 23 01:10:32.551892 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 23 01:10:32.551912 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 23 01:10:32.551933 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 23 01:10:32.551953 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 23 01:10:32.551976 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 23 01:10:32.551997 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 23 01:10:32.552019 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 23 01:10:32.552040 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 23 01:10:32.552057 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 23 01:10:32.552075 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 23 01:10:32.552095 systemd[1]: Created slice user.slice - User and Session Slice. Jan 23 01:10:32.552115 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 01:10:32.552137 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 01:10:32.552158 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 23 01:10:32.552177 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 23 01:10:32.552199 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 23 01:10:32.552218 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 23 01:10:32.552236 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 23 01:10:32.552253 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 01:10:32.552271 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 23 01:10:32.552291 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 23 01:10:32.552310 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 23 01:10:32.552331 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 23 01:10:32.552351 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 23 01:10:32.552370 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 01:10:32.552392 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 23 01:10:32.552413 systemd[1]: Reached target slices.target - Slice Units. Jan 23 01:10:32.552433 systemd[1]: Reached target swap.target - Swaps. Jan 23 01:10:32.552452 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. 
Jan 23 01:10:32.552470 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 23 01:10:32.554889 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 23 01:10:32.554925 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 23 01:10:32.554946 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 23 01:10:32.554967 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 01:10:32.554988 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 23 01:10:32.555007 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 23 01:10:32.555034 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 23 01:10:32.555053 systemd[1]: Mounting media.mount - External Media Directory... Jan 23 01:10:32.555073 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 01:10:32.555092 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 23 01:10:32.555113 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 23 01:10:32.555130 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 23 01:10:32.555153 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 23 01:10:32.555171 systemd[1]: Reached target machines.target - Containers. Jan 23 01:10:32.555190 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 23 01:10:32.555211 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 01:10:32.555232 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 23 01:10:32.555253 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 23 01:10:32.555273 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 23 01:10:32.555296 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 23 01:10:32.555317 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 23 01:10:32.555337 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 23 01:10:32.555360 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 23 01:10:32.555381 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 23 01:10:32.555400 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 23 01:10:32.555421 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 23 01:10:32.555441 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 23 01:10:32.555464 systemd[1]: Stopped systemd-fsck-usr.service. Jan 23 01:10:32.555525 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 23 01:10:32.555543 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 23 01:10:32.555562 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
Jan 23 01:10:32.555579 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 23 01:10:32.555596 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 23 01:10:32.555618 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 23 01:10:32.555638 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 23 01:10:32.555657 systemd[1]: verity-setup.service: Deactivated successfully. Jan 23 01:10:32.555675 systemd[1]: Stopped verity-setup.service. Jan 23 01:10:32.555698 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 01:10:32.555720 kernel: loop: module loaded Jan 23 01:10:32.555741 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 23 01:10:32.555762 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 23 01:10:32.555783 kernel: fuse: init (API version 7.41) Jan 23 01:10:32.555803 systemd[1]: Mounted media.mount - External Media Directory. Jan 23 01:10:32.555823 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 23 01:10:32.555845 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 23 01:10:32.555866 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 23 01:10:32.555890 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 01:10:32.555911 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 23 01:10:32.555974 systemd-journald[1500]: Collecting audit messages is disabled. Jan 23 01:10:32.556024 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 23 01:10:32.556051 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 23 01:10:32.556070 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 23 01:10:32.556094 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 23 01:10:32.556115 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 23 01:10:32.556137 systemd-journald[1500]: Journal started Jan 23 01:10:32.556177 systemd-journald[1500]: Runtime Journal (/run/log/journal/ec299804e1c7c31b773be9b957ddf576) is 4.7M, max 38.1M, 33.3M free. Jan 23 01:10:32.184489 systemd[1]: Queued start job for default target multi-user.target. Jan 23 01:10:32.560625 systemd[1]: Started systemd-journald.service - Journal Service. Jan 23 01:10:32.196760 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jan 23 01:10:32.197275 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 23 01:10:32.562390 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 23 01:10:32.562677 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 23 01:10:32.565076 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 23 01:10:32.565296 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 23 01:10:32.566686 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 23 01:10:32.567789 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 23 01:10:32.569713 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. 
Jan 23 01:10:32.588151 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 23 01:10:32.597597 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 23 01:10:32.604600 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 23 01:10:32.607582 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 23 01:10:32.607631 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 23 01:10:32.614636 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 23 01:10:32.620919 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 23 01:10:32.622717 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 01:10:32.624844 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 23 01:10:32.630898 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 23 01:10:32.632614 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 23 01:10:32.636756 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 23 01:10:32.638633 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 23 01:10:32.647591 kernel: ACPI: bus type drm_connector registered Jan 23 01:10:32.644649 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 23 01:10:32.652670 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 23 01:10:32.659718 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 23 01:10:32.660827 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 23 01:10:32.661069 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 23 01:10:32.662639 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 23 01:10:32.665021 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 23 01:10:32.667080 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 23 01:10:32.680745 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 23 01:10:32.688647 systemd-journald[1500]: Time spent on flushing to /var/log/journal/ec299804e1c7c31b773be9b957ddf576 is 125.790ms for 1004 entries. Jan 23 01:10:32.688647 systemd-journald[1500]: System Journal (/var/log/journal/ec299804e1c7c31b773be9b957ddf576) is 8M, max 195.6M, 187.6M free. Jan 23 01:10:32.846115 systemd-journald[1500]: Received client request to flush runtime journal. Jan 23 01:10:32.846176 kernel: loop0: detected capacity change from 0 to 72368 Jan 23 01:10:32.846202 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 23 01:10:32.694834 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 23 01:10:32.695677 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 23 01:10:32.702724 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 23 01:10:32.722610 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jan 23 01:10:32.803853 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 23 01:10:32.840453 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 23 01:10:32.844209 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 23 01:10:32.854609 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 23 01:10:32.871170 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 01:10:32.877537 kernel: loop1: detected capacity change from 0 to 110984 Jan 23 01:10:32.910698 systemd-tmpfiles[1563]: ACLs are not supported, ignoring. Jan 23 01:10:32.911095 systemd-tmpfiles[1563]: ACLs are not supported, ignoring. Jan 23 01:10:32.916934 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 01:10:32.984514 kernel: loop2: detected capacity change from 0 to 224512 Jan 23 01:10:33.119526 kernel: loop3: detected capacity change from 0 to 128560 Jan 23 01:10:33.180524 kernel: loop4: detected capacity change from 0 to 72368 Jan 23 01:10:33.205215 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 23 01:10:33.230765 kernel: loop5: detected capacity change from 0 to 110984 Jan 23 01:10:33.260730 kernel: loop6: detected capacity change from 0 to 224512 Jan 23 01:10:33.300747 kernel: loop7: detected capacity change from 0 to 128560 Jan 23 01:10:33.337003 (sd-merge)[1576]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Jan 23 01:10:33.337983 (sd-merge)[1576]: Merged extensions into '/usr'. Jan 23 01:10:33.349693 systemd[1]: Reload requested from client PID 1545 ('systemd-sysext') (unit systemd-sysext.service)... Jan 23 01:10:33.349722 systemd[1]: Reloading... Jan 23 01:10:33.500506 zram_generator::config[1601]: No configuration found. Jan 23 01:10:33.853240 systemd[1]: Reloading finished in 502 ms. Jan 23 01:10:33.869585 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 23 01:10:33.877969 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 01:10:33.883168 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 23 01:10:33.890825 systemd[1]: Starting ensure-sysext.service... Jan 23 01:10:33.895045 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 23 01:10:33.913254 systemd[1]: Reload requested from client PID 1656 ('systemctl') (unit ensure-sysext.service)... Jan 23 01:10:33.913269 systemd[1]: Reloading... Jan 23 01:10:33.927840 systemd-udevd[1654]: Using default interface naming scheme 'v255'. Jan 23 01:10:33.932828 systemd-tmpfiles[1657]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 23 01:10:33.934951 systemd-tmpfiles[1657]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 23 01:10:33.935384 systemd-tmpfiles[1657]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 23 01:10:33.936760 systemd-tmpfiles[1657]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 23 01:10:33.937756 systemd-tmpfiles[1657]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 23 01:10:33.940104 systemd-tmpfiles[1657]: ACLs are not supported, ignoring. 
Jan 23 01:10:33.940271 systemd-tmpfiles[1657]: ACLs are not supported, ignoring. Jan 23 01:10:33.948110 systemd-tmpfiles[1657]: Detected autofs mount point /boot during canonicalization of boot. Jan 23 01:10:33.948128 systemd-tmpfiles[1657]: Skipping /boot Jan 23 01:10:33.951337 ldconfig[1540]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 23 01:10:33.966726 systemd-tmpfiles[1657]: Detected autofs mount point /boot during canonicalization of boot. Jan 23 01:10:33.966888 systemd-tmpfiles[1657]: Skipping /boot Jan 23 01:10:33.994522 zram_generator::config[1682]: No configuration found. Jan 23 01:10:34.240736 (udev-worker)[1690]: Network interface NamePolicy= disabled on kernel command line. Jan 23 01:10:34.334510 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 23 01:10:34.359502 kernel: ACPI: button: Power Button [PWRF] Jan 23 01:10:34.365230 kernel: mousedev: PS/2 mouse device common for all mice Jan 23 01:10:34.368126 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Jan 23 01:10:34.368538 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4 Jan 23 01:10:34.370503 kernel: ACPI: button: Sleep Button [SLPF] Jan 23 01:10:34.515741 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 23 01:10:34.517195 systemd[1]: Reloading finished in 603 ms. Jan 23 01:10:34.529257 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 01:10:34.532516 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 23 01:10:34.534470 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 01:10:34.573851 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 23 01:10:34.580919 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 23 01:10:34.587388 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 23 01:10:34.595043 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 23 01:10:34.606027 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 23 01:10:34.617588 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 23 01:10:34.630043 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 01:10:34.630360 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 01:10:34.638608 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 23 01:10:34.642157 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 23 01:10:34.658922 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 23 01:10:34.659762 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 01:10:34.659950 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). 
Jan 23 01:10:34.660100 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 01:10:34.669467 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 23 01:10:34.670786 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 23 01:10:34.673412 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 01:10:34.675585 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 01:10:34.675855 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 01:10:34.676015 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 23 01:10:34.676155 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 23 01:10:34.681470 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 23 01:10:34.682550 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 01:10:34.693818 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 01:10:34.694250 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 01:10:34.701334 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 23 01:10:34.714304 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 23 01:10:34.715776 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 01:10:34.715836 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 23 01:10:34.715937 systemd[1]: Reached target time-set.target - System Time Set. Jan 23 01:10:34.717436 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 01:10:34.720672 systemd[1]: Finished ensure-sysext.service. Jan 23 01:10:34.738405 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 23 01:10:34.749122 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 23 01:10:34.749931 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 23 01:10:34.752312 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 23 01:10:34.753058 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 23 01:10:34.757412 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 23 01:10:34.767114 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Jan 23 01:10:34.768324 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 23 01:10:34.771087 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 23 01:10:34.779146 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 23 01:10:34.779707 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 23 01:10:34.797534 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 23 01:10:34.800999 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 23 01:10:34.832442 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 23 01:10:34.833360 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 23 01:10:34.856696 augenrules[1882]: No rules Jan 23 01:10:34.860398 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 23 01:10:34.862273 systemd[1]: audit-rules.service: Deactivated successfully. Jan 23 01:10:34.863763 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 23 01:10:34.915573 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 01:10:35.010519 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jan 23 01:10:35.013638 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 23 01:10:35.024986 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 23 01:10:35.061955 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 23 01:10:35.086328 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 01:10:35.138113 systemd-resolved[1801]: Positive Trust Anchors: Jan 23 01:10:35.138129 systemd-resolved[1801]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 23 01:10:35.138166 systemd-resolved[1801]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 23 01:10:35.144732 systemd-resolved[1801]: Defaulting to hostname 'linux'. Jan 23 01:10:35.146282 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 23 01:10:35.146884 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 23 01:10:35.147330 systemd[1]: Reached target sysinit.target - System Initialization. Jan 23 01:10:35.148089 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 23 01:10:35.148634 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 23 01:10:35.149044 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. 
Jan 23 01:10:35.149561 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 23 01:10:35.150020 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 23 01:10:35.150413 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 23 01:10:35.150802 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 23 01:10:35.150873 systemd[1]: Reached target paths.target - Path Units. Jan 23 01:10:35.151216 systemd[1]: Reached target timers.target - Timer Units. Jan 23 01:10:35.154536 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 23 01:10:35.156126 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 23 01:10:35.156945 systemd-networkd[1797]: lo: Link UP Jan 23 01:10:35.156959 systemd-networkd[1797]: lo: Gained carrier Jan 23 01:10:35.158466 systemd-networkd[1797]: Enumeration completed Jan 23 01:10:35.159125 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 23 01:10:35.159795 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 23 01:10:35.160242 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 23 01:10:35.164950 systemd-networkd[1797]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 01:10:35.165202 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 23 01:10:35.165636 systemd-networkd[1797]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 23 01:10:35.166998 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 23 01:10:35.168214 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 23 01:10:35.168946 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 23 01:10:35.169462 systemd-networkd[1797]: eth0: Link UP Jan 23 01:10:35.169736 systemd-networkd[1797]: eth0: Gained carrier Jan 23 01:10:35.170344 systemd-networkd[1797]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 01:10:35.170351 systemd[1]: Reached target network.target - Network. Jan 23 01:10:35.170926 systemd[1]: Reached target sockets.target - Socket Units. Jan 23 01:10:35.171297 systemd[1]: Reached target basic.target - Basic System. Jan 23 01:10:35.172213 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 23 01:10:35.172248 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 23 01:10:35.173367 systemd[1]: Starting containerd.service - containerd container runtime... Jan 23 01:10:35.175169 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 23 01:10:35.178779 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 23 01:10:35.181579 systemd-networkd[1797]: eth0: DHCPv4 address 172.31.20.240/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jan 23 01:10:35.186299 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 23 01:10:35.188527 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 23 01:10:35.191490 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
Jan 23 01:10:35.191892 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 23 01:10:35.194669 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jan 23 01:10:35.201689 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 23 01:10:35.204686 systemd[1]: Started ntpd.service - Network Time Service. Jan 23 01:10:35.208710 systemd[1]: Starting setup-oem.service - Setup OEM... Jan 23 01:10:35.215722 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 23 01:10:35.225117 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 23 01:10:35.230854 jq[1938]: false Jan 23 01:10:35.236746 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 23 01:10:35.243868 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 23 01:10:35.251499 google_oslogin_nss_cache[1940]: oslogin_cache_refresh[1940]: Refreshing passwd entry cache Jan 23 01:10:35.251073 oslogin_cache_refresh[1940]: Refreshing passwd entry cache Jan 23 01:10:35.252736 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 23 01:10:35.256631 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 23 01:10:35.258792 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 23 01:10:35.264730 systemd[1]: Starting update-engine.service - Update Engine... Jan 23 01:10:35.268012 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 23 01:10:35.299077 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 23 01:10:35.301094 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 23 01:10:35.301357 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 23 01:10:35.305078 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 23 01:10:35.305359 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 23 01:10:35.312931 google_oslogin_nss_cache[1940]: oslogin_cache_refresh[1940]: Failure getting users, quitting Jan 23 01:10:35.312931 google_oslogin_nss_cache[1940]: oslogin_cache_refresh[1940]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 23 01:10:35.312931 google_oslogin_nss_cache[1940]: oslogin_cache_refresh[1940]: Refreshing group entry cache Jan 23 01:10:35.311752 oslogin_cache_refresh[1940]: Failure getting users, quitting Jan 23 01:10:35.311776 oslogin_cache_refresh[1940]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 23 01:10:35.311835 oslogin_cache_refresh[1940]: Refreshing group entry cache Jan 23 01:10:35.316219 google_oslogin_nss_cache[1940]: oslogin_cache_refresh[1940]: Failure getting groups, quitting Jan 23 01:10:35.316219 google_oslogin_nss_cache[1940]: oslogin_cache_refresh[1940]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 23 01:10:35.315314 systemd[1]: google-oslogin-cache.service: Deactivated successfully. 
Jan 23 01:10:35.313805 oslogin_cache_refresh[1940]: Failure getting groups, quitting Jan 23 01:10:35.315668 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jan 23 01:10:35.313818 oslogin_cache_refresh[1940]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 23 01:10:35.338415 extend-filesystems[1939]: Found /dev/nvme0n1p6 Jan 23 01:10:35.340047 jq[1955]: true Jan 23 01:10:35.364996 ntpd[1942]: ntpd 4.2.8p18@1.4062-o Thu Jan 22 21:35:52 UTC 2026 (1): Starting Jan 23 01:10:35.366422 ntpd[1942]: 23 Jan 01:10:35 ntpd[1942]: ntpd 4.2.8p18@1.4062-o Thu Jan 22 21:35:52 UTC 2026 (1): Starting Jan 23 01:10:35.366422 ntpd[1942]: 23 Jan 01:10:35 ntpd[1942]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 23 01:10:35.366422 ntpd[1942]: 23 Jan 01:10:35 ntpd[1942]: ---------------------------------------------------- Jan 23 01:10:35.366422 ntpd[1942]: 23 Jan 01:10:35 ntpd[1942]: ntp-4 is maintained by Network Time Foundation, Jan 23 01:10:35.366422 ntpd[1942]: 23 Jan 01:10:35 ntpd[1942]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 23 01:10:35.366422 ntpd[1942]: 23 Jan 01:10:35 ntpd[1942]: corporation. Support and training for ntp-4 are Jan 23 01:10:35.366422 ntpd[1942]: 23 Jan 01:10:35 ntpd[1942]: available at https://www.nwtime.org/support Jan 23 01:10:35.366422 ntpd[1942]: 23 Jan 01:10:35 ntpd[1942]: ---------------------------------------------------- Jan 23 01:10:35.365073 ntpd[1942]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 23 01:10:35.365083 ntpd[1942]: ---------------------------------------------------- Jan 23 01:10:35.365093 ntpd[1942]: ntp-4 is maintained by Network Time Foundation, Jan 23 01:10:35.365101 ntpd[1942]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 23 01:10:35.365110 ntpd[1942]: corporation. 
Support and training for ntp-4 are Jan 23 01:10:35.365119 ntpd[1942]: available at https://www.nwtime.org/support Jan 23 01:10:35.365128 ntpd[1942]: ---------------------------------------------------- Jan 23 01:10:35.373602 ntpd[1942]: proto: precision = 0.060 usec (-24) Jan 23 01:10:35.373904 ntpd[1942]: 23 Jan 01:10:35 ntpd[1942]: proto: precision = 0.060 usec (-24) Jan 23 01:10:35.375142 ntpd[1942]: basedate set to 2026-01-10 Jan 23 01:10:35.375264 ntpd[1942]: 23 Jan 01:10:35 ntpd[1942]: basedate set to 2026-01-10 Jan 23 01:10:35.375316 ntpd[1942]: gps base set to 2026-01-11 (week 2401) Jan 23 01:10:35.386270 kernel: ntpd[1942]: segfault at 24 ip 000055cfec4d6aeb sp 00007ffcfafc2a10 error 4 in ntpd[68aeb,55cfec474000+80000] likely on CPU 1 (core 0, socket 0) Jan 23 01:10:35.386356 kernel: Code: 0f 1e fa 41 56 41 55 41 54 55 53 48 89 fb e8 8c eb f9 ff 44 8b 28 49 89 c4 e8 51 6b ff ff 48 89 c5 48 85 db 0f 84 a5 00 00 00 <0f> b7 0b 66 83 f9 02 0f 84 c0 00 00 00 66 83 f9 0a 74 32 66 85 c9 Jan 23 01:10:35.386383 ntpd[1942]: 23 Jan 01:10:35 ntpd[1942]: gps base set to 2026-01-11 (week 2401) Jan 23 01:10:35.386383 ntpd[1942]: 23 Jan 01:10:35 ntpd[1942]: Listen and drop on 0 v6wildcard [::]:123 Jan 23 01:10:35.386383 ntpd[1942]: 23 Jan 01:10:35 ntpd[1942]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 23 01:10:35.386383 ntpd[1942]: 23 Jan 01:10:35 ntpd[1942]: Listen normally on 2 lo 127.0.0.1:123 Jan 23 01:10:35.386383 ntpd[1942]: 23 Jan 01:10:35 ntpd[1942]: Listen normally on 3 eth0 172.31.20.240:123 Jan 23 01:10:35.386383 ntpd[1942]: 23 Jan 01:10:35 ntpd[1942]: Listen normally on 4 lo [::1]:123 Jan 23 01:10:35.386383 ntpd[1942]: 23 Jan 01:10:35 ntpd[1942]: bind(21) AF_INET6 [fe80::474:66ff:fed9:7225%2]:123 flags 0x811 failed: Cannot assign requested address Jan 23 01:10:35.386383 ntpd[1942]: 23 Jan 01:10:35 ntpd[1942]: unable to create socket on eth0 (5) for [fe80::474:66ff:fed9:7225%2]:123 Jan 23 01:10:35.375554 ntpd[1942]: Listen and drop on 0 v6wildcard [::]:123 Jan 23 01:10:35.375582 ntpd[1942]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 23 01:10:35.377201 ntpd[1942]: Listen normally on 2 lo 127.0.0.1:123 Jan 23 01:10:35.377236 ntpd[1942]: Listen normally on 3 eth0 172.31.20.240:123 Jan 23 01:10:35.377271 ntpd[1942]: Listen normally on 4 lo [::1]:123 Jan 23 01:10:35.377308 ntpd[1942]: bind(21) AF_INET6 [fe80::474:66ff:fed9:7225%2]:123 flags 0x811 failed: Cannot assign requested address Jan 23 01:10:35.377331 ntpd[1942]: unable to create socket on eth0 (5) for [fe80::474:66ff:fed9:7225%2]:123 Jan 23 01:10:35.388993 extend-filesystems[1939]: Found /dev/nvme0n1p9 Jan 23 01:10:35.406008 jq[1968]: true Jan 23 01:10:35.411865 systemd-coredump[1980]: Process 1942 (ntpd) of user 0 terminated abnormally with signal 11/SEGV, processing... Jan 23 01:10:35.421866 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 23 01:10:35.432807 update_engine[1951]: I20260123 01:10:35.429384 1951 main.cc:92] Flatcar Update Engine starting Jan 23 01:10:35.434117 systemd[1]: Created slice system-systemd\x2dcoredump.slice - Slice /system/systemd-coredump. Jan 23 01:10:35.443511 extend-filesystems[1939]: Checking size of /dev/nvme0n1p9 Jan 23 01:10:35.449853 systemd[1]: Started systemd-coredump@0-1980-0.service - Process Core Dump (PID 1980/UID 0). Jan 23 01:10:35.455713 systemd[1]: Finished setup-oem.service - Setup OEM. 
Jan 23 01:10:35.456741 (ntainerd)[1977]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 23 01:10:35.473808 systemd-logind[1946]: Watching system buttons on /dev/input/event2 (Power Button) Jan 23 01:10:35.473832 systemd-logind[1946]: Watching system buttons on /dev/input/event3 (Sleep Button) Jan 23 01:10:35.473856 systemd-logind[1946]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 23 01:10:35.474180 systemd-logind[1946]: New seat seat0. Jan 23 01:10:35.478402 systemd[1]: motdgen.service: Deactivated successfully. Jan 23 01:10:35.480831 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 23 01:10:35.483897 systemd[1]: Started systemd-logind.service - User Login Management. Jan 23 01:10:35.491656 dbus-daemon[1936]: [system] SELinux support is enabled Jan 23 01:10:35.492560 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 23 01:10:35.504923 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 23 01:10:35.509198 extend-filesystems[1939]: Resized partition /dev/nvme0n1p9 Jan 23 01:10:35.505074 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 23 01:10:35.506256 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 23 01:10:35.506283 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 23 01:10:35.521374 extend-filesystems[1997]: resize2fs 1.47.3 (8-Jul-2025) Jan 23 01:10:35.524991 coreos-metadata[1935]: Jan 23 01:10:35.521 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 23 01:10:35.523176 dbus-daemon[1936]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1797 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 23 01:10:35.526519 coreos-metadata[1935]: Jan 23 01:10:35.526 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jan 23 01:10:35.532493 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 3587067 blocks Jan 23 01:10:35.529319 dbus-daemon[1936]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 23 01:10:35.532672 coreos-metadata[1935]: Jan 23 01:10:35.529 INFO Fetch successful Jan 23 01:10:35.532672 coreos-metadata[1935]: Jan 23 01:10:35.529 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jan 23 01:10:35.532672 coreos-metadata[1935]: Jan 23 01:10:35.530 INFO Fetch successful Jan 23 01:10:35.532672 coreos-metadata[1935]: Jan 23 01:10:35.530 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jan 23 01:10:35.532672 coreos-metadata[1935]: Jan 23 01:10:35.531 INFO Fetch successful Jan 23 01:10:35.532672 coreos-metadata[1935]: Jan 23 01:10:35.531 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jan 23 01:10:35.532672 coreos-metadata[1935]: Jan 23 01:10:35.532 INFO Fetch successful Jan 23 01:10:35.532672 coreos-metadata[1935]: Jan 23 01:10:35.532 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jan 23 
01:10:35.532672 coreos-metadata[1935]: Jan 23 01:10:35.532 INFO Fetch failed with 404: resource not found Jan 23 01:10:35.532672 coreos-metadata[1935]: Jan 23 01:10:35.532 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jan 23 01:10:35.534491 coreos-metadata[1935]: Jan 23 01:10:35.533 INFO Fetch successful Jan 23 01:10:35.534491 coreos-metadata[1935]: Jan 23 01:10:35.533 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jan 23 01:10:35.534491 coreos-metadata[1935]: Jan 23 01:10:35.534 INFO Fetch successful Jan 23 01:10:35.534491 coreos-metadata[1935]: Jan 23 01:10:35.534 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jan 23 01:10:35.535854 coreos-metadata[1935]: Jan 23 01:10:35.534 INFO Fetch successful Jan 23 01:10:35.535854 coreos-metadata[1935]: Jan 23 01:10:35.534 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jan 23 01:10:35.543510 coreos-metadata[1935]: Jan 23 01:10:35.539 INFO Fetch successful Jan 23 01:10:35.543510 coreos-metadata[1935]: Jan 23 01:10:35.539 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jan 23 01:10:35.543510 coreos-metadata[1935]: Jan 23 01:10:35.541 INFO Fetch successful Jan 23 01:10:35.544843 update_engine[1951]: I20260123 01:10:35.544625 1951 update_check_scheduler.cc:74] Next update check in 3m20s Jan 23 01:10:35.545362 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jan 23 01:10:35.547072 systemd[1]: Started update-engine.service - Update Engine. Jan 23 01:10:35.559064 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 23 01:10:35.649889 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 23 01:10:35.651097 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 23 01:10:35.758583 bash[2016]: Updated "/home/core/.ssh/authorized_keys" Jan 23 01:10:35.760316 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 23 01:10:35.766993 systemd[1]: Starting sshkeys.service... Jan 23 01:10:35.779582 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 3587067 Jan 23 01:10:35.802851 extend-filesystems[1997]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jan 23 01:10:35.802851 extend-filesystems[1997]: old_desc_blocks = 1, new_desc_blocks = 2 Jan 23 01:10:35.802851 extend-filesystems[1997]: The filesystem on /dev/nvme0n1p9 is now 3587067 (4k) blocks long. Jan 23 01:10:35.831631 extend-filesystems[1939]: Resized filesystem in /dev/nvme0n1p9 Jan 23 01:10:35.804106 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 23 01:10:35.804433 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 23 01:10:35.836602 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 23 01:10:35.840774 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 23 01:10:35.908778 systemd-coredump[1990]: Process 1942 (ntpd) of user 0 dumped core. Module libnss_usrfiles.so.2 without build-id. Module libgcc_s.so.1 without build-id. Module ld-linux-x86-64.so.2 without build-id. Module libc.so.6 without build-id. Module libcrypto.so.3 without build-id. Module libm.so.6 without build-id. Module libcap.so.2 without build-id. 
Module ntpd without build-id. Stack trace of thread 1942: #0 0x000055cfec4d6aeb n/a (ntpd + 0x68aeb) #1 0x000055cfec47fcdf n/a (ntpd + 0x11cdf) #2 0x000055cfec480575 n/a (ntpd + 0x12575) #3 0x000055cfec47bd8a n/a (ntpd + 0xdd8a) #4 0x000055cfec47d5d3 n/a (ntpd + 0xf5d3) #5 0x000055cfec485fd1 n/a (ntpd + 0x17fd1) #6 0x000055cfec476c2d n/a (ntpd + 0x8c2d) #7 0x00007f4e1ff6c16c n/a (libc.so.6 + 0x2716c) #8 0x00007f4e1ff6c229 __libc_start_main (libc.so.6 + 0x27229) #9 0x000055cfec476c55 n/a (ntpd + 0x8c55) ELF object binary architecture: AMD x86-64 Jan 23 01:10:35.914668 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 23 01:10:35.917799 systemd[1]: ntpd.service: Main process exited, code=dumped, status=11/SEGV Jan 23 01:10:35.917985 systemd[1]: ntpd.service: Failed with result 'core-dump'. Jan 23 01:10:35.925972 systemd[1]: systemd-coredump@0-1980-0.service: Deactivated successfully. Jan 23 01:10:35.930734 dbus-daemon[1936]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 23 01:10:35.943983 dbus-daemon[1936]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=2001 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 23 01:10:35.955328 systemd[1]: Starting polkit.service - Authorization Manager... Jan 23 01:10:36.026045 systemd[1]: ntpd.service: Scheduled restart job, restart counter is at 1. Jan 23 01:10:36.029559 systemd[1]: Started ntpd.service - Network Time Service. Jan 23 01:10:36.132655 coreos-metadata[2032]: Jan 23 01:10:36.132 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 23 01:10:36.132655 coreos-metadata[2032]: Jan 23 01:10:36.132 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jan 23 01:10:36.132655 coreos-metadata[2032]: Jan 23 01:10:36.132 INFO Fetch successful Jan 23 01:10:36.132655 coreos-metadata[2032]: Jan 23 01:10:36.132 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 23 01:10:36.132655 coreos-metadata[2032]: Jan 23 01:10:36.132 INFO Fetch successful Jan 23 01:10:36.139961 unknown[2032]: wrote ssh authorized keys file for user: core Jan 23 01:10:36.166248 sshd_keygen[1989]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 23 01:10:36.176686 ntpd[2088]: ntpd 4.2.8p18@1.4062-o Thu Jan 22 21:35:52 UTC 2026 (1): Starting Jan 23 01:10:36.179856 ntpd[2088]: 23 Jan 01:10:36 ntpd[2088]: ntpd 4.2.8p18@1.4062-o Thu Jan 22 21:35:52 UTC 2026 (1): Starting Jan 23 01:10:36.179856 ntpd[2088]: 23 Jan 01:10:36 ntpd[2088]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 23 01:10:36.179856 ntpd[2088]: 23 Jan 01:10:36 ntpd[2088]: ---------------------------------------------------- Jan 23 01:10:36.179856 ntpd[2088]: 23 Jan 01:10:36 ntpd[2088]: ntp-4 is maintained by Network Time Foundation, Jan 23 01:10:36.179856 ntpd[2088]: 23 Jan 01:10:36 ntpd[2088]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 23 01:10:36.179856 ntpd[2088]: 23 Jan 01:10:36 ntpd[2088]: corporation. 
Support and training for ntp-4 are Jan 23 01:10:36.179856 ntpd[2088]: 23 Jan 01:10:36 ntpd[2088]: available at https://www.nwtime.org/support Jan 23 01:10:36.179856 ntpd[2088]: 23 Jan 01:10:36 ntpd[2088]: ---------------------------------------------------- Jan 23 01:10:36.176766 ntpd[2088]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 23 01:10:36.176778 ntpd[2088]: ---------------------------------------------------- Jan 23 01:10:36.176787 ntpd[2088]: ntp-4 is maintained by Network Time Foundation, Jan 23 01:10:36.176796 ntpd[2088]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 23 01:10:36.176805 ntpd[2088]: corporation. Support and training for ntp-4 are Jan 23 01:10:36.176814 ntpd[2088]: available at https://www.nwtime.org/support Jan 23 01:10:36.176823 ntpd[2088]: ---------------------------------------------------- Jan 23 01:10:36.190847 kernel: ntpd[2088]: segfault at 24 ip 000056394bd5caeb sp 00007ffec54b5c80 error 4 in ntpd[68aeb,56394bcfa000+80000] likely on CPU 0 (core 0, socket 0) Jan 23 01:10:36.190954 kernel: Code: 0f 1e fa 41 56 41 55 41 54 55 53 48 89 fb e8 8c eb f9 ff 44 8b 28 49 89 c4 e8 51 6b ff ff 48 89 c5 48 85 db 0f 84 a5 00 00 00 <0f> b7 0b 66 83 f9 02 0f 84 c0 00 00 00 66 83 f9 0a 74 32 66 85 c9 Jan 23 01:10:36.181623 ntpd[2088]: proto: precision = 0.067 usec (-24) Jan 23 01:10:36.191069 ntpd[2088]: 23 Jan 01:10:36 ntpd[2088]: proto: precision = 0.067 usec (-24) Jan 23 01:10:36.191069 ntpd[2088]: 23 Jan 01:10:36 ntpd[2088]: basedate set to 2026-01-10 Jan 23 01:10:36.191069 ntpd[2088]: 23 Jan 01:10:36 ntpd[2088]: gps base set to 2026-01-11 (week 2401) Jan 23 01:10:36.191069 ntpd[2088]: 23 Jan 01:10:36 ntpd[2088]: Listen and drop on 0 v6wildcard [::]:123 Jan 23 01:10:36.191069 ntpd[2088]: 23 Jan 01:10:36 ntpd[2088]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 23 01:10:36.191069 ntpd[2088]: 23 Jan 01:10:36 ntpd[2088]: Listen normally on 2 lo 127.0.0.1:123 Jan 23 01:10:36.191069 ntpd[2088]: 23 Jan 01:10:36 ntpd[2088]: Listen normally on 3 eth0 172.31.20.240:123 Jan 23 01:10:36.191069 ntpd[2088]: 23 Jan 01:10:36 ntpd[2088]: Listen normally on 4 lo [::1]:123 Jan 23 01:10:36.191069 ntpd[2088]: 23 Jan 01:10:36 ntpd[2088]: bind(21) AF_INET6 [fe80::474:66ff:fed9:7225%2]:123 flags 0x811 failed: Cannot assign requested address Jan 23 01:10:36.191069 ntpd[2088]: 23 Jan 01:10:36 ntpd[2088]: unable to create socket on eth0 (5) for [fe80::474:66ff:fed9:7225%2]:123 Jan 23 01:10:36.182662 ntpd[2088]: basedate set to 2026-01-10 Jan 23 01:10:36.182682 ntpd[2088]: gps base set to 2026-01-11 (week 2401) Jan 23 01:10:36.182783 ntpd[2088]: Listen and drop on 0 v6wildcard [::]:123 Jan 23 01:10:36.182811 ntpd[2088]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 23 01:10:36.182991 ntpd[2088]: Listen normally on 2 lo 127.0.0.1:123 Jan 23 01:10:36.183018 ntpd[2088]: Listen normally on 3 eth0 172.31.20.240:123 Jan 23 01:10:36.183048 ntpd[2088]: Listen normally on 4 lo [::1]:123 Jan 23 01:10:36.183078 ntpd[2088]: bind(21) AF_INET6 [fe80::474:66ff:fed9:7225%2]:123 flags 0x811 failed: Cannot assign requested address Jan 23 01:10:36.183097 ntpd[2088]: unable to create socket on eth0 (5) for [fe80::474:66ff:fed9:7225%2]:123 Jan 23 01:10:36.202663 update-ssh-keys[2110]: Updated "/home/core/.ssh/authorized_keys" Jan 23 01:10:36.198324 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 23 01:10:36.206388 systemd[1]: Finished sshkeys.service. 
Jan 23 01:10:36.230657 systemd-coredump[2132]: Process 2088 (ntpd) of user 0 terminated abnormally with signal 11/SEGV, processing... Jan 23 01:10:36.239853 systemd[1]: Started systemd-coredump@1-2132-0.service - Process Core Dump (PID 2132/UID 0). Jan 23 01:10:36.283753 locksmithd[2002]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 23 01:10:36.310828 containerd[1977]: time="2026-01-23T01:10:36Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 23 01:10:36.314947 containerd[1977]: time="2026-01-23T01:10:36.312449414Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Jan 23 01:10:36.352795 containerd[1977]: time="2026-01-23T01:10:36.350241994Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="14.4µs" Jan 23 01:10:36.352795 containerd[1977]: time="2026-01-23T01:10:36.350290425Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 23 01:10:36.352795 containerd[1977]: time="2026-01-23T01:10:36.350316443Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 23 01:10:36.352795 containerd[1977]: time="2026-01-23T01:10:36.352037448Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 23 01:10:36.352795 containerd[1977]: time="2026-01-23T01:10:36.352639611Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 23 01:10:36.352795 containerd[1977]: time="2026-01-23T01:10:36.352793817Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 23 01:10:36.355500 containerd[1977]: time="2026-01-23T01:10:36.353527680Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 23 01:10:36.355500 containerd[1977]: time="2026-01-23T01:10:36.353554112Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 23 01:10:36.355500 containerd[1977]: time="2026-01-23T01:10:36.353913186Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 23 01:10:36.355500 containerd[1977]: time="2026-01-23T01:10:36.353935677Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 23 01:10:36.355500 containerd[1977]: time="2026-01-23T01:10:36.354522370Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 23 01:10:36.355500 containerd[1977]: time="2026-01-23T01:10:36.354546082Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 23 01:10:36.355500 containerd[1977]: time="2026-01-23T01:10:36.354695653Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 23 01:10:36.355500 containerd[1977]: 
time="2026-01-23T01:10:36.355012026Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 23 01:10:36.358503 containerd[1977]: time="2026-01-23T01:10:36.356463578Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 23 01:10:36.358503 containerd[1977]: time="2026-01-23T01:10:36.356524303Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 23 01:10:36.358503 containerd[1977]: time="2026-01-23T01:10:36.356595778Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 23 01:10:36.358503 containerd[1977]: time="2026-01-23T01:10:36.357109875Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 23 01:10:36.358503 containerd[1977]: time="2026-01-23T01:10:36.357218856Z" level=info msg="metadata content store policy set" policy=shared Jan 23 01:10:36.363338 containerd[1977]: time="2026-01-23T01:10:36.363215906Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 23 01:10:36.363338 containerd[1977]: time="2026-01-23T01:10:36.363289813Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 23 01:10:36.363763 containerd[1977]: time="2026-01-23T01:10:36.363515973Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 23 01:10:36.363763 containerd[1977]: time="2026-01-23T01:10:36.363598898Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 23 01:10:36.363763 containerd[1977]: time="2026-01-23T01:10:36.363623977Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 23 01:10:36.363763 containerd[1977]: time="2026-01-23T01:10:36.363640877Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 23 01:10:36.363763 containerd[1977]: time="2026-01-23T01:10:36.363658042Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 23 01:10:36.363763 containerd[1977]: time="2026-01-23T01:10:36.363675604Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 23 01:10:36.363763 containerd[1977]: time="2026-01-23T01:10:36.363693250Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 23 01:10:36.363763 containerd[1977]: time="2026-01-23T01:10:36.363719215Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 23 01:10:36.363763 containerd[1977]: time="2026-01-23T01:10:36.363733895Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 23 01:10:36.364507 containerd[1977]: time="2026-01-23T01:10:36.364131656Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 23 01:10:36.364507 containerd[1977]: time="2026-01-23T01:10:36.364308246Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 23 01:10:36.364507 containerd[1977]: time="2026-01-23T01:10:36.364335349Z" level=info 
msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 23 01:10:36.364507 containerd[1977]: time="2026-01-23T01:10:36.364356262Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 23 01:10:36.364507 containerd[1977]: time="2026-01-23T01:10:36.364375885Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 23 01:10:36.364507 containerd[1977]: time="2026-01-23T01:10:36.364391187Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 23 01:10:36.364507 containerd[1977]: time="2026-01-23T01:10:36.364404659Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 23 01:10:36.364507 containerd[1977]: time="2026-01-23T01:10:36.364426344Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 23 01:10:36.364507 containerd[1977]: time="2026-01-23T01:10:36.364439051Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 23 01:10:36.364507 containerd[1977]: time="2026-01-23T01:10:36.364455218Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 23 01:10:36.364507 containerd[1977]: time="2026-01-23T01:10:36.364471057Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 23 01:10:36.364949 containerd[1977]: time="2026-01-23T01:10:36.364931416Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 23 01:10:36.365080 containerd[1977]: time="2026-01-23T01:10:36.365065397Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 23 01:10:36.365147 containerd[1977]: time="2026-01-23T01:10:36.365136356Z" level=info msg="Start snapshots syncer" Jan 23 01:10:36.365272 containerd[1977]: time="2026-01-23T01:10:36.365257292Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 23 01:10:36.366692 containerd[1977]: time="2026-01-23T01:10:36.366632791Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 23 01:10:36.368820 containerd[1977]: time="2026-01-23T01:10:36.368554930Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 23 01:10:36.375117 containerd[1977]: time="2026-01-23T01:10:36.374603848Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 23 01:10:36.375117 containerd[1977]: time="2026-01-23T01:10:36.374834701Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 23 01:10:36.375117 containerd[1977]: time="2026-01-23T01:10:36.374887230Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 23 01:10:36.375117 containerd[1977]: time="2026-01-23T01:10:36.374905813Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 23 01:10:36.375117 containerd[1977]: time="2026-01-23T01:10:36.374920425Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 23 01:10:36.375117 containerd[1977]: time="2026-01-23T01:10:36.374938286Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 23 01:10:36.375117 containerd[1977]: time="2026-01-23T01:10:36.374965228Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 23 01:10:36.375117 containerd[1977]: time="2026-01-23T01:10:36.374988790Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 23 01:10:36.375117 containerd[1977]: time="2026-01-23T01:10:36.375023339Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 23 01:10:36.375117 containerd[1977]: 
time="2026-01-23T01:10:36.375039381Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 23 01:10:36.375117 containerd[1977]: time="2026-01-23T01:10:36.375061207Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 23 01:10:36.376811 containerd[1977]: time="2026-01-23T01:10:36.375644795Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 23 01:10:36.376811 containerd[1977]: time="2026-01-23T01:10:36.376570370Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 23 01:10:36.376811 containerd[1977]: time="2026-01-23T01:10:36.376590737Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 23 01:10:36.376811 containerd[1977]: time="2026-01-23T01:10:36.376604871Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 23 01:10:36.376811 containerd[1977]: time="2026-01-23T01:10:36.376617129Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 23 01:10:36.376811 containerd[1977]: time="2026-01-23T01:10:36.376652785Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 23 01:10:36.376811 containerd[1977]: time="2026-01-23T01:10:36.376674749Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 23 01:10:36.376811 containerd[1977]: time="2026-01-23T01:10:36.376697008Z" level=info msg="runtime interface created" Jan 23 01:10:36.376811 containerd[1977]: time="2026-01-23T01:10:36.376703859Z" level=info msg="created NRI interface" Jan 23 01:10:36.376811 containerd[1977]: time="2026-01-23T01:10:36.376715417Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 23 01:10:36.376811 containerd[1977]: time="2026-01-23T01:10:36.376735264Z" level=info msg="Connect containerd service" Jan 23 01:10:36.376811 containerd[1977]: time="2026-01-23T01:10:36.376770856Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 23 01:10:36.384295 containerd[1977]: time="2026-01-23T01:10:36.380310668Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 23 01:10:36.397301 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 23 01:10:36.406032 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 23 01:10:36.433922 polkitd[2074]: Started polkitd version 126 Jan 23 01:10:36.442891 systemd-networkd[1797]: eth0: Gained IPv6LL Jan 23 01:10:36.451741 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 23 01:10:36.453066 systemd[1]: Reached target network-online.target - Network is Online. Jan 23 01:10:36.456766 polkitd[2074]: Loading rules from directory /etc/polkit-1/rules.d Jan 23 01:10:36.458325 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Jan 23 01:10:36.463819 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 23 01:10:36.474803 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 23 01:10:36.476418 polkitd[2074]: Loading rules from directory /run/polkit-1/rules.d Jan 23 01:10:36.476514 polkitd[2074]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jan 23 01:10:36.476957 polkitd[2074]: Loading rules from directory /usr/local/share/polkit-1/rules.d Jan 23 01:10:36.476993 polkitd[2074]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jan 23 01:10:36.477042 polkitd[2074]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 23 01:10:36.485294 polkitd[2074]: Finished loading, compiling and executing 2 rules Jan 23 01:10:36.494920 systemd[1]: Started polkit.service - Authorization Manager. Jan 23 01:10:36.498996 dbus-daemon[1936]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 23 01:10:36.503962 polkitd[2074]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 23 01:10:36.514316 systemd[1]: issuegen.service: Deactivated successfully. Jan 23 01:10:36.516587 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 23 01:10:36.522200 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 23 01:10:36.544668 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 23 01:10:36.613563 systemd-hostnamed[2001]: Hostname set to (transient) Jan 23 01:10:36.614739 systemd-resolved[1801]: System hostname changed to 'ip-172-31-20-240'. Jan 23 01:10:36.620704 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 23 01:10:36.626284 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 23 01:10:36.632395 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 23 01:10:36.634871 systemd[1]: Reached target getty.target - Login Prompts. Jan 23 01:10:36.650954 amazon-ssm-agent[2166]: Initializing new seelog logger Jan 23 01:10:36.653511 amazon-ssm-agent[2166]: New Seelog Logger Creation Complete Jan 23 01:10:36.653511 amazon-ssm-agent[2166]: 2026/01/23 01:10:36 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 01:10:36.653511 amazon-ssm-agent[2166]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 01:10:36.653511 amazon-ssm-agent[2166]: 2026/01/23 01:10:36 processing appconfig overrides Jan 23 01:10:36.653511 amazon-ssm-agent[2166]: 2026/01/23 01:10:36 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 01:10:36.653511 amazon-ssm-agent[2166]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 01:10:36.653511 amazon-ssm-agent[2166]: 2026/01/23 01:10:36 processing appconfig overrides Jan 23 01:10:36.653511 amazon-ssm-agent[2166]: 2026/01/23 01:10:36 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 01:10:36.653511 amazon-ssm-agent[2166]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 01:10:36.653511 amazon-ssm-agent[2166]: 2026/01/23 01:10:36 processing appconfig overrides Jan 23 01:10:36.653511 amazon-ssm-agent[2166]: 2026-01-23 01:10:36.6524 INFO Proxy environment variables: Jan 23 01:10:36.656448 systemd-coredump[2136]: Process 2088 (ntpd) of user 0 dumped core. Module libnss_usrfiles.so.2 without build-id. Module libgcc_s.so.1 without build-id. Module ld-linux-x86-64.so.2 without build-id. Module libc.so.6 without build-id. 
Module libcrypto.so.3 without build-id. Module libm.so.6 without build-id. Module libcap.so.2 without build-id. Module ntpd without build-id. Stack trace of thread 2088: #0 0x000056394bd5caeb n/a (ntpd + 0x68aeb) #1 0x000056394bd05cdf n/a (ntpd + 0x11cdf) #2 0x000056394bd06575 n/a (ntpd + 0x12575) #3 0x000056394bd01d8a n/a (ntpd + 0xdd8a) #4 0x000056394bd035d3 n/a (ntpd + 0xf5d3) #5 0x000056394bd0bfd1 n/a (ntpd + 0x17fd1) #6 0x000056394bcfcc2d n/a (ntpd + 0x8c2d) #7 0x00007f1ce499916c n/a (libc.so.6 + 0x2716c) #8 0x00007f1ce4999229 __libc_start_main (libc.so.6 + 0x27229) #9 0x000056394bcfcc55 n/a (ntpd + 0x8c55) ELF object binary architecture: AMD x86-64 Jan 23 01:10:36.660097 systemd[1]: ntpd.service: Main process exited, code=dumped, status=11/SEGV Jan 23 01:10:36.663673 amazon-ssm-agent[2166]: 2026/01/23 01:10:36 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 01:10:36.663673 amazon-ssm-agent[2166]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 01:10:36.663673 amazon-ssm-agent[2166]: 2026/01/23 01:10:36 processing appconfig overrides Jan 23 01:10:36.660289 systemd[1]: ntpd.service: Failed with result 'core-dump'. Jan 23 01:10:36.667954 systemd[1]: systemd-coredump@1-2132-0.service: Deactivated successfully. Jan 23 01:10:36.753504 amazon-ssm-agent[2166]: 2026-01-23 01:10:36.6524 INFO https_proxy: Jan 23 01:10:36.785174 containerd[1977]: time="2026-01-23T01:10:36.784715124Z" level=info msg="Start subscribing containerd event" Jan 23 01:10:36.785174 containerd[1977]: time="2026-01-23T01:10:36.784790094Z" level=info msg="Start recovering state" Jan 23 01:10:36.785174 containerd[1977]: time="2026-01-23T01:10:36.784959989Z" level=info msg="Start event monitor" Jan 23 01:10:36.785174 containerd[1977]: time="2026-01-23T01:10:36.784976952Z" level=info msg="Start cni network conf syncer for default" Jan 23 01:10:36.785174 containerd[1977]: time="2026-01-23T01:10:36.784988150Z" level=info msg="Start streaming server" Jan 23 01:10:36.785174 containerd[1977]: time="2026-01-23T01:10:36.785020692Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 23 01:10:36.785174 containerd[1977]: time="2026-01-23T01:10:36.785030813Z" level=info msg="runtime interface starting up..." Jan 23 01:10:36.785174 containerd[1977]: time="2026-01-23T01:10:36.785039145Z" level=info msg="starting plugins..." Jan 23 01:10:36.785174 containerd[1977]: time="2026-01-23T01:10:36.785055077Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 23 01:10:36.786774 containerd[1977]: time="2026-01-23T01:10:36.786126807Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 23 01:10:36.786774 containerd[1977]: time="2026-01-23T01:10:36.786197861Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 23 01:10:36.786387 systemd[1]: Started containerd.service - containerd container runtime. Jan 23 01:10:36.787638 containerd[1977]: time="2026-01-23T01:10:36.787531142Z" level=info msg="containerd successfully booted in 0.479117s" Jan 23 01:10:36.789182 systemd[1]: ntpd.service: Scheduled restart job, restart counter is at 2. Jan 23 01:10:36.794349 systemd[1]: Started ntpd.service - Network Time Service. 
Jan 23 01:10:36.841306 ntpd[2216]: ntpd 4.2.8p18@1.4062-o Thu Jan 22 21:35:52 UTC 2026 (1): Starting Jan 23 01:10:36.841383 ntpd[2216]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 23 01:10:36.841817 ntpd[2216]: 23 Jan 01:10:36 ntpd[2216]: ntpd 4.2.8p18@1.4062-o Thu Jan 22 21:35:52 UTC 2026 (1): Starting Jan 23 01:10:36.841817 ntpd[2216]: 23 Jan 01:10:36 ntpd[2216]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 23 01:10:36.841817 ntpd[2216]: 23 Jan 01:10:36 ntpd[2216]: ---------------------------------------------------- Jan 23 01:10:36.841817 ntpd[2216]: 23 Jan 01:10:36 ntpd[2216]: ntp-4 is maintained by Network Time Foundation, Jan 23 01:10:36.841817 ntpd[2216]: 23 Jan 01:10:36 ntpd[2216]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 23 01:10:36.841817 ntpd[2216]: 23 Jan 01:10:36 ntpd[2216]: corporation. Support and training for ntp-4 are Jan 23 01:10:36.841817 ntpd[2216]: 23 Jan 01:10:36 ntpd[2216]: available at https://www.nwtime.org/support Jan 23 01:10:36.841817 ntpd[2216]: 23 Jan 01:10:36 ntpd[2216]: ---------------------------------------------------- Jan 23 01:10:36.841392 ntpd[2216]: ---------------------------------------------------- Jan 23 01:10:36.842427 ntpd[2216]: 23 Jan 01:10:36 ntpd[2216]: proto: precision = 0.095 usec (-23) Jan 23 01:10:36.841401 ntpd[2216]: ntp-4 is maintained by Network Time Foundation, Jan 23 01:10:36.841410 ntpd[2216]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 23 01:10:36.841419 ntpd[2216]: corporation. Support and training for ntp-4 are Jan 23 01:10:36.842631 ntpd[2216]: 23 Jan 01:10:36 ntpd[2216]: basedate set to 2026-01-10 Jan 23 01:10:36.842631 ntpd[2216]: 23 Jan 01:10:36 ntpd[2216]: gps base set to 2026-01-11 (week 2401) Jan 23 01:10:36.841427 ntpd[2216]: available at https://www.nwtime.org/support Jan 23 01:10:36.842744 ntpd[2216]: 23 Jan 01:10:36 ntpd[2216]: Listen and drop on 0 v6wildcard [::]:123 Jan 23 01:10:36.842744 ntpd[2216]: 23 Jan 01:10:36 ntpd[2216]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 23 01:10:36.841436 ntpd[2216]: ---------------------------------------------------- Jan 23 01:10:36.842234 ntpd[2216]: proto: precision = 0.095 usec (-23) Jan 23 01:10:36.842981 ntpd[2216]: 23 Jan 01:10:36 ntpd[2216]: Listen normally on 2 lo 127.0.0.1:123 Jan 23 01:10:36.842981 ntpd[2216]: 23 Jan 01:10:36 ntpd[2216]: Listen normally on 3 eth0 172.31.20.240:123 Jan 23 01:10:36.842981 ntpd[2216]: 23 Jan 01:10:36 ntpd[2216]: Listen normally on 4 lo [::1]:123 Jan 23 01:10:36.842981 ntpd[2216]: 23 Jan 01:10:36 ntpd[2216]: Listen normally on 5 eth0 [fe80::474:66ff:fed9:7225%2]:123 Jan 23 01:10:36.842981 ntpd[2216]: 23 Jan 01:10:36 ntpd[2216]: Listening on routing socket on fd #22 for interface updates Jan 23 01:10:36.842523 ntpd[2216]: basedate set to 2026-01-10 Jan 23 01:10:36.845366 ntpd[2216]: 23 Jan 01:10:36 ntpd[2216]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 23 01:10:36.845366 ntpd[2216]: 23 Jan 01:10:36 ntpd[2216]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 23 01:10:36.842538 ntpd[2216]: gps base set to 2026-01-11 (week 2401) Jan 23 01:10:36.842633 ntpd[2216]: Listen and drop on 0 v6wildcard [::]:123 Jan 23 01:10:36.842662 ntpd[2216]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 23 01:10:36.842850 ntpd[2216]: Listen normally on 2 lo 127.0.0.1:123 Jan 23 01:10:36.842878 ntpd[2216]: Listen normally on 3 eth0 172.31.20.240:123 Jan 23 01:10:36.842909 ntpd[2216]: Listen normally on 4 lo [::1]:123 Jan 23 01:10:36.842939 ntpd[2216]: Listen normally on 5 eth0 
[fe80::474:66ff:fed9:7225%2]:123 Jan 23 01:10:36.842966 ntpd[2216]: Listening on routing socket on fd #22 for interface updates Jan 23 01:10:36.844598 ntpd[2216]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 23 01:10:36.844627 ntpd[2216]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 23 01:10:36.851851 amazon-ssm-agent[2166]: 2026-01-23 01:10:36.6524 INFO http_proxy: Jan 23 01:10:36.931591 amazon-ssm-agent[2166]: 2026/01/23 01:10:36 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 01:10:36.931755 amazon-ssm-agent[2166]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 01:10:36.931937 amazon-ssm-agent[2166]: 2026/01/23 01:10:36 processing appconfig overrides Jan 23 01:10:36.949551 amazon-ssm-agent[2166]: 2026-01-23 01:10:36.6524 INFO no_proxy: Jan 23 01:10:36.973011 amazon-ssm-agent[2166]: 2026-01-23 01:10:36.6525 INFO Checking if agent identity type OnPrem can be assumed Jan 23 01:10:36.973011 amazon-ssm-agent[2166]: 2026-01-23 01:10:36.6527 INFO Checking if agent identity type EC2 can be assumed Jan 23 01:10:36.973011 amazon-ssm-agent[2166]: 2026-01-23 01:10:36.7354 INFO Agent will take identity from EC2 Jan 23 01:10:36.973011 amazon-ssm-agent[2166]: 2026-01-23 01:10:36.7369 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.3.0.0 Jan 23 01:10:36.973011 amazon-ssm-agent[2166]: 2026-01-23 01:10:36.7369 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Jan 23 01:10:36.973236 amazon-ssm-agent[2166]: 2026-01-23 01:10:36.7369 INFO [amazon-ssm-agent] Starting Core Agent Jan 23 01:10:36.973236 amazon-ssm-agent[2166]: 2026-01-23 01:10:36.7369 INFO [amazon-ssm-agent] Registrar detected. Attempting registration Jan 23 01:10:36.973236 amazon-ssm-agent[2166]: 2026-01-23 01:10:36.7370 INFO [Registrar] Starting registrar module Jan 23 01:10:36.973236 amazon-ssm-agent[2166]: 2026-01-23 01:10:36.7382 INFO [EC2Identity] Checking disk for registration info Jan 23 01:10:36.973236 amazon-ssm-agent[2166]: 2026-01-23 01:10:36.7382 INFO [EC2Identity] No registration info found for ec2 instance, attempting registration Jan 23 01:10:36.973236 amazon-ssm-agent[2166]: 2026-01-23 01:10:36.7382 INFO [EC2Identity] Generating registration keypair Jan 23 01:10:36.973236 amazon-ssm-agent[2166]: 2026-01-23 01:10:36.8888 INFO [EC2Identity] Checking write access before registering Jan 23 01:10:36.973236 amazon-ssm-agent[2166]: 2026-01-23 01:10:36.8892 INFO [EC2Identity] Registering EC2 instance with Systems Manager Jan 23 01:10:36.973236 amazon-ssm-agent[2166]: 2026-01-23 01:10:36.9313 INFO [EC2Identity] EC2 registration was successful. Jan 23 01:10:36.973236 amazon-ssm-agent[2166]: 2026-01-23 01:10:36.9313 INFO [amazon-ssm-agent] Registration attempted. Resuming core agent startup. 
Jan 23 01:10:36.973236 amazon-ssm-agent[2166]: 2026-01-23 01:10:36.9314 INFO [CredentialRefresher] credentialRefresher has started Jan 23 01:10:36.973236 amazon-ssm-agent[2166]: 2026-01-23 01:10:36.9314 INFO [CredentialRefresher] Starting credentials refresher loop Jan 23 01:10:36.973236 amazon-ssm-agent[2166]: 2026-01-23 01:10:36.9727 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jan 23 01:10:36.973236 amazon-ssm-agent[2166]: 2026-01-23 01:10:36.9729 INFO [CredentialRefresher] Credentials ready Jan 23 01:10:37.047142 amazon-ssm-agent[2166]: 2026-01-23 01:10:36.9731 INFO [CredentialRefresher] Next credential rotation will be in 29.999994515416667 minutes Jan 23 01:10:37.984780 amazon-ssm-agent[2166]: 2026-01-23 01:10:37.9846 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jan 23 01:10:38.085648 amazon-ssm-agent[2166]: 2026-01-23 01:10:37.9864 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2223) started Jan 23 01:10:38.186763 amazon-ssm-agent[2166]: 2026-01-23 01:10:37.9865 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jan 23 01:10:38.644616 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 01:10:38.645514 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 23 01:10:38.647588 systemd[1]: Startup finished in 2.734s (kernel) + 6.599s (initrd) + 7.459s (userspace) = 16.794s. Jan 23 01:10:38.655448 (kubelet)[2239]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 01:10:38.787772 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 23 01:10:38.789298 systemd[1]: Started sshd@0-172.31.20.240:22-68.220.241.50:60538.service - OpenSSH per-connection server daemon (68.220.241.50:60538). Jan 23 01:10:39.307525 sshd[2245]: Accepted publickey for core from 68.220.241.50 port 60538 ssh2: RSA SHA256:TjRK9JlVbt43cjCH9yNUnU6Xa0awhPYO1lN4GVbk/WA Jan 23 01:10:39.308332 sshd-session[2245]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:10:39.315215 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 23 01:10:39.316127 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 23 01:10:39.327671 systemd-logind[1946]: New session 1 of user core. Jan 23 01:10:39.343306 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 23 01:10:39.348521 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 23 01:10:39.364580 (systemd)[2254]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 23 01:10:39.367602 systemd-logind[1946]: New session c1 of user core. Jan 23 01:10:39.572490 systemd[2254]: Queued start job for default target default.target. Jan 23 01:10:39.579896 systemd[2254]: Created slice app.slice - User Application Slice. Jan 23 01:10:39.579940 systemd[2254]: Reached target paths.target - Paths. Jan 23 01:10:39.580089 systemd[2254]: Reached target timers.target - Timers. Jan 23 01:10:39.582654 systemd[2254]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 23 01:10:39.598290 systemd[2254]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 23 01:10:39.598596 systemd[2254]: Reached target sockets.target - Sockets. 
Jan 23 01:10:39.598806 systemd[2254]: Reached target basic.target - Basic System. Jan 23 01:10:39.598851 systemd[2254]: Reached target default.target - Main User Target. Jan 23 01:10:39.598880 systemd[2254]: Startup finished in 222ms. Jan 23 01:10:39.598949 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 23 01:10:39.605738 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 23 01:10:39.759618 kubelet[2239]: E0123 01:10:39.759540 2239 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 01:10:39.762250 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 01:10:39.762394 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 01:10:39.762736 systemd[1]: kubelet.service: Consumed 1.042s CPU time, 267.3M memory peak. Jan 23 01:10:39.973270 systemd[1]: Started sshd@1-172.31.20.240:22-68.220.241.50:60546.service - OpenSSH per-connection server daemon (68.220.241.50:60546). Jan 23 01:10:40.465598 sshd[2267]: Accepted publickey for core from 68.220.241.50 port 60546 ssh2: RSA SHA256:TjRK9JlVbt43cjCH9yNUnU6Xa0awhPYO1lN4GVbk/WA Jan 23 01:10:40.467145 sshd-session[2267]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:10:40.473926 systemd-logind[1946]: New session 2 of user core. Jan 23 01:10:40.481724 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 23 01:10:40.814634 sshd[2270]: Connection closed by 68.220.241.50 port 60546 Jan 23 01:10:40.815190 sshd-session[2267]: pam_unix(sshd:session): session closed for user core Jan 23 01:10:40.819378 systemd-logind[1946]: Session 2 logged out. Waiting for processes to exit. Jan 23 01:10:40.820034 systemd[1]: sshd@1-172.31.20.240:22-68.220.241.50:60546.service: Deactivated successfully. Jan 23 01:10:40.821952 systemd[1]: session-2.scope: Deactivated successfully. Jan 23 01:10:40.824142 systemd-logind[1946]: Removed session 2. Jan 23 01:10:40.902282 systemd[1]: Started sshd@2-172.31.20.240:22-68.220.241.50:60560.service - OpenSSH per-connection server daemon (68.220.241.50:60560). Jan 23 01:10:41.391907 sshd[2276]: Accepted publickey for core from 68.220.241.50 port 60560 ssh2: RSA SHA256:TjRK9JlVbt43cjCH9yNUnU6Xa0awhPYO1lN4GVbk/WA Jan 23 01:10:41.393277 sshd-session[2276]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:10:41.400507 systemd-logind[1946]: New session 3 of user core. Jan 23 01:10:41.406748 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 23 01:10:41.739346 sshd[2279]: Connection closed by 68.220.241.50 port 60560 Jan 23 01:10:41.742301 sshd-session[2276]: pam_unix(sshd:session): session closed for user core Jan 23 01:10:41.746743 systemd-logind[1946]: Session 3 logged out. Waiting for processes to exit. Jan 23 01:10:41.747055 systemd[1]: sshd@2-172.31.20.240:22-68.220.241.50:60560.service: Deactivated successfully. Jan 23 01:10:41.750721 systemd[1]: session-3.scope: Deactivated successfully. Jan 23 01:10:41.752455 systemd-logind[1946]: Removed session 3. Jan 23 01:10:41.834363 systemd[1]: Started sshd@3-172.31.20.240:22-68.220.241.50:44276.service - OpenSSH per-connection server daemon (68.220.241.50:44276). 
Jan 23 01:10:42.332920 sshd[2285]: Accepted publickey for core from 68.220.241.50 port 44276 ssh2: RSA SHA256:TjRK9JlVbt43cjCH9yNUnU6Xa0awhPYO1lN4GVbk/WA Jan 23 01:10:42.334701 sshd-session[2285]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:10:42.340881 systemd-logind[1946]: New session 4 of user core. Jan 23 01:10:42.349774 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 23 01:10:42.687775 sshd[2288]: Connection closed by 68.220.241.50 port 44276 Jan 23 01:10:42.688284 sshd-session[2285]: pam_unix(sshd:session): session closed for user core Jan 23 01:10:42.692256 systemd[1]: sshd@3-172.31.20.240:22-68.220.241.50:44276.service: Deactivated successfully. Jan 23 01:10:42.694244 systemd[1]: session-4.scope: Deactivated successfully. Jan 23 01:10:42.695427 systemd-logind[1946]: Session 4 logged out. Waiting for processes to exit. Jan 23 01:10:42.696305 systemd-logind[1946]: Removed session 4. Jan 23 01:10:42.778265 systemd[1]: Started sshd@4-172.31.20.240:22-68.220.241.50:44282.service - OpenSSH per-connection server daemon (68.220.241.50:44282). Jan 23 01:10:43.267327 sshd[2294]: Accepted publickey for core from 68.220.241.50 port 44282 ssh2: RSA SHA256:TjRK9JlVbt43cjCH9yNUnU6Xa0awhPYO1lN4GVbk/WA Jan 23 01:10:43.268720 sshd-session[2294]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:10:43.274048 systemd-logind[1946]: New session 5 of user core. Jan 23 01:10:43.279781 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 23 01:10:43.563347 sudo[2298]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 23 01:10:43.563703 sudo[2298]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 01:10:43.578674 sudo[2298]: pam_unix(sudo:session): session closed for user root Jan 23 01:10:43.654723 sshd[2297]: Connection closed by 68.220.241.50 port 44282 Jan 23 01:10:43.655768 sshd-session[2294]: pam_unix(sshd:session): session closed for user core Jan 23 01:10:43.660287 systemd[1]: sshd@4-172.31.20.240:22-68.220.241.50:44282.service: Deactivated successfully. Jan 23 01:10:43.662199 systemd[1]: session-5.scope: Deactivated successfully. Jan 23 01:10:43.663016 systemd-logind[1946]: Session 5 logged out. Waiting for processes to exit. Jan 23 01:10:43.664462 systemd-logind[1946]: Removed session 5. Jan 23 01:10:43.752497 systemd[1]: Started sshd@5-172.31.20.240:22-68.220.241.50:44296.service - OpenSSH per-connection server daemon (68.220.241.50:44296). Jan 23 01:10:44.924966 systemd-resolved[1801]: Clock change detected. Flushing caches. Jan 23 01:10:45.322398 sshd[2304]: Accepted publickey for core from 68.220.241.50 port 44296 ssh2: RSA SHA256:TjRK9JlVbt43cjCH9yNUnU6Xa0awhPYO1lN4GVbk/WA Jan 23 01:10:45.323841 sshd-session[2304]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:10:45.331814 systemd-logind[1946]: New session 6 of user core. Jan 23 01:10:45.335628 systemd[1]: Started session-6.scope - Session 6 of User core. 
Jan 23 01:10:45.594838 sudo[2309]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 23 01:10:45.595341 sudo[2309]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 01:10:45.600888 sudo[2309]: pam_unix(sudo:session): session closed for user root Jan 23 01:10:45.607207 sudo[2308]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 23 01:10:45.607599 sudo[2308]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 01:10:45.618553 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 23 01:10:45.659904 augenrules[2331]: No rules Jan 23 01:10:45.661420 systemd[1]: audit-rules.service: Deactivated successfully. Jan 23 01:10:45.661724 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 23 01:10:45.663694 sudo[2308]: pam_unix(sudo:session): session closed for user root Jan 23 01:10:45.740321 sshd[2307]: Connection closed by 68.220.241.50 port 44296 Jan 23 01:10:45.741430 sshd-session[2304]: pam_unix(sshd:session): session closed for user core Jan 23 01:10:45.745679 systemd[1]: sshd@5-172.31.20.240:22-68.220.241.50:44296.service: Deactivated successfully. Jan 23 01:10:45.747533 systemd[1]: session-6.scope: Deactivated successfully. Jan 23 01:10:45.748265 systemd-logind[1946]: Session 6 logged out. Waiting for processes to exit. Jan 23 01:10:45.750163 systemd-logind[1946]: Removed session 6. Jan 23 01:10:45.828469 systemd[1]: Started sshd@6-172.31.20.240:22-68.220.241.50:44298.service - OpenSSH per-connection server daemon (68.220.241.50:44298). Jan 23 01:10:46.330715 sshd[2340]: Accepted publickey for core from 68.220.241.50 port 44298 ssh2: RSA SHA256:TjRK9JlVbt43cjCH9yNUnU6Xa0awhPYO1lN4GVbk/WA Jan 23 01:10:46.331329 sshd-session[2340]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:10:46.337487 systemd-logind[1946]: New session 7 of user core. Jan 23 01:10:46.347499 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 23 01:10:46.609414 sudo[2344]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 23 01:10:46.609803 sudo[2344]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 01:10:47.608202 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 01:10:47.608461 systemd[1]: kubelet.service: Consumed 1.042s CPU time, 267.3M memory peak. Jan 23 01:10:47.611850 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 01:10:47.654901 systemd[1]: Reload requested from client PID 2377 ('systemctl') (unit session-7.scope)... Jan 23 01:10:47.654919 systemd[1]: Reloading... Jan 23 01:10:47.795256 zram_generator::config[2421]: No configuration found. Jan 23 01:10:48.075697 systemd[1]: Reloading finished in 420 ms. Jan 23 01:10:48.130772 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 23 01:10:48.130854 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 23 01:10:48.131279 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 01:10:48.131330 systemd[1]: kubelet.service: Consumed 153ms CPU time, 98.3M memory peak. Jan 23 01:10:48.133197 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 01:10:48.512022 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
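The sequence above (kubelet stopped, "Reload requested from client PID 2377 ('systemctl')", then kubelet started again) is the usual stop / daemon-reload / start pattern after dropping in new unit configuration. The contents of /home/core/install.sh are not shown in the log, so the following Python wrapper is only an illustration of that sequence, not the script itself:

#!/usr/bin/env python3
"""Sketch: stop a unit, reload systemd's unit files, start the unit again."""
import subprocess

UNIT = "kubelet.service"  # unit name taken from the log; the surrounding script is hypothetical

def run(*cmd: str) -> None:
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    run("systemctl", "stop", UNIT)
    run("systemctl", "daemon-reload")  # matches the "Reload requested ... Reloading..." entries above
    run("systemctl", "start", UNIT)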
Jan 23 01:10:48.522664 (kubelet)[2484]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 23 01:10:48.570277 kubelet[2484]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 01:10:48.570277 kubelet[2484]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 23 01:10:48.570277 kubelet[2484]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 01:10:48.570277 kubelet[2484]: I0123 01:10:48.569627 2484 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 01:10:49.153050 kubelet[2484]: I0123 01:10:49.152998 2484 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 23 01:10:49.153050 kubelet[2484]: I0123 01:10:49.153037 2484 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 01:10:49.153483 kubelet[2484]: I0123 01:10:49.153456 2484 server.go:954] "Client rotation is on, will bootstrap in background" Jan 23 01:10:49.191050 kubelet[2484]: I0123 01:10:49.190993 2484 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 23 01:10:49.204902 kubelet[2484]: I0123 01:10:49.204855 2484 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 23 01:10:49.207437 kubelet[2484]: I0123 01:10:49.207410 2484 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 23 01:10:49.208577 kubelet[2484]: I0123 01:10:49.208527 2484 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 23 01:10:49.208887 kubelet[2484]: I0123 01:10:49.208576 2484 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172.31.20.240","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 23 01:10:49.208887 kubelet[2484]: I0123 01:10:49.208878 2484 topology_manager.go:138] "Creating topology manager with none policy" Jan 23 01:10:49.208887 kubelet[2484]: I0123 01:10:49.208895 2484 container_manager_linux.go:304] "Creating device plugin manager" Jan 23 01:10:49.209063 kubelet[2484]: I0123 01:10:49.209016 2484 state_mem.go:36] "Initialized new in-memory state store" Jan 23 01:10:49.214299 kubelet[2484]: I0123 01:10:49.214269 2484 kubelet.go:446] "Attempting to sync node with API server" Jan 23 01:10:49.214299 kubelet[2484]: I0123 01:10:49.214305 2484 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 23 01:10:49.214416 kubelet[2484]: I0123 01:10:49.214350 2484 kubelet.go:352] "Adding apiserver pod source" Jan 23 01:10:49.214416 kubelet[2484]: I0123 01:10:49.214362 2484 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 23 01:10:49.219718 kubelet[2484]: E0123 01:10:49.218851 2484 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:10:49.219718 kubelet[2484]: E0123 01:10:49.218886 2484 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:10:49.219718 kubelet[2484]: I0123 01:10:49.219218 2484 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Jan 23 01:10:49.219718 kubelet[2484]: I0123 01:10:49.219615 2484 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 23 01:10:49.220623 kubelet[2484]: W0123 01:10:49.220575 2484 probe.go:272] 
Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 23 01:10:49.223392 kubelet[2484]: I0123 01:10:49.223068 2484 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 23 01:10:49.223392 kubelet[2484]: I0123 01:10:49.223103 2484 server.go:1287] "Started kubelet" Jan 23 01:10:49.225320 kubelet[2484]: I0123 01:10:49.225280 2484 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 23 01:10:49.226658 kubelet[2484]: I0123 01:10:49.226150 2484 server.go:479] "Adding debug handlers to kubelet server" Jan 23 01:10:49.228350 kubelet[2484]: I0123 01:10:49.228309 2484 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 23 01:10:49.231085 kubelet[2484]: I0123 01:10:49.230995 2484 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 23 01:10:49.231617 kubelet[2484]: I0123 01:10:49.231563 2484 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 23 01:10:49.233106 kubelet[2484]: W0123 01:10:49.233076 2484 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "172.31.20.240" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Jan 23 01:10:49.233177 kubelet[2484]: E0123 01:10:49.233122 2484 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"172.31.20.240\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Jan 23 01:10:49.234411 kubelet[2484]: W0123 01:10:49.234373 2484 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Jan 23 01:10:49.234498 kubelet[2484]: E0123 01:10:49.234410 2484 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Jan 23 01:10:49.239460 kubelet[2484]: E0123 01:10:49.237674 2484 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.31.20.240.188d36f3c432b1e7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.31.20.240,UID:172.31.20.240,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172.31.20.240,},FirstTimestamp:2026-01-23 01:10:49.223082471 +0000 UTC m=+0.694454792,LastTimestamp:2026-01-23 01:10:49.223082471 +0000 UTC m=+0.694454792,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.20.240,}" Jan 23 01:10:49.239944 kubelet[2484]: I0123 01:10:49.239841 2484 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 23 01:10:49.242721 kubelet[2484]: E0123 01:10:49.242650 2484 kubelet_node_status.go:466] "Error getting the current 
node from lister" err="node \"172.31.20.240\" not found" Jan 23 01:10:49.242721 kubelet[2484]: I0123 01:10:49.242683 2484 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 23 01:10:49.242950 kubelet[2484]: I0123 01:10:49.242912 2484 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 23 01:10:49.243048 kubelet[2484]: I0123 01:10:49.242962 2484 reconciler.go:26] "Reconciler: start to sync state" Jan 23 01:10:49.244467 kubelet[2484]: E0123 01:10:49.244441 2484 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 23 01:10:49.245810 kubelet[2484]: I0123 01:10:49.245385 2484 factory.go:221] Registration of the systemd container factory successfully Jan 23 01:10:49.245810 kubelet[2484]: I0123 01:10:49.245511 2484 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 23 01:10:49.248397 kubelet[2484]: I0123 01:10:49.248356 2484 factory.go:221] Registration of the containerd container factory successfully Jan 23 01:10:49.270248 kubelet[2484]: I0123 01:10:49.269523 2484 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 23 01:10:49.270248 kubelet[2484]: I0123 01:10:49.269545 2484 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 23 01:10:49.270248 kubelet[2484]: I0123 01:10:49.269568 2484 state_mem.go:36] "Initialized new in-memory state store" Jan 23 01:10:49.275675 kubelet[2484]: W0123 01:10:49.275103 2484 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Jan 23 01:10:49.275675 kubelet[2484]: E0123 01:10:49.275145 2484 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" Jan 23 01:10:49.276018 kubelet[2484]: I0123 01:10:49.276000 2484 policy_none.go:49] "None policy: Start" Jan 23 01:10:49.276082 kubelet[2484]: I0123 01:10:49.276035 2484 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 23 01:10:49.276082 kubelet[2484]: I0123 01:10:49.276050 2484 state_mem.go:35] "Initializing new in-memory state store" Jan 23 01:10:49.276622 kubelet[2484]: E0123 01:10:49.276510 2484 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.31.20.240.188d36f3c57868eb default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.31.20.240,UID:172.31.20.240,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:172.31.20.240,},FirstTimestamp:2026-01-23 01:10:49.244428523 +0000 UTC m=+0.715800860,LastTimestamp:2026-01-23 01:10:49.244428523 +0000 UTC m=+0.715800860,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.20.240,}" Jan 23 
01:10:49.280372 kubelet[2484]: E0123 01:10:49.279275 2484 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"172.31.20.240\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Jan 23 01:10:49.289941 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 23 01:10:49.317090 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 23 01:10:49.320504 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 23 01:10:49.331165 kubelet[2484]: I0123 01:10:49.331142 2484 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 23 01:10:49.331592 kubelet[2484]: I0123 01:10:49.331533 2484 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 01:10:49.331592 kubelet[2484]: I0123 01:10:49.331548 2484 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 01:10:49.332480 kubelet[2484]: I0123 01:10:49.332403 2484 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 01:10:49.333055 kubelet[2484]: E0123 01:10:49.332875 2484 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 23 01:10:49.333055 kubelet[2484]: E0123 01:10:49.332913 2484 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.31.20.240\" not found" Jan 23 01:10:49.355467 kubelet[2484]: I0123 01:10:49.355404 2484 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 23 01:10:49.357391 kubelet[2484]: I0123 01:10:49.357309 2484 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 23 01:10:49.357391 kubelet[2484]: I0123 01:10:49.357354 2484 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 23 01:10:49.357391 kubelet[2484]: I0123 01:10:49.357381 2484 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
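The eviction manager started above enforces the HardEvictionThresholds embedded in the NodeConfig dump earlier in the log (memory.available < 100Mi, nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%, imagefs.inodesFree < 5%). A small sketch of how such thresholds are evaluated against observed stats; the thresholds are copied from the log, the observed numbers are invented for illustration:

#!/usr/bin/env python3
"""Sketch: evaluate kubelet-style hard eviction thresholds against node stats."""

# Thresholds copied from the NodeConfig dump in the log; quantities in bytes, percentages as fractions.
THRESHOLDS = {
    "memory.available":   ("quantity", 100 * 1024 * 1024),  # 100Mi
    "nodefs.available":   ("percentage", 0.10),
    "nodefs.inodesFree":  ("percentage", 0.05),
    "imagefs.available":  ("percentage", 0.15),
    "imagefs.inodesFree": ("percentage", 0.05),
}

def breached(signal: str, available: float, capacity: float) -> bool:
    """True if the observed 'available' amount falls below the configured threshold."""
    kind, value = THRESHOLDS[signal]
    limit = value if kind == "quantity" else value * capacity
    return available < limit

if __name__ == "__main__":
    # Hypothetical observations: 80Mi free memory on a 2Gi node, 20% free disk on a 50Gi filesystem.
    print(breached("memory.available", 80 * 1024**2, 2 * 1024**3))          # True  -> would trigger eviction
    print(breached("nodefs.available", 0.20 * 50 * 1024**3, 50 * 1024**3))  # False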
Jan 23 01:10:49.357391 kubelet[2484]: I0123 01:10:49.357387 2484 kubelet.go:2382] "Starting kubelet main sync loop" Jan 23 01:10:49.357566 kubelet[2484]: E0123 01:10:49.357448 2484 kubelet.go:2406] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Jan 23 01:10:49.434593 kubelet[2484]: I0123 01:10:49.434479 2484 kubelet_node_status.go:75] "Attempting to register node" node="172.31.20.240" Jan 23 01:10:49.443589 kubelet[2484]: I0123 01:10:49.443558 2484 kubelet_node_status.go:78] "Successfully registered node" node="172.31.20.240" Jan 23 01:10:49.443589 kubelet[2484]: E0123 01:10:49.443590 2484 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"172.31.20.240\": node \"172.31.20.240\" not found" Jan 23 01:10:49.475167 kubelet[2484]: E0123 01:10:49.475122 2484 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172.31.20.240\" not found" Jan 23 01:10:49.563338 sudo[2344]: pam_unix(sudo:session): session closed for user root Jan 23 01:10:49.576197 kubelet[2484]: E0123 01:10:49.576094 2484 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172.31.20.240\" not found" Jan 23 01:10:49.640725 sshd[2343]: Connection closed by 68.220.241.50 port 44298 Jan 23 01:10:49.641293 sshd-session[2340]: pam_unix(sshd:session): session closed for user core Jan 23 01:10:49.645393 systemd[1]: sshd@6-172.31.20.240:22-68.220.241.50:44298.service: Deactivated successfully. Jan 23 01:10:49.647494 systemd[1]: session-7.scope: Deactivated successfully. Jan 23 01:10:49.647672 systemd[1]: session-7.scope: Consumed 495ms CPU time, 74.7M memory peak. Jan 23 01:10:49.648878 systemd-logind[1946]: Session 7 logged out. Waiting for processes to exit. Jan 23 01:10:49.650106 systemd-logind[1946]: Removed session 7. 
Jan 23 01:10:49.677126 kubelet[2484]: E0123 01:10:49.677018 2484 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172.31.20.240\" not found" Jan 23 01:10:49.777316 kubelet[2484]: E0123 01:10:49.777191 2484 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172.31.20.240\" not found" Jan 23 01:10:49.878216 kubelet[2484]: E0123 01:10:49.878171 2484 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172.31.20.240\" not found" Jan 23 01:10:49.979058 kubelet[2484]: E0123 01:10:49.978993 2484 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172.31.20.240\" not found" Jan 23 01:10:50.080131 kubelet[2484]: E0123 01:10:50.080009 2484 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172.31.20.240\" not found" Jan 23 01:10:50.155925 kubelet[2484]: I0123 01:10:50.155862 2484 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 23 01:10:50.156072 kubelet[2484]: W0123 01:10:50.156045 2484 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 23 01:10:50.181208 kubelet[2484]: E0123 01:10:50.181132 2484 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172.31.20.240\" not found" Jan 23 01:10:50.218331 kubelet[2484]: I0123 01:10:50.218292 2484 apiserver.go:52] "Watching apiserver" Jan 23 01:10:50.219405 kubelet[2484]: E0123 01:10:50.219371 2484 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:10:50.222656 kubelet[2484]: E0123 01:10:50.222618 2484 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wtjjp" podUID="4268a9df-0451-4ff6-8f73-e9f18c886e93" Jan 23 01:10:50.234359 systemd[1]: Created slice kubepods-besteffort-pod56611bb2_5306_414c_ac78_bccf3036ca90.slice - libcontainer container kubepods-besteffort-pod56611bb2_5306_414c_ac78_bccf3036ca90.slice. Jan 23 01:10:50.244151 kubelet[2484]: I0123 01:10:50.244123 2484 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 23 01:10:50.246398 systemd[1]: Created slice kubepods-besteffort-pod43393ae7_96fb_40cb_a2cc_ab33e2cc333a.slice - libcontainer container kubepods-besteffort-pod43393ae7_96fb_40cb_a2cc_ab33e2cc333a.slice. 
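The kubepods-besteffort-pod…slice units created above are how the systemd cgroup driver names per-pod cgroups: the pod's QoS class picks the parent slice and the pod UID is embedded with its dashes replaced by underscores (systemd treats "-" as the slice hierarchy separator). A small sketch of that mapping, using the kube-proxy pod's UID from the log; the helper name is illustrative, not kubelet's actual code:

#!/usr/bin/env python3
"""Sketch: derive the systemd slice name used for a pod's cgroup, as seen in the log."""

def pod_slice(uid: str, qos: str = "besteffort") -> str:
    # Guaranteed pods sit directly under kubepods.slice; burstable/besteffort get a QoS sub-slice.
    prefix = "kubepods" if qos == "guaranteed" else f"kubepods-{qos}"
    return f"{prefix}-pod{uid.replace('-', '_')}.slice"

if __name__ == "__main__":
    # UID of kube-proxy-mrj7g as it appears in the log.
    print(pod_slice("56611bb2-5306-414c-ac78-bccf3036ca90"))
    # -> kubepods-besteffort-pod56611bb2_5306_414c_ac78_bccf3036ca90.slice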
Jan 23 01:10:50.249131 kubelet[2484]: I0123 01:10:50.249089 2484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/43393ae7-96fb-40cb-a2cc-ab33e2cc333a-lib-modules\") pod \"calico-node-rqqws\" (UID: \"43393ae7-96fb-40cb-a2cc-ab33e2cc333a\") " pod="calico-system/calico-node-rqqws" Jan 23 01:10:50.249273 kubelet[2484]: I0123 01:10:50.249133 2484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/43393ae7-96fb-40cb-a2cc-ab33e2cc333a-node-certs\") pod \"calico-node-rqqws\" (UID: \"43393ae7-96fb-40cb-a2cc-ab33e2cc333a\") " pod="calico-system/calico-node-rqqws" Jan 23 01:10:50.249273 kubelet[2484]: I0123 01:10:50.249156 2484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/43393ae7-96fb-40cb-a2cc-ab33e2cc333a-policysync\") pod \"calico-node-rqqws\" (UID: \"43393ae7-96fb-40cb-a2cc-ab33e2cc333a\") " pod="calico-system/calico-node-rqqws" Jan 23 01:10:50.249273 kubelet[2484]: I0123 01:10:50.249180 2484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/43393ae7-96fb-40cb-a2cc-ab33e2cc333a-var-run-calico\") pod \"calico-node-rqqws\" (UID: \"43393ae7-96fb-40cb-a2cc-ab33e2cc333a\") " pod="calico-system/calico-node-rqqws" Jan 23 01:10:50.249273 kubelet[2484]: I0123 01:10:50.249211 2484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/43393ae7-96fb-40cb-a2cc-ab33e2cc333a-xtables-lock\") pod \"calico-node-rqqws\" (UID: \"43393ae7-96fb-40cb-a2cc-ab33e2cc333a\") " pod="calico-system/calico-node-rqqws" Jan 23 01:10:50.249273 kubelet[2484]: I0123 01:10:50.249257 2484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bwjn\" (UniqueName: \"kubernetes.io/projected/56611bb2-5306-414c-ac78-bccf3036ca90-kube-api-access-6bwjn\") pod \"kube-proxy-mrj7g\" (UID: \"56611bb2-5306-414c-ac78-bccf3036ca90\") " pod="kube-system/kube-proxy-mrj7g" Jan 23 01:10:50.249493 kubelet[2484]: I0123 01:10:50.249281 2484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/43393ae7-96fb-40cb-a2cc-ab33e2cc333a-cni-bin-dir\") pod \"calico-node-rqqws\" (UID: \"43393ae7-96fb-40cb-a2cc-ab33e2cc333a\") " pod="calico-system/calico-node-rqqws" Jan 23 01:10:50.249493 kubelet[2484]: I0123 01:10:50.249310 2484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/43393ae7-96fb-40cb-a2cc-ab33e2cc333a-var-lib-calico\") pod \"calico-node-rqqws\" (UID: \"43393ae7-96fb-40cb-a2cc-ab33e2cc333a\") " pod="calico-system/calico-node-rqqws" Jan 23 01:10:50.249493 kubelet[2484]: I0123 01:10:50.249337 2484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/4268a9df-0451-4ff6-8f73-e9f18c886e93-varrun\") pod \"csi-node-driver-wtjjp\" (UID: \"4268a9df-0451-4ff6-8f73-e9f18c886e93\") " pod="calico-system/csi-node-driver-wtjjp" Jan 23 01:10:50.249493 kubelet[2484]: I0123 01:10:50.249361 2484 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72bt8\" (UniqueName: \"kubernetes.io/projected/4268a9df-0451-4ff6-8f73-e9f18c886e93-kube-api-access-72bt8\") pod \"csi-node-driver-wtjjp\" (UID: \"4268a9df-0451-4ff6-8f73-e9f18c886e93\") " pod="calico-system/csi-node-driver-wtjjp" Jan 23 01:10:50.249493 kubelet[2484]: I0123 01:10:50.249387 2484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/56611bb2-5306-414c-ac78-bccf3036ca90-xtables-lock\") pod \"kube-proxy-mrj7g\" (UID: \"56611bb2-5306-414c-ac78-bccf3036ca90\") " pod="kube-system/kube-proxy-mrj7g" Jan 23 01:10:50.249694 kubelet[2484]: I0123 01:10:50.249412 2484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/43393ae7-96fb-40cb-a2cc-ab33e2cc333a-cni-log-dir\") pod \"calico-node-rqqws\" (UID: \"43393ae7-96fb-40cb-a2cc-ab33e2cc333a\") " pod="calico-system/calico-node-rqqws" Jan 23 01:10:50.249694 kubelet[2484]: I0123 01:10:50.249444 2484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/43393ae7-96fb-40cb-a2cc-ab33e2cc333a-cni-net-dir\") pod \"calico-node-rqqws\" (UID: \"43393ae7-96fb-40cb-a2cc-ab33e2cc333a\") " pod="calico-system/calico-node-rqqws" Jan 23 01:10:50.249694 kubelet[2484]: I0123 01:10:50.249467 2484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/43393ae7-96fb-40cb-a2cc-ab33e2cc333a-flexvol-driver-host\") pod \"calico-node-rqqws\" (UID: \"43393ae7-96fb-40cb-a2cc-ab33e2cc333a\") " pod="calico-system/calico-node-rqqws" Jan 23 01:10:50.249694 kubelet[2484]: I0123 01:10:50.249491 2484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2j74\" (UniqueName: \"kubernetes.io/projected/43393ae7-96fb-40cb-a2cc-ab33e2cc333a-kube-api-access-q2j74\") pod \"calico-node-rqqws\" (UID: \"43393ae7-96fb-40cb-a2cc-ab33e2cc333a\") " pod="calico-system/calico-node-rqqws" Jan 23 01:10:50.249694 kubelet[2484]: I0123 01:10:50.249513 2484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/4268a9df-0451-4ff6-8f73-e9f18c886e93-registration-dir\") pod \"csi-node-driver-wtjjp\" (UID: \"4268a9df-0451-4ff6-8f73-e9f18c886e93\") " pod="calico-system/csi-node-driver-wtjjp" Jan 23 01:10:50.249897 kubelet[2484]: I0123 01:10:50.249547 2484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43393ae7-96fb-40cb-a2cc-ab33e2cc333a-tigera-ca-bundle\") pod \"calico-node-rqqws\" (UID: \"43393ae7-96fb-40cb-a2cc-ab33e2cc333a\") " pod="calico-system/calico-node-rqqws" Jan 23 01:10:50.249897 kubelet[2484]: I0123 01:10:50.249572 2484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4268a9df-0451-4ff6-8f73-e9f18c886e93-kubelet-dir\") pod \"csi-node-driver-wtjjp\" (UID: \"4268a9df-0451-4ff6-8f73-e9f18c886e93\") " pod="calico-system/csi-node-driver-wtjjp" Jan 23 01:10:50.249897 kubelet[2484]: I0123 01:10:50.249596 2484 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/4268a9df-0451-4ff6-8f73-e9f18c886e93-socket-dir\") pod \"csi-node-driver-wtjjp\" (UID: \"4268a9df-0451-4ff6-8f73-e9f18c886e93\") " pod="calico-system/csi-node-driver-wtjjp" Jan 23 01:10:50.249897 kubelet[2484]: I0123 01:10:50.249622 2484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/56611bb2-5306-414c-ac78-bccf3036ca90-kube-proxy\") pod \"kube-proxy-mrj7g\" (UID: \"56611bb2-5306-414c-ac78-bccf3036ca90\") " pod="kube-system/kube-proxy-mrj7g" Jan 23 01:10:50.249897 kubelet[2484]: I0123 01:10:50.249646 2484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/56611bb2-5306-414c-ac78-bccf3036ca90-lib-modules\") pod \"kube-proxy-mrj7g\" (UID: \"56611bb2-5306-414c-ac78-bccf3036ca90\") " pod="kube-system/kube-proxy-mrj7g" Jan 23 01:10:50.282345 kubelet[2484]: I0123 01:10:50.282314 2484 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Jan 23 01:10:50.282635 containerd[1977]: time="2026-01-23T01:10:50.282601044Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 23 01:10:50.283059 kubelet[2484]: I0123 01:10:50.282843 2484 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Jan 23 01:10:50.353073 kubelet[2484]: E0123 01:10:50.352214 2484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:10:50.353073 kubelet[2484]: W0123 01:10:50.352245 2484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:10:50.353073 kubelet[2484]: E0123 01:10:50.352269 2484 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:10:50.353446 kubelet[2484]: E0123 01:10:50.353339 2484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:10:50.353446 kubelet[2484]: W0123 01:10:50.353352 2484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:10:50.353446 kubelet[2484]: E0123 01:10:50.353365 2484 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:10:50.353799 kubelet[2484]: E0123 01:10:50.353640 2484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:10:50.353799 kubelet[2484]: W0123 01:10:50.353648 2484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:10:50.353799 kubelet[2484]: E0123 01:10:50.353657 2484 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:10:50.353992 kubelet[2484]: E0123 01:10:50.353983 2484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:10:50.354065 kubelet[2484]: W0123 01:10:50.354056 2484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:10:50.354114 kubelet[2484]: E0123 01:10:50.354106 2484 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:10:50.354390 kubelet[2484]: E0123 01:10:50.354320 2484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:10:50.354390 kubelet[2484]: W0123 01:10:50.354329 2484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:10:50.354390 kubelet[2484]: E0123 01:10:50.354338 2484 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:10:50.354674 kubelet[2484]: E0123 01:10:50.354652 2484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:10:50.354674 kubelet[2484]: W0123 01:10:50.354661 2484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:10:50.355007 kubelet[2484]: E0123 01:10:50.354920 2484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:10:50.355007 kubelet[2484]: W0123 01:10:50.354936 2484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:10:50.355007 kubelet[2484]: E0123 01:10:50.354946 2484 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:10:50.355231 kubelet[2484]: E0123 01:10:50.355176 2484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:10:50.355231 kubelet[2484]: W0123 01:10:50.355184 2484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:10:50.355231 kubelet[2484]: E0123 01:10:50.355193 2484 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:10:50.355511 kubelet[2484]: E0123 01:10:50.355458 2484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:10:50.355511 kubelet[2484]: W0123 01:10:50.355467 2484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:10:50.355511 kubelet[2484]: E0123 01:10:50.355475 2484 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:10:50.355707 kubelet[2484]: E0123 01:10:50.355690 2484 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:10:50.355861 kubelet[2484]: E0123 01:10:50.355842 2484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:10:50.355861 kubelet[2484]: W0123 01:10:50.355850 2484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:10:50.356001 kubelet[2484]: E0123 01:10:50.355926 2484 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:10:50.356314 kubelet[2484]: E0123 01:10:50.356303 2484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:10:50.356452 kubelet[2484]: W0123 01:10:50.356386 2484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:10:50.356452 kubelet[2484]: E0123 01:10:50.356402 2484 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:10:50.356977 kubelet[2484]: E0123 01:10:50.356966 2484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:10:50.357100 kubelet[2484]: W0123 01:10:50.357052 2484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:10:50.357100 kubelet[2484]: E0123 01:10:50.357078 2484 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:10:50.357502 kubelet[2484]: E0123 01:10:50.357491 2484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:10:50.357615 kubelet[2484]: W0123 01:10:50.357560 2484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:10:50.357615 kubelet[2484]: E0123 01:10:50.357573 2484 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:10:50.359277 kubelet[2484]: E0123 01:10:50.359208 2484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:10:50.359277 kubelet[2484]: W0123 01:10:50.359237 2484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:10:50.359277 kubelet[2484]: E0123 01:10:50.359249 2484 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:10:50.374662 kubelet[2484]: E0123 01:10:50.370667 2484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:10:50.374662 kubelet[2484]: W0123 01:10:50.370690 2484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:10:50.374662 kubelet[2484]: E0123 01:10:50.371277 2484 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:10:50.374662 kubelet[2484]: E0123 01:10:50.371595 2484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:10:50.374662 kubelet[2484]: W0123 01:10:50.371605 2484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:10:50.374662 kubelet[2484]: E0123 01:10:50.371619 2484 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:10:50.379060 kubelet[2484]: E0123 01:10:50.379034 2484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:10:50.379060 kubelet[2484]: W0123 01:10:50.379055 2484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:10:50.379199 kubelet[2484]: E0123 01:10:50.379074 2484 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:10:50.544145 containerd[1977]: time="2026-01-23T01:10:50.544100070Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mrj7g,Uid:56611bb2-5306-414c-ac78-bccf3036ca90,Namespace:kube-system,Attempt:0,}" Jan 23 01:10:50.549828 containerd[1977]: time="2026-01-23T01:10:50.549781781Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-rqqws,Uid:43393ae7-96fb-40cb-a2cc-ab33e2cc333a,Namespace:calico-system,Attempt:0,}" Jan 23 01:10:51.129018 containerd[1977]: time="2026-01-23T01:10:51.128963381Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 01:10:51.133078 containerd[1977]: time="2026-01-23T01:10:51.133032588Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jan 23 01:10:51.134902 containerd[1977]: time="2026-01-23T01:10:51.134861293Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 01:10:51.136948 containerd[1977]: time="2026-01-23T01:10:51.136857218Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 01:10:51.138850 containerd[1977]: time="2026-01-23T01:10:51.138815572Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 23 01:10:51.141702 containerd[1977]: time="2026-01-23T01:10:51.141344416Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 01:10:51.142018 containerd[1977]: time="2026-01-23T01:10:51.141987135Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 587.54957ms" Jan 23 01:10:51.143328 containerd[1977]: time="2026-01-23T01:10:51.143289987Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 583.898928ms" Jan 
23 01:10:51.184900 containerd[1977]: time="2026-01-23T01:10:51.184797192Z" level=info msg="connecting to shim 9a348fddb256ae2db461ec3d148be580a5b0c5262ff17cc1952e313f6940ef59" address="unix:///run/containerd/s/9bc5a4b1b2fe3b8c6d76df8c48f14f297e2a0b2242ef6fdec004a09794712a33" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:10:51.185631 containerd[1977]: time="2026-01-23T01:10:51.185601658Z" level=info msg="connecting to shim 5e562236d11c640105316cbd726f08eb713541ca31b717e02c18d95927d1c73e" address="unix:///run/containerd/s/63bde13d2995805b1f086e148ce2396fd068e83d13d787fd96b93e8e4a1dc354" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:10:51.216486 systemd[1]: Started cri-containerd-5e562236d11c640105316cbd726f08eb713541ca31b717e02c18d95927d1c73e.scope - libcontainer container 5e562236d11c640105316cbd726f08eb713541ca31b717e02c18d95927d1c73e. Jan 23 01:10:51.220318 kubelet[2484]: E0123 01:10:51.220275 2484 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:10:51.222448 systemd[1]: Started cri-containerd-9a348fddb256ae2db461ec3d148be580a5b0c5262ff17cc1952e313f6940ef59.scope - libcontainer container 9a348fddb256ae2db461ec3d148be580a5b0c5262ff17cc1952e313f6940ef59. Jan 23 01:10:51.268406 containerd[1977]: time="2026-01-23T01:10:51.268357835Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mrj7g,Uid:56611bb2-5306-414c-ac78-bccf3036ca90,Namespace:kube-system,Attempt:0,} returns sandbox id \"5e562236d11c640105316cbd726f08eb713541ca31b717e02c18d95927d1c73e\"" Jan 23 01:10:51.273174 containerd[1977]: time="2026-01-23T01:10:51.273136272Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\"" Jan 23 01:10:51.279438 containerd[1977]: time="2026-01-23T01:10:51.279379973Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-rqqws,Uid:43393ae7-96fb-40cb-a2cc-ab33e2cc333a,Namespace:calico-system,Attempt:0,} returns sandbox id \"9a348fddb256ae2db461ec3d148be580a5b0c5262ff17cc1952e313f6940ef59\"" Jan 23 01:10:51.363851 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1544741371.mount: Deactivated successfully. Jan 23 01:10:52.221035 kubelet[2484]: E0123 01:10:52.220899 2484 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:10:52.358196 kubelet[2484]: E0123 01:10:52.358153 2484 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wtjjp" podUID="4268a9df-0451-4ff6-8f73-e9f18c886e93" Jan 23 01:10:52.414476 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1436324877.mount: Deactivated successfully. 
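The long run of FlexVolume errors above comes from the plugin prober finding a nodeagent~uds driver directory under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ but no executable: the probe invokes the driver with "init" and expects a JSON status object (conventionally a "Success" reply) on stdout, and since nothing runs, the empty output cannot be unmarshalled ("unexpected end of JSON input"). A hedged sketch of that contract; the control flow is a conceptual illustration, not the kubelet's code:

#!/usr/bin/env python3
"""Sketch: call a FlexVolume driver's 'init' and parse its JSON reply, conceptually like the prober."""
import json
import subprocess

def flexvolume_init(driver: str) -> dict:
    try:
        out = subprocess.run([driver, "init"], capture_output=True, text=True, check=False).stdout
    except FileNotFoundError:
        out = ""  # same situation as in the log: the driver executable is simply not there
    # json.loads("") raises "Expecting value" in Python; Go's encoding/json reports
    # "unexpected end of JSON input", which is the error repeated throughout the log above.
    return json.loads(out)

if __name__ == "__main__":
    try:
        flexvolume_init("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds")
    except json.JSONDecodeError as exc:
        print("probe failed like the log:", exc)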
Jan 23 01:10:52.917718 containerd[1977]: time="2026-01-23T01:10:52.917664688Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:10:52.918790 containerd[1977]: time="2026-01-23T01:10:52.918662118Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=31161899" Jan 23 01:10:52.920003 containerd[1977]: time="2026-01-23T01:10:52.919964946Z" level=info msg="ImageCreate event name:\"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:10:52.923173 containerd[1977]: time="2026-01-23T01:10:52.922645692Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:10:52.923173 containerd[1977]: time="2026-01-23T01:10:52.923025728Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"31160918\" in 1.649524832s" Jan 23 01:10:52.923173 containerd[1977]: time="2026-01-23T01:10:52.923055448Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\"" Jan 23 01:10:52.924455 containerd[1977]: time="2026-01-23T01:10:52.924427876Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 23 01:10:52.925863 containerd[1977]: time="2026-01-23T01:10:52.925832963Z" level=info msg="CreateContainer within sandbox \"5e562236d11c640105316cbd726f08eb713541ca31b717e02c18d95927d1c73e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 23 01:10:52.938830 containerd[1977]: time="2026-01-23T01:10:52.938788745Z" level=info msg="Container a745341c5cfb6fb52949a1041b325400aeb735586e42ae9bf110b1dd9280cc2d: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:10:52.955510 containerd[1977]: time="2026-01-23T01:10:52.955452007Z" level=info msg="CreateContainer within sandbox \"5e562236d11c640105316cbd726f08eb713541ca31b717e02c18d95927d1c73e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a745341c5cfb6fb52949a1041b325400aeb735586e42ae9bf110b1dd9280cc2d\"" Jan 23 01:10:52.956271 containerd[1977]: time="2026-01-23T01:10:52.956240377Z" level=info msg="StartContainer for \"a745341c5cfb6fb52949a1041b325400aeb735586e42ae9bf110b1dd9280cc2d\"" Jan 23 01:10:52.957673 containerd[1977]: time="2026-01-23T01:10:52.957638133Z" level=info msg="connecting to shim a745341c5cfb6fb52949a1041b325400aeb735586e42ae9bf110b1dd9280cc2d" address="unix:///run/containerd/s/63bde13d2995805b1f086e148ce2396fd068e83d13d787fd96b93e8e4a1dc354" protocol=ttrpc version=3 Jan 23 01:10:52.982446 systemd[1]: Started cri-containerd-a745341c5cfb6fb52949a1041b325400aeb735586e42ae9bf110b1dd9280cc2d.scope - libcontainer container a745341c5cfb6fb52949a1041b325400aeb735586e42ae9bf110b1dd9280cc2d. 
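The "Pulled image ... in 1.649524832s" figure above is containerd's own measurement of the kube-proxy pull. One way to sanity-check such durations is to diff the log timestamps of the PullImage request and the Pulled message, as in this sketch; the timestamps are copied from the entries above, and the small discrepancy versus the reported value is expected because containerd times the operation itself rather than log emission:

#!/usr/bin/env python3
"""Sketch: compute the elapsed time between two containerd log timestamps."""
from datetime import datetime, timezone

def parse_ts(ts: str) -> datetime:
    # containerd logs RFC 3339 timestamps with nanosecond precision; Python's datetime keeps
    # only microseconds, so trim the fractional part to 6 digits before parsing.
    base, frac = ts.rstrip("Z").split(".")
    return datetime.strptime(base, "%Y-%m-%dT%H:%M:%S").replace(
        microsecond=int(frac[:6].ljust(6, "0")), tzinfo=timezone.utc)

if __name__ == "__main__":
    # Timestamps taken from the PullImage / Pulled entries for kube-proxy:v1.32.11 above.
    start = parse_ts("2026-01-23T01:10:51.273136272Z")
    end = parse_ts("2026-01-23T01:10:52.923025728Z")
    print((end - start).total_seconds())  # ~1.65s, close to the "1.649524832s" containerd reports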
Jan 23 01:10:53.059079 containerd[1977]: time="2026-01-23T01:10:53.059022326Z" level=info msg="StartContainer for \"a745341c5cfb6fb52949a1041b325400aeb735586e42ae9bf110b1dd9280cc2d\" returns successfully" Jan 23 01:10:53.222019 kubelet[2484]: E0123 01:10:53.221878 2484 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:10:53.390798 kubelet[2484]: I0123 01:10:53.390714 2484 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-mrj7g" podStartSLOduration=2.738782407 podStartE2EDuration="4.390692658s" podCreationTimestamp="2026-01-23 01:10:49 +0000 UTC" firstStartedPulling="2026-01-23 01:10:51.272198107 +0000 UTC m=+2.743570429" lastFinishedPulling="2026-01-23 01:10:52.924108369 +0000 UTC m=+4.395480680" observedRunningTime="2026-01-23 01:10:53.390360691 +0000 UTC m=+4.861733023" watchObservedRunningTime="2026-01-23 01:10:53.390692658 +0000 UTC m=+4.862064989" Jan 23 01:10:53.465593 kubelet[2484]: E0123 01:10:53.465557 2484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:10:53.465593 kubelet[2484]: W0123 01:10:53.465587 2484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:10:53.465876 kubelet[2484]: E0123 01:10:53.465612 2484 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:10:53.466013 kubelet[2484]: E0123 01:10:53.465970 2484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:10:53.466013 kubelet[2484]: W0123 01:10:53.465999 2484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:10:53.466149 kubelet[2484]: E0123 01:10:53.466016 2484 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:10:53.467550 kubelet[2484]: E0123 01:10:53.467518 2484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:10:53.467550 kubelet[2484]: W0123 01:10:53.467539 2484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:10:53.467706 kubelet[2484]: E0123 01:10:53.467554 2484 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:10:53.467850 kubelet[2484]: E0123 01:10:53.467821 2484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:10:53.467850 kubelet[2484]: W0123 01:10:53.467839 2484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:10:53.467953 kubelet[2484]: E0123 01:10:53.467853 2484 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:10:53.468095 kubelet[2484]: E0123 01:10:53.468068 2484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:10:53.468095 kubelet[2484]: W0123 01:10:53.468084 2484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:10:53.468205 kubelet[2484]: E0123 01:10:53.468096 2484 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:10:53.468313 kubelet[2484]: E0123 01:10:53.468286 2484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:10:53.468313 kubelet[2484]: W0123 01:10:53.468296 2484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:10:53.468313 kubelet[2484]: E0123 01:10:53.468308 2484 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:10:53.468515 kubelet[2484]: E0123 01:10:53.468488 2484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:10:53.468515 kubelet[2484]: W0123 01:10:53.468498 2484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:10:53.468515 kubelet[2484]: E0123 01:10:53.468509 2484 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:10:53.468784 kubelet[2484]: E0123 01:10:53.468697 2484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:10:53.468784 kubelet[2484]: W0123 01:10:53.468707 2484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:10:53.468784 kubelet[2484]: E0123 01:10:53.468718 2484 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:10:53.469492 kubelet[2484]: E0123 01:10:53.469471 2484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:10:53.469492 kubelet[2484]: W0123 01:10:53.469490 2484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:10:53.469633 kubelet[2484]: E0123 01:10:53.469504 2484 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:10:53.469733 kubelet[2484]: E0123 01:10:53.469707 2484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:10:53.469733 kubelet[2484]: W0123 01:10:53.469731 2484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:10:53.469828 kubelet[2484]: E0123 01:10:53.469744 2484 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:10:53.470402 kubelet[2484]: E0123 01:10:53.470263 2484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:10:53.470402 kubelet[2484]: W0123 01:10:53.470277 2484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:10:53.470402 kubelet[2484]: E0123 01:10:53.470291 2484 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:10:53.470572 kubelet[2484]: E0123 01:10:53.470489 2484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:10:53.470572 kubelet[2484]: W0123 01:10:53.470498 2484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:10:53.470572 kubelet[2484]: E0123 01:10:53.470510 2484 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:10:53.470711 kubelet[2484]: E0123 01:10:53.470691 2484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:10:53.470711 kubelet[2484]: W0123 01:10:53.470700 2484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:10:53.470820 kubelet[2484]: E0123 01:10:53.470710 2484 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:10:53.471925 kubelet[2484]: E0123 01:10:53.470886 2484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:10:53.471925 kubelet[2484]: W0123 01:10:53.470895 2484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:10:53.471925 kubelet[2484]: E0123 01:10:53.470906 2484 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:10:53.471925 kubelet[2484]: E0123 01:10:53.471195 2484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:10:53.471925 kubelet[2484]: W0123 01:10:53.471237 2484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:10:53.471925 kubelet[2484]: E0123 01:10:53.471287 2484 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:10:53.471925 kubelet[2484]: E0123 01:10:53.471536 2484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:10:53.471925 kubelet[2484]: W0123 01:10:53.471547 2484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:10:53.471925 kubelet[2484]: E0123 01:10:53.471558 2484 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:10:53.471925 kubelet[2484]: E0123 01:10:53.471765 2484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:10:53.472427 kubelet[2484]: W0123 01:10:53.471774 2484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:10:53.472427 kubelet[2484]: E0123 01:10:53.471840 2484 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:10:53.472427 kubelet[2484]: E0123 01:10:53.472030 2484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:10:53.472427 kubelet[2484]: W0123 01:10:53.472038 2484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:10:53.472427 kubelet[2484]: E0123 01:10:53.472052 2484 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:10:53.472427 kubelet[2484]: E0123 01:10:53.472274 2484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:10:53.472427 kubelet[2484]: W0123 01:10:53.472285 2484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:10:53.472427 kubelet[2484]: E0123 01:10:53.472296 2484 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:10:53.473349 kubelet[2484]: E0123 01:10:53.473324 2484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:10:53.473349 kubelet[2484]: W0123 01:10:53.473347 2484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:10:53.473466 kubelet[2484]: E0123 01:10:53.473362 2484 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:10:53.473622 kubelet[2484]: E0123 01:10:53.473605 2484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:10:53.473672 kubelet[2484]: W0123 01:10:53.473624 2484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:10:53.473672 kubelet[2484]: E0123 01:10:53.473638 2484 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:10:53.474049 kubelet[2484]: E0123 01:10:53.474025 2484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:10:53.474049 kubelet[2484]: W0123 01:10:53.474043 2484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:10:53.474185 kubelet[2484]: E0123 01:10:53.474072 2484 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:10:53.475292 kubelet[2484]: E0123 01:10:53.474760 2484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:10:53.475292 kubelet[2484]: W0123 01:10:53.474777 2484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:10:53.475292 kubelet[2484]: E0123 01:10:53.474940 2484 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:10:53.475575 kubelet[2484]: E0123 01:10:53.475554 2484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:10:53.475627 kubelet[2484]: W0123 01:10:53.475574 2484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:10:53.475669 kubelet[2484]: E0123 01:10:53.475638 2484 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:10:53.476546 kubelet[2484]: E0123 01:10:53.476524 2484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:10:53.476546 kubelet[2484]: W0123 01:10:53.476544 2484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:10:53.476776 kubelet[2484]: E0123 01:10:53.476755 2484 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:10:53.477293 kubelet[2484]: E0123 01:10:53.477215 2484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:10:53.477369 kubelet[2484]: W0123 01:10:53.477295 2484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:10:53.477369 kubelet[2484]: E0123 01:10:53.477316 2484 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:10:53.477940 kubelet[2484]: E0123 01:10:53.477919 2484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:10:53.477940 kubelet[2484]: W0123 01:10:53.477939 2484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:10:53.478116 kubelet[2484]: E0123 01:10:53.478095 2484 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:10:53.478760 kubelet[2484]: E0123 01:10:53.478740 2484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:10:53.478760 kubelet[2484]: W0123 01:10:53.478759 2484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:10:53.479293 kubelet[2484]: E0123 01:10:53.479270 2484 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:10:53.479487 kubelet[2484]: E0123 01:10:53.479469 2484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:10:53.479543 kubelet[2484]: W0123 01:10:53.479489 2484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:10:53.479592 kubelet[2484]: E0123 01:10:53.479559 2484 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:10:53.480073 kubelet[2484]: E0123 01:10:53.480053 2484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:10:53.480073 kubelet[2484]: W0123 01:10:53.480070 2484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:10:53.480179 kubelet[2484]: E0123 01:10:53.480101 2484 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:10:53.480603 kubelet[2484]: E0123 01:10:53.480584 2484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:10:53.480603 kubelet[2484]: W0123 01:10:53.480603 2484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:10:53.480829 kubelet[2484]: E0123 01:10:53.480633 2484 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:10:53.481288 kubelet[2484]: E0123 01:10:53.480938 2484 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:10:53.481288 kubelet[2484]: W0123 01:10:53.480951 2484 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:10:53.481288 kubelet[2484]: E0123 01:10:53.480964 2484 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:10:54.088157 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2660953790.mount: Deactivated successfully. 
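The long run of driver-call.go and plugins.go errors above comes from the kubelet probing its FlexVolume plugin directory before Calico's flexvol-driver init container (pulled just below) has installed the uds binary, so the probe finds no executable and unmarshals an empty string as JSON. For orientation only, a FlexVolume driver's "init" call is expected to print a small JSON status object on stdout; a minimal, hypothetical sketch in Go (not Calico's actual uds binary) would be:

// flexvol_stub.go - illustrative FlexVolume "init" handshake; the kubelet invokes
// /opt/libexec/kubernetes/kubelet-plugins/volume/exec/<vendor~driver>/<driver> init
// and parses stdout as JSON. An empty stdout is what yields
// "unexpected end of JSON input" in the entries above.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	if len(os.Args) > 1 && os.Args[1] == "init" {
		out, _ := json.Marshal(driverStatus{
			Status:       "Success",
			Capabilities: map[string]bool{"attach": false},
		})
		fmt.Println(string(out))
		return
	}
	// Other FlexVolume calls (mount, unmount, ...) are outside this sketch.
	out, _ := json.Marshal(driverStatus{Status: "Not supported"})
	fmt.Println(string(out))
	os.Exit(1)
}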
Jan 23 01:10:54.179215 containerd[1977]: time="2026-01-23T01:10:54.179179662Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:10:54.180999 containerd[1977]: time="2026-01-23T01:10:54.180726493Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=5941492" Jan 23 01:10:54.183077 containerd[1977]: time="2026-01-23T01:10:54.183034936Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:10:54.187481 containerd[1977]: time="2026-01-23T01:10:54.187442646Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:10:54.189517 containerd[1977]: time="2026-01-23T01:10:54.189481209Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.265022298s" Jan 23 01:10:54.189517 containerd[1977]: time="2026-01-23T01:10:54.189517910Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Jan 23 01:10:54.192019 containerd[1977]: time="2026-01-23T01:10:54.191979605Z" level=info msg="CreateContainer within sandbox \"9a348fddb256ae2db461ec3d148be580a5b0c5262ff17cc1952e313f6940ef59\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 23 01:10:54.207258 containerd[1977]: time="2026-01-23T01:10:54.205723653Z" level=info msg="Container 1d854325073211204ae6ac0e173732b2b1d6c8412beed37e86aae1f7d57f5d1b: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:10:54.219626 containerd[1977]: time="2026-01-23T01:10:54.219570642Z" level=info msg="CreateContainer within sandbox \"9a348fddb256ae2db461ec3d148be580a5b0c5262ff17cc1952e313f6940ef59\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"1d854325073211204ae6ac0e173732b2b1d6c8412beed37e86aae1f7d57f5d1b\"" Jan 23 01:10:54.220240 containerd[1977]: time="2026-01-23T01:10:54.220151228Z" level=info msg="StartContainer for \"1d854325073211204ae6ac0e173732b2b1d6c8412beed37e86aae1f7d57f5d1b\"" Jan 23 01:10:54.221589 containerd[1977]: time="2026-01-23T01:10:54.221556047Z" level=info msg="connecting to shim 1d854325073211204ae6ac0e173732b2b1d6c8412beed37e86aae1f7d57f5d1b" address="unix:///run/containerd/s/9bc5a4b1b2fe3b8c6d76df8c48f14f297e2a0b2242ef6fdec004a09794712a33" protocol=ttrpc version=3 Jan 23 01:10:54.222424 kubelet[2484]: E0123 01:10:54.222365 2484 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:10:54.252516 systemd[1]: Started cri-containerd-1d854325073211204ae6ac0e173732b2b1d6c8412beed37e86aae1f7d57f5d1b.scope - libcontainer container 1d854325073211204ae6ac0e173732b2b1d6c8412beed37e86aae1f7d57f5d1b. 
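The pod_startup_latency_tracker entry for kube-proxy-mrj7g earlier in the log reports podStartE2EDuration="4.390692658s" and podStartSLOduration=2.738782407: the E2E figure is observedRunningTime minus podCreationTimestamp, and the SLO figure excludes the image pull window (lastFinishedPulling minus firstStartedPulling). A small check of that arithmetic, using only the wall-clock values printed in the log (ignoring the monotonic m=+ offsets the kubelet itself uses), assuming nothing beyond those values:

// latency_check.go - reproduces the kubelet's pod startup latency arithmetic
// from the kube-proxy-mrj7g tracker entry above.
package main

import (
	"fmt"
	"time"
)

func mustParse(v string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", v)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2026-01-23 01:10:49 +0000 UTC")
	firstPull := mustParse("2026-01-23 01:10:51.272198107 +0000 UTC")
	lastPull := mustParse("2026-01-23 01:10:52.924108369 +0000 UTC")
	running := mustParse("2026-01-23 01:10:53.390360691 +0000 UTC")

	e2e := running.Sub(created)          // ~4.39s, matches podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull) // ~2.74s, matches podStartSLOduration
	fmt.Println("E2E:", e2e, "SLO excluding image pull:", slo)
}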
Jan 23 01:10:54.326987 containerd[1977]: time="2026-01-23T01:10:54.326922284Z" level=info msg="StartContainer for \"1d854325073211204ae6ac0e173732b2b1d6c8412beed37e86aae1f7d57f5d1b\" returns successfully" Jan 23 01:10:54.335655 systemd[1]: cri-containerd-1d854325073211204ae6ac0e173732b2b1d6c8412beed37e86aae1f7d57f5d1b.scope: Deactivated successfully. Jan 23 01:10:54.341030 containerd[1977]: time="2026-01-23T01:10:54.340866358Z" level=info msg="received container exit event container_id:\"1d854325073211204ae6ac0e173732b2b1d6c8412beed37e86aae1f7d57f5d1b\" id:\"1d854325073211204ae6ac0e173732b2b1d6c8412beed37e86aae1f7d57f5d1b\" pid:2867 exited_at:{seconds:1769130654 nanos:340274644}" Jan 23 01:10:54.358191 kubelet[2484]: E0123 01:10:54.358142 2484 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wtjjp" podUID="4268a9df-0451-4ff6-8f73-e9f18c886e93" Jan 23 01:10:55.051975 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1d854325073211204ae6ac0e173732b2b1d6c8412beed37e86aae1f7d57f5d1b-rootfs.mount: Deactivated successfully. Jan 23 01:10:55.222912 kubelet[2484]: E0123 01:10:55.222837 2484 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:10:55.387937 containerd[1977]: time="2026-01-23T01:10:55.387559667Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 23 01:10:56.223348 kubelet[2484]: E0123 01:10:56.223304 2484 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:10:56.358188 kubelet[2484]: E0123 01:10:56.358099 2484 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wtjjp" podUID="4268a9df-0451-4ff6-8f73-e9f18c886e93" Jan 23 01:10:57.223890 kubelet[2484]: E0123 01:10:57.223823 2484 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:10:58.224675 kubelet[2484]: E0123 01:10:58.224637 2484 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:10:58.359114 kubelet[2484]: E0123 01:10:58.359064 2484 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wtjjp" podUID="4268a9df-0451-4ff6-8f73-e9f18c886e93" Jan 23 01:10:58.376875 containerd[1977]: time="2026-01-23T01:10:58.376813269Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:10:58.382633 containerd[1977]: time="2026-01-23T01:10:58.382586322Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Jan 23 01:10:58.386558 containerd[1977]: time="2026-01-23T01:10:58.386506710Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Jan 23 01:10:58.393780 containerd[1977]: time="2026-01-23T01:10:58.393731191Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:10:58.394642 containerd[1977]: time="2026-01-23T01:10:58.394569588Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 3.006948748s" Jan 23 01:10:58.394642 containerd[1977]: time="2026-01-23T01:10:58.394603434Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Jan 23 01:10:58.397416 containerd[1977]: time="2026-01-23T01:10:58.397381644Z" level=info msg="CreateContainer within sandbox \"9a348fddb256ae2db461ec3d148be580a5b0c5262ff17cc1952e313f6940ef59\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 23 01:10:58.420208 containerd[1977]: time="2026-01-23T01:10:58.420158227Z" level=info msg="Container 8e060bfe5625cc871d4f2f751fd190fdade6e6a7c4b800cbf1b72495faf634df: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:10:58.437682 containerd[1977]: time="2026-01-23T01:10:58.437631676Z" level=info msg="CreateContainer within sandbox \"9a348fddb256ae2db461ec3d148be580a5b0c5262ff17cc1952e313f6940ef59\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"8e060bfe5625cc871d4f2f751fd190fdade6e6a7c4b800cbf1b72495faf634df\"" Jan 23 01:10:58.438519 containerd[1977]: time="2026-01-23T01:10:58.438452738Z" level=info msg="StartContainer for \"8e060bfe5625cc871d4f2f751fd190fdade6e6a7c4b800cbf1b72495faf634df\"" Jan 23 01:10:58.440102 containerd[1977]: time="2026-01-23T01:10:58.440057239Z" level=info msg="connecting to shim 8e060bfe5625cc871d4f2f751fd190fdade6e6a7c4b800cbf1b72495faf634df" address="unix:///run/containerd/s/9bc5a4b1b2fe3b8c6d76df8c48f14f297e2a0b2242ef6fdec004a09794712a33" protocol=ttrpc version=3 Jan 23 01:10:58.475442 systemd[1]: Started cri-containerd-8e060bfe5625cc871d4f2f751fd190fdade6e6a7c4b800cbf1b72495faf634df.scope - libcontainer container 8e060bfe5625cc871d4f2f751fd190fdade6e6a7c4b800cbf1b72495faf634df. Jan 23 01:10:58.549378 containerd[1977]: time="2026-01-23T01:10:58.549314535Z" level=info msg="StartContainer for \"8e060bfe5625cc871d4f2f751fd190fdade6e6a7c4b800cbf1b72495faf634df\" returns successfully" Jan 23 01:10:59.226166 kubelet[2484]: E0123 01:10:59.226115 2484 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:10:59.268305 containerd[1977]: time="2026-01-23T01:10:59.268162852Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 23 01:10:59.270685 systemd[1]: cri-containerd-8e060bfe5625cc871d4f2f751fd190fdade6e6a7c4b800cbf1b72495faf634df.scope: Deactivated successfully. 
Jan 23 01:10:59.271017 systemd[1]: cri-containerd-8e060bfe5625cc871d4f2f751fd190fdade6e6a7c4b800cbf1b72495faf634df.scope: Consumed 554ms CPU time, 192M memory peak, 171.3M written to disk. Jan 23 01:10:59.274789 containerd[1977]: time="2026-01-23T01:10:59.274621211Z" level=info msg="received container exit event container_id:\"8e060bfe5625cc871d4f2f751fd190fdade6e6a7c4b800cbf1b72495faf634df\" id:\"8e060bfe5625cc871d4f2f751fd190fdade6e6a7c4b800cbf1b72495faf634df\" pid:2922 exited_at:{seconds:1769130659 nanos:274096695}" Jan 23 01:10:59.298006 kubelet[2484]: I0123 01:10:59.297802 2484 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 23 01:10:59.304552 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8e060bfe5625cc871d4f2f751fd190fdade6e6a7c4b800cbf1b72495faf634df-rootfs.mount: Deactivated successfully. Jan 23 01:11:00.226891 kubelet[2484]: E0123 01:11:00.226831 2484 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:11:00.364777 systemd[1]: Created slice kubepods-besteffort-pod4268a9df_0451_4ff6_8f73_e9f18c886e93.slice - libcontainer container kubepods-besteffort-pod4268a9df_0451_4ff6_8f73_e9f18c886e93.slice. Jan 23 01:11:00.367705 containerd[1977]: time="2026-01-23T01:11:00.367665431Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wtjjp,Uid:4268a9df-0451-4ff6-8f73-e9f18c886e93,Namespace:calico-system,Attempt:0,}" Jan 23 01:11:00.407701 containerd[1977]: time="2026-01-23T01:11:00.407592764Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 23 01:11:00.431598 containerd[1977]: time="2026-01-23T01:11:00.431540136Z" level=error msg="Failed to destroy network for sandbox \"1c416ec2c21bb32a0064b33d1ea3db74dd5e0b41d51397bb62e9cc7aefb66789\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:11:00.436079 systemd[1]: run-netns-cni\x2d380453d9\x2d6db8\x2d9c46\x2d7001\x2d8557cf02c10a.mount: Deactivated successfully. 
Jan 23 01:11:00.437157 containerd[1977]: time="2026-01-23T01:11:00.437008718Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wtjjp,Uid:4268a9df-0451-4ff6-8f73-e9f18c886e93,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c416ec2c21bb32a0064b33d1ea3db74dd5e0b41d51397bb62e9cc7aefb66789\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:11:00.437500 kubelet[2484]: E0123 01:11:00.437445 2484 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c416ec2c21bb32a0064b33d1ea3db74dd5e0b41d51397bb62e9cc7aefb66789\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:11:00.437593 kubelet[2484]: E0123 01:11:00.437534 2484 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c416ec2c21bb32a0064b33d1ea3db74dd5e0b41d51397bb62e9cc7aefb66789\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wtjjp" Jan 23 01:11:00.437593 kubelet[2484]: E0123 01:11:00.437564 2484 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c416ec2c21bb32a0064b33d1ea3db74dd5e0b41d51397bb62e9cc7aefb66789\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wtjjp" Jan 23 01:11:00.437898 kubelet[2484]: E0123 01:11:00.437638 2484 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-wtjjp_calico-system(4268a9df-0451-4ff6-8f73-e9f18c886e93)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-wtjjp_calico-system(4268a9df-0451-4ff6-8f73-e9f18c886e93)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1c416ec2c21bb32a0064b33d1ea3db74dd5e0b41d51397bb62e9cc7aefb66789\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-wtjjp" podUID="4268a9df-0451-4ff6-8f73-e9f18c886e93" Jan 23 01:11:01.227890 kubelet[2484]: E0123 01:11:01.227842 2484 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:11:02.228694 kubelet[2484]: E0123 01:11:02.228634 2484 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:11:03.230055 kubelet[2484]: E0123 01:11:03.229357 2484 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:11:04.232124 kubelet[2484]: E0123 01:11:04.232072 2484 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:11:04.789150 systemd[1]: 
Created slice kubepods-besteffort-pod79850a2c_055d_47bd_a7df_ca05bf7f9b5e.slice - libcontainer container kubepods-besteffort-pod79850a2c_055d_47bd_a7df_ca05bf7f9b5e.slice. Jan 23 01:11:04.958247 kubelet[2484]: I0123 01:11:04.957493 2484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6r9sz\" (UniqueName: \"kubernetes.io/projected/79850a2c-055d-47bd-a7df-ca05bf7f9b5e-kube-api-access-6r9sz\") pod \"nginx-deployment-7fcdb87857-7wzn6\" (UID: \"79850a2c-055d-47bd-a7df-ca05bf7f9b5e\") " pod="default/nginx-deployment-7fcdb87857-7wzn6" Jan 23 01:11:05.095284 containerd[1977]: time="2026-01-23T01:11:05.095158722Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-7wzn6,Uid:79850a2c-055d-47bd-a7df-ca05bf7f9b5e,Namespace:default,Attempt:0,}" Jan 23 01:11:05.232587 kubelet[2484]: E0123 01:11:05.232531 2484 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:11:05.235195 containerd[1977]: time="2026-01-23T01:11:05.235050565Z" level=error msg="Failed to destroy network for sandbox \"8a98543820dea7a401c6e5df85cc329ac01c7759ae1efb93f0204425c41bcaa2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:11:05.236785 containerd[1977]: time="2026-01-23T01:11:05.236703243Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-7wzn6,Uid:79850a2c-055d-47bd-a7df-ca05bf7f9b5e,Namespace:default,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8a98543820dea7a401c6e5df85cc329ac01c7759ae1efb93f0204425c41bcaa2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:11:05.237686 systemd[1]: run-netns-cni\x2d87030a79\x2d7296\x2dd743\x2db0d9\x2dfd5e3f8d7c12.mount: Deactivated successfully. 
Jan 23 01:11:05.239184 kubelet[2484]: E0123 01:11:05.238414 2484 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8a98543820dea7a401c6e5df85cc329ac01c7759ae1efb93f0204425c41bcaa2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:11:05.239184 kubelet[2484]: E0123 01:11:05.238484 2484 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8a98543820dea7a401c6e5df85cc329ac01c7759ae1efb93f0204425c41bcaa2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-7wzn6" Jan 23 01:11:05.239184 kubelet[2484]: E0123 01:11:05.238512 2484 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8a98543820dea7a401c6e5df85cc329ac01c7759ae1efb93f0204425c41bcaa2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-7wzn6" Jan 23 01:11:05.239833 kubelet[2484]: E0123 01:11:05.238573 2484 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-7fcdb87857-7wzn6_default(79850a2c-055d-47bd-a7df-ca05bf7f9b5e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-7fcdb87857-7wzn6_default(79850a2c-055d-47bd-a7df-ca05bf7f9b5e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8a98543820dea7a401c6e5df85cc329ac01c7759ae1efb93f0204425c41bcaa2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-7fcdb87857-7wzn6" podUID="79850a2c-055d-47bd-a7df-ca05bf7f9b5e" Jan 23 01:11:06.233093 kubelet[2484]: E0123 01:11:06.233049 2484 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:11:06.876599 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1207491815.mount: Deactivated successfully. 
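The sandbox failures for csi-node-driver-wtjjp and nginx-deployment-7fcdb87857-7wzn6 above share the root cause stated in the error text: the Calico CNI plugin stats /var/lib/calico/nodename, which exists only after the calico/node container (still being pulled at this point) has started and mounted /var/lib/calico/; likewise containerd keeps reporting "no network config found in /etc/cni/net.d" until install-cni has written a network config there. A small readiness probe along these lines, with the paths taken from the log and no assumptions about the components themselves, makes the ordering dependency explicit:

// cni_ready.go - checks the two preconditions the errors above point at:
// a CNI network config under /etc/cni/net.d and Calico's node name file.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	confs, _ := filepath.Glob("/etc/cni/net.d/*.conflist")
	more, _ := filepath.Glob("/etc/cni/net.d/*.conf")
	confs = append(confs, more...)
	if len(confs) == 0 {
		fmt.Println("no CNI network config yet (matches the 'cni plugin not initialized' messages)")
	} else {
		fmt.Println("CNI config present:", confs)
	}

	if _, err := os.Stat("/var/lib/calico/nodename"); err != nil {
		fmt.Println("calico-node has not written /var/lib/calico/nodename yet:", err)
	} else {
		fmt.Println("/var/lib/calico/nodename exists; Calico CNI add/delete can proceed")
	}
}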
Jan 23 01:11:06.928846 containerd[1977]: time="2026-01-23T01:11:06.928789723Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:11:06.932845 containerd[1977]: time="2026-01-23T01:11:06.932515485Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Jan 23 01:11:06.935156 containerd[1977]: time="2026-01-23T01:11:06.935112281Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:11:06.938400 containerd[1977]: time="2026-01-23T01:11:06.938357653Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:11:06.939130 containerd[1977]: time="2026-01-23T01:11:06.938821741Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 6.530906643s" Jan 23 01:11:06.939130 containerd[1977]: time="2026-01-23T01:11:06.938856182Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 23 01:11:06.953529 containerd[1977]: time="2026-01-23T01:11:06.953479922Z" level=info msg="CreateContainer within sandbox \"9a348fddb256ae2db461ec3d148be580a5b0c5262ff17cc1952e313f6940ef59\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 23 01:11:06.977254 containerd[1977]: time="2026-01-23T01:11:06.973610633Z" level=info msg="Container 0e511c7d6d0c7040e3f0ece93e7868cdb912704a690b564c0bf59090f1e510ff: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:11:06.980517 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2723897733.mount: Deactivated successfully. Jan 23 01:11:07.001888 containerd[1977]: time="2026-01-23T01:11:07.001821804Z" level=info msg="CreateContainer within sandbox \"9a348fddb256ae2db461ec3d148be580a5b0c5262ff17cc1952e313f6940ef59\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"0e511c7d6d0c7040e3f0ece93e7868cdb912704a690b564c0bf59090f1e510ff\"" Jan 23 01:11:07.002648 containerd[1977]: time="2026-01-23T01:11:07.002580567Z" level=info msg="StartContainer for \"0e511c7d6d0c7040e3f0ece93e7868cdb912704a690b564c0bf59090f1e510ff\"" Jan 23 01:11:07.004971 containerd[1977]: time="2026-01-23T01:11:07.004929466Z" level=info msg="connecting to shim 0e511c7d6d0c7040e3f0ece93e7868cdb912704a690b564c0bf59090f1e510ff" address="unix:///run/containerd/s/9bc5a4b1b2fe3b8c6d76df8c48f14f297e2a0b2242ef6fdec004a09794712a33" protocol=ttrpc version=3 Jan 23 01:11:07.065484 systemd[1]: Started cri-containerd-0e511c7d6d0c7040e3f0ece93e7868cdb912704a690b564c0bf59090f1e510ff.scope - libcontainer container 0e511c7d6d0c7040e3f0ece93e7868cdb912704a690b564c0bf59090f1e510ff. 
Jan 23 01:11:07.175485 containerd[1977]: time="2026-01-23T01:11:07.175359966Z" level=info msg="StartContainer for \"0e511c7d6d0c7040e3f0ece93e7868cdb912704a690b564c0bf59090f1e510ff\" returns successfully" Jan 23 01:11:07.233613 kubelet[2484]: E0123 01:11:07.233561 2484 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:11:07.307630 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 23 01:11:07.307791 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jan 23 01:11:07.727884 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jan 23 01:11:08.234959 kubelet[2484]: E0123 01:11:08.234880 2484 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:11:09.215368 kubelet[2484]: E0123 01:11:09.215314 2484 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:11:09.236102 kubelet[2484]: E0123 01:11:09.236054 2484 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:11:09.261714 (udev-worker)[3048]: Network interface NamePolicy= disabled on kernel command line. Jan 23 01:11:09.264920 systemd-networkd[1797]: vxlan.calico: Link UP Jan 23 01:11:09.264933 systemd-networkd[1797]: vxlan.calico: Gained carrier Jan 23 01:11:09.302109 (udev-worker)[3266]: Network interface NamePolicy= disabled on kernel command line. Jan 23 01:11:10.237066 kubelet[2484]: E0123 01:11:10.237006 2484 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:11:10.996430 systemd-networkd[1797]: vxlan.calico: Gained IPv6LL Jan 23 01:11:11.238247 kubelet[2484]: E0123 01:11:11.238148 2484 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:11:12.239149 kubelet[2484]: E0123 01:11:12.239091 2484 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:11:13.239696 kubelet[2484]: E0123 01:11:13.239543 2484 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:11:13.924667 ntpd[2216]: Listen normally on 6 vxlan.calico 192.168.20.192:123 Jan 23 01:11:13.924750 ntpd[2216]: Listen normally on 7 vxlan.calico [fe80::644c:bdff:fead:b8bb%3]:123 Jan 23 01:11:13.925194 ntpd[2216]: 23 Jan 01:11:13 ntpd[2216]: Listen normally on 6 vxlan.calico 192.168.20.192:123 Jan 23 01:11:13.925194 ntpd[2216]: 23 Jan 01:11:13 ntpd[2216]: Listen normally on 7 vxlan.calico [fe80::644c:bdff:fead:b8bb%3]:123 Jan 23 01:11:14.240321 kubelet[2484]: E0123 01:11:14.240191 2484 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:11:15.240770 kubelet[2484]: E0123 01:11:15.240683 2484 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:11:15.359491 containerd[1977]: time="2026-01-23T01:11:15.359279174Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wtjjp,Uid:4268a9df-0451-4ff6-8f73-e9f18c886e93,Namespace:calico-system,Attempt:0,}" Jan 23 01:11:15.687426 (udev-worker)[3363]: Network interface NamePolicy= disabled on kernel command line. 
Jan 23 01:11:15.688994 systemd-networkd[1797]: calia29ed36c00f: Link UP Jan 23 01:11:15.689560 systemd-networkd[1797]: calia29ed36c00f: Gained carrier Jan 23 01:11:15.702544 kubelet[2484]: I0123 01:11:15.702482 2484 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-rqqws" podStartSLOduration=11.043999856 podStartE2EDuration="26.702459828s" podCreationTimestamp="2026-01-23 01:10:49 +0000 UTC" firstStartedPulling="2026-01-23 01:10:51.28108264 +0000 UTC m=+2.752454967" lastFinishedPulling="2026-01-23 01:11:06.939542631 +0000 UTC m=+18.410914939" observedRunningTime="2026-01-23 01:11:07.464504435 +0000 UTC m=+18.935876765" watchObservedRunningTime="2026-01-23 01:11:15.702459828 +0000 UTC m=+27.173832161" Jan 23 01:11:15.703772 containerd[1977]: 2026-01-23 01:11:15.475 [INFO][3344] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.20.240-k8s-csi--node--driver--wtjjp-eth0 csi-node-driver- calico-system 4268a9df-0451-4ff6-8f73-e9f18c886e93 1080 0 2026-01-23 01:10:49 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 172.31.20.240 csi-node-driver-wtjjp eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calia29ed36c00f [] [] }} ContainerID="b2f5eb6499eb898995898f9a9842f1a0d8f07f0de51d82fa56fac2eebd08b4f1" Namespace="calico-system" Pod="csi-node-driver-wtjjp" WorkloadEndpoint="172.31.20.240-k8s-csi--node--driver--wtjjp-" Jan 23 01:11:15.703772 containerd[1977]: 2026-01-23 01:11:15.483 [INFO][3344] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b2f5eb6499eb898995898f9a9842f1a0d8f07f0de51d82fa56fac2eebd08b4f1" Namespace="calico-system" Pod="csi-node-driver-wtjjp" WorkloadEndpoint="172.31.20.240-k8s-csi--node--driver--wtjjp-eth0" Jan 23 01:11:15.703772 containerd[1977]: 2026-01-23 01:11:15.626 [INFO][3357] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b2f5eb6499eb898995898f9a9842f1a0d8f07f0de51d82fa56fac2eebd08b4f1" HandleID="k8s-pod-network.b2f5eb6499eb898995898f9a9842f1a0d8f07f0de51d82fa56fac2eebd08b4f1" Workload="172.31.20.240-k8s-csi--node--driver--wtjjp-eth0" Jan 23 01:11:15.704005 containerd[1977]: 2026-01-23 01:11:15.627 [INFO][3357] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b2f5eb6499eb898995898f9a9842f1a0d8f07f0de51d82fa56fac2eebd08b4f1" HandleID="k8s-pod-network.b2f5eb6499eb898995898f9a9842f1a0d8f07f0de51d82fa56fac2eebd08b4f1" Workload="172.31.20.240-k8s-csi--node--driver--wtjjp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001256e0), Attrs:map[string]string{"namespace":"calico-system", "node":"172.31.20.240", "pod":"csi-node-driver-wtjjp", "timestamp":"2026-01-23 01:11:15.6266368 +0000 UTC"}, Hostname:"172.31.20.240", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 01:11:15.704005 containerd[1977]: 2026-01-23 01:11:15.627 [INFO][3357] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 01:11:15.704005 containerd[1977]: 2026-01-23 01:11:15.627 [INFO][3357] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 01:11:15.704005 containerd[1977]: 2026-01-23 01:11:15.627 [INFO][3357] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.20.240' Jan 23 01:11:15.704005 containerd[1977]: 2026-01-23 01:11:15.637 [INFO][3357] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b2f5eb6499eb898995898f9a9842f1a0d8f07f0de51d82fa56fac2eebd08b4f1" host="172.31.20.240" Jan 23 01:11:15.704005 containerd[1977]: 2026-01-23 01:11:15.650 [INFO][3357] ipam/ipam.go 394: Looking up existing affinities for host host="172.31.20.240" Jan 23 01:11:15.704005 containerd[1977]: 2026-01-23 01:11:15.657 [INFO][3357] ipam/ipam.go 511: Trying affinity for 192.168.20.192/26 host="172.31.20.240" Jan 23 01:11:15.704005 containerd[1977]: 2026-01-23 01:11:15.659 [INFO][3357] ipam/ipam.go 158: Attempting to load block cidr=192.168.20.192/26 host="172.31.20.240" Jan 23 01:11:15.704005 containerd[1977]: 2026-01-23 01:11:15.662 [INFO][3357] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.20.192/26 host="172.31.20.240" Jan 23 01:11:15.704005 containerd[1977]: 2026-01-23 01:11:15.662 [INFO][3357] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.20.192/26 handle="k8s-pod-network.b2f5eb6499eb898995898f9a9842f1a0d8f07f0de51d82fa56fac2eebd08b4f1" host="172.31.20.240" Jan 23 01:11:15.706605 containerd[1977]: 2026-01-23 01:11:15.663 [INFO][3357] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b2f5eb6499eb898995898f9a9842f1a0d8f07f0de51d82fa56fac2eebd08b4f1 Jan 23 01:11:15.706605 containerd[1977]: 2026-01-23 01:11:15.668 [INFO][3357] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.20.192/26 handle="k8s-pod-network.b2f5eb6499eb898995898f9a9842f1a0d8f07f0de51d82fa56fac2eebd08b4f1" host="172.31.20.240" Jan 23 01:11:15.706605 containerd[1977]: 2026-01-23 01:11:15.675 [INFO][3357] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.20.193/26] block=192.168.20.192/26 handle="k8s-pod-network.b2f5eb6499eb898995898f9a9842f1a0d8f07f0de51d82fa56fac2eebd08b4f1" host="172.31.20.240" Jan 23 01:11:15.706605 containerd[1977]: 2026-01-23 01:11:15.675 [INFO][3357] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.20.193/26] handle="k8s-pod-network.b2f5eb6499eb898995898f9a9842f1a0d8f07f0de51d82fa56fac2eebd08b4f1" host="172.31.20.240" Jan 23 01:11:15.706605 containerd[1977]: 2026-01-23 01:11:15.675 [INFO][3357] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 23 01:11:15.706605 containerd[1977]: 2026-01-23 01:11:15.675 [INFO][3357] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.20.193/26] IPv6=[] ContainerID="b2f5eb6499eb898995898f9a9842f1a0d8f07f0de51d82fa56fac2eebd08b4f1" HandleID="k8s-pod-network.b2f5eb6499eb898995898f9a9842f1a0d8f07f0de51d82fa56fac2eebd08b4f1" Workload="172.31.20.240-k8s-csi--node--driver--wtjjp-eth0" Jan 23 01:11:15.706851 containerd[1977]: 2026-01-23 01:11:15.679 [INFO][3344] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b2f5eb6499eb898995898f9a9842f1a0d8f07f0de51d82fa56fac2eebd08b4f1" Namespace="calico-system" Pod="csi-node-driver-wtjjp" WorkloadEndpoint="172.31.20.240-k8s-csi--node--driver--wtjjp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.20.240-k8s-csi--node--driver--wtjjp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4268a9df-0451-4ff6-8f73-e9f18c886e93", ResourceVersion:"1080", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 10, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.20.240", ContainerID:"", Pod:"csi-node-driver-wtjjp", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.20.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia29ed36c00f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:11:15.706971 containerd[1977]: 2026-01-23 01:11:15.679 [INFO][3344] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.20.193/32] ContainerID="b2f5eb6499eb898995898f9a9842f1a0d8f07f0de51d82fa56fac2eebd08b4f1" Namespace="calico-system" Pod="csi-node-driver-wtjjp" WorkloadEndpoint="172.31.20.240-k8s-csi--node--driver--wtjjp-eth0" Jan 23 01:11:15.706971 containerd[1977]: 2026-01-23 01:11:15.679 [INFO][3344] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia29ed36c00f ContainerID="b2f5eb6499eb898995898f9a9842f1a0d8f07f0de51d82fa56fac2eebd08b4f1" Namespace="calico-system" Pod="csi-node-driver-wtjjp" WorkloadEndpoint="172.31.20.240-k8s-csi--node--driver--wtjjp-eth0" Jan 23 01:11:15.706971 containerd[1977]: 2026-01-23 01:11:15.690 [INFO][3344] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b2f5eb6499eb898995898f9a9842f1a0d8f07f0de51d82fa56fac2eebd08b4f1" Namespace="calico-system" Pod="csi-node-driver-wtjjp" WorkloadEndpoint="172.31.20.240-k8s-csi--node--driver--wtjjp-eth0" Jan 23 01:11:15.707152 containerd[1977]: 2026-01-23 01:11:15.690 [INFO][3344] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b2f5eb6499eb898995898f9a9842f1a0d8f07f0de51d82fa56fac2eebd08b4f1" Namespace="calico-system" 
Pod="csi-node-driver-wtjjp" WorkloadEndpoint="172.31.20.240-k8s-csi--node--driver--wtjjp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.20.240-k8s-csi--node--driver--wtjjp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4268a9df-0451-4ff6-8f73-e9f18c886e93", ResourceVersion:"1080", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 10, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.20.240", ContainerID:"b2f5eb6499eb898995898f9a9842f1a0d8f07f0de51d82fa56fac2eebd08b4f1", Pod:"csi-node-driver-wtjjp", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.20.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia29ed36c00f", MAC:"9e:a5:e5:ee:f0:00", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:11:15.709442 containerd[1977]: 2026-01-23 01:11:15.701 [INFO][3344] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b2f5eb6499eb898995898f9a9842f1a0d8f07f0de51d82fa56fac2eebd08b4f1" Namespace="calico-system" Pod="csi-node-driver-wtjjp" WorkloadEndpoint="172.31.20.240-k8s-csi--node--driver--wtjjp-eth0" Jan 23 01:11:15.774923 containerd[1977]: time="2026-01-23T01:11:15.774850264Z" level=info msg="connecting to shim b2f5eb6499eb898995898f9a9842f1a0d8f07f0de51d82fa56fac2eebd08b4f1" address="unix:///run/containerd/s/743c0a0d920c9f97b0c29de91634030c61aa14ee15b8fd36c2539a73c84aea4a" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:11:15.812810 systemd[1]: Started cri-containerd-b2f5eb6499eb898995898f9a9842f1a0d8f07f0de51d82fa56fac2eebd08b4f1.scope - libcontainer container b2f5eb6499eb898995898f9a9842f1a0d8f07f0de51d82fa56fac2eebd08b4f1. 
Jan 23 01:11:15.847599 containerd[1977]: time="2026-01-23T01:11:15.847513135Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wtjjp,Uid:4268a9df-0451-4ff6-8f73-e9f18c886e93,Namespace:calico-system,Attempt:0,} returns sandbox id \"b2f5eb6499eb898995898f9a9842f1a0d8f07f0de51d82fa56fac2eebd08b4f1\"" Jan 23 01:11:15.850059 containerd[1977]: time="2026-01-23T01:11:15.849750274Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 01:11:16.148695 containerd[1977]: time="2026-01-23T01:11:16.148646937Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:11:16.150941 containerd[1977]: time="2026-01-23T01:11:16.150855410Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 01:11:16.150941 containerd[1977]: time="2026-01-23T01:11:16.150904058Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 23 01:11:16.151258 kubelet[2484]: E0123 01:11:16.151180 2484 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 01:11:16.151357 kubelet[2484]: E0123 01:11:16.151280 2484 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 01:11:16.151723 kubelet[2484]: E0123 01:11:16.151520 2484 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-72bt8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-wtjjp_calico-system(4268a9df-0451-4ff6-8f73-e9f18c886e93): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 01:11:16.154583 containerd[1977]: time="2026-01-23T01:11:16.154508592Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 01:11:16.240885 kubelet[2484]: E0123 01:11:16.240847 2484 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:11:16.359540 containerd[1977]: time="2026-01-23T01:11:16.359477565Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-7wzn6,Uid:79850a2c-055d-47bd-a7df-ca05bf7f9b5e,Namespace:default,Attempt:0,}" Jan 23 01:11:16.421336 containerd[1977]: time="2026-01-23T01:11:16.420452413Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:11:16.422860 containerd[1977]: time="2026-01-23T01:11:16.422799087Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 01:11:16.422965 containerd[1977]: time="2026-01-23T01:11:16.422916581Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 23 01:11:16.423213 kubelet[2484]: E0123 01:11:16.423169 2484 log.go:32] "PullImage from image service 
failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 01:11:16.423424 kubelet[2484]: E0123 01:11:16.423380 2484 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 01:11:16.423816 kubelet[2484]: E0123 01:11:16.423607 2484 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-72bt8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-wtjjp_calico-system(4268a9df-0451-4ff6-8f73-e9f18c886e93): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 01:11:16.424967 kubelet[2484]: E0123 01:11:16.424918 2484 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wtjjp" podUID="4268a9df-0451-4ff6-8f73-e9f18c886e93" Jan 23 01:11:16.462951 kubelet[2484]: E0123 01:11:16.462912 2484 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wtjjp" podUID="4268a9df-0451-4ff6-8f73-e9f18c886e93" Jan 23 01:11:16.500952 (udev-worker)[3365]: Network interface NamePolicy= disabled on kernel command line. Jan 23 01:11:16.502061 systemd-networkd[1797]: calif2e8fdccb41: Link UP Jan 23 01:11:16.502885 systemd-networkd[1797]: calif2e8fdccb41: Gained carrier Jan 23 01:11:16.513434 containerd[1977]: 2026-01-23 01:11:16.409 [INFO][3424] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.20.240-k8s-nginx--deployment--7fcdb87857--7wzn6-eth0 nginx-deployment-7fcdb87857- default 79850a2c-055d-47bd-a7df-ca05bf7f9b5e 1185 0 2026-01-23 01:11:04 +0000 UTC map[app:nginx pod-template-hash:7fcdb87857 projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 172.31.20.240 nginx-deployment-7fcdb87857-7wzn6 eth0 default [] [] [kns.default ksa.default.default] calif2e8fdccb41 [] [] }} ContainerID="9bc85546c6d7885f31d1f203b2c022eef0ea4f699540a2eb4f2d4ba58b4fd6d1" Namespace="default" Pod="nginx-deployment-7fcdb87857-7wzn6" WorkloadEndpoint="172.31.20.240-k8s-nginx--deployment--7fcdb87857--7wzn6-" Jan 23 01:11:16.513434 containerd[1977]: 2026-01-23 01:11:16.409 [INFO][3424] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9bc85546c6d7885f31d1f203b2c022eef0ea4f699540a2eb4f2d4ba58b4fd6d1" Namespace="default" Pod="nginx-deployment-7fcdb87857-7wzn6" WorkloadEndpoint="172.31.20.240-k8s-nginx--deployment--7fcdb87857--7wzn6-eth0" Jan 23 01:11:16.513434 containerd[1977]: 2026-01-23 01:11:16.446 [INFO][3436] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9bc85546c6d7885f31d1f203b2c022eef0ea4f699540a2eb4f2d4ba58b4fd6d1" HandleID="k8s-pod-network.9bc85546c6d7885f31d1f203b2c022eef0ea4f699540a2eb4f2d4ba58b4fd6d1" Workload="172.31.20.240-k8s-nginx--deployment--7fcdb87857--7wzn6-eth0" Jan 23 01:11:16.513667 containerd[1977]: 2026-01-23 
01:11:16.446 [INFO][3436] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="9bc85546c6d7885f31d1f203b2c022eef0ea4f699540a2eb4f2d4ba58b4fd6d1" HandleID="k8s-pod-network.9bc85546c6d7885f31d1f203b2c022eef0ea4f699540a2eb4f2d4ba58b4fd6d1" Workload="172.31.20.240-k8s-nginx--deployment--7fcdb87857--7wzn6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024efe0), Attrs:map[string]string{"namespace":"default", "node":"172.31.20.240", "pod":"nginx-deployment-7fcdb87857-7wzn6", "timestamp":"2026-01-23 01:11:16.44641016 +0000 UTC"}, Hostname:"172.31.20.240", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 01:11:16.513667 containerd[1977]: 2026-01-23 01:11:16.446 [INFO][3436] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 01:11:16.513667 containerd[1977]: 2026-01-23 01:11:16.446 [INFO][3436] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 01:11:16.513667 containerd[1977]: 2026-01-23 01:11:16.446 [INFO][3436] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.20.240' Jan 23 01:11:16.513667 containerd[1977]: 2026-01-23 01:11:16.453 [INFO][3436] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9bc85546c6d7885f31d1f203b2c022eef0ea4f699540a2eb4f2d4ba58b4fd6d1" host="172.31.20.240" Jan 23 01:11:16.513667 containerd[1977]: 2026-01-23 01:11:16.461 [INFO][3436] ipam/ipam.go 394: Looking up existing affinities for host host="172.31.20.240" Jan 23 01:11:16.513667 containerd[1977]: 2026-01-23 01:11:16.466 [INFO][3436] ipam/ipam.go 511: Trying affinity for 192.168.20.192/26 host="172.31.20.240" Jan 23 01:11:16.513667 containerd[1977]: 2026-01-23 01:11:16.471 [INFO][3436] ipam/ipam.go 158: Attempting to load block cidr=192.168.20.192/26 host="172.31.20.240" Jan 23 01:11:16.513667 containerd[1977]: 2026-01-23 01:11:16.479 [INFO][3436] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.20.192/26 host="172.31.20.240" Jan 23 01:11:16.513667 containerd[1977]: 2026-01-23 01:11:16.479 [INFO][3436] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.20.192/26 handle="k8s-pod-network.9bc85546c6d7885f31d1f203b2c022eef0ea4f699540a2eb4f2d4ba58b4fd6d1" host="172.31.20.240" Jan 23 01:11:16.513920 containerd[1977]: 2026-01-23 01:11:16.482 [INFO][3436] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.9bc85546c6d7885f31d1f203b2c022eef0ea4f699540a2eb4f2d4ba58b4fd6d1 Jan 23 01:11:16.513920 containerd[1977]: 2026-01-23 01:11:16.487 [INFO][3436] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.20.192/26 handle="k8s-pod-network.9bc85546c6d7885f31d1f203b2c022eef0ea4f699540a2eb4f2d4ba58b4fd6d1" host="172.31.20.240" Jan 23 01:11:16.513920 containerd[1977]: 2026-01-23 01:11:16.495 [INFO][3436] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.20.194/26] block=192.168.20.192/26 handle="k8s-pod-network.9bc85546c6d7885f31d1f203b2c022eef0ea4f699540a2eb4f2d4ba58b4fd6d1" host="172.31.20.240" Jan 23 01:11:16.513920 containerd[1977]: 2026-01-23 01:11:16.496 [INFO][3436] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.20.194/26] handle="k8s-pod-network.9bc85546c6d7885f31d1f203b2c022eef0ea4f699540a2eb4f2d4ba58b4fd6d1" host="172.31.20.240" Jan 23 01:11:16.513920 containerd[1977]: 2026-01-23 01:11:16.496 [INFO][3436] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 23 01:11:16.513920 containerd[1977]: 2026-01-23 01:11:16.496 [INFO][3436] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.20.194/26] IPv6=[] ContainerID="9bc85546c6d7885f31d1f203b2c022eef0ea4f699540a2eb4f2d4ba58b4fd6d1" HandleID="k8s-pod-network.9bc85546c6d7885f31d1f203b2c022eef0ea4f699540a2eb4f2d4ba58b4fd6d1" Workload="172.31.20.240-k8s-nginx--deployment--7fcdb87857--7wzn6-eth0" Jan 23 01:11:16.514057 containerd[1977]: 2026-01-23 01:11:16.498 [INFO][3424] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9bc85546c6d7885f31d1f203b2c022eef0ea4f699540a2eb4f2d4ba58b4fd6d1" Namespace="default" Pod="nginx-deployment-7fcdb87857-7wzn6" WorkloadEndpoint="172.31.20.240-k8s-nginx--deployment--7fcdb87857--7wzn6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.20.240-k8s-nginx--deployment--7fcdb87857--7wzn6-eth0", GenerateName:"nginx-deployment-7fcdb87857-", Namespace:"default", SelfLink:"", UID:"79850a2c-055d-47bd-a7df-ca05bf7f9b5e", ResourceVersion:"1185", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 11, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"7fcdb87857", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.20.240", ContainerID:"", Pod:"nginx-deployment-7fcdb87857-7wzn6", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.20.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"calif2e8fdccb41", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:11:16.514057 containerd[1977]: 2026-01-23 01:11:16.498 [INFO][3424] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.20.194/32] ContainerID="9bc85546c6d7885f31d1f203b2c022eef0ea4f699540a2eb4f2d4ba58b4fd6d1" Namespace="default" Pod="nginx-deployment-7fcdb87857-7wzn6" WorkloadEndpoint="172.31.20.240-k8s-nginx--deployment--7fcdb87857--7wzn6-eth0" Jan 23 01:11:16.514164 containerd[1977]: 2026-01-23 01:11:16.498 [INFO][3424] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif2e8fdccb41 ContainerID="9bc85546c6d7885f31d1f203b2c022eef0ea4f699540a2eb4f2d4ba58b4fd6d1" Namespace="default" Pod="nginx-deployment-7fcdb87857-7wzn6" WorkloadEndpoint="172.31.20.240-k8s-nginx--deployment--7fcdb87857--7wzn6-eth0" Jan 23 01:11:16.514164 containerd[1977]: 2026-01-23 01:11:16.501 [INFO][3424] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9bc85546c6d7885f31d1f203b2c022eef0ea4f699540a2eb4f2d4ba58b4fd6d1" Namespace="default" Pod="nginx-deployment-7fcdb87857-7wzn6" WorkloadEndpoint="172.31.20.240-k8s-nginx--deployment--7fcdb87857--7wzn6-eth0" Jan 23 01:11:16.514233 containerd[1977]: 2026-01-23 01:11:16.502 [INFO][3424] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9bc85546c6d7885f31d1f203b2c022eef0ea4f699540a2eb4f2d4ba58b4fd6d1" Namespace="default" Pod="nginx-deployment-7fcdb87857-7wzn6" 
WorkloadEndpoint="172.31.20.240-k8s-nginx--deployment--7fcdb87857--7wzn6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.20.240-k8s-nginx--deployment--7fcdb87857--7wzn6-eth0", GenerateName:"nginx-deployment-7fcdb87857-", Namespace:"default", SelfLink:"", UID:"79850a2c-055d-47bd-a7df-ca05bf7f9b5e", ResourceVersion:"1185", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 11, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"7fcdb87857", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.20.240", ContainerID:"9bc85546c6d7885f31d1f203b2c022eef0ea4f699540a2eb4f2d4ba58b4fd6d1", Pod:"nginx-deployment-7fcdb87857-7wzn6", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.20.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"calif2e8fdccb41", MAC:"9e:bb:fa:1e:6d:10", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:11:16.514293 containerd[1977]: 2026-01-23 01:11:16.510 [INFO][3424] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9bc85546c6d7885f31d1f203b2c022eef0ea4f699540a2eb4f2d4ba58b4fd6d1" Namespace="default" Pod="nginx-deployment-7fcdb87857-7wzn6" WorkloadEndpoint="172.31.20.240-k8s-nginx--deployment--7fcdb87857--7wzn6-eth0" Jan 23 01:11:16.560420 containerd[1977]: time="2026-01-23T01:11:16.560331997Z" level=info msg="connecting to shim 9bc85546c6d7885f31d1f203b2c022eef0ea4f699540a2eb4f2d4ba58b4fd6d1" address="unix:///run/containerd/s/f1b4de2fd185d0162d6eb47543ca6d2116392b85ee6884d6a1e62dca1d6dffde" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:11:16.598500 systemd[1]: Started cri-containerd-9bc85546c6d7885f31d1f203b2c022eef0ea4f699540a2eb4f2d4ba58b4fd6d1.scope - libcontainer container 9bc85546c6d7885f31d1f203b2c022eef0ea4f699540a2eb4f2d4ba58b4fd6d1. 
Jan 23 01:11:16.661774 containerd[1977]: time="2026-01-23T01:11:16.661723612Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-7wzn6,Uid:79850a2c-055d-47bd-a7df-ca05bf7f9b5e,Namespace:default,Attempt:0,} returns sandbox id \"9bc85546c6d7885f31d1f203b2c022eef0ea4f699540a2eb4f2d4ba58b4fd6d1\"" Jan 23 01:11:16.664122 containerd[1977]: time="2026-01-23T01:11:16.664082550Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 23 01:11:17.241385 kubelet[2484]: E0123 01:11:17.241324 2484 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:11:17.464540 kubelet[2484]: E0123 01:11:17.464467 2484 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wtjjp" podUID="4268a9df-0451-4ff6-8f73-e9f18c886e93" Jan 23 01:11:17.716453 systemd-networkd[1797]: calia29ed36c00f: Gained IPv6LL Jan 23 01:11:18.228626 systemd-networkd[1797]: calif2e8fdccb41: Gained IPv6LL Jan 23 01:11:18.241691 kubelet[2484]: E0123 01:11:18.241577 2484 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:11:19.242388 kubelet[2484]: E0123 01:11:19.242320 2484 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:11:19.353721 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3223616912.mount: Deactivated successfully. 
Jan 23 01:11:20.242656 kubelet[2484]: E0123 01:11:20.242600 2484 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:11:20.622273 containerd[1977]: time="2026-01-23T01:11:20.622205459Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:11:20.623595 containerd[1977]: time="2026-01-23T01:11:20.623547190Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=63836480" Jan 23 01:11:20.625278 containerd[1977]: time="2026-01-23T01:11:20.625201121Z" level=info msg="ImageCreate event name:\"sha256:94a30ed18e45363486a55379bd1fbb8c479f36524ae816779f3553afe6b787ed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:11:20.627879 containerd[1977]: time="2026-01-23T01:11:20.627829570Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:32c5137cb8c7cf61e75836f150e983b9be21fecc642ada89fd936c8cd6c0faa0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:11:20.628935 containerd[1977]: time="2026-01-23T01:11:20.628884458Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:94a30ed18e45363486a55379bd1fbb8c479f36524ae816779f3553afe6b787ed\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:32c5137cb8c7cf61e75836f150e983b9be21fecc642ada89fd936c8cd6c0faa0\", size \"63836358\" in 3.964765177s" Jan 23 01:11:20.628935 containerd[1977]: time="2026-01-23T01:11:20.628922078Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:94a30ed18e45363486a55379bd1fbb8c479f36524ae816779f3553afe6b787ed\"" Jan 23 01:11:20.631835 containerd[1977]: time="2026-01-23T01:11:20.630999959Z" level=info msg="CreateContainer within sandbox \"9bc85546c6d7885f31d1f203b2c022eef0ea4f699540a2eb4f2d4ba58b4fd6d1\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Jan 23 01:11:20.641809 containerd[1977]: time="2026-01-23T01:11:20.641774721Z" level=info msg="Container 0d5d3fdc852030b2c4ec8e7603064c7b3dbd07b4154d6b7c31929a91e62a5efd: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:11:20.652112 containerd[1977]: time="2026-01-23T01:11:20.652058729Z" level=info msg="CreateContainer within sandbox \"9bc85546c6d7885f31d1f203b2c022eef0ea4f699540a2eb4f2d4ba58b4fd6d1\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"0d5d3fdc852030b2c4ec8e7603064c7b3dbd07b4154d6b7c31929a91e62a5efd\"" Jan 23 01:11:20.653307 containerd[1977]: time="2026-01-23T01:11:20.653218748Z" level=info msg="StartContainer for \"0d5d3fdc852030b2c4ec8e7603064c7b3dbd07b4154d6b7c31929a91e62a5efd\"" Jan 23 01:11:20.654444 containerd[1977]: time="2026-01-23T01:11:20.654403994Z" level=info msg="connecting to shim 0d5d3fdc852030b2c4ec8e7603064c7b3dbd07b4154d6b7c31929a91e62a5efd" address="unix:///run/containerd/s/f1b4de2fd185d0162d6eb47543ca6d2116392b85ee6884d6a1e62dca1d6dffde" protocol=ttrpc version=3 Jan 23 01:11:20.691568 systemd[1]: Started cri-containerd-0d5d3fdc852030b2c4ec8e7603064c7b3dbd07b4154d6b7c31929a91e62a5efd.scope - libcontainer container 0d5d3fdc852030b2c4ec8e7603064c7b3dbd07b4154d6b7c31929a91e62a5efd. 
Jan 23 01:11:20.752890 containerd[1977]: time="2026-01-23T01:11:20.751414968Z" level=info msg="StartContainer for \"0d5d3fdc852030b2c4ec8e7603064c7b3dbd07b4154d6b7c31929a91e62a5efd\" returns successfully" Jan 23 01:11:20.924694 ntpd[2216]: Listen normally on 8 calia29ed36c00f [fe80::ecee:eeff:feee:eeee%6]:123 Jan 23 01:11:20.925109 ntpd[2216]: 23 Jan 01:11:20 ntpd[2216]: Listen normally on 8 calia29ed36c00f [fe80::ecee:eeff:feee:eeee%6]:123 Jan 23 01:11:20.925109 ntpd[2216]: 23 Jan 01:11:20 ntpd[2216]: Listen normally on 9 calif2e8fdccb41 [fe80::ecee:eeff:feee:eeee%7]:123 Jan 23 01:11:20.924744 ntpd[2216]: Listen normally on 9 calif2e8fdccb41 [fe80::ecee:eeff:feee:eeee%7]:123 Jan 23 01:11:21.243171 kubelet[2484]: E0123 01:11:21.243037 2484 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:11:21.484802 kubelet[2484]: I0123 01:11:21.484736 2484 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-7fcdb87857-7wzn6" podStartSLOduration=13.518265677 podStartE2EDuration="17.484720455s" podCreationTimestamp="2026-01-23 01:11:04 +0000 UTC" firstStartedPulling="2026-01-23 01:11:16.663452847 +0000 UTC m=+28.134825157" lastFinishedPulling="2026-01-23 01:11:20.629907625 +0000 UTC m=+32.101279935" observedRunningTime="2026-01-23 01:11:21.484597705 +0000 UTC m=+32.955970043" watchObservedRunningTime="2026-01-23 01:11:21.484720455 +0000 UTC m=+32.956092776" Jan 23 01:11:21.937605 update_engine[1951]: I20260123 01:11:21.937503 1951 update_attempter.cc:509] Updating boot flags... Jan 23 01:11:22.244414 kubelet[2484]: E0123 01:11:22.243996 2484 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:11:23.244890 kubelet[2484]: E0123 01:11:23.244832 2484 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:11:24.245206 kubelet[2484]: E0123 01:11:24.245147 2484 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:11:25.245575 kubelet[2484]: E0123 01:11:25.245518 2484 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:11:26.246057 kubelet[2484]: E0123 01:11:26.245998 2484 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:11:27.247033 kubelet[2484]: E0123 01:11:27.246980 2484 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:11:27.368008 systemd[1]: Created slice kubepods-besteffort-pod942a0c3b_4881_489a_8f74_d56eff112b01.slice - libcontainer container kubepods-besteffort-pod942a0c3b_4881_489a_8f74_d56eff112b01.slice. 
Jan 23 01:11:27.421178 kubelet[2484]: I0123 01:11:27.421099 2484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/942a0c3b-4881-489a-8f74-d56eff112b01-data\") pod \"nfs-server-provisioner-0\" (UID: \"942a0c3b-4881-489a-8f74-d56eff112b01\") " pod="default/nfs-server-provisioner-0" Jan 23 01:11:27.421330 kubelet[2484]: I0123 01:11:27.421199 2484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrkr4\" (UniqueName: \"kubernetes.io/projected/942a0c3b-4881-489a-8f74-d56eff112b01-kube-api-access-lrkr4\") pod \"nfs-server-provisioner-0\" (UID: \"942a0c3b-4881-489a-8f74-d56eff112b01\") " pod="default/nfs-server-provisioner-0" Jan 23 01:11:27.671811 containerd[1977]: time="2026-01-23T01:11:27.671767882Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:942a0c3b-4881-489a-8f74-d56eff112b01,Namespace:default,Attempt:0,}" Jan 23 01:11:27.809973 systemd-networkd[1797]: cali60e51b789ff: Link UP Jan 23 01:11:27.811154 systemd-networkd[1797]: cali60e51b789ff: Gained carrier Jan 23 01:11:27.814195 (udev-worker)[3884]: Network interface NamePolicy= disabled on kernel command line. Jan 23 01:11:27.824982 containerd[1977]: 2026-01-23 01:11:27.716 [INFO][3866] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.20.240-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default 942a0c3b-4881-489a-8f74-d56eff112b01 1340 0 2026-01-23 01:11:27 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 172.31.20.240 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] [] }} ContainerID="c1ed2ac34c8e0300741b6851cb2b24a9fb641d57586a200f15328355a18978ff" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.20.240-k8s-nfs--server--provisioner--0-" Jan 23 01:11:27.824982 containerd[1977]: 2026-01-23 01:11:27.716 [INFO][3866] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c1ed2ac34c8e0300741b6851cb2b24a9fb641d57586a200f15328355a18978ff" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.20.240-k8s-nfs--server--provisioner--0-eth0" Jan 23 01:11:27.824982 containerd[1977]: 2026-01-23 01:11:27.754 [INFO][3878] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c1ed2ac34c8e0300741b6851cb2b24a9fb641d57586a200f15328355a18978ff" HandleID="k8s-pod-network.c1ed2ac34c8e0300741b6851cb2b24a9fb641d57586a200f15328355a18978ff" Workload="172.31.20.240-k8s-nfs--server--provisioner--0-eth0" Jan 23 01:11:27.825551 containerd[1977]: 2026-01-23 01:11:27.754 [INFO][3878] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="c1ed2ac34c8e0300741b6851cb2b24a9fb641d57586a200f15328355a18978ff" HandleID="k8s-pod-network.c1ed2ac34c8e0300741b6851cb2b24a9fb641d57586a200f15328355a18978ff" Workload="172.31.20.240-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ad3a0), Attrs:map[string]string{"namespace":"default", "node":"172.31.20.240", "pod":"nfs-server-provisioner-0", "timestamp":"2026-01-23 01:11:27.75434099 +0000 UTC"}, Hostname:"172.31.20.240", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 01:11:27.825551 containerd[1977]: 2026-01-23 01:11:27.754 [INFO][3878] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 01:11:27.825551 containerd[1977]: 2026-01-23 01:11:27.754 [INFO][3878] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 01:11:27.825551 containerd[1977]: 2026-01-23 01:11:27.754 [INFO][3878] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.20.240' Jan 23 01:11:27.825551 containerd[1977]: 2026-01-23 01:11:27.762 [INFO][3878] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c1ed2ac34c8e0300741b6851cb2b24a9fb641d57586a200f15328355a18978ff" host="172.31.20.240" Jan 23 01:11:27.825551 containerd[1977]: 2026-01-23 01:11:27.767 [INFO][3878] ipam/ipam.go 394: Looking up existing affinities for host host="172.31.20.240" Jan 23 01:11:27.825551 containerd[1977]: 2026-01-23 01:11:27.780 [INFO][3878] ipam/ipam.go 511: Trying affinity for 192.168.20.192/26 host="172.31.20.240" Jan 23 01:11:27.825551 containerd[1977]: 2026-01-23 01:11:27.782 [INFO][3878] ipam/ipam.go 158: Attempting to load block cidr=192.168.20.192/26 host="172.31.20.240" Jan 23 01:11:27.825551 containerd[1977]: 2026-01-23 01:11:27.785 [INFO][3878] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.20.192/26 host="172.31.20.240" Jan 23 01:11:27.825551 containerd[1977]: 2026-01-23 01:11:27.785 [INFO][3878] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.20.192/26 handle="k8s-pod-network.c1ed2ac34c8e0300741b6851cb2b24a9fb641d57586a200f15328355a18978ff" host="172.31.20.240" Jan 23 01:11:27.825816 containerd[1977]: 2026-01-23 01:11:27.791 [INFO][3878] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c1ed2ac34c8e0300741b6851cb2b24a9fb641d57586a200f15328355a18978ff Jan 23 01:11:27.825816 containerd[1977]: 2026-01-23 01:11:27.796 [INFO][3878] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.20.192/26 handle="k8s-pod-network.c1ed2ac34c8e0300741b6851cb2b24a9fb641d57586a200f15328355a18978ff" host="172.31.20.240" Jan 23 01:11:27.825816 containerd[1977]: 2026-01-23 01:11:27.804 [INFO][3878] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.20.195/26] block=192.168.20.192/26 handle="k8s-pod-network.c1ed2ac34c8e0300741b6851cb2b24a9fb641d57586a200f15328355a18978ff" host="172.31.20.240" Jan 23 01:11:27.825816 containerd[1977]: 2026-01-23 01:11:27.804 [INFO][3878] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.20.195/26] handle="k8s-pod-network.c1ed2ac34c8e0300741b6851cb2b24a9fb641d57586a200f15328355a18978ff" host="172.31.20.240" Jan 23 01:11:27.825816 containerd[1977]: 2026-01-23 01:11:27.804 [INFO][3878] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 23 01:11:27.825816 containerd[1977]: 2026-01-23 01:11:27.804 [INFO][3878] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.20.195/26] IPv6=[] ContainerID="c1ed2ac34c8e0300741b6851cb2b24a9fb641d57586a200f15328355a18978ff" HandleID="k8s-pod-network.c1ed2ac34c8e0300741b6851cb2b24a9fb641d57586a200f15328355a18978ff" Workload="172.31.20.240-k8s-nfs--server--provisioner--0-eth0" Jan 23 01:11:27.825961 containerd[1977]: 2026-01-23 01:11:27.806 [INFO][3866] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c1ed2ac34c8e0300741b6851cb2b24a9fb641d57586a200f15328355a18978ff" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.20.240-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.20.240-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"942a0c3b-4881-489a-8f74-d56eff112b01", ResourceVersion:"1340", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 11, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.20.240", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.20.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, 
HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:11:27.825961 containerd[1977]: 2026-01-23 01:11:27.806 [INFO][3866] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.20.195/32] ContainerID="c1ed2ac34c8e0300741b6851cb2b24a9fb641d57586a200f15328355a18978ff" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.20.240-k8s-nfs--server--provisioner--0-eth0" Jan 23 01:11:27.825961 containerd[1977]: 2026-01-23 01:11:27.806 [INFO][3866] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="c1ed2ac34c8e0300741b6851cb2b24a9fb641d57586a200f15328355a18978ff" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.20.240-k8s-nfs--server--provisioner--0-eth0" Jan 23 01:11:27.825961 containerd[1977]: 2026-01-23 01:11:27.811 [INFO][3866] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c1ed2ac34c8e0300741b6851cb2b24a9fb641d57586a200f15328355a18978ff" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.20.240-k8s-nfs--server--provisioner--0-eth0" Jan 23 01:11:27.826114 containerd[1977]: 2026-01-23 01:11:27.812 [INFO][3866] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c1ed2ac34c8e0300741b6851cb2b24a9fb641d57586a200f15328355a18978ff" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.20.240-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.20.240-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"942a0c3b-4881-489a-8f74-d56eff112b01", ResourceVersion:"1340", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 11, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.20.240", ContainerID:"c1ed2ac34c8e0300741b6851cb2b24a9fb641d57586a200f15328355a18978ff", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.20.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"8e:a9:86:59:58:5a", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:11:27.826114 containerd[1977]: 2026-01-23 01:11:27.823 [INFO][3866] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c1ed2ac34c8e0300741b6851cb2b24a9fb641d57586a200f15328355a18978ff" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.20.240-k8s-nfs--server--provisioner--0-eth0" Jan 23 01:11:27.865022 containerd[1977]: time="2026-01-23T01:11:27.864969296Z" level=info msg="connecting to shim c1ed2ac34c8e0300741b6851cb2b24a9fb641d57586a200f15328355a18978ff" address="unix:///run/containerd/s/55386b49aaa93c41c69099a52560818782cf5770d2a700d4131bd3d2235363a9" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:11:27.896540 systemd[1]: Started cri-containerd-c1ed2ac34c8e0300741b6851cb2b24a9fb641d57586a200f15328355a18978ff.scope - libcontainer container c1ed2ac34c8e0300741b6851cb2b24a9fb641d57586a200f15328355a18978ff. 
Jan 23 01:11:27.953966 containerd[1977]: time="2026-01-23T01:11:27.953844838Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:942a0c3b-4881-489a-8f74-d56eff112b01,Namespace:default,Attempt:0,} returns sandbox id \"c1ed2ac34c8e0300741b6851cb2b24a9fb641d57586a200f15328355a18978ff\"" Jan 23 01:11:27.968208 containerd[1977]: time="2026-01-23T01:11:27.968018037Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Jan 23 01:11:28.247431 kubelet[2484]: E0123 01:11:28.247263 2484 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:11:29.173041 systemd-networkd[1797]: cali60e51b789ff: Gained IPv6LL Jan 23 01:11:29.214626 kubelet[2484]: E0123 01:11:29.214540 2484 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:11:29.248755 kubelet[2484]: E0123 01:11:29.248698 2484 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:11:30.249261 kubelet[2484]: E0123 01:11:30.249043 2484 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:11:30.405199 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount42555013.mount: Deactivated successfully. Jan 23 01:11:31.250494 kubelet[2484]: E0123 01:11:31.250453 2484 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:11:31.924701 ntpd[2216]: Listen normally on 10 cali60e51b789ff [fe80::ecee:eeff:feee:eeee%8]:123 Jan 23 01:11:31.925567 ntpd[2216]: 23 Jan 01:11:31 ntpd[2216]: Listen normally on 10 cali60e51b789ff [fe80::ecee:eeff:feee:eeee%8]:123 Jan 23 01:11:32.251373 kubelet[2484]: E0123 01:11:32.251275 2484 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:11:32.634952 containerd[1977]: time="2026-01-23T01:11:32.634888577Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:11:32.636237 containerd[1977]: time="2026-01-23T01:11:32.636183872Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039406" Jan 23 01:11:32.638021 containerd[1977]: time="2026-01-23T01:11:32.637960702Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:11:32.640776 containerd[1977]: time="2026-01-23T01:11:32.640711841Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:11:32.641587 containerd[1977]: time="2026-01-23T01:11:32.641474569Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 4.673417116s" Jan 23 01:11:32.641587 containerd[1977]: 
time="2026-01-23T01:11:32.641504558Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Jan 23 01:11:32.642818 containerd[1977]: time="2026-01-23T01:11:32.642791830Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 01:11:32.645380 containerd[1977]: time="2026-01-23T01:11:32.645338763Z" level=info msg="CreateContainer within sandbox \"c1ed2ac34c8e0300741b6851cb2b24a9fb641d57586a200f15328355a18978ff\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Jan 23 01:11:32.659877 containerd[1977]: time="2026-01-23T01:11:32.656963451Z" level=info msg="Container 9d939fcd4549cf62a98e76b8b084139d644b726647b6a1253190e953e4149a4b: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:11:32.671179 containerd[1977]: time="2026-01-23T01:11:32.671127325Z" level=info msg="CreateContainer within sandbox \"c1ed2ac34c8e0300741b6851cb2b24a9fb641d57586a200f15328355a18978ff\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"9d939fcd4549cf62a98e76b8b084139d644b726647b6a1253190e953e4149a4b\"" Jan 23 01:11:32.672137 containerd[1977]: time="2026-01-23T01:11:32.672090953Z" level=info msg="StartContainer for \"9d939fcd4549cf62a98e76b8b084139d644b726647b6a1253190e953e4149a4b\"" Jan 23 01:11:32.673647 containerd[1977]: time="2026-01-23T01:11:32.673606662Z" level=info msg="connecting to shim 9d939fcd4549cf62a98e76b8b084139d644b726647b6a1253190e953e4149a4b" address="unix:///run/containerd/s/55386b49aaa93c41c69099a52560818782cf5770d2a700d4131bd3d2235363a9" protocol=ttrpc version=3 Jan 23 01:11:32.702480 systemd[1]: Started cri-containerd-9d939fcd4549cf62a98e76b8b084139d644b726647b6a1253190e953e4149a4b.scope - libcontainer container 9d939fcd4549cf62a98e76b8b084139d644b726647b6a1253190e953e4149a4b. 
Jan 23 01:11:32.748680 containerd[1977]: time="2026-01-23T01:11:32.748562062Z" level=info msg="StartContainer for \"9d939fcd4549cf62a98e76b8b084139d644b726647b6a1253190e953e4149a4b\" returns successfully" Jan 23 01:11:32.938895 containerd[1977]: time="2026-01-23T01:11:32.938753582Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:11:32.940283 containerd[1977]: time="2026-01-23T01:11:32.940207649Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 01:11:32.940580 containerd[1977]: time="2026-01-23T01:11:32.940309925Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 23 01:11:32.940668 kubelet[2484]: E0123 01:11:32.940623 2484 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 01:11:32.940715 kubelet[2484]: E0123 01:11:32.940674 2484 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 01:11:32.943471 kubelet[2484]: E0123 01:11:32.943414 2484 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-72bt8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-wtjjp_calico-system(4268a9df-0451-4ff6-8f73-e9f18c886e93): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 01:11:32.945551 containerd[1977]: time="2026-01-23T01:11:32.945365869Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 01:11:33.214516 containerd[1977]: time="2026-01-23T01:11:33.214395649Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:11:33.215945 containerd[1977]: time="2026-01-23T01:11:33.215899328Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 01:11:33.216088 containerd[1977]: time="2026-01-23T01:11:33.215916291Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 23 01:11:33.216180 kubelet[2484]: E0123 01:11:33.216141 2484 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 01:11:33.216244 kubelet[2484]: E0123 01:11:33.216187 2484 
kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 01:11:33.216369 kubelet[2484]: E0123 01:11:33.216317 2484 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-72bt8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-wtjjp_calico-system(4268a9df-0451-4ff6-8f73-e9f18c886e93): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 01:11:33.218055 kubelet[2484]: E0123 01:11:33.217853 2484 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: 
not found\"]" pod="calico-system/csi-node-driver-wtjjp" podUID="4268a9df-0451-4ff6-8f73-e9f18c886e93" Jan 23 01:11:33.251504 kubelet[2484]: E0123 01:11:33.251458 2484 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:11:33.561387 kubelet[2484]: I0123 01:11:33.561211 2484 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.875094119 podStartE2EDuration="6.561176586s" podCreationTimestamp="2026-01-23 01:11:27 +0000 UTC" firstStartedPulling="2026-01-23 01:11:27.95656412 +0000 UTC m=+39.427936431" lastFinishedPulling="2026-01-23 01:11:32.642646578 +0000 UTC m=+44.114018898" observedRunningTime="2026-01-23 01:11:33.560982702 +0000 UTC m=+45.032355033" watchObservedRunningTime="2026-01-23 01:11:33.561176586 +0000 UTC m=+45.032548907" Jan 23 01:11:34.252726 kubelet[2484]: E0123 01:11:34.252583 2484 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:11:35.253473 kubelet[2484]: E0123 01:11:35.253415 2484 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:11:36.254434 kubelet[2484]: E0123 01:11:36.254387 2484 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:11:37.255648 kubelet[2484]: E0123 01:11:37.255569 2484 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:11:38.256160 kubelet[2484]: E0123 01:11:38.256104 2484 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:11:39.256947 kubelet[2484]: E0123 01:11:39.256895 2484 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:11:40.257367 kubelet[2484]: E0123 01:11:40.257322 2484 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:11:41.257995 kubelet[2484]: E0123 01:11:41.257938 2484 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:11:42.258883 kubelet[2484]: E0123 01:11:42.258825 2484 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:11:43.259402 kubelet[2484]: E0123 01:11:43.259358 2484 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:11:44.260096 kubelet[2484]: E0123 01:11:44.260050 2484 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:11:45.261036 kubelet[2484]: E0123 01:11:45.260988 2484 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:11:46.262206 kubelet[2484]: E0123 01:11:46.262125 2484 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:11:47.262790 kubelet[2484]: E0123 01:11:47.262728 2484 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:11:48.262951 kubelet[2484]: E0123 01:11:48.262896 2484 file_linux.go:61] "Unable to read config path" err="path does 
not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:11:48.359202 kubelet[2484]: E0123 01:11:48.359151 2484 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wtjjp" podUID="4268a9df-0451-4ff6-8f73-e9f18c886e93" Jan 23 01:11:49.214772 kubelet[2484]: E0123 01:11:49.214469 2484 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:11:49.263500 kubelet[2484]: E0123 01:11:49.263429 2484 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:11:50.263784 kubelet[2484]: E0123 01:11:50.263714 2484 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:11:51.264513 kubelet[2484]: E0123 01:11:51.264436 2484 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:11:52.264811 kubelet[2484]: E0123 01:11:52.264737 2484 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:11:53.265525 kubelet[2484]: E0123 01:11:53.265466 2484 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:11:54.266359 kubelet[2484]: E0123 01:11:54.266304 2484 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:11:55.267473 kubelet[2484]: E0123 01:11:55.267402 2484 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:11:56.268660 kubelet[2484]: E0123 01:11:56.268602 2484 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:11:57.269239 kubelet[2484]: E0123 01:11:57.269166 2484 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:11:57.583095 systemd[1]: Created slice kubepods-besteffort-pod54bef4dd_491c_4ef6_8672_5e0a393e6280.slice - libcontainer container kubepods-besteffort-pod54bef4dd_491c_4ef6_8672_5e0a393e6280.slice. 
Jan 23 01:11:57.717295 kubelet[2484]: I0123 01:11:57.717186 2484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-3d4930cc-4623-4e3e-8c79-25a57529fcc9\" (UniqueName: \"kubernetes.io/nfs/54bef4dd-491c-4ef6-8672-5e0a393e6280-pvc-3d4930cc-4623-4e3e-8c79-25a57529fcc9\") pod \"test-pod-1\" (UID: \"54bef4dd-491c-4ef6-8672-5e0a393e6280\") " pod="default/test-pod-1" Jan 23 01:11:57.717463 kubelet[2484]: I0123 01:11:57.717337 2484 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvxqp\" (UniqueName: \"kubernetes.io/projected/54bef4dd-491c-4ef6-8672-5e0a393e6280-kube-api-access-bvxqp\") pod \"test-pod-1\" (UID: \"54bef4dd-491c-4ef6-8672-5e0a393e6280\") " pod="default/test-pod-1" Jan 23 01:11:57.869552 kernel: netfs: FS-Cache loaded Jan 23 01:11:57.943161 kernel: RPC: Registered named UNIX socket transport module. Jan 23 01:11:57.943313 kernel: RPC: Registered udp transport module. Jan 23 01:11:57.943352 kernel: RPC: Registered tcp transport module. Jan 23 01:11:57.943378 kernel: RPC: Registered tcp-with-tls transport module. Jan 23 01:11:57.943406 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Jan 23 01:11:58.193470 kernel: NFS: Registering the id_resolver key type Jan 23 01:11:58.193604 kernel: Key type id_resolver registered Jan 23 01:11:58.193636 kernel: Key type id_legacy registered Jan 23 01:11:58.229677 nfsidmap[4106]: libnfsidmap: Unable to determine the NFSv4 domain; Using 'localdomain' as the NFSv4 domain which means UIDs will be mapped to the 'Nobody-User' user defined in /etc/idmapd.conf Jan 23 01:11:58.231443 nfsidmap[4106]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Jan 23 01:11:58.234604 nfsidmap[4107]: libnfsidmap: Unable to determine the NFSv4 domain; Using 'localdomain' as the NFSv4 domain which means UIDs will be mapped to the 'Nobody-User' user defined in /etc/idmapd.conf Jan 23 01:11:58.234847 nfsidmap[4107]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Jan 23 01:11:58.255801 nfsrahead[4109]: setting /var/lib/kubelet/pods/54bef4dd-491c-4ef6-8672-5e0a393e6280/volumes/kubernetes.io~nfs/pvc-3d4930cc-4623-4e3e-8c79-25a57529fcc9 readahead to 128 Jan 23 01:11:58.269572 kubelet[2484]: E0123 01:11:58.269506 2484 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:11:58.487085 containerd[1977]: time="2026-01-23T01:11:58.486968033Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:54bef4dd-491c-4ef6-8672-5e0a393e6280,Namespace:default,Attempt:0,}" Jan 23 01:11:58.644760 systemd-networkd[1797]: cali5ec59c6bf6e: Link UP Jan 23 01:11:58.644924 systemd-networkd[1797]: cali5ec59c6bf6e: Gained carrier Jan 23 01:11:58.645004 (udev-worker)[4093]: Network interface NamePolicy= disabled on kernel command line. 
Jan 23 01:11:58.656920 containerd[1977]: 2026-01-23 01:11:58.548 [INFO][4115] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.20.240-k8s-test--pod--1-eth0 default 54bef4dd-491c-4ef6-8672-5e0a393e6280 1518 0 2026-01-23 01:11:28 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 172.31.20.240 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] [] }} ContainerID="f70ff8732905b226eb4090b9796a9714ca816a9005a95acf5042cd264527e943" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.20.240-k8s-test--pod--1-" Jan 23 01:11:58.656920 containerd[1977]: 2026-01-23 01:11:58.549 [INFO][4115] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f70ff8732905b226eb4090b9796a9714ca816a9005a95acf5042cd264527e943" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.20.240-k8s-test--pod--1-eth0" Jan 23 01:11:58.656920 containerd[1977]: 2026-01-23 01:11:58.580 [INFO][4122] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f70ff8732905b226eb4090b9796a9714ca816a9005a95acf5042cd264527e943" HandleID="k8s-pod-network.f70ff8732905b226eb4090b9796a9714ca816a9005a95acf5042cd264527e943" Workload="172.31.20.240-k8s-test--pod--1-eth0" Jan 23 01:11:58.656920 containerd[1977]: 2026-01-23 01:11:58.580 [INFO][4122] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f70ff8732905b226eb4090b9796a9714ca816a9005a95acf5042cd264527e943" HandleID="k8s-pod-network.f70ff8732905b226eb4090b9796a9714ca816a9005a95acf5042cd264527e943" Workload="172.31.20.240-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d55a0), Attrs:map[string]string{"namespace":"default", "node":"172.31.20.240", "pod":"test-pod-1", "timestamp":"2026-01-23 01:11:58.580819718 +0000 UTC"}, Hostname:"172.31.20.240", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 01:11:58.656920 containerd[1977]: 2026-01-23 01:11:58.581 [INFO][4122] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 01:11:58.656920 containerd[1977]: 2026-01-23 01:11:58.581 [INFO][4122] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 01:11:58.656920 containerd[1977]: 2026-01-23 01:11:58.581 [INFO][4122] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.20.240' Jan 23 01:11:58.656920 containerd[1977]: 2026-01-23 01:11:58.593 [INFO][4122] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f70ff8732905b226eb4090b9796a9714ca816a9005a95acf5042cd264527e943" host="172.31.20.240" Jan 23 01:11:58.656920 containerd[1977]: 2026-01-23 01:11:58.601 [INFO][4122] ipam/ipam.go 394: Looking up existing affinities for host host="172.31.20.240" Jan 23 01:11:58.656920 containerd[1977]: 2026-01-23 01:11:58.607 [INFO][4122] ipam/ipam.go 511: Trying affinity for 192.168.20.192/26 host="172.31.20.240" Jan 23 01:11:58.656920 containerd[1977]: 2026-01-23 01:11:58.609 [INFO][4122] ipam/ipam.go 158: Attempting to load block cidr=192.168.20.192/26 host="172.31.20.240" Jan 23 01:11:58.656920 containerd[1977]: 2026-01-23 01:11:58.614 [INFO][4122] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.20.192/26 host="172.31.20.240" Jan 23 01:11:58.656920 containerd[1977]: 2026-01-23 01:11:58.614 [INFO][4122] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.20.192/26 handle="k8s-pod-network.f70ff8732905b226eb4090b9796a9714ca816a9005a95acf5042cd264527e943" host="172.31.20.240" Jan 23 01:11:58.656920 containerd[1977]: 2026-01-23 01:11:58.618 [INFO][4122] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f70ff8732905b226eb4090b9796a9714ca816a9005a95acf5042cd264527e943 Jan 23 01:11:58.656920 containerd[1977]: 2026-01-23 01:11:58.628 [INFO][4122] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.20.192/26 handle="k8s-pod-network.f70ff8732905b226eb4090b9796a9714ca816a9005a95acf5042cd264527e943" host="172.31.20.240" Jan 23 01:11:58.656920 containerd[1977]: 2026-01-23 01:11:58.640 [INFO][4122] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.20.196/26] block=192.168.20.192/26 handle="k8s-pod-network.f70ff8732905b226eb4090b9796a9714ca816a9005a95acf5042cd264527e943" host="172.31.20.240" Jan 23 01:11:58.656920 containerd[1977]: 2026-01-23 01:11:58.640 [INFO][4122] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.20.196/26] handle="k8s-pod-network.f70ff8732905b226eb4090b9796a9714ca816a9005a95acf5042cd264527e943" host="172.31.20.240" Jan 23 01:11:58.656920 containerd[1977]: 2026-01-23 01:11:58.640 [INFO][4122] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 23 01:11:58.656920 containerd[1977]: 2026-01-23 01:11:58.640 [INFO][4122] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.20.196/26] IPv6=[] ContainerID="f70ff8732905b226eb4090b9796a9714ca816a9005a95acf5042cd264527e943" HandleID="k8s-pod-network.f70ff8732905b226eb4090b9796a9714ca816a9005a95acf5042cd264527e943" Workload="172.31.20.240-k8s-test--pod--1-eth0" Jan 23 01:11:58.656920 containerd[1977]: 2026-01-23 01:11:58.642 [INFO][4115] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f70ff8732905b226eb4090b9796a9714ca816a9005a95acf5042cd264527e943" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.20.240-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.20.240-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"54bef4dd-491c-4ef6-8672-5e0a393e6280", ResourceVersion:"1518", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 11, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.20.240", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.20.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:11:58.657800 containerd[1977]: 2026-01-23 01:11:58.642 [INFO][4115] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.20.196/32] ContainerID="f70ff8732905b226eb4090b9796a9714ca816a9005a95acf5042cd264527e943" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.20.240-k8s-test--pod--1-eth0" Jan 23 01:11:58.657800 containerd[1977]: 2026-01-23 01:11:58.642 [INFO][4115] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="f70ff8732905b226eb4090b9796a9714ca816a9005a95acf5042cd264527e943" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.20.240-k8s-test--pod--1-eth0" Jan 23 01:11:58.657800 containerd[1977]: 2026-01-23 01:11:58.644 [INFO][4115] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f70ff8732905b226eb4090b9796a9714ca816a9005a95acf5042cd264527e943" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.20.240-k8s-test--pod--1-eth0" Jan 23 01:11:58.657800 containerd[1977]: 2026-01-23 01:11:58.644 [INFO][4115] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f70ff8732905b226eb4090b9796a9714ca816a9005a95acf5042cd264527e943" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.20.240-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.20.240-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"54bef4dd-491c-4ef6-8672-5e0a393e6280", ResourceVersion:"1518", Generation:0, 
CreationTimestamp:time.Date(2026, time.January, 23, 1, 11, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.20.240", ContainerID:"f70ff8732905b226eb4090b9796a9714ca816a9005a95acf5042cd264527e943", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.20.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"16:92:f0:44:a1:48", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:11:58.657800 containerd[1977]: 2026-01-23 01:11:58.654 [INFO][4115] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f70ff8732905b226eb4090b9796a9714ca816a9005a95acf5042cd264527e943" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.20.240-k8s-test--pod--1-eth0" Jan 23 01:11:58.706832 containerd[1977]: time="2026-01-23T01:11:58.706723799Z" level=info msg="connecting to shim f70ff8732905b226eb4090b9796a9714ca816a9005a95acf5042cd264527e943" address="unix:///run/containerd/s/ad1b0a023db43cb775e0b9f8390811b1aab75845737d044ec8b54f2debcc1034" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:11:58.735465 systemd[1]: Started cri-containerd-f70ff8732905b226eb4090b9796a9714ca816a9005a95acf5042cd264527e943.scope - libcontainer container f70ff8732905b226eb4090b9796a9714ca816a9005a95acf5042cd264527e943. 
Jan 23 01:11:58.800731 containerd[1977]: time="2026-01-23T01:11:58.800537520Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:54bef4dd-491c-4ef6-8672-5e0a393e6280,Namespace:default,Attempt:0,} returns sandbox id \"f70ff8732905b226eb4090b9796a9714ca816a9005a95acf5042cd264527e943\"" Jan 23 01:11:58.807929 containerd[1977]: time="2026-01-23T01:11:58.807887716Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 23 01:11:59.171595 containerd[1977]: time="2026-01-23T01:11:59.171541725Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:11:59.172592 containerd[1977]: time="2026-01-23T01:11:59.172557626Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Jan 23 01:11:59.175239 containerd[1977]: time="2026-01-23T01:11:59.175176067Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:94a30ed18e45363486a55379bd1fbb8c479f36524ae816779f3553afe6b787ed\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:32c5137cb8c7cf61e75836f150e983b9be21fecc642ada89fd936c8cd6c0faa0\", size \"63836358\" in 367.25051ms" Jan 23 01:11:59.175239 containerd[1977]: time="2026-01-23T01:11:59.175213068Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:94a30ed18e45363486a55379bd1fbb8c479f36524ae816779f3553afe6b787ed\"" Jan 23 01:11:59.177814 containerd[1977]: time="2026-01-23T01:11:59.176963326Z" level=info msg="CreateContainer within sandbox \"f70ff8732905b226eb4090b9796a9714ca816a9005a95acf5042cd264527e943\" for container &ContainerMetadata{Name:test,Attempt:0,}" Jan 23 01:11:59.237066 containerd[1977]: time="2026-01-23T01:11:59.237032869Z" level=info msg="Container 23c9f9897fe7d868a28e18b6ddd709827ceb14fe27a455679e04ad12af904438: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:11:59.252397 containerd[1977]: time="2026-01-23T01:11:59.252309129Z" level=info msg="CreateContainer within sandbox \"f70ff8732905b226eb4090b9796a9714ca816a9005a95acf5042cd264527e943\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"23c9f9897fe7d868a28e18b6ddd709827ceb14fe27a455679e04ad12af904438\"" Jan 23 01:11:59.253106 containerd[1977]: time="2026-01-23T01:11:59.253057871Z" level=info msg="StartContainer for \"23c9f9897fe7d868a28e18b6ddd709827ceb14fe27a455679e04ad12af904438\"" Jan 23 01:11:59.254071 containerd[1977]: time="2026-01-23T01:11:59.254012793Z" level=info msg="connecting to shim 23c9f9897fe7d868a28e18b6ddd709827ceb14fe27a455679e04ad12af904438" address="unix:///run/containerd/s/ad1b0a023db43cb775e0b9f8390811b1aab75845737d044ec8b54f2debcc1034" protocol=ttrpc version=3 Jan 23 01:11:59.270215 kubelet[2484]: E0123 01:11:59.270173 2484 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:11:59.275218 systemd[1]: Started cri-containerd-23c9f9897fe7d868a28e18b6ddd709827ceb14fe27a455679e04ad12af904438.scope - libcontainer container 23c9f9897fe7d868a28e18b6ddd709827ceb14fe27a455679e04ad12af904438. 
Jan 23 01:11:59.316577 containerd[1977]: time="2026-01-23T01:11:59.316501914Z" level=info msg="StartContainer for \"23c9f9897fe7d868a28e18b6ddd709827ceb14fe27a455679e04ad12af904438\" returns successfully" Jan 23 01:11:59.892493 systemd-networkd[1797]: cali5ec59c6bf6e: Gained IPv6LL Jan 23 01:12:00.270467 kubelet[2484]: E0123 01:12:00.270330 2484 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:12:00.359676 containerd[1977]: time="2026-01-23T01:12:00.359631860Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 01:12:00.378661 kubelet[2484]: I0123 01:12:00.378588 2484 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=32.005676885 podStartE2EDuration="32.378554766s" podCreationTimestamp="2026-01-23 01:11:28 +0000 UTC" firstStartedPulling="2026-01-23 01:11:58.802900049 +0000 UTC m=+70.274272374" lastFinishedPulling="2026-01-23 01:11:59.175777944 +0000 UTC m=+70.647150255" observedRunningTime="2026-01-23 01:11:59.64451951 +0000 UTC m=+71.115891839" watchObservedRunningTime="2026-01-23 01:12:00.378554766 +0000 UTC m=+71.849927099" Jan 23 01:12:00.632960 containerd[1977]: time="2026-01-23T01:12:00.632898758Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:12:00.634344 containerd[1977]: time="2026-01-23T01:12:00.634236476Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 01:12:00.634344 containerd[1977]: time="2026-01-23T01:12:00.634314497Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 23 01:12:00.634658 kubelet[2484]: E0123 01:12:00.634610 2484 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 01:12:00.634749 kubelet[2484]: E0123 01:12:00.634664 2484 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 01:12:00.634859 kubelet[2484]: E0123 01:12:00.634803 2484 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-72bt8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-wtjjp_calico-system(4268a9df-0451-4ff6-8f73-e9f18c886e93): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 01:12:00.637433 containerd[1977]: time="2026-01-23T01:12:00.637390037Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 01:12:00.944374 containerd[1977]: time="2026-01-23T01:12:00.943944216Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:12:00.945784 containerd[1977]: time="2026-01-23T01:12:00.945734144Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 01:12:00.945892 containerd[1977]: time="2026-01-23T01:12:00.945757373Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 23 01:12:00.946012 kubelet[2484]: E0123 01:12:00.945969 2484 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 01:12:00.946080 kubelet[2484]: E0123 01:12:00.946019 2484 
kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 01:12:00.946169 kubelet[2484]: E0123 01:12:00.946135 2484 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-72bt8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-wtjjp_calico-system(4268a9df-0451-4ff6-8f73-e9f18c886e93): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 01:12:00.947622 kubelet[2484]: E0123 01:12:00.947571 2484 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: 
not found\"]" pod="calico-system/csi-node-driver-wtjjp" podUID="4268a9df-0451-4ff6-8f73-e9f18c886e93" Jan 23 01:12:01.271549 kubelet[2484]: E0123 01:12:01.271347 2484 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:12:01.924616 ntpd[2216]: Listen normally on 11 cali5ec59c6bf6e [fe80::ecee:eeff:feee:eeee%9]:123 Jan 23 01:12:01.925062 ntpd[2216]: 23 Jan 01:12:01 ntpd[2216]: Listen normally on 11 cali5ec59c6bf6e [fe80::ecee:eeff:feee:eeee%9]:123 Jan 23 01:12:02.272773 kubelet[2484]: E0123 01:12:02.272600 2484 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:12:03.273879 kubelet[2484]: E0123 01:12:03.273807 2484 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:12:04.275414 kubelet[2484]: E0123 01:12:04.275036 2484 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:12:05.276529 kubelet[2484]: E0123 01:12:05.276345 2484 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:12:06.276829 kubelet[2484]: E0123 01:12:06.276772 2484 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:12:07.277244 kubelet[2484]: E0123 01:12:07.277184 2484 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:12:08.277866 kubelet[2484]: E0123 01:12:08.277818 2484 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:12:09.215371 kubelet[2484]: E0123 01:12:09.215326 2484 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:12:09.278496 kubelet[2484]: E0123 01:12:09.278436 2484 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:12:10.279133 kubelet[2484]: E0123 01:12:10.279074 2484 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:12:11.279869 kubelet[2484]: E0123 01:12:11.279784 2484 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:12:12.280247 kubelet[2484]: E0123 01:12:12.280152 2484 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:12:13.280837 kubelet[2484]: E0123 01:12:13.280777 2484 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:12:14.281998 kubelet[2484]: E0123 01:12:14.281938 2484 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:12:15.283106 kubelet[2484]: E0123 01:12:15.283048 2484 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:12:15.359585 kubelet[2484]: E0123 01:12:15.359515 2484 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to 
pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wtjjp" podUID="4268a9df-0451-4ff6-8f73-e9f18c886e93" Jan 23 01:12:16.284031 kubelet[2484]: E0123 01:12:16.283951 2484 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:12:17.284577 kubelet[2484]: E0123 01:12:17.284497 2484 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:12:18.284706 kubelet[2484]: E0123 01:12:18.284641 2484 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:12:19.284956 kubelet[2484]: E0123 01:12:19.284904 2484 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:12:20.286069 kubelet[2484]: E0123 01:12:20.286021 2484 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:12:21.286444 kubelet[2484]: E0123 01:12:21.286384 2484 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:12:21.590982 kubelet[2484]: E0123 01:12:21.590912 2484 controller.go:195] "Failed to update lease" err="Put \"https://172.31.17.109:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.20.240?timeout=10s\": context deadline exceeded" Jan 23 01:12:22.287281 kubelet[2484]: E0123 01:12:22.287212 2484 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:12:23.287828 kubelet[2484]: E0123 01:12:23.287782 2484 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:12:24.289008 kubelet[2484]: E0123 01:12:24.288948 2484 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:12:25.290138 kubelet[2484]: E0123 01:12:25.290101 2484 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:12:26.290627 kubelet[2484]: E0123 01:12:26.290571 2484 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:12:27.291310 kubelet[2484]: E0123 01:12:27.291250 2484 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:12:27.359573 kubelet[2484]: E0123 01:12:27.359532 2484 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wtjjp" podUID="4268a9df-0451-4ff6-8f73-e9f18c886e93" Jan 23 01:12:28.292419 kubelet[2484]: E0123 01:12:28.292356 2484 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:12:29.215184 kubelet[2484]: E0123 01:12:29.215112 2484 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:12:29.293445 kubelet[2484]: E0123 01:12:29.293387 2484 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:12:30.294194 kubelet[2484]: E0123 01:12:30.294146 2484 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:12:31.294708 kubelet[2484]: E0123 01:12:31.294663 2484 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:12:31.591994 kubelet[2484]: E0123 01:12:31.591916 2484 controller.go:195] "Failed to update lease" err="Put \"https://172.31.17.109:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.20.240?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 01:12:32.294937 kubelet[2484]: E0123 01:12:32.294891 2484 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:12:33.295651 kubelet[2484]: E0123 01:12:33.295589 2484 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:12:34.296486 kubelet[2484]: E0123 01:12:34.296429 2484 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:12:35.297321 kubelet[2484]: E0123 01:12:35.297268 2484 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:12:36.298032 kubelet[2484]: E0123 01:12:36.297983 2484 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:12:37.298354 kubelet[2484]: E0123 01:12:37.298309 2484 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:12:37.451475 kubelet[2484]: E0123 01:12:37.450090 2484 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://172.31.17.109:6443/api/v1/namespaces/calico-system/events/csi-node-driver-wtjjp.188d36fa1bc3af1f\": unexpected EOF" event="&Event{ObjectMeta:{csi-node-driver-wtjjp.188d36fa1bc3af1f calico-system 1466 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:calico-system,Name:csi-node-driver-wtjjp,UID:4268a9df-0451-4ff6-8f73-e9f18c886e93,APIVersion:v1,ResourceVersion:1065,FieldPath:spec.containers{calico-csi},},Reason:BackOff,Message:Back-off pulling image \"ghcr.io/flatcar/calico/csi:v3.30.4\",Source:EventSource{Component:kubelet,Host:172.31.20.240,},FirstTimestamp:2026-01-23 01:11:16 +0000 UTC,LastTimestamp:2026-01-23 01:12:15.358687162 +0000 UTC m=+86.830059483,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.20.240,}" Jan 23 01:12:37.451475 kubelet[2484]: E0123 01:12:37.450214 2484 controller.go:195] "Failed to update lease" err="Put \"https://172.31.17.109:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.20.240?timeout=10s\": unexpected EOF" Jan 23 01:12:37.458285 kubelet[2484]: E0123 01:12:37.458237 2484 controller.go:195] "Failed to update lease" err="Put \"https://172.31.17.109:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.20.240?timeout=10s\": dial tcp 172.31.17.109:6443: connect: connection reset by peer" Jan 23 01:12:37.461855 kubelet[2484]: E0123 01:12:37.460840 2484 controller.go:195] "Failed to update lease" err="Put \"https://172.31.17.109:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.20.240?timeout=10s\": dial tcp 172.31.17.109:6443: connect: connection refused" Jan 23 01:12:37.461855 kubelet[2484]: I0123 01:12:37.460899 2484 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 23 01:12:37.464745 kubelet[2484]: E0123 01:12:37.461920 2484 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.109:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.20.240?timeout=10s\": dial tcp 172.31.17.109:6443: connect: connection refused" interval="200ms" Jan 23 01:12:37.663102 kubelet[2484]: E0123 01:12:37.663054 2484 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.109:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.20.240?timeout=10s\": dial tcp 172.31.17.109:6443: connect: connection refused" interval="400ms" Jan 23 01:12:38.064976 kubelet[2484]: E0123 01:12:38.064852 2484 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.109:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.20.240?timeout=10s\": dial tcp 172.31.17.109:6443: connect: connection refused" interval="800ms" Jan 23 01:12:38.299028 kubelet[2484]: E0123 01:12:38.298964 2484 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:12:38.360308 kubelet[2484]: E0123 01:12:38.360251 2484 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: 
code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wtjjp" podUID="4268a9df-0451-4ff6-8f73-e9f18c886e93" Jan 23 01:12:38.456140 kubelet[2484]: I0123 01:12:38.454267 2484 status_manager.go:890] "Failed to get status for pod" podUID="4268a9df-0451-4ff6-8f73-e9f18c886e93" pod="calico-system/csi-node-driver-wtjjp" err="Get \"https://172.31.17.109:6443/api/v1/namespaces/calico-system/pods/csi-node-driver-wtjjp\": dial tcp 172.31.17.109:6443: connect: connection refused - error from a previous attempt: unexpected EOF" Jan 23 01:12:38.456743 kubelet[2484]: I0123 01:12:38.456695 2484 status_manager.go:890] "Failed to get status for pod" podUID="4268a9df-0451-4ff6-8f73-e9f18c886e93" pod="calico-system/csi-node-driver-wtjjp" err="Get \"https://172.31.17.109:6443/api/v1/namespaces/calico-system/pods/csi-node-driver-wtjjp\": dial tcp 172.31.17.109:6443: connect: connection refused" Jan 23 01:12:38.457122 kubelet[2484]: I0123 01:12:38.457096 2484 status_manager.go:890] "Failed to get status for pod" podUID="4268a9df-0451-4ff6-8f73-e9f18c886e93" pod="calico-system/csi-node-driver-wtjjp" err="Get \"https://172.31.17.109:6443/api/v1/namespaces/calico-system/pods/csi-node-driver-wtjjp\": dial tcp 172.31.17.109:6443: connect: connection refused" Jan 23 01:12:39.299716 kubelet[2484]: E0123 01:12:39.299659 2484 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:12:40.299897 kubelet[2484]: E0123 01:12:40.299846 2484 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:12:41.300209 kubelet[2484]: E0123 01:12:41.300136 2484 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:12:42.300625 kubelet[2484]: E0123 01:12:42.300535 2484 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:12:43.301051 kubelet[2484]: E0123 01:12:43.300982 2484 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:12:44.301831 kubelet[2484]: E0123 01:12:44.301776 2484 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:12:45.302580 kubelet[2484]: E0123 01:12:45.302537 2484 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:12:46.303249 kubelet[2484]: E0123 01:12:46.303148 2484 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:12:47.304089 kubelet[2484]: E0123 01:12:47.303908 2484 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:12:48.304514 kubelet[2484]: E0123 01:12:48.304441 2484 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:12:48.866094 kubelet[2484]: E0123 01:12:48.866022 2484 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://172.31.17.109:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.20.240?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="1.6s" Jan 23 01:12:49.214832 kubelet[2484]: E0123 01:12:49.214714 2484 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:12:49.304855 kubelet[2484]: E0123 01:12:49.304803 2484 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:12:50.305724 kubelet[2484]: E0123 01:12:50.305653 2484 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:12:51.306160 kubelet[2484]: E0123 01:12:51.306099 2484 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:12:51.360170 containerd[1977]: time="2026-01-23T01:12:51.359850340Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 01:12:51.658043 containerd[1977]: time="2026-01-23T01:12:51.658000246Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:12:51.659451 containerd[1977]: time="2026-01-23T01:12:51.659339256Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 01:12:51.659451 containerd[1977]: time="2026-01-23T01:12:51.659388489Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 23 01:12:51.659649 kubelet[2484]: E0123 01:12:51.659603 2484 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 01:12:51.659697 kubelet[2484]: E0123 01:12:51.659655 2484 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 01:12:51.659838 kubelet[2484]: E0123 01:12:51.659768 2484 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-72bt8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-wtjjp_calico-system(4268a9df-0451-4ff6-8f73-e9f18c886e93): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 01:12:51.661808 containerd[1977]: time="2026-01-23T01:12:51.661773784Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 01:12:51.940918 containerd[1977]: time="2026-01-23T01:12:51.940793943Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:12:51.942246 containerd[1977]: time="2026-01-23T01:12:51.942162850Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 01:12:51.942476 containerd[1977]: time="2026-01-23T01:12:51.942270846Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 23 01:12:51.942559 kubelet[2484]: E0123 01:12:51.942489 2484 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 01:12:51.942559 kubelet[2484]: E0123 01:12:51.942548 2484 
kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 01:12:51.942784 kubelet[2484]: E0123 01:12:51.942688 2484 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-72bt8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-wtjjp_calico-system(4268a9df-0451-4ff6-8f73-e9f18c886e93): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 01:12:51.944023 kubelet[2484]: E0123 01:12:51.943963 2484 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: 
not found\"]" pod="calico-system/csi-node-driver-wtjjp" podUID="4268a9df-0451-4ff6-8f73-e9f18c886e93" Jan 23 01:12:52.307247 kubelet[2484]: E0123 01:12:52.307107 2484 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:12:53.307690 kubelet[2484]: E0123 01:12:53.307301 2484 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:12:54.308468 kubelet[2484]: E0123 01:12:54.308409 2484 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:12:55.309627 kubelet[2484]: E0123 01:12:55.309560 2484 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:12:56.309854 kubelet[2484]: E0123 01:12:56.309804 2484 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:12:57.311275 kubelet[2484]: E0123 01:12:57.311002 2484 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:12:58.312338 kubelet[2484]: E0123 01:12:58.312285 2484 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:12:59.312462 kubelet[2484]: E0123 01:12:59.312419 2484 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:13:00.313086 kubelet[2484]: E0123 01:13:00.313003 2484 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:13:00.466454 kubelet[2484]: E0123 01:13:00.466400 2484 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.109:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.20.240?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="3.2s"