Jan 24 00:32:54.903641 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 23 22:35:12 -00 2026 Jan 24 00:32:54.903677 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=f12f29e7047038507ed004e06b4d83ecb1c13bb716c56fe3c96b88d906e8eff2 Jan 24 00:32:54.903696 kernel: BIOS-provided physical RAM map: Jan 24 00:32:54.903707 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Jan 24 00:32:54.903718 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable Jan 24 00:32:54.903730 kernel: BIOS-e820: [mem 0x00000000786ce000-0x00000000787cdfff] type 20 Jan 24 00:32:54.903744 kernel: BIOS-e820: [mem 0x00000000787ce000-0x000000007894dfff] reserved Jan 24 00:32:54.903778 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data Jan 24 00:32:54.903790 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS Jan 24 00:32:54.903805 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable Jan 24 00:32:54.903816 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved Jan 24 00:32:54.903829 kernel: NX (Execute Disable) protection: active Jan 24 00:32:54.903841 kernel: APIC: Static calls initialized Jan 24 00:32:54.903854 kernel: efi: EFI v2.7 by EDK II Jan 24 00:32:54.903870 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x77015518 Jan 24 00:32:54.903889 kernel: SMBIOS 2.7 present. 
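The BIOS-e820 map above is what the kernel later turns into its zone layout and the "Memory: 1874628K/2037804K available" figure further down. As a rough cross-check, the usable ranges can be totalled straight from a saved copy of this log; a minimal sketch in Python, assuming the log text is stored in a hypothetical boot.log:

import re

# Sum the "usable" BIOS-e820 ranges; the kernel prints them as inclusive
# hexadecimal [start-end] spans.
pattern = re.compile(r"BIOS-e820: \[mem 0x([0-9a-f]+)-0x([0-9a-f]+)\] usable")

total = 0
with open("boot.log") as log:           # hypothetical file name
    for line in log:
        match = pattern.search(line)
        if match:
            start, end = (int(g, 16) for g in match.groups())
            total += end - start + 1

print(f"firmware-reported usable RAM: {total / 2**20:.1f} MiB")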
Jan 24 00:32:54.903904 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017 Jan 24 00:32:54.903917 kernel: Hypervisor detected: KVM Jan 24 00:32:54.903930 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 24 00:32:54.903943 kernel: kvm-clock: using sched offset of 3983770913 cycles Jan 24 00:32:54.903956 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 24 00:32:54.903970 kernel: tsc: Detected 2499.998 MHz processor Jan 24 00:32:54.903982 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 24 00:32:54.904000 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 24 00:32:54.904018 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000 Jan 24 00:32:54.904035 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Jan 24 00:32:54.904047 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 24 00:32:54.904807 kernel: Using GB pages for direct mapping Jan 24 00:32:54.904829 kernel: Secure boot disabled Jan 24 00:32:54.904842 kernel: ACPI: Early table checksum verification disabled Jan 24 00:32:54.904853 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON) Jan 24 00:32:54.904867 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013) Jan 24 00:32:54.904880 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001) Jan 24 00:32:54.904895 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001) Jan 24 00:32:54.904914 kernel: ACPI: FACS 0x00000000789D0000 000040 Jan 24 00:32:54.904928 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001) Jan 24 00:32:54.904941 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Jan 24 00:32:54.904955 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001) Jan 24 00:32:54.904968 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001) Jan 24 00:32:54.904983 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001) Jan 24 00:32:54.905003 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Jan 24 00:32:54.905020 kernel: ACPI: SSDT 0x0000000078952000 00007F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Jan 24 00:32:54.905034 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013) Jan 24 00:32:54.905050 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113] Jan 24 00:32:54.905064 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159] Jan 24 00:32:54.905079 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f] Jan 24 00:32:54.905093 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027] Jan 24 00:32:54.905108 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b] Jan 24 00:32:54.905126 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075] Jan 24 00:32:54.905141 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f] Jan 24 00:32:54.905155 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037] Jan 24 00:32:54.905170 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758] Jan 24 00:32:54.905184 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x7895207e] Jan 24 00:32:54.905198 kernel: ACPI: Reserving BGRT table memory at [mem 
0x78951000-0x78951037] Jan 24 00:32:54.905213 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Jan 24 00:32:54.905227 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Jan 24 00:32:54.905242 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff] Jan 24 00:32:54.905259 kernel: NUMA: Initialized distance table, cnt=1 Jan 24 00:32:54.905273 kernel: NODE_DATA(0) allocated [mem 0x7a8f0000-0x7a8f5fff] Jan 24 00:32:54.905288 kernel: Zone ranges: Jan 24 00:32:54.905303 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 24 00:32:54.905317 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff] Jan 24 00:32:54.905331 kernel: Normal empty Jan 24 00:32:54.905356 kernel: Movable zone start for each node Jan 24 00:32:54.905370 kernel: Early memory node ranges Jan 24 00:32:54.905384 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Jan 24 00:32:54.905402 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff] Jan 24 00:32:54.905416 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff] Jan 24 00:32:54.905430 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff] Jan 24 00:32:54.905443 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 24 00:32:54.905455 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Jan 24 00:32:54.905468 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Jan 24 00:32:54.905482 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges Jan 24 00:32:54.905495 kernel: ACPI: PM-Timer IO Port: 0xb008 Jan 24 00:32:54.905508 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 24 00:32:54.905522 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23 Jan 24 00:32:54.905539 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 24 00:32:54.905552 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 24 00:32:54.905566 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 24 00:32:54.905579 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 24 00:32:54.905593 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 24 00:32:54.905606 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jan 24 00:32:54.905619 kernel: TSC deadline timer available Jan 24 00:32:54.905633 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jan 24 00:32:54.905646 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 24 00:32:54.905663 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices Jan 24 00:32:54.905676 kernel: Booting paravirtualized kernel on KVM Jan 24 00:32:54.905689 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 24 00:32:54.905703 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jan 24 00:32:54.905716 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576 Jan 24 00:32:54.905729 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152 Jan 24 00:32:54.905742 kernel: pcpu-alloc: [0] 0 1 Jan 24 00:32:54.905769 kernel: kvm-guest: PV spinlocks enabled Jan 24 00:32:54.905782 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 24 00:32:54.905802 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected 
flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=f12f29e7047038507ed004e06b4d83ecb1c13bb716c56fe3c96b88d906e8eff2 Jan 24 00:32:54.905816 kernel: random: crng init done Jan 24 00:32:54.905830 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 24 00:32:54.905843 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jan 24 00:32:54.905856 kernel: Fallback order for Node 0: 0 Jan 24 00:32:54.905869 kernel: Built 1 zonelists, mobility grouping on. Total pages: 501318 Jan 24 00:32:54.905883 kernel: Policy zone: DMA32 Jan 24 00:32:54.905896 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 24 00:32:54.905914 kernel: Memory: 1874628K/2037804K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42884K init, 2312K bss, 162916K reserved, 0K cma-reserved) Jan 24 00:32:54.905927 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 24 00:32:54.905941 kernel: Kernel/User page tables isolation: enabled Jan 24 00:32:54.905954 kernel: ftrace: allocating 37989 entries in 149 pages Jan 24 00:32:54.905968 kernel: ftrace: allocated 149 pages with 4 groups Jan 24 00:32:54.905981 kernel: Dynamic Preempt: voluntary Jan 24 00:32:54.905994 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 24 00:32:54.906009 kernel: rcu: RCU event tracing is enabled. Jan 24 00:32:54.906023 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 24 00:32:54.906040 kernel: Trampoline variant of Tasks RCU enabled. Jan 24 00:32:54.906054 kernel: Rude variant of Tasks RCU enabled. Jan 24 00:32:54.906066 kernel: Tracing variant of Tasks RCU enabled. Jan 24 00:32:54.906080 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 24 00:32:54.906093 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 24 00:32:54.906106 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jan 24 00:32:54.906120 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 24 00:32:54.906147 kernel: Console: colour dummy device 80x25 Jan 24 00:32:54.906162 kernel: printk: console [tty0] enabled Jan 24 00:32:54.906176 kernel: printk: console [ttyS0] enabled Jan 24 00:32:54.906190 kernel: ACPI: Core revision 20230628 Jan 24 00:32:54.906204 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns Jan 24 00:32:54.906222 kernel: APIC: Switch to symmetric I/O mode setup Jan 24 00:32:54.906236 kernel: x2apic enabled Jan 24 00:32:54.906250 kernel: APIC: Switched APIC routing to: physical x2apic Jan 24 00:32:54.906265 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns Jan 24 00:32:54.906280 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998) Jan 24 00:32:54.906298 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Jan 24 00:32:54.906312 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 Jan 24 00:32:54.906326 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 24 00:32:54.906340 kernel: Spectre V2 : Mitigation: Retpolines Jan 24 00:32:54.906353 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jan 24 00:32:54.906367 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
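The kernel command line echoed above (once in the boot loader entry and again as "Kernel command line:") is a flat sequence of bare flags and key=value tokens, which dracut, Ignition and systemd all interpret in essentially the same way. A minimal parsing sketch, assuming a live system where /proc/cmdline holds the same string; repeated keys such as console= simply overwrite each other in this dictionary form:

def parse_cmdline(cmdline: str):
    # Split the command line into bare flags and key=value parameters.
    flags, params = set(), {}
    for token in cmdline.split():
        if "=" in token:
            key, _, value = token.partition("=")
            params[key] = value        # last occurrence wins for repeated keys
        else:
            flags.add(token)
    return flags, params

with open("/proc/cmdline") as f:
    flags, params = parse_cmdline(f.read())

print(params.get("root"))             # e.g. LABEL=ROOT
print(params.get("verity.usrhash"))   # root hash used to verify the /usr partition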
Jan 24 00:32:54.906381 kernel: RETBleed: Vulnerable Jan 24 00:32:54.906395 kernel: Speculative Store Bypass: Vulnerable Jan 24 00:32:54.906409 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode Jan 24 00:32:54.906423 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jan 24 00:32:54.906441 kernel: GDS: Unknown: Dependent on hypervisor status Jan 24 00:32:54.906455 kernel: active return thunk: its_return_thunk Jan 24 00:32:54.906469 kernel: ITS: Mitigation: Aligned branch/return thunks Jan 24 00:32:54.906483 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 24 00:32:54.906497 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 24 00:32:54.906511 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 24 00:32:54.906525 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Jan 24 00:32:54.906540 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' Jan 24 00:32:54.906553 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Jan 24 00:32:54.906567 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Jan 24 00:32:54.906581 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Jan 24 00:32:54.906598 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' Jan 24 00:32:54.906612 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 24 00:32:54.906627 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Jan 24 00:32:54.906641 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Jan 24 00:32:54.906655 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64 Jan 24 00:32:54.906670 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512 Jan 24 00:32:54.906684 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024 Jan 24 00:32:54.906698 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8 Jan 24 00:32:54.906712 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format. Jan 24 00:32:54.906726 kernel: Freeing SMP alternatives memory: 32K Jan 24 00:32:54.906740 kernel: pid_max: default: 32768 minimum: 301 Jan 24 00:32:54.909802 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 24 00:32:54.909823 kernel: landlock: Up and running. Jan 24 00:32:54.909838 kernel: SELinux: Initializing. Jan 24 00:32:54.909852 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jan 24 00:32:54.909865 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jan 24 00:32:54.909878 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7) Jan 24 00:32:54.909892 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 24 00:32:54.909905 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 24 00:32:54.909921 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 24 00:32:54.909936 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Jan 24 00:32:54.909957 kernel: signal: max sigframe size: 3632 Jan 24 00:32:54.909972 kernel: rcu: Hierarchical SRCU implementation. Jan 24 00:32:54.909987 kernel: rcu: Max phase no-delay instances is 400. 
Jan 24 00:32:54.910001 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 24 00:32:54.910016 kernel: smp: Bringing up secondary CPUs ... Jan 24 00:32:54.910031 kernel: smpboot: x86: Booting SMP configuration: Jan 24 00:32:54.910047 kernel: .... node #0, CPUs: #1 Jan 24 00:32:54.910063 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Jan 24 00:32:54.910079 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Jan 24 00:32:54.910097 kernel: smp: Brought up 1 node, 2 CPUs Jan 24 00:32:54.910113 kernel: smpboot: Max logical packages: 1 Jan 24 00:32:54.910128 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS) Jan 24 00:32:54.910144 kernel: devtmpfs: initialized Jan 24 00:32:54.910159 kernel: x86/mm: Memory block size: 128MB Jan 24 00:32:54.910175 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes) Jan 24 00:32:54.910191 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 24 00:32:54.910207 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 24 00:32:54.910223 kernel: pinctrl core: initialized pinctrl subsystem Jan 24 00:32:54.910242 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 24 00:32:54.910257 kernel: audit: initializing netlink subsys (disabled) Jan 24 00:32:54.910273 kernel: audit: type=2000 audit(1769214775.464:1): state=initialized audit_enabled=0 res=1 Jan 24 00:32:54.910289 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 24 00:32:54.910305 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 24 00:32:54.910321 kernel: cpuidle: using governor menu Jan 24 00:32:54.910336 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 24 00:32:54.910352 kernel: dca service started, version 1.12.1 Jan 24 00:32:54.910368 kernel: PCI: Using configuration type 1 for base access Jan 24 00:32:54.910386 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 24 00:32:54.910401 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 24 00:32:54.910417 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 24 00:32:54.910433 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 24 00:32:54.910448 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 24 00:32:54.910463 kernel: ACPI: Added _OSI(Module Device) Jan 24 00:32:54.910479 kernel: ACPI: Added _OSI(Processor Device) Jan 24 00:32:54.910495 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 24 00:32:54.910510 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Jan 24 00:32:54.910529 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 24 00:32:54.910545 kernel: ACPI: Interpreter enabled Jan 24 00:32:54.910560 kernel: ACPI: PM: (supports S0 S5) Jan 24 00:32:54.910576 kernel: ACPI: Using IOAPIC for interrupt routing Jan 24 00:32:54.910592 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 24 00:32:54.910607 kernel: PCI: Using E820 reservations for host bridge windows Jan 24 00:32:54.910623 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Jan 24 00:32:54.910639 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 24 00:32:54.910899 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Jan 24 00:32:54.911049 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Jan 24 00:32:54.911180 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Jan 24 00:32:54.911198 kernel: acpiphp: Slot [3] registered Jan 24 00:32:54.911213 kernel: acpiphp: Slot [4] registered Jan 24 00:32:54.911227 kernel: acpiphp: Slot [5] registered Jan 24 00:32:54.911241 kernel: acpiphp: Slot [6] registered Jan 24 00:32:54.911256 kernel: acpiphp: Slot [7] registered Jan 24 00:32:54.911274 kernel: acpiphp: Slot [8] registered Jan 24 00:32:54.911289 kernel: acpiphp: Slot [9] registered Jan 24 00:32:54.911303 kernel: acpiphp: Slot [10] registered Jan 24 00:32:54.911317 kernel: acpiphp: Slot [11] registered Jan 24 00:32:54.911331 kernel: acpiphp: Slot [12] registered Jan 24 00:32:54.911345 kernel: acpiphp: Slot [13] registered Jan 24 00:32:54.911360 kernel: acpiphp: Slot [14] registered Jan 24 00:32:54.911375 kernel: acpiphp: Slot [15] registered Jan 24 00:32:54.911388 kernel: acpiphp: Slot [16] registered Jan 24 00:32:54.911403 kernel: acpiphp: Slot [17] registered Jan 24 00:32:54.911421 kernel: acpiphp: Slot [18] registered Jan 24 00:32:54.911435 kernel: acpiphp: Slot [19] registered Jan 24 00:32:54.911449 kernel: acpiphp: Slot [20] registered Jan 24 00:32:54.911463 kernel: acpiphp: Slot [21] registered Jan 24 00:32:54.911477 kernel: acpiphp: Slot [22] registered Jan 24 00:32:54.911492 kernel: acpiphp: Slot [23] registered Jan 24 00:32:54.911506 kernel: acpiphp: Slot [24] registered Jan 24 00:32:54.911520 kernel: acpiphp: Slot [25] registered Jan 24 00:32:54.911535 kernel: acpiphp: Slot [26] registered Jan 24 00:32:54.911552 kernel: acpiphp: Slot [27] registered Jan 24 00:32:54.911566 kernel: acpiphp: Slot [28] registered Jan 24 00:32:54.911581 kernel: acpiphp: Slot [29] registered Jan 24 00:32:54.911595 kernel: acpiphp: Slot [30] registered Jan 24 00:32:54.911609 kernel: acpiphp: Slot [31] registered Jan 24 00:32:54.911623 kernel: PCI host bridge to bus 0000:00 Jan 24 00:32:54.912536 kernel: pci_bus 0000:00: root bus resource [io 
0x0000-0x0cf7 window] Jan 24 00:32:54.912701 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 24 00:32:54.913715 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 24 00:32:54.913874 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Jan 24 00:32:54.914008 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window] Jan 24 00:32:54.914146 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 24 00:32:54.914312 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Jan 24 00:32:54.914471 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Jan 24 00:32:54.914620 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000 Jan 24 00:32:54.916766 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Jan 24 00:32:54.916941 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff Jan 24 00:32:54.917088 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff Jan 24 00:32:54.917233 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff Jan 24 00:32:54.917378 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff Jan 24 00:32:54.917515 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff Jan 24 00:32:54.917646 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff Jan 24 00:32:54.917806 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000 Jan 24 00:32:54.917938 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x80000000-0x803fffff pref] Jan 24 00:32:54.918068 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Jan 24 00:32:54.918198 kernel: pci 0000:00:03.0: BAR 0: assigned to efifb Jan 24 00:32:54.918328 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 24 00:32:54.918483 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 Jan 24 00:32:54.918631 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80404000-0x80407fff] Jan 24 00:32:54.920874 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 Jan 24 00:32:54.921042 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80400000-0x80403fff] Jan 24 00:32:54.921066 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 24 00:32:54.921083 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 24 00:32:54.921101 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 24 00:32:54.921118 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 24 00:32:54.921135 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Jan 24 00:32:54.921157 kernel: iommu: Default domain type: Translated Jan 24 00:32:54.921174 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 24 00:32:54.921191 kernel: efivars: Registered efivars operations Jan 24 00:32:54.921208 kernel: PCI: Using ACPI for IRQ routing Jan 24 00:32:54.921225 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 24 00:32:54.921242 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff] Jan 24 00:32:54.921259 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff] Jan 24 00:32:54.921409 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device Jan 24 00:32:54.921554 kernel: pci 0000:00:03.0: vgaarb: bridge control possible Jan 24 00:32:54.921700 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 24 00:32:54.921722 kernel: vgaarb: loaded Jan 24 00:32:54.921739 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0 Jan 24 00:32:54.921767 kernel: hpet0: 8 
comparators, 32-bit 62.500000 MHz counter Jan 24 00:32:54.921781 kernel: clocksource: Switched to clocksource kvm-clock Jan 24 00:32:54.921793 kernel: VFS: Disk quotas dquot_6.6.0 Jan 24 00:32:54.921807 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 24 00:32:54.921821 kernel: pnp: PnP ACPI init Jan 24 00:32:54.921841 kernel: pnp: PnP ACPI: found 5 devices Jan 24 00:32:54.921857 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 24 00:32:54.921873 kernel: NET: Registered PF_INET protocol family Jan 24 00:32:54.921888 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 24 00:32:54.921904 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Jan 24 00:32:54.921921 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 24 00:32:54.921936 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 24 00:32:54.921952 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Jan 24 00:32:54.921968 kernel: TCP: Hash tables configured (established 16384 bind 16384) Jan 24 00:32:54.921987 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Jan 24 00:32:54.922003 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Jan 24 00:32:54.922018 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 24 00:32:54.922034 kernel: NET: Registered PF_XDP protocol family Jan 24 00:32:54.922184 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 24 00:32:54.922311 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 24 00:32:54.922436 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 24 00:32:54.922553 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Jan 24 00:32:54.922673 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window] Jan 24 00:32:54.924579 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jan 24 00:32:54.924610 kernel: PCI: CLS 0 bytes, default 64 Jan 24 00:32:54.924629 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 24 00:32:54.924647 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns Jan 24 00:32:54.924664 kernel: clocksource: Switched to clocksource tsc Jan 24 00:32:54.924681 kernel: Initialise system trusted keyrings Jan 24 00:32:54.924698 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jan 24 00:32:54.924715 kernel: Key type asymmetric registered Jan 24 00:32:54.924736 kernel: Asymmetric key parser 'x509' registered Jan 24 00:32:54.924764 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 24 00:32:54.924780 kernel: io scheduler mq-deadline registered Jan 24 00:32:54.924797 kernel: io scheduler kyber registered Jan 24 00:32:54.924813 kernel: io scheduler bfq registered Jan 24 00:32:54.924831 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 24 00:32:54.924848 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 24 00:32:54.924865 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 24 00:32:54.924880 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 24 00:32:54.924897 kernel: i8042: Warning: Keylock active Jan 24 00:32:54.924912 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 24 00:32:54.924927 kernel: 
serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 24 00:32:54.925126 kernel: rtc_cmos 00:00: RTC can wake from S4 Jan 24 00:32:54.925262 kernel: rtc_cmos 00:00: registered as rtc0 Jan 24 00:32:54.925468 kernel: rtc_cmos 00:00: setting system clock to 2026-01-24T00:32:54 UTC (1769214774) Jan 24 00:32:54.925596 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Jan 24 00:32:54.925615 kernel: intel_pstate: CPU model not supported Jan 24 00:32:54.925636 kernel: efifb: probing for efifb Jan 24 00:32:54.925653 kernel: efifb: framebuffer at 0x80000000, using 1920k, total 1920k Jan 24 00:32:54.925669 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1 Jan 24 00:32:54.925684 kernel: efifb: scrolling: redraw Jan 24 00:32:54.925700 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jan 24 00:32:54.925715 kernel: Console: switching to colour frame buffer device 100x37 Jan 24 00:32:54.925731 kernel: fb0: EFI VGA frame buffer device Jan 24 00:32:54.925776 kernel: pstore: Using crash dump compression: deflate Jan 24 00:32:54.925793 kernel: pstore: Registered efi_pstore as persistent store backend Jan 24 00:32:54.925813 kernel: NET: Registered PF_INET6 protocol family Jan 24 00:32:54.925829 kernel: Segment Routing with IPv6 Jan 24 00:32:54.925844 kernel: In-situ OAM (IOAM) with IPv6 Jan 24 00:32:54.925860 kernel: NET: Registered PF_PACKET protocol family Jan 24 00:32:54.925875 kernel: Key type dns_resolver registered Jan 24 00:32:54.925891 kernel: IPI shorthand broadcast: enabled Jan 24 00:32:54.925932 kernel: sched_clock: Marking stable (457002023, 126796408)->(675379036, -91580605) Jan 24 00:32:54.925953 kernel: registered taskstats version 1 Jan 24 00:32:54.925969 kernel: Loading compiled-in X.509 certificates Jan 24 00:32:54.925989 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: 6e114855f6cf7a40074d93a4383c22d00e384634' Jan 24 00:32:54.926005 kernel: Key type .fscrypt registered Jan 24 00:32:54.926021 kernel: Key type fscrypt-provisioning registered Jan 24 00:32:54.926037 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 24 00:32:54.926054 kernel: ima: Allocated hash algorithm: sha1 Jan 24 00:32:54.926070 kernel: ima: No architecture policies found Jan 24 00:32:54.926086 kernel: clk: Disabling unused clocks Jan 24 00:32:54.926106 kernel: Freeing unused kernel image (initmem) memory: 42884K Jan 24 00:32:54.926122 kernel: Write protecting the kernel read-only data: 36864k Jan 24 00:32:54.926142 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K Jan 24 00:32:54.926158 kernel: Run /init as init process Jan 24 00:32:54.926175 kernel: with arguments: Jan 24 00:32:54.926191 kernel: /init Jan 24 00:32:54.926207 kernel: with environment: Jan 24 00:32:54.926223 kernel: HOME=/ Jan 24 00:32:54.926239 kernel: TERM=linux Jan 24 00:32:54.926258 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 24 00:32:54.926281 systemd[1]: Detected virtualization amazon. Jan 24 00:32:54.926298 systemd[1]: Detected architecture x86-64. Jan 24 00:32:54.926315 systemd[1]: Running in initrd. Jan 24 00:32:54.926331 systemd[1]: No hostname configured, using default hostname. Jan 24 00:32:54.926347 systemd[1]: Hostname set to . 
Jan 24 00:32:54.926365 systemd[1]: Initializing machine ID from VM UUID. Jan 24 00:32:54.926381 systemd[1]: Queued start job for default target initrd.target. Jan 24 00:32:54.926399 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 24 00:32:54.926419 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 24 00:32:54.926437 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 24 00:32:54.926455 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 24 00:32:54.926472 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 24 00:32:54.926492 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 24 00:32:54.926515 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 24 00:32:54.926532 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 24 00:32:54.926549 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 24 00:32:54.926566 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 24 00:32:54.926583 systemd[1]: Reached target paths.target - Path Units. Jan 24 00:32:54.926600 systemd[1]: Reached target slices.target - Slice Units. Jan 24 00:32:54.926618 systemd[1]: Reached target swap.target - Swaps. Jan 24 00:32:54.926638 systemd[1]: Reached target timers.target - Timer Units. Jan 24 00:32:54.926655 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 24 00:32:54.926673 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 24 00:32:54.926690 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 24 00:32:54.926708 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 24 00:32:54.926725 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 24 00:32:54.926742 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 24 00:32:54.928399 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 24 00:32:54.928417 systemd[1]: Reached target sockets.target - Socket Units. Jan 24 00:32:54.928439 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 24 00:32:54.928456 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 24 00:32:54.928473 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 24 00:32:54.928491 systemd[1]: Starting systemd-fsck-usr.service... Jan 24 00:32:54.928509 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 24 00:32:54.928526 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 24 00:32:54.928544 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 24 00:32:54.928591 systemd-journald[179]: Collecting audit messages is disabled. Jan 24 00:32:54.928632 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 24 00:32:54.928650 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 24 00:32:54.928668 systemd[1]: Finished systemd-fsck-usr.service. 
Jan 24 00:32:54.928689 systemd-journald[179]: Journal started Jan 24 00:32:54.928724 systemd-journald[179]: Runtime Journal (/run/log/journal/ec24d8c75dbd28308fb42f611dcd421b) is 4.7M, max 38.2M, 33.4M free. Jan 24 00:32:54.933676 systemd-modules-load[180]: Inserted module 'overlay' Jan 24 00:32:54.940764 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 24 00:32:54.943779 systemd[1]: Started systemd-journald.service - Journal Service. Jan 24 00:32:54.944867 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 00:32:54.952989 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 24 00:32:54.965035 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 24 00:32:54.967222 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 24 00:32:54.973954 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 24 00:32:54.987870 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 24 00:32:54.992537 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 24 00:32:54.997088 kernel: Bridge firewalling registered Jan 24 00:32:54.996293 systemd-modules-load[180]: Inserted module 'br_netfilter' Jan 24 00:32:55.001818 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 24 00:32:55.007533 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 24 00:32:55.004999 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 24 00:32:55.012068 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 24 00:32:55.013017 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 24 00:32:55.018959 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 24 00:32:55.025998 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 24 00:32:55.029976 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 24 00:32:55.041131 dracut-cmdline[214]: dracut-dracut-053 Jan 24 00:32:55.047341 dracut-cmdline[214]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=f12f29e7047038507ed004e06b4d83ecb1c13bb716c56fe3c96b88d906e8eff2 Jan 24 00:32:55.084681 systemd-resolved[217]: Positive Trust Anchors: Jan 24 00:32:55.084701 systemd-resolved[217]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 24 00:32:55.084783 systemd-resolved[217]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 24 00:32:55.093110 systemd-resolved[217]: Defaulting to hostname 'linux'. Jan 24 00:32:55.096467 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 24 00:32:55.097174 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 24 00:32:55.135792 kernel: SCSI subsystem initialized Jan 24 00:32:55.145787 kernel: Loading iSCSI transport class v2.0-870. Jan 24 00:32:55.156784 kernel: iscsi: registered transport (tcp) Jan 24 00:32:55.179056 kernel: iscsi: registered transport (qla4xxx) Jan 24 00:32:55.179143 kernel: QLogic iSCSI HBA Driver Jan 24 00:32:55.218145 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 24 00:32:55.222949 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 24 00:32:55.250093 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 24 00:32:55.250176 kernel: device-mapper: uevent: version 1.0.3 Jan 24 00:32:55.252475 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 24 00:32:55.293778 kernel: raid6: avx512x4 gen() 18153 MB/s Jan 24 00:32:55.311775 kernel: raid6: avx512x2 gen() 18068 MB/s Jan 24 00:32:55.329771 kernel: raid6: avx512x1 gen() 18010 MB/s Jan 24 00:32:55.347770 kernel: raid6: avx2x4 gen() 17944 MB/s Jan 24 00:32:55.365770 kernel: raid6: avx2x2 gen() 17900 MB/s Jan 24 00:32:55.383958 kernel: raid6: avx2x1 gen() 13708 MB/s Jan 24 00:32:55.384006 kernel: raid6: using algorithm avx512x4 gen() 18153 MB/s Jan 24 00:32:55.402960 kernel: raid6: .... xor() 7751 MB/s, rmw enabled Jan 24 00:32:55.403010 kernel: raid6: using avx512x2 recovery algorithm Jan 24 00:32:55.424798 kernel: xor: automatically using best checksumming function avx Jan 24 00:32:55.586779 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 24 00:32:55.597062 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 24 00:32:55.608011 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 24 00:32:55.621516 systemd-udevd[400]: Using default interface naming scheme 'v255'. Jan 24 00:32:55.626691 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 24 00:32:55.636932 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 24 00:32:55.655340 dracut-pre-trigger[405]: rd.md=0: removing MD RAID activation Jan 24 00:32:55.686613 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 24 00:32:55.693138 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 24 00:32:55.746457 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 24 00:32:55.755026 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Jan 24 00:32:55.778087 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 24 00:32:55.781269 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 24 00:32:55.783599 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 24 00:32:55.784104 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 24 00:32:55.790973 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 24 00:32:55.816211 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 24 00:32:55.837769 kernel: ena 0000:00:05.0: ENA device version: 0.10 Jan 24 00:32:55.838040 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Jan 24 00:32:55.842775 kernel: cryptd: max_cpu_qlen set to 1000 Jan 24 00:32:55.847819 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy. Jan 24 00:32:55.884032 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 24 00:32:55.884129 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 24 00:32:55.885743 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 24 00:32:55.887495 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 24 00:32:55.887579 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 00:32:55.889388 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 24 00:32:55.901770 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:cc:0d:56:3d:db Jan 24 00:32:55.902062 kernel: AVX2 version of gcm_enc/dec engaged. Jan 24 00:32:55.898961 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 24 00:32:55.906945 kernel: AES CTR mode by8 optimization enabled Jan 24 00:32:55.906998 kernel: nvme nvme0: pci function 0000:00:04.0 Jan 24 00:32:55.910832 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Jan 24 00:32:55.915894 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 24 00:32:55.917851 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 00:32:55.929866 kernel: nvme nvme0: 2/0/0 default/read/poll queues Jan 24 00:32:55.931142 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 24 00:32:55.937988 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 24 00:32:55.938048 kernel: GPT:9289727 != 33554431 Jan 24 00:32:55.938069 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 24 00:32:55.940136 kernel: GPT:9289727 != 33554431 Jan 24 00:32:55.940186 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 24 00:32:55.942218 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 24 00:32:55.951676 (udev-worker)[451]: Network interface NamePolicy= disabled on kernel command line. Jan 24 00:32:55.964097 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 00:32:55.969979 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 24 00:32:55.991037 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 24 00:32:56.104776 kernel: BTRFS: device fsid b9d3569e-180c-420c-96ec-490d7c970b80 devid 1 transid 33 /dev/nvme0n1p3 scanned by (udev-worker) (448) Jan 24 00:32:56.116787 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/nvme0n1p6 scanned by (udev-worker) (458) Jan 24 00:32:56.140392 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Jan 24 00:32:56.185519 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Jan 24 00:32:56.191452 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Jan 24 00:32:56.192122 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Jan 24 00:32:56.199690 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jan 24 00:32:56.205006 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 24 00:32:56.213549 disk-uuid[634]: Primary Header is updated. Jan 24 00:32:56.213549 disk-uuid[634]: Secondary Entries is updated. Jan 24 00:32:56.213549 disk-uuid[634]: Secondary Header is updated. Jan 24 00:32:56.218774 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 24 00:32:56.224292 kernel: GPT:disk_guids don't match. Jan 24 00:32:56.224354 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 24 00:32:56.224367 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 24 00:32:56.232811 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 24 00:32:57.231051 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 24 00:32:57.231491 disk-uuid[635]: The operation has completed successfully. Jan 24 00:32:57.334617 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 24 00:32:57.334721 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 24 00:32:57.358015 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 24 00:32:57.361843 sh[978]: Success Jan 24 00:32:57.382775 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jan 24 00:32:57.477438 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 24 00:32:57.482877 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 24 00:32:57.487110 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 24 00:32:57.517736 kernel: BTRFS info (device dm-0): first mount of filesystem b9d3569e-180c-420c-96ec-490d7c970b80 Jan 24 00:32:57.517819 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 24 00:32:57.517837 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 24 00:32:57.519977 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 24 00:32:57.522112 kernel: BTRFS info (device dm-0): using free space tree Jan 24 00:32:57.617778 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 24 00:32:57.643905 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 24 00:32:57.644970 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 24 00:32:57.655050 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 24 00:32:57.658963 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
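The GPT complaints above (9289727 != 33554431, alternate header not at the end of the disk, mismatched disk GUIDs that disk-uuid then regenerates) are the usual signature of a disk image written to a larger block device: the backup header still sits where the image ended rather than at the last sector of the volume. A minimal sketch of checking this by hand, reading the primary GPT header fields directly; it assumes the NVMe root disk path enumerated above, read access to the raw device, and field offsets from the standard GPT header layout:

import struct

DEVICE = "/dev/nvme0n1"   # assumption: the root disk seen in the log

with open(DEVICE, "rb") as disk:
    disk.seek(512)                     # LBA 1 holds the primary GPT header
    header = disk.read(92)
    disk.seek(0, 2)
    last_lba = disk.tell() // 512 - 1  # last addressable 512-byte sector

signature = header[0:8]                            # expected b"EFI PART"
current_lba, backup_lba = struct.unpack_from("<QQ", header, 24)

print("signature:", signature)
print("primary header at LBA", current_lba)
print("backup header recorded at LBA", backup_lba)
print("device actually ends at LBA", last_lba)
if backup_lba != last_lba:
    print("backup GPT header is not at the end of the disk (matches the warning above)")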
Jan 24 00:32:57.679405 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8 Jan 24 00:32:57.679465 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jan 24 00:32:57.679488 kernel: BTRFS info (device nvme0n1p6): using free space tree Jan 24 00:32:57.699212 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 24 00:32:57.711064 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 24 00:32:57.714717 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8 Jan 24 00:32:57.720987 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 24 00:32:57.731010 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 24 00:32:57.776554 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 24 00:32:57.784034 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 24 00:32:57.806460 systemd-networkd[1170]: lo: Link UP Jan 24 00:32:57.806472 systemd-networkd[1170]: lo: Gained carrier Jan 24 00:32:57.808182 systemd-networkd[1170]: Enumeration completed Jan 24 00:32:57.808633 systemd-networkd[1170]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 24 00:32:57.808638 systemd-networkd[1170]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 24 00:32:57.809871 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 24 00:32:57.811743 systemd[1]: Reached target network.target - Network. Jan 24 00:32:57.812972 systemd-networkd[1170]: eth0: Link UP Jan 24 00:32:57.812977 systemd-networkd[1170]: eth0: Gained carrier Jan 24 00:32:57.812990 systemd-networkd[1170]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 24 00:32:57.827860 systemd-networkd[1170]: eth0: DHCPv4 address 172.31.18.176/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jan 24 00:32:58.128515 ignition[1101]: Ignition 2.19.0 Jan 24 00:32:58.128529 ignition[1101]: Stage: fetch-offline Jan 24 00:32:58.128819 ignition[1101]: no configs at "/usr/lib/ignition/base.d" Jan 24 00:32:58.130460 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 24 00:32:58.128832 ignition[1101]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 24 00:32:58.129162 ignition[1101]: Ignition finished successfully Jan 24 00:32:58.136943 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jan 24 00:32:58.151996 ignition[1179]: Ignition 2.19.0 Jan 24 00:32:58.152010 ignition[1179]: Stage: fetch Jan 24 00:32:58.152471 ignition[1179]: no configs at "/usr/lib/ignition/base.d" Jan 24 00:32:58.152485 ignition[1179]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 24 00:32:58.152604 ignition[1179]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 24 00:32:58.161782 ignition[1179]: PUT result: OK Jan 24 00:32:58.163724 ignition[1179]: parsed url from cmdline: "" Jan 24 00:32:58.163732 ignition[1179]: no config URL provided Jan 24 00:32:58.163741 ignition[1179]: reading system config file "/usr/lib/ignition/user.ign" Jan 24 00:32:58.163767 ignition[1179]: no config at "/usr/lib/ignition/user.ign" Jan 24 00:32:58.163786 ignition[1179]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 24 00:32:58.165044 ignition[1179]: PUT result: OK Jan 24 00:32:58.165084 ignition[1179]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Jan 24 00:32:58.165881 ignition[1179]: GET result: OK Jan 24 00:32:58.165933 ignition[1179]: parsing config with SHA512: 9c7e69b4f7d86d4633ea5c86b6cad3db18854870942704d1c3a2bf494494b1f609d0c68ab1ae1afb4fdc865548edf65674609aed9805513a9defa506008dfc64 Jan 24 00:32:58.170884 unknown[1179]: fetched base config from "system" Jan 24 00:32:58.170900 unknown[1179]: fetched base config from "system" Jan 24 00:32:58.171415 ignition[1179]: fetch: fetch complete Jan 24 00:32:58.170908 unknown[1179]: fetched user config from "aws" Jan 24 00:32:58.171422 ignition[1179]: fetch: fetch passed Jan 24 00:32:58.171491 ignition[1179]: Ignition finished successfully Jan 24 00:32:58.174027 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 24 00:32:58.182056 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 24 00:32:58.197825 ignition[1185]: Ignition 2.19.0 Jan 24 00:32:58.197841 ignition[1185]: Stage: kargs Jan 24 00:32:58.198315 ignition[1185]: no configs at "/usr/lib/ignition/base.d" Jan 24 00:32:58.198329 ignition[1185]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 24 00:32:58.198449 ignition[1185]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 24 00:32:58.199373 ignition[1185]: PUT result: OK Jan 24 00:32:58.201972 ignition[1185]: kargs: kargs passed Jan 24 00:32:58.202068 ignition[1185]: Ignition finished successfully Jan 24 00:32:58.203888 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 24 00:32:58.207965 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 24 00:32:58.225264 ignition[1192]: Ignition 2.19.0 Jan 24 00:32:58.225278 ignition[1192]: Stage: disks Jan 24 00:32:58.225892 ignition[1192]: no configs at "/usr/lib/ignition/base.d" Jan 24 00:32:58.225907 ignition[1192]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 24 00:32:58.226031 ignition[1192]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 24 00:32:58.226954 ignition[1192]: PUT result: OK Jan 24 00:32:58.230263 ignition[1192]: disks: disks passed Jan 24 00:32:58.230332 ignition[1192]: Ignition finished successfully Jan 24 00:32:58.232137 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 24 00:32:58.232727 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 24 00:32:58.233128 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 24 00:32:58.233794 systemd[1]: Reached target local-fs.target - Local File Systems. 
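The Ignition fetch stage above is plain EC2 instance-metadata traffic: a PUT to /latest/api/token to obtain an IMDSv2 session token, then a GET of the user-data document that carries the Ignition config. A minimal sketch of the same two requests, assuming the standard IMDSv2 token headers and the link-local endpoint shown in the log:

import urllib.request

IMDS = "http://169.254.169.254"

# Step 1: obtain an IMDSv2 session token (the "PUT .../latest/api/token" above).
token_request = urllib.request.Request(
    f"{IMDS}/latest/api/token",
    method="PUT",
    headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
)
token = urllib.request.urlopen(token_request).read().decode()

# Step 2: fetch the user-data document Ignition parses (the "GET .../user-data" above).
user_data_request = urllib.request.Request(
    f"{IMDS}/2019-10-01/user-data",
    headers={"X-aws-ec2-metadata-token": token},
)
print(urllib.request.urlopen(user_data_request).read().decode())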
Jan 24 00:32:58.234344 systemd[1]: Reached target sysinit.target - System Initialization. Jan 24 00:32:58.234920 systemd[1]: Reached target basic.target - Basic System. Jan 24 00:32:58.239945 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 24 00:32:58.280313 systemd-fsck[1200]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 24 00:32:58.283672 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 24 00:32:58.288882 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 24 00:32:58.387724 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 24 00:32:58.388585 kernel: EXT4-fs (nvme0n1p9): mounted filesystem a752e1f1-ddf3-43b9-88e7-8cc533707c34 r/w with ordered data mode. Quota mode: none. Jan 24 00:32:58.388698 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 24 00:32:58.400912 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 24 00:32:58.403880 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 24 00:32:58.405907 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 24 00:32:58.406871 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 24 00:32:58.406907 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 24 00:32:58.419272 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 24 00:32:58.420861 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/nvme0n1p6 scanned by mount (1219) Jan 24 00:32:58.423842 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8 Jan 24 00:32:58.423892 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jan 24 00:32:58.426357 kernel: BTRFS info (device nvme0n1p6): using free space tree Jan 24 00:32:58.427017 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 24 00:32:58.458793 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 24 00:32:58.463324 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 24 00:32:58.867250 initrd-setup-root[1243]: cut: /sysroot/etc/passwd: No such file or directory Jan 24 00:32:58.894411 initrd-setup-root[1250]: cut: /sysroot/etc/group: No such file or directory Jan 24 00:32:58.899298 initrd-setup-root[1257]: cut: /sysroot/etc/shadow: No such file or directory Jan 24 00:32:58.904212 initrd-setup-root[1264]: cut: /sysroot/etc/gshadow: No such file or directory Jan 24 00:32:59.219101 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 24 00:32:59.224907 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 24 00:32:59.231154 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 24 00:32:59.236760 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Jan 24 00:32:59.239401 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8 Jan 24 00:32:59.274255 ignition[1336]: INFO : Ignition 2.19.0 Jan 24 00:32:59.274255 ignition[1336]: INFO : Stage: mount Jan 24 00:32:59.274255 ignition[1336]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 24 00:32:59.274255 ignition[1336]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 24 00:32:59.274255 ignition[1336]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 24 00:32:59.279499 ignition[1336]: INFO : PUT result: OK Jan 24 00:32:59.279499 ignition[1336]: INFO : mount: mount passed Jan 24 00:32:59.279499 ignition[1336]: INFO : Ignition finished successfully Jan 24 00:32:59.279333 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 24 00:32:59.289914 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 24 00:32:59.293023 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 24 00:32:59.299700 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 24 00:32:59.320781 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by mount (1348) Jan 24 00:32:59.323839 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8 Jan 24 00:32:59.323904 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jan 24 00:32:59.326364 kernel: BTRFS info (device nvme0n1p6): using free space tree Jan 24 00:32:59.331790 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 24 00:32:59.333715 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 24 00:32:59.359161 ignition[1365]: INFO : Ignition 2.19.0 Jan 24 00:32:59.359876 ignition[1365]: INFO : Stage: files Jan 24 00:32:59.360246 ignition[1365]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 24 00:32:59.360246 ignition[1365]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 24 00:32:59.360911 ignition[1365]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 24 00:32:59.361506 ignition[1365]: INFO : PUT result: OK Jan 24 00:32:59.367781 ignition[1365]: DEBUG : files: compiled without relabeling support, skipping Jan 24 00:32:59.368627 ignition[1365]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 24 00:32:59.368627 ignition[1365]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 24 00:32:59.406597 ignition[1365]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 24 00:32:59.407518 ignition[1365]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 24 00:32:59.407518 ignition[1365]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 24 00:32:59.407158 unknown[1365]: wrote ssh authorized keys file for user: core Jan 24 00:32:59.409290 ignition[1365]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Jan 24 00:32:59.410550 ignition[1365]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Jan 24 00:32:59.410550 ignition[1365]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 24 00:32:59.410550 ignition[1365]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 24 
00:32:59.410550 ignition[1365]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 24 00:32:59.410550 ignition[1365]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 24 00:32:59.410550 ignition[1365]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 24 00:32:59.410550 ignition[1365]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Jan 24 00:32:59.440949 systemd-networkd[1170]: eth0: Gained IPv6LL Jan 24 00:32:59.934506 ignition[1365]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Jan 24 00:33:00.609866 ignition[1365]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 24 00:33:00.611539 ignition[1365]: INFO : files: createResultFile: createFiles: op(7): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 24 00:33:00.612901 ignition[1365]: INFO : files: createResultFile: createFiles: op(7): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 24 00:33:00.612901 ignition[1365]: INFO : files: files passed Jan 24 00:33:00.612901 ignition[1365]: INFO : Ignition finished successfully Jan 24 00:33:00.613794 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 24 00:33:00.620008 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 24 00:33:00.623986 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 24 00:33:00.630116 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 24 00:33:00.630271 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 24 00:33:00.647432 initrd-setup-root-after-ignition[1394]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 24 00:33:00.647432 initrd-setup-root-after-ignition[1394]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 24 00:33:00.651437 initrd-setup-root-after-ignition[1398]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 24 00:33:00.651899 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 24 00:33:00.653965 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 24 00:33:00.661052 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 24 00:33:00.696881 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 24 00:33:00.697029 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 24 00:33:00.698510 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 24 00:33:00.699610 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 24 00:33:00.700517 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 24 00:33:00.706977 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... 
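The files stage above downloads the kubernetes sysext image over HTTPS into /sysroot/opt/extensions, and Ignition earlier reported parsing its config "with SHA512". A minimal sketch of the same verify-by-digest idea in Python; the URL is copied from the log, and the expected digest shown is a placeholder, not the real checksum of that file.

```python
import hashlib
import urllib.request

URL = "https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw"
EXPECTED_SHA512 = "<expected hex digest goes here>"  # placeholder value

def sha512_of_url(url, chunk_size=1 << 20):
    digest = hashlib.sha512()
    with urllib.request.urlopen(url) as resp:
        while True:
            chunk = resp.read(chunk_size)
            if not chunk:
                break
            digest.update(chunk)
    return digest.hexdigest()

actual = sha512_of_url(URL)
print("match" if actual == EXPECTED_SHA512 else f"mismatch: {actual}")
```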
Jan 24 00:33:00.720800 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 24 00:33:00.725986 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 24 00:33:00.741391 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 24 00:33:00.742305 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 24 00:33:00.743410 systemd[1]: Stopped target timers.target - Timer Units. Jan 24 00:33:00.744361 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 24 00:33:00.744556 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 24 00:33:00.745948 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 24 00:33:00.746875 systemd[1]: Stopped target basic.target - Basic System. Jan 24 00:33:00.747703 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 24 00:33:00.748527 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 24 00:33:00.749416 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 24 00:33:00.750292 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 24 00:33:00.751100 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 24 00:33:00.751929 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 24 00:33:00.753147 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 24 00:33:00.754085 systemd[1]: Stopped target swap.target - Swaps. Jan 24 00:33:00.754836 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 24 00:33:00.755024 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 24 00:33:00.756148 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 24 00:33:00.756995 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 24 00:33:00.757847 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 24 00:33:00.758598 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 24 00:33:00.759916 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 24 00:33:00.760110 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 24 00:33:00.761617 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 24 00:33:00.761842 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 24 00:33:00.762664 systemd[1]: ignition-files.service: Deactivated successfully. Jan 24 00:33:00.762851 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 24 00:33:00.775100 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 24 00:33:00.775815 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 24 00:33:00.776033 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 24 00:33:00.786149 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 24 00:33:00.788556 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 24 00:33:00.788810 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 24 00:33:00.791809 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 24 00:33:00.791995 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. 
Jan 24 00:33:00.800251 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 24 00:33:00.801209 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 24 00:33:00.805776 ignition[1418]: INFO : Ignition 2.19.0 Jan 24 00:33:00.805776 ignition[1418]: INFO : Stage: umount Jan 24 00:33:00.808283 ignition[1418]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 24 00:33:00.808283 ignition[1418]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 24 00:33:00.808283 ignition[1418]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 24 00:33:00.810566 ignition[1418]: INFO : PUT result: OK Jan 24 00:33:00.812124 ignition[1418]: INFO : umount: umount passed Jan 24 00:33:00.812660 ignition[1418]: INFO : Ignition finished successfully Jan 24 00:33:00.814525 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 24 00:33:00.814937 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 24 00:33:00.815952 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 24 00:33:00.816023 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 24 00:33:00.816675 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 24 00:33:00.816741 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 24 00:33:00.817522 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 24 00:33:00.817584 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 24 00:33:00.818849 systemd[1]: Stopped target network.target - Network. Jan 24 00:33:00.819475 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 24 00:33:00.819551 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 24 00:33:00.820370 systemd[1]: Stopped target paths.target - Path Units. Jan 24 00:33:00.821010 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 24 00:33:00.822821 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 24 00:33:00.823360 systemd[1]: Stopped target slices.target - Slice Units. Jan 24 00:33:00.824123 systemd[1]: Stopped target sockets.target - Socket Units. Jan 24 00:33:00.825063 systemd[1]: iscsid.socket: Deactivated successfully. Jan 24 00:33:00.825124 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 24 00:33:00.825870 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 24 00:33:00.825927 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 24 00:33:00.827572 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 24 00:33:00.827659 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 24 00:33:00.828301 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 24 00:33:00.828364 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 24 00:33:00.829179 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 24 00:33:00.830038 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 24 00:33:00.832850 systemd-networkd[1170]: eth0: DHCPv6 lease lost Jan 24 00:33:00.833439 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 24 00:33:00.833594 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 24 00:33:00.836419 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 24 00:33:00.836617 systemd[1]: Stopped systemd-networkd.service - Network Configuration. 
Jan 24 00:33:00.840584 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 24 00:33:00.841931 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 24 00:33:00.842010 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 24 00:33:00.850912 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 24 00:33:00.851543 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 24 00:33:00.851629 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 24 00:33:00.852559 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 24 00:33:00.852625 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 24 00:33:00.854107 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 24 00:33:00.854176 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 24 00:33:00.855259 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 24 00:33:00.855321 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 24 00:33:00.856107 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 24 00:33:00.858053 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 24 00:33:00.858182 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 24 00:33:00.866150 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 24 00:33:00.866281 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 24 00:33:00.877065 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 24 00:33:00.877268 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 24 00:33:00.880196 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 24 00:33:00.880262 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 24 00:33:00.881425 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 24 00:33:00.881497 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 24 00:33:00.882455 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 24 00:33:00.882527 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 24 00:33:00.883924 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 24 00:33:00.883997 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 24 00:33:00.885108 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 24 00:33:00.885175 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 24 00:33:00.894058 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 24 00:33:00.894729 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 24 00:33:00.894848 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 24 00:33:00.895636 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 24 00:33:00.895705 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 00:33:00.898279 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 24 00:33:00.898401 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 24 00:33:00.903938 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. 
Jan 24 00:33:00.904074 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 24 00:33:00.905625 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 24 00:33:00.910016 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 24 00:33:00.921182 systemd[1]: Switching root. Jan 24 00:33:00.961767 systemd-journald[179]: Received SIGTERM from PID 1 (systemd). Jan 24 00:33:00.961841 systemd-journald[179]: Journal stopped Jan 24 00:33:03.265732 kernel: SELinux: policy capability network_peer_controls=1 Jan 24 00:33:03.265866 kernel: SELinux: policy capability open_perms=1 Jan 24 00:33:03.265889 kernel: SELinux: policy capability extended_socket_class=1 Jan 24 00:33:03.265913 kernel: SELinux: policy capability always_check_network=0 Jan 24 00:33:03.265934 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 24 00:33:03.265959 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 24 00:33:03.265984 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 24 00:33:03.266004 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 24 00:33:03.266023 kernel: audit: type=1403 audit(1769214781.890:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 24 00:33:03.266052 systemd[1]: Successfully loaded SELinux policy in 49.082ms. Jan 24 00:33:03.266081 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.786ms. Jan 24 00:33:03.266113 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 24 00:33:03.266137 systemd[1]: Detected virtualization amazon. Jan 24 00:33:03.266157 systemd[1]: Detected architecture x86-64. Jan 24 00:33:03.266179 systemd[1]: Detected first boot. Jan 24 00:33:03.266198 systemd[1]: Initializing machine ID from VM UUID. Jan 24 00:33:03.266217 zram_generator::config[1460]: No configuration found. Jan 24 00:33:03.266240 systemd[1]: Populated /etc with preset unit settings. Jan 24 00:33:03.266261 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 24 00:33:03.266280 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 24 00:33:03.266305 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 24 00:33:03.266326 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 24 00:33:03.266346 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 24 00:33:03.266366 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 24 00:33:03.266386 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 24 00:33:03.266408 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 24 00:33:03.266432 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 24 00:33:03.266452 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 24 00:33:03.266471 systemd[1]: Created slice user.slice - User and Session Slice. Jan 24 00:33:03.266491 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
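The first-boot entries above include "Initializing machine ID from VM UUID". On virtual machines systemd can seed /etc/machine-id from the DMI product UUID exposed by the hypervisor; the sketch below only illustrates that derivation (read /sys/class/dmi/id/product_uuid, normalize it to 32 lowercase hex characters) and is not systemd's exact algorithm.

```python
from pathlib import Path

def machine_id_from_dmi(path="/sys/class/dmi/id/product_uuid"):
    # Reading product_uuid generally requires root; machine IDs use the
    # 32-hex-character, lowercase, dash-free form.
    uuid = Path(path).read_text().strip()
    return uuid.replace("-", "").lower()

print(machine_id_from_dmi())
```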
Jan 24 00:33:03.266511 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 24 00:33:03.266530 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 24 00:33:03.266549 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 24 00:33:03.266569 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 24 00:33:03.266592 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 24 00:33:03.266612 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 24 00:33:03.266632 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 24 00:33:03.266651 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 24 00:33:03.266671 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 24 00:33:03.266690 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 24 00:33:03.266710 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 24 00:33:03.266731 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 24 00:33:03.267867 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 24 00:33:03.267903 systemd[1]: Reached target slices.target - Slice Units. Jan 24 00:33:03.267925 systemd[1]: Reached target swap.target - Swaps. Jan 24 00:33:03.267947 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 24 00:33:03.267970 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 24 00:33:03.267992 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 24 00:33:03.268013 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 24 00:33:03.268035 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 24 00:33:03.268057 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 24 00:33:03.268085 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 24 00:33:03.268106 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 24 00:33:03.268128 systemd[1]: Mounting media.mount - External Media Directory... Jan 24 00:33:03.268150 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:33:03.268172 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 24 00:33:03.268193 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 24 00:33:03.268214 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 24 00:33:03.268236 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 24 00:33:03.268257 systemd[1]: Reached target machines.target - Containers. Jan 24 00:33:03.268281 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 24 00:33:03.268303 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 24 00:33:03.268324 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... 
Jan 24 00:33:03.268345 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 24 00:33:03.268367 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 24 00:33:03.268388 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 24 00:33:03.268409 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 24 00:33:03.268431 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 24 00:33:03.268456 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 24 00:33:03.268476 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 24 00:33:03.268509 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 24 00:33:03.268550 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 24 00:33:03.268593 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 24 00:33:03.268635 systemd[1]: Stopped systemd-fsck-usr.service. Jan 24 00:33:03.268658 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 24 00:33:03.268678 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 24 00:33:03.268698 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 24 00:33:03.268723 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 24 00:33:03.268743 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 24 00:33:03.268921 systemd[1]: verity-setup.service: Deactivated successfully. Jan 24 00:33:03.268944 systemd[1]: Stopped verity-setup.service. Jan 24 00:33:03.268964 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:33:03.268983 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 24 00:33:03.269003 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 24 00:33:03.269024 systemd[1]: Mounted media.mount - External Media Directory. Jan 24 00:33:03.269043 kernel: loop: module loaded Jan 24 00:33:03.269071 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 24 00:33:03.269092 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 24 00:33:03.269112 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 24 00:33:03.269133 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 24 00:33:03.269218 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 24 00:33:03.269292 systemd-journald[1538]: Collecting audit messages is disabled. Jan 24 00:33:03.269334 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 24 00:33:03.269355 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 24 00:33:03.269375 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 24 00:33:03.269397 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 24 00:33:03.269417 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 24 00:33:03.269445 systemd-journald[1538]: Journal started Jan 24 00:33:03.269487 systemd-journald[1538]: Runtime Journal (/run/log/journal/ec24d8c75dbd28308fb42f611dcd421b) is 4.7M, max 38.2M, 33.4M free. 
Jan 24 00:33:02.876450 systemd[1]: Queued start job for default target multi-user.target. Jan 24 00:33:02.924116 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jan 24 00:33:02.924550 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 24 00:33:03.273319 systemd[1]: Started systemd-journald.service - Journal Service. Jan 24 00:33:03.276276 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 24 00:33:03.277191 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 24 00:33:03.281773 kernel: fuse: init (API version 7.39) Jan 24 00:33:03.280003 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 24 00:33:03.281057 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 24 00:33:03.283075 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 24 00:33:03.285230 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 24 00:33:03.313171 kernel: ACPI: bus type drm_connector registered Jan 24 00:33:03.313602 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 24 00:33:03.314012 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 24 00:33:03.315404 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 24 00:33:03.315741 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 24 00:33:03.318661 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 24 00:33:03.329238 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 24 00:33:03.341977 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 24 00:33:03.344104 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 24 00:33:03.344836 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 24 00:33:03.348174 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 24 00:33:03.362475 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 24 00:33:03.365208 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 24 00:33:03.366436 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 24 00:33:03.368701 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 24 00:33:03.381977 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 24 00:33:03.382666 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 24 00:33:03.385565 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 24 00:33:03.386634 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 24 00:33:03.391924 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 24 00:33:03.402994 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 24 00:33:03.417975 systemd-journald[1538]: Time spent on flushing to /var/log/journal/ec24d8c75dbd28308fb42f611dcd421b is 113.242ms for 964 entries. 
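The journald message above reports 113.242 ms spent flushing 964 entries from the runtime journal to /var/log/journal. As a quick sanity check of what that works out to per entry:

```python
total_ms = 113.242
entries = 964
print(f"{total_ms / entries:.3f} ms per entry")   # ~0.117 ms per entry
```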
Jan 24 00:33:03.417975 systemd-journald[1538]: System Journal (/var/log/journal/ec24d8c75dbd28308fb42f611dcd421b) is 8.0M, max 195.6M, 187.6M free. Jan 24 00:33:03.551226 systemd-journald[1538]: Received client request to flush runtime journal. Jan 24 00:33:03.551290 kernel: loop0: detected capacity change from 0 to 229808 Jan 24 00:33:03.409988 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 24 00:33:03.414464 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 24 00:33:03.416715 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 24 00:33:03.417526 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 24 00:33:03.422145 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 24 00:33:03.438986 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 24 00:33:03.441270 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 24 00:33:03.444171 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 24 00:33:03.460992 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 24 00:33:03.504986 udevadm[1596]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 24 00:33:03.510246 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 24 00:33:03.558209 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 24 00:33:03.588491 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 24 00:33:03.592237 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 24 00:33:03.610784 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 24 00:33:03.614190 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 24 00:33:03.627646 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 24 00:33:03.653902 kernel: loop1: detected capacity change from 0 to 61336 Jan 24 00:33:03.682706 systemd-tmpfiles[1608]: ACLs are not supported, ignoring. Jan 24 00:33:03.682736 systemd-tmpfiles[1608]: ACLs are not supported, ignoring. Jan 24 00:33:03.696186 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 24 00:33:03.784477 kernel: loop2: detected capacity change from 0 to 140768 Jan 24 00:33:03.960829 kernel: loop3: detected capacity change from 0 to 142488 Jan 24 00:33:04.510801 kernel: loop4: detected capacity change from 0 to 229808 Jan 24 00:33:04.548170 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 24 00:33:04.555864 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 24 00:33:04.583983 systemd-udevd[1616]: Using default interface naming scheme 'v255'. Jan 24 00:33:04.614780 kernel: loop5: detected capacity change from 0 to 61336 Jan 24 00:33:04.679835 kernel: loop6: detected capacity change from 0 to 140768 Jan 24 00:33:04.716688 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 24 00:33:04.727715 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Jan 24 00:33:04.780647 kernel: loop7: detected capacity change from 0 to 142488 Jan 24 00:33:04.793990 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 24 00:33:04.794916 (udev-worker)[1628]: Network interface NamePolicy= disabled on kernel command line. Jan 24 00:33:04.800944 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 24 00:33:04.892610 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 24 00:33:04.912784 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Jan 24 00:33:04.923151 (sd-merge)[1614]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Jan 24 00:33:04.925043 (sd-merge)[1614]: Merged extensions into '/usr'. Jan 24 00:33:04.947025 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 24 00:33:04.948948 systemd[1]: Reloading requested from client PID 1589 ('systemd-sysext') (unit systemd-sysext.service)... Jan 24 00:33:04.948967 systemd[1]: Reloading... Jan 24 00:33:04.953781 kernel: ACPI: button: Power Button [PWRF] Jan 24 00:33:04.953867 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4 Jan 24 00:33:04.956781 kernel: ACPI: button: Sleep Button [SLPF] Jan 24 00:33:04.981897 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input5 Jan 24 00:33:05.080773 zram_generator::config[1673]: No configuration found. Jan 24 00:33:05.149636 systemd-networkd[1621]: lo: Link UP Jan 24 00:33:05.149650 systemd-networkd[1621]: lo: Gained carrier Jan 24 00:33:05.153910 systemd-networkd[1621]: Enumeration completed Jan 24 00:33:05.154406 systemd-networkd[1621]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 24 00:33:05.154423 systemd-networkd[1621]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 24 00:33:05.158662 systemd-networkd[1621]: eth0: Link UP Jan 24 00:33:05.158900 systemd-networkd[1621]: eth0: Gained carrier Jan 24 00:33:05.158926 systemd-networkd[1621]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 24 00:33:05.161783 kernel: mousedev: PS/2 mouse device common for all mice Jan 24 00:33:05.177841 systemd-networkd[1621]: eth0: DHCPv4 address 172.31.18.176/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jan 24 00:33:05.239773 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 33 scanned by (udev-worker) (1618) Jan 24 00:33:05.430022 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 24 00:33:05.513188 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jan 24 00:33:05.514338 systemd[1]: Reloading finished in 564 ms. Jan 24 00:33:05.550008 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 24 00:33:05.551047 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 24 00:33:05.565345 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 24 00:33:05.591045 systemd[1]: Starting ensure-sysext.service... 
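systemd-networkd above acquires 172.31.18.176/20 with gateway 172.31.16.1 from DHCP. A quick check with Python's ipaddress module, using the values copied from the log, confirming that the gateway sits inside the interface's /20 network:

```python
import ipaddress

iface = ipaddress.ip_interface("172.31.18.176/20")
gateway = ipaddress.ip_address("172.31.16.1")

print(iface.network)                 # 172.31.16.0/20
print(gateway in iface.network)      # True
print(iface.network.num_addresses)   # 4096 addresses in a /20
```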
Jan 24 00:33:05.593842 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 24 00:33:05.614435 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 24 00:33:05.616969 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 24 00:33:05.622013 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 24 00:33:05.631085 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 24 00:33:05.645931 systemd[1]: Reloading requested from client PID 1811 ('systemctl') (unit ensure-sysext.service)... Jan 24 00:33:05.646267 systemd[1]: Reloading... Jan 24 00:33:05.679456 lvm[1812]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 24 00:33:05.743122 systemd-tmpfiles[1815]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 24 00:33:05.746381 systemd-tmpfiles[1815]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 24 00:33:05.750891 systemd-tmpfiles[1815]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 24 00:33:05.753013 systemd-tmpfiles[1815]: ACLs are not supported, ignoring. Jan 24 00:33:05.753114 systemd-tmpfiles[1815]: ACLs are not supported, ignoring. Jan 24 00:33:05.760941 zram_generator::config[1850]: No configuration found. Jan 24 00:33:05.762248 systemd-tmpfiles[1815]: Detected autofs mount point /boot during canonicalization of boot. Jan 24 00:33:05.762530 systemd-tmpfiles[1815]: Skipping /boot Jan 24 00:33:05.787649 systemd-tmpfiles[1815]: Detected autofs mount point /boot during canonicalization of boot. Jan 24 00:33:05.789935 systemd-tmpfiles[1815]: Skipping /boot Jan 24 00:33:05.836795 ldconfig[1584]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 24 00:33:05.934060 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 24 00:33:06.010994 systemd[1]: Reloading finished in 364 ms. Jan 24 00:33:06.032987 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 24 00:33:06.039359 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 24 00:33:06.040423 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 24 00:33:06.041720 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 24 00:33:06.043154 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 00:33:06.053220 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 24 00:33:06.060091 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 24 00:33:06.065123 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 24 00:33:06.077186 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 24 00:33:06.084002 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 24 00:33:06.091287 lvm[1912]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
Jan 24 00:33:06.093791 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 24 00:33:06.105655 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 24 00:33:06.113858 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:33:06.114189 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 24 00:33:06.124203 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 24 00:33:06.136917 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 24 00:33:06.147648 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 24 00:33:06.149257 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 24 00:33:06.150279 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:33:06.157390 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 24 00:33:06.164328 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 24 00:33:06.169871 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 24 00:33:06.183568 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:33:06.186464 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 24 00:33:06.196072 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 24 00:33:06.196817 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 24 00:33:06.197011 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:33:06.199016 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 24 00:33:06.201014 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 24 00:33:06.203313 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 24 00:33:06.203955 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 24 00:33:06.205886 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 24 00:33:06.206071 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 24 00:33:06.212347 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 24 00:33:06.226156 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 24 00:33:06.232734 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:33:06.234814 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 24 00:33:06.236378 augenrules[1941]: No rules Jan 24 00:33:06.246155 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
Jan 24 00:33:06.262147 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 24 00:33:06.270129 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 24 00:33:06.270926 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 24 00:33:06.271219 systemd[1]: Reached target time-set.target - System Time Set. Jan 24 00:33:06.271943 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:33:06.274139 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 24 00:33:06.276113 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 24 00:33:06.276308 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 24 00:33:06.278780 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 24 00:33:06.280327 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 24 00:33:06.280647 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 24 00:33:06.282183 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 24 00:33:06.282486 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 24 00:33:06.285866 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 24 00:33:06.286081 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 24 00:33:06.286483 systemd-resolved[1919]: Positive Trust Anchors: Jan 24 00:33:06.286493 systemd-resolved[1919]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 24 00:33:06.286549 systemd-resolved[1919]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 24 00:33:06.292014 systemd[1]: Finished ensure-sysext.service. Jan 24 00:33:06.298188 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 24 00:33:06.298277 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 24 00:33:06.305873 systemd-resolved[1919]: Defaulting to hostname 'linux'. Jan 24 00:33:06.307867 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 24 00:33:06.308413 systemd[1]: Reached target network.target - Network. Jan 24 00:33:06.308790 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 24 00:33:06.310506 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 24 00:33:06.311095 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 24 00:33:06.311135 systemd[1]: Reached target sysinit.target - System Initialization. 
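systemd-resolved above loads the root-zone DNSSEC trust anchor ". IN DS 20326 8 2 e06d…". A small sketch decoding the DS record's fields (owner, key tag, algorithm, digest type, digest); the number-to-name mappings below cover only the values that appear in this record.

```python
DS_RECORD = (". IN DS 20326 8 2 "
             "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d")

ALGORITHMS = {8: "RSA/SHA-256"}     # only the algorithm used above
DIGEST_TYPES = {2: "SHA-256"}       # only the digest type used above

owner, _cls, _rtype, key_tag, alg, digest_type, digest = DS_RECORD.split()
print(f"owner={owner!r} key_tag={key_tag} "
      f"algorithm={ALGORITHMS[int(alg)]} digest_type={DIGEST_TYPES[int(digest_type)]}")
print(f"digest length: {len(digest) * 4} bits")   # 64 hex chars -> 256 bits
```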
Jan 24 00:33:06.311577 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 24 00:33:06.311984 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 24 00:33:06.312470 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 24 00:33:06.313018 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 24 00:33:06.313437 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 24 00:33:06.313744 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 24 00:33:06.313784 systemd[1]: Reached target paths.target - Path Units. Jan 24 00:33:06.314069 systemd[1]: Reached target timers.target - Timer Units. Jan 24 00:33:06.315536 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 24 00:33:06.317502 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 24 00:33:06.325075 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 24 00:33:06.326456 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 24 00:33:06.327089 systemd[1]: Reached target sockets.target - Socket Units. Jan 24 00:33:06.327504 systemd[1]: Reached target basic.target - Basic System. Jan 24 00:33:06.327940 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 24 00:33:06.327980 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 24 00:33:06.329192 systemd[1]: Starting containerd.service - containerd container runtime... Jan 24 00:33:06.333967 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 24 00:33:06.340874 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 24 00:33:06.348658 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 24 00:33:06.352952 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 24 00:33:06.354838 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 24 00:33:06.362193 jq[1962]: false Jan 24 00:33:06.371088 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 24 00:33:06.381101 systemd[1]: Started ntpd.service - Network Time Service. Jan 24 00:33:06.385909 systemd[1]: Starting setup-oem.service - Setup OEM... Jan 24 00:33:06.401679 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 24 00:33:06.404928 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 24 00:33:06.416106 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 24 00:33:06.419516 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 24 00:33:06.421007 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 24 00:33:06.423917 systemd[1]: Starting update-engine.service - Update Engine... Jan 24 00:33:06.436174 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... 
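Among the units started above is ntpd.service, whose socket bindings on port 123 appear a little further down. For illustration only, a minimal SNTP client in Python that asks a single server for the time; the server name is an assumption for the example, not something this ntpd instance is configured with.

```python
import socket
import struct
import time

NTP_EPOCH_OFFSET = 2_208_988_800   # seconds between 1900-01-01 and 1970-01-01

def sntp_time(server="pool.ntp.org", port=123, timeout=5):
    # 48-byte request: LI=0, VN=3, Mode=3 (client) packed into the first byte.
    packet = b"\x1b" + 47 * b"\0"
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        sock.sendto(packet, (server, port))
        response, _ = sock.recvfrom(48)
    # Transmit timestamp (integer seconds) sits at bytes 40..43, big-endian.
    ntp_seconds = struct.unpack("!I", response[40:44])[0]
    return ntp_seconds - NTP_EPOCH_OFFSET

print(time.ctime(sntp_time()))
```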
Jan 24 00:33:06.447259 jq[1975]: true Jan 24 00:33:06.448987 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 24 00:33:06.449436 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 24 00:33:06.453485 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 24 00:33:06.454151 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 24 00:33:06.480920 jq[1981]: true Jan 24 00:33:06.533146 update_engine[1973]: I20260124 00:33:06.532115 1973 main.cc:92] Flatcar Update Engine starting Jan 24 00:33:06.533051 systemd[1]: motdgen.service: Deactivated successfully. Jan 24 00:33:06.534966 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 24 00:33:06.539543 extend-filesystems[1963]: Found loop4 Jan 24 00:33:06.543886 extend-filesystems[1963]: Found loop5 Jan 24 00:33:06.543886 extend-filesystems[1963]: Found loop6 Jan 24 00:33:06.543886 extend-filesystems[1963]: Found loop7 Jan 24 00:33:06.543886 extend-filesystems[1963]: Found nvme0n1 Jan 24 00:33:06.543886 extend-filesystems[1963]: Found nvme0n1p1 Jan 24 00:33:06.543886 extend-filesystems[1963]: Found nvme0n1p2 Jan 24 00:33:06.543886 extend-filesystems[1963]: Found nvme0n1p3 Jan 24 00:33:06.543886 extend-filesystems[1963]: Found usr Jan 24 00:33:06.543886 extend-filesystems[1963]: Found nvme0n1p4 Jan 24 00:33:06.543886 extend-filesystems[1963]: Found nvme0n1p6 Jan 24 00:33:06.543886 extend-filesystems[1963]: Found nvme0n1p7 Jan 24 00:33:06.543886 extend-filesystems[1963]: Found nvme0n1p9 Jan 24 00:33:06.543886 extend-filesystems[1963]: Checking size of /dev/nvme0n1p9 Jan 24 00:33:06.543402 systemd[1]: Finished setup-oem.service - Setup OEM. Jan 24 00:33:06.561429 dbus-daemon[1961]: [system] SELinux support is enabled Jan 24 00:33:06.561732 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 24 00:33:06.571101 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 24 00:33:06.571152 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 24 00:33:06.571690 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 24 00:33:06.571727 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 24 00:33:06.573008 dbus-daemon[1961]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1621 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 24 00:33:06.577356 dbus-daemon[1961]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 24 00:33:06.582139 update_engine[1973]: I20260124 00:33:06.582072 1973 update_check_scheduler.cc:74] Next update check in 3m19s Jan 24 00:33:06.584534 (ntainerd)[1999]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 24 00:33:06.586971 ntpd[1965]: ntpd 4.2.8p17@1.4004-o Fri Jan 23 22:00:38 UTC 2026 (1): Starting Jan 24 00:33:06.589605 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... 
Jan 24 00:33:06.591110 ntpd[1965]: 24 Jan 00:33:06 ntpd[1965]: ntpd 4.2.8p17@1.4004-o Fri Jan 23 22:00:38 UTC 2026 (1): Starting Jan 24 00:33:06.591110 ntpd[1965]: 24 Jan 00:33:06 ntpd[1965]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 24 00:33:06.591110 ntpd[1965]: 24 Jan 00:33:06 ntpd[1965]: ---------------------------------------------------- Jan 24 00:33:06.591110 ntpd[1965]: 24 Jan 00:33:06 ntpd[1965]: ntp-4 is maintained by Network Time Foundation, Jan 24 00:33:06.591110 ntpd[1965]: 24 Jan 00:33:06 ntpd[1965]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 24 00:33:06.591110 ntpd[1965]: 24 Jan 00:33:06 ntpd[1965]: corporation. Support and training for ntp-4 are Jan 24 00:33:06.591110 ntpd[1965]: 24 Jan 00:33:06 ntpd[1965]: available at https://www.nwtime.org/support Jan 24 00:33:06.591110 ntpd[1965]: 24 Jan 00:33:06 ntpd[1965]: ---------------------------------------------------- Jan 24 00:33:06.587002 ntpd[1965]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 24 00:33:06.590464 systemd[1]: Started update-engine.service - Update Engine. Jan 24 00:33:06.587013 ntpd[1965]: ---------------------------------------------------- Jan 24 00:33:06.587023 ntpd[1965]: ntp-4 is maintained by Network Time Foundation, Jan 24 00:33:06.587034 ntpd[1965]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 24 00:33:06.587044 ntpd[1965]: corporation. Support and training for ntp-4 are Jan 24 00:33:06.587055 ntpd[1965]: available at https://www.nwtime.org/support Jan 24 00:33:06.587064 ntpd[1965]: ---------------------------------------------------- Jan 24 00:33:06.597000 ntpd[1965]: proto: precision = 0.100 usec (-23) Jan 24 00:33:06.602976 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 24 00:33:06.604974 ntpd[1965]: 24 Jan 00:33:06 ntpd[1965]: proto: precision = 0.100 usec (-23) Jan 24 00:33:06.604974 ntpd[1965]: 24 Jan 00:33:06 ntpd[1965]: basedate set to 2026-01-11 Jan 24 00:33:06.604974 ntpd[1965]: 24 Jan 00:33:06 ntpd[1965]: gps base set to 2026-01-11 (week 2401) Jan 24 00:33:06.597981 ntpd[1965]: basedate set to 2026-01-11 Jan 24 00:33:06.598002 ntpd[1965]: gps base set to 2026-01-11 (week 2401) Jan 24 00:33:06.608622 ntpd[1965]: Listen and drop on 0 v6wildcard [::]:123 Jan 24 00:33:06.610790 ntpd[1965]: 24 Jan 00:33:06 ntpd[1965]: Listen and drop on 0 v6wildcard [::]:123 Jan 24 00:33:06.610790 ntpd[1965]: 24 Jan 00:33:06 ntpd[1965]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 24 00:33:06.610790 ntpd[1965]: 24 Jan 00:33:06 ntpd[1965]: Listen normally on 2 lo 127.0.0.1:123 Jan 24 00:33:06.610790 ntpd[1965]: 24 Jan 00:33:06 ntpd[1965]: Listen normally on 3 eth0 172.31.18.176:123 Jan 24 00:33:06.610790 ntpd[1965]: 24 Jan 00:33:06 ntpd[1965]: Listen normally on 4 lo [::1]:123 Jan 24 00:33:06.610790 ntpd[1965]: 24 Jan 00:33:06 ntpd[1965]: bind(21) AF_INET6 fe80::4cc:dff:fe56:3ddb%2#123 flags 0x11 failed: Cannot assign requested address Jan 24 00:33:06.610790 ntpd[1965]: 24 Jan 00:33:06 ntpd[1965]: unable to create socket on eth0 (5) for fe80::4cc:dff:fe56:3ddb%2#123 Jan 24 00:33:06.610790 ntpd[1965]: 24 Jan 00:33:06 ntpd[1965]: failed to init interface for address fe80::4cc:dff:fe56:3ddb%2 Jan 24 00:33:06.610790 ntpd[1965]: 24 Jan 00:33:06 ntpd[1965]: Listening on routing socket on fd #21 for interface updates Jan 24 00:33:06.608691 ntpd[1965]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 24 00:33:06.608937 ntpd[1965]: Listen normally on 2 lo 127.0.0.1:123 Jan 24 00:33:06.608974 ntpd[1965]: Listen normally on 3 eth0 172.31.18.176:123 Jan 24 
00:33:06.609016 ntpd[1965]: Listen normally on 4 lo [::1]:123 Jan 24 00:33:06.609060 ntpd[1965]: bind(21) AF_INET6 fe80::4cc:dff:fe56:3ddb%2#123 flags 0x11 failed: Cannot assign requested address Jan 24 00:33:06.609084 ntpd[1965]: unable to create socket on eth0 (5) for fe80::4cc:dff:fe56:3ddb%2#123 Jan 24 00:33:06.609099 ntpd[1965]: failed to init interface for address fe80::4cc:dff:fe56:3ddb%2 Jan 24 00:33:06.609132 ntpd[1965]: Listening on routing socket on fd #21 for interface updates Jan 24 00:33:06.613629 ntpd[1965]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 24 00:33:06.618846 ntpd[1965]: 24 Jan 00:33:06 ntpd[1965]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 24 00:33:06.618846 ntpd[1965]: 24 Jan 00:33:06 ntpd[1965]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 24 00:33:06.613669 ntpd[1965]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 24 00:33:06.638894 extend-filesystems[1963]: Resized partition /dev/nvme0n1p9 Jan 24 00:33:06.651788 extend-filesystems[2024]: resize2fs 1.47.1 (20-May-2024) Jan 24 00:33:06.664131 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 3587067 blocks Jan 24 00:33:06.662410 systemd-logind[1972]: Watching system buttons on /dev/input/event1 (Power Button) Jan 24 00:33:06.662436 systemd-logind[1972]: Watching system buttons on /dev/input/event2 (Sleep Button) Jan 24 00:33:06.662460 systemd-logind[1972]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 24 00:33:06.674339 systemd-logind[1972]: New seat seat0. Jan 24 00:33:06.675907 systemd[1]: Started systemd-logind.service - User Login Management. Jan 24 00:33:06.687540 coreos-metadata[1960]: Jan 24 00:33:06.686 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 24 00:33:06.691575 coreos-metadata[1960]: Jan 24 00:33:06.690 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jan 24 00:33:06.692122 coreos-metadata[1960]: Jan 24 00:33:06.691 INFO Fetch successful Jan 24 00:33:06.692122 coreos-metadata[1960]: Jan 24 00:33:06.692 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jan 24 00:33:06.749689 coreos-metadata[1960]: Jan 24 00:33:06.692 INFO Fetch successful Jan 24 00:33:06.749689 coreos-metadata[1960]: Jan 24 00:33:06.692 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jan 24 00:33:06.749689 coreos-metadata[1960]: Jan 24 00:33:06.693 INFO Fetch successful Jan 24 00:33:06.749689 coreos-metadata[1960]: Jan 24 00:33:06.693 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jan 24 00:33:06.749689 coreos-metadata[1960]: Jan 24 00:33:06.696 INFO Fetch successful Jan 24 00:33:06.749689 coreos-metadata[1960]: Jan 24 00:33:06.696 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jan 24 00:33:06.749689 coreos-metadata[1960]: Jan 24 00:33:06.697 INFO Fetch failed with 404: resource not found Jan 24 00:33:06.749689 coreos-metadata[1960]: Jan 24 00:33:06.697 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jan 24 00:33:06.749689 coreos-metadata[1960]: Jan 24 00:33:06.698 INFO Fetch successful Jan 24 00:33:06.749689 coreos-metadata[1960]: Jan 24 00:33:06.698 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jan 24 00:33:06.749689 coreos-metadata[1960]: Jan 24 00:33:06.698 INFO Fetch successful Jan 24 00:33:06.749689 coreos-metadata[1960]: Jan 24 
00:33:06.698 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jan 24 00:33:06.749689 coreos-metadata[1960]: Jan 24 00:33:06.699 INFO Fetch successful Jan 24 00:33:06.749689 coreos-metadata[1960]: Jan 24 00:33:06.699 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jan 24 00:33:06.749689 coreos-metadata[1960]: Jan 24 00:33:06.700 INFO Fetch successful Jan 24 00:33:06.749689 coreos-metadata[1960]: Jan 24 00:33:06.700 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jan 24 00:33:06.749689 coreos-metadata[1960]: Jan 24 00:33:06.701 INFO Fetch successful Jan 24 00:33:07.267854 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 33 scanned by (udev-worker) (1623) Jan 24 00:33:06.858874 dbus-daemon[1961]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 24 00:33:06.795799 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 24 00:33:06.860437 dbus-daemon[1961]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=2010 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 24 00:33:06.802411 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 24 00:33:06.963663 polkitd[2042]: Started polkitd version 121 Jan 24 00:33:06.859067 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 24 00:33:07.270706 amazon-ssm-agent[2050]: Initializing new seelog logger Jan 24 00:33:07.270706 amazon-ssm-agent[2050]: New Seelog Logger Creation Complete Jan 24 00:33:07.270706 amazon-ssm-agent[2050]: 2026/01/24 00:33:07 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 24 00:33:07.270706 amazon-ssm-agent[2050]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 24 00:33:07.270706 amazon-ssm-agent[2050]: 2026/01/24 00:33:07 processing appconfig overrides Jan 24 00:33:07.019941 polkitd[2042]: Loading rules from directory /etc/polkit-1/rules.d Jan 24 00:33:06.864904 systemd-networkd[1621]: eth0: Gained IPv6LL Jan 24 00:33:07.311534 amazon-ssm-agent[2050]: 2026/01/24 00:33:07 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 24 00:33:07.311534 amazon-ssm-agent[2050]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 24 00:33:07.311534 amazon-ssm-agent[2050]: 2026/01/24 00:33:07 processing appconfig overrides Jan 24 00:33:07.311534 amazon-ssm-agent[2050]: 2026-01-24 00:33:07 INFO Proxy environment variables: Jan 24 00:33:07.311534 amazon-ssm-agent[2050]: 2026/01/24 00:33:07 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 24 00:33:07.311534 amazon-ssm-agent[2050]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 24 00:33:07.311534 amazon-ssm-agent[2050]: 2026/01/24 00:33:07 processing appconfig overrides Jan 24 00:33:07.311534 amazon-ssm-agent[2050]: 2026/01/24 00:33:07 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 24 00:33:07.311534 amazon-ssm-agent[2050]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 24 00:33:07.311534 amazon-ssm-agent[2050]: 2026/01/24 00:33:07 processing appconfig overrides Jan 24 00:33:07.020029 polkitd[2042]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 24 00:33:06.874108 systemd[1]: Starting polkit.service - Authorization Manager... 
Jan 24 00:33:07.023804 polkitd[2042]: Finished loading, compiling and executing 2 rules Jan 24 00:33:06.876033 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 24 00:33:07.028007 dbus-daemon[1961]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 24 00:33:06.881237 systemd[1]: Reached target network-online.target - Network is Online. Jan 24 00:33:07.028895 polkitd[2042]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 24 00:33:06.889072 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Jan 24 00:33:06.908439 locksmithd[2016]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 24 00:33:06.931433 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:33:06.941107 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 24 00:33:07.029870 systemd[1]: Started polkit.service - Authorization Manager. Jan 24 00:33:07.130978 systemd-resolved[1919]: System hostname changed to 'ip-172-31-18-176'. Jan 24 00:33:07.131101 systemd-hostnamed[2010]: Hostname set to (transient) Jan 24 00:33:07.308176 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 24 00:33:07.369399 bash[2026]: Updated "/home/core/.ssh/authorized_keys" Jan 24 00:33:07.371454 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 24 00:33:07.379376 amazon-ssm-agent[2050]: 2026-01-24 00:33:07 INFO no_proxy: Jan 24 00:33:07.382702 systemd[1]: Starting sshkeys.service... Jan 24 00:33:07.403780 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 3587067 Jan 24 00:33:07.434907 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 24 00:33:07.445866 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 24 00:33:07.479775 amazon-ssm-agent[2050]: 2026-01-24 00:33:07 INFO https_proxy: Jan 24 00:33:07.485774 extend-filesystems[2024]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jan 24 00:33:07.485774 extend-filesystems[2024]: old_desc_blocks = 1, new_desc_blocks = 2 Jan 24 00:33:07.485774 extend-filesystems[2024]: The filesystem on /dev/nvme0n1p9 is now 3587067 (4k) blocks long. Jan 24 00:33:07.493344 extend-filesystems[1963]: Resized filesystem in /dev/nvme0n1p9 Jan 24 00:33:07.487317 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 24 00:33:07.488186 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
Jan 24 00:33:07.576779 amazon-ssm-agent[2050]: 2026-01-24 00:33:07 INFO http_proxy: Jan 24 00:33:07.648940 coreos-metadata[2165]: Jan 24 00:33:07.648 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 24 00:33:07.651800 coreos-metadata[2165]: Jan 24 00:33:07.651 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jan 24 00:33:07.652600 coreos-metadata[2165]: Jan 24 00:33:07.652 INFO Fetch successful Jan 24 00:33:07.652600 coreos-metadata[2165]: Jan 24 00:33:07.652 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 24 00:33:07.653432 coreos-metadata[2165]: Jan 24 00:33:07.653 INFO Fetch successful Jan 24 00:33:07.656965 unknown[2165]: wrote ssh authorized keys file for user: core Jan 24 00:33:07.662808 containerd[1999]: time="2026-01-24T00:33:07.662059717Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 24 00:33:07.676067 amazon-ssm-agent[2050]: 2026-01-24 00:33:07 INFO Checking if agent identity type OnPrem can be assumed Jan 24 00:33:07.708238 update-ssh-keys[2174]: Updated "/home/core/.ssh/authorized_keys" Jan 24 00:33:07.710249 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 24 00:33:07.715239 systemd[1]: Finished sshkeys.service. Jan 24 00:33:07.773555 amazon-ssm-agent[2050]: 2026-01-24 00:33:07 INFO Checking if agent identity type EC2 can be assumed Jan 24 00:33:07.780851 containerd[1999]: time="2026-01-24T00:33:07.779662727Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 24 00:33:07.784969 containerd[1999]: time="2026-01-24T00:33:07.784683551Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.119-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 24 00:33:07.784969 containerd[1999]: time="2026-01-24T00:33:07.784741000Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 24 00:33:07.784969 containerd[1999]: time="2026-01-24T00:33:07.784784672Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 24 00:33:07.785152 containerd[1999]: time="2026-01-24T00:33:07.784980997Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 24 00:33:07.785152 containerd[1999]: time="2026-01-24T00:33:07.785005854Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 24 00:33:07.785152 containerd[1999]: time="2026-01-24T00:33:07.785072493Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 24 00:33:07.785152 containerd[1999]: time="2026-01-24T00:33:07.785089758Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 24 00:33:07.787942 containerd[1999]: time="2026-01-24T00:33:07.786960452Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 24 00:33:07.787942 containerd[1999]: time="2026-01-24T00:33:07.786995259Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 24 00:33:07.787942 containerd[1999]: time="2026-01-24T00:33:07.787019962Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 24 00:33:07.787942 containerd[1999]: time="2026-01-24T00:33:07.787035902Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 24 00:33:07.787942 containerd[1999]: time="2026-01-24T00:33:07.787172745Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 24 00:33:07.787942 containerd[1999]: time="2026-01-24T00:33:07.787419751Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 24 00:33:07.789352 containerd[1999]: time="2026-01-24T00:33:07.789316635Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 24 00:33:07.789420 containerd[1999]: time="2026-01-24T00:33:07.789353201Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 24 00:33:07.790670 containerd[1999]: time="2026-01-24T00:33:07.789487012Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 24 00:33:07.790670 containerd[1999]: time="2026-01-24T00:33:07.789554308Z" level=info msg="metadata content store policy set" policy=shared Jan 24 00:33:07.795166 containerd[1999]: time="2026-01-24T00:33:07.795126911Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 24 00:33:07.796806 containerd[1999]: time="2026-01-24T00:33:07.796781305Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 24 00:33:07.796879 containerd[1999]: time="2026-01-24T00:33:07.796862738Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 24 00:33:07.796920 containerd[1999]: time="2026-01-24T00:33:07.796891591Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 24 00:33:07.796954 containerd[1999]: time="2026-01-24T00:33:07.796917167Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 24 00:33:07.797120 containerd[1999]: time="2026-01-24T00:33:07.797098327Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 24 00:33:07.797592 containerd[1999]: time="2026-01-24T00:33:07.797569341Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 24 00:33:07.797741 containerd[1999]: time="2026-01-24T00:33:07.797720714Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." 
type=io.containerd.runtime.v2 Jan 24 00:33:07.797806 containerd[1999]: time="2026-01-24T00:33:07.797764821Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 24 00:33:07.797806 containerd[1999]: time="2026-01-24T00:33:07.797788357Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 24 00:33:07.797893 containerd[1999]: time="2026-01-24T00:33:07.797811313Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 24 00:33:07.797893 containerd[1999]: time="2026-01-24T00:33:07.797831720Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 24 00:33:07.797893 containerd[1999]: time="2026-01-24T00:33:07.797851259Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 24 00:33:07.797893 containerd[1999]: time="2026-01-24T00:33:07.797872372Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 24 00:33:07.798029 containerd[1999]: time="2026-01-24T00:33:07.797894865Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 24 00:33:07.798029 containerd[1999]: time="2026-01-24T00:33:07.797915227Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 24 00:33:07.798029 containerd[1999]: time="2026-01-24T00:33:07.797946356Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 24 00:33:07.798029 containerd[1999]: time="2026-01-24T00:33:07.797965267Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 24 00:33:07.798029 containerd[1999]: time="2026-01-24T00:33:07.797994488Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 24 00:33:07.798029 containerd[1999]: time="2026-01-24T00:33:07.798015055Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 24 00:33:07.798243 containerd[1999]: time="2026-01-24T00:33:07.798034106Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 24 00:33:07.798243 containerd[1999]: time="2026-01-24T00:33:07.798054807Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 24 00:33:07.798243 containerd[1999]: time="2026-01-24T00:33:07.798073309Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 24 00:33:07.798243 containerd[1999]: time="2026-01-24T00:33:07.798093832Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 24 00:33:07.798243 containerd[1999]: time="2026-01-24T00:33:07.798112152Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 24 00:33:07.798243 containerd[1999]: time="2026-01-24T00:33:07.798131250Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 24 00:33:07.798243 containerd[1999]: time="2026-01-24T00:33:07.798151128Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." 
type=io.containerd.grpc.v1 Jan 24 00:33:07.798243 containerd[1999]: time="2026-01-24T00:33:07.798173099Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 24 00:33:07.798243 containerd[1999]: time="2026-01-24T00:33:07.798193243Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 24 00:33:07.798243 containerd[1999]: time="2026-01-24T00:33:07.798212396Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 24 00:33:07.798243 containerd[1999]: time="2026-01-24T00:33:07.798231081Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 24 00:33:07.798622 containerd[1999]: time="2026-01-24T00:33:07.798254154Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 24 00:33:07.798622 containerd[1999]: time="2026-01-24T00:33:07.798297327Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 24 00:33:07.798622 containerd[1999]: time="2026-01-24T00:33:07.798316481Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 24 00:33:07.798622 containerd[1999]: time="2026-01-24T00:33:07.798335917Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 24 00:33:07.800788 containerd[1999]: time="2026-01-24T00:33:07.799273371Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 24 00:33:07.800788 containerd[1999]: time="2026-01-24T00:33:07.799314338Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 24 00:33:07.800788 containerd[1999]: time="2026-01-24T00:33:07.799333146Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 24 00:33:07.800788 containerd[1999]: time="2026-01-24T00:33:07.799351383Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 24 00:33:07.800788 containerd[1999]: time="2026-01-24T00:33:07.799366943Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 24 00:33:07.800788 containerd[1999]: time="2026-01-24T00:33:07.799386433Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 24 00:33:07.800788 containerd[1999]: time="2026-01-24T00:33:07.799407330Z" level=info msg="NRI interface is disabled by configuration." Jan 24 00:33:07.800788 containerd[1999]: time="2026-01-24T00:33:07.799422573Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 24 00:33:07.801136 containerd[1999]: time="2026-01-24T00:33:07.799862019Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 24 00:33:07.801136 containerd[1999]: time="2026-01-24T00:33:07.799949977Z" level=info msg="Connect containerd service" Jan 24 00:33:07.801136 containerd[1999]: time="2026-01-24T00:33:07.800002096Z" level=info msg="using legacy CRI server" Jan 24 00:33:07.801136 containerd[1999]: time="2026-01-24T00:33:07.800011643Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 24 00:33:07.801136 containerd[1999]: time="2026-01-24T00:33:07.800132360Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 24 00:33:07.806888 containerd[1999]: time="2026-01-24T00:33:07.806791189Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 24 00:33:07.809899 
containerd[1999]: time="2026-01-24T00:33:07.809207998Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 24 00:33:07.810097 containerd[1999]: time="2026-01-24T00:33:07.810076251Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 24 00:33:07.810559 containerd[1999]: time="2026-01-24T00:33:07.809525048Z" level=info msg="Start subscribing containerd event" Jan 24 00:33:07.813957 containerd[1999]: time="2026-01-24T00:33:07.813926225Z" level=info msg="Start recovering state" Jan 24 00:33:07.817912 containerd[1999]: time="2026-01-24T00:33:07.817879781Z" level=info msg="Start event monitor" Jan 24 00:33:07.819705 containerd[1999]: time="2026-01-24T00:33:07.818020395Z" level=info msg="Start snapshots syncer" Jan 24 00:33:07.819705 containerd[1999]: time="2026-01-24T00:33:07.818038827Z" level=info msg="Start cni network conf syncer for default" Jan 24 00:33:07.819705 containerd[1999]: time="2026-01-24T00:33:07.818051433Z" level=info msg="Start streaming server" Jan 24 00:33:07.819705 containerd[1999]: time="2026-01-24T00:33:07.818976856Z" level=info msg="containerd successfully booted in 0.163607s" Jan 24 00:33:07.818257 systemd[1]: Started containerd.service - containerd container runtime. Jan 24 00:33:07.872444 amazon-ssm-agent[2050]: 2026-01-24 00:33:07 INFO Agent will take identity from EC2 Jan 24 00:33:07.941113 sshd_keygen[2012]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 24 00:33:07.973684 amazon-ssm-agent[2050]: 2026-01-24 00:33:07 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 24 00:33:07.977694 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 24 00:33:07.987074 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 24 00:33:08.009975 systemd[1]: issuegen.service: Deactivated successfully. Jan 24 00:33:08.010210 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 24 00:33:08.021895 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 24 00:33:08.045903 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 24 00:33:08.056232 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 24 00:33:08.066178 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 24 00:33:08.067443 systemd[1]: Reached target getty.target - Login Prompts. Jan 24 00:33:08.072513 amazon-ssm-agent[2050]: 2026-01-24 00:33:07 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 24 00:33:08.142216 amazon-ssm-agent[2050]: 2026-01-24 00:33:07 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 24 00:33:08.142216 amazon-ssm-agent[2050]: 2026-01-24 00:33:07 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Jan 24 00:33:08.142216 amazon-ssm-agent[2050]: 2026-01-24 00:33:07 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Jan 24 00:33:08.142216 amazon-ssm-agent[2050]: 2026-01-24 00:33:07 INFO [amazon-ssm-agent] Starting Core Agent Jan 24 00:33:08.142216 amazon-ssm-agent[2050]: 2026-01-24 00:33:07 INFO [amazon-ssm-agent] registrar detected. Attempting registration Jan 24 00:33:08.142216 amazon-ssm-agent[2050]: 2026-01-24 00:33:07 INFO [Registrar] Starting registrar module Jan 24 00:33:08.142216 amazon-ssm-agent[2050]: 2026-01-24 00:33:07 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Jan 24 00:33:08.142216 amazon-ssm-agent[2050]: 2026-01-24 00:33:08 INFO [EC2Identity] EC2 registration was successful. 
Jan 24 00:33:08.142216 amazon-ssm-agent[2050]: 2026-01-24 00:33:08 INFO [CredentialRefresher] credentialRefresher has started Jan 24 00:33:08.142216 amazon-ssm-agent[2050]: 2026-01-24 00:33:08 INFO [CredentialRefresher] Starting credentials refresher loop Jan 24 00:33:08.142216 amazon-ssm-agent[2050]: 2026-01-24 00:33:08 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jan 24 00:33:08.171840 amazon-ssm-agent[2050]: 2026-01-24 00:33:08 INFO [CredentialRefresher] Next credential rotation will be in 30.3499937204 minutes Jan 24 00:33:09.118010 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 24 00:33:09.124666 systemd[1]: Started sshd@0-172.31.18.176:22-4.153.228.146:45070.service - OpenSSH per-connection server daemon (4.153.228.146:45070). Jan 24 00:33:09.168133 amazon-ssm-agent[2050]: 2026-01-24 00:33:09 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jan 24 00:33:09.268351 amazon-ssm-agent[2050]: 2026-01-24 00:33:09 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2201) started Jan 24 00:33:09.369147 amazon-ssm-agent[2050]: 2026-01-24 00:33:09 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jan 24 00:33:09.587466 ntpd[1965]: Listen normally on 6 eth0 [fe80::4cc:dff:fe56:3ddb%2]:123 Jan 24 00:33:09.587897 ntpd[1965]: 24 Jan 00:33:09 ntpd[1965]: Listen normally on 6 eth0 [fe80::4cc:dff:fe56:3ddb%2]:123 Jan 24 00:33:09.637406 sshd[2198]: Accepted publickey for core from 4.153.228.146 port 45070 ssh2: RSA SHA256:AB5yEUNjI0c4eTJKXs1/JdwdYMHfwCUf7HtUTiqLxAY Jan 24 00:33:09.640805 sshd[2198]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:33:09.652427 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 24 00:33:09.670199 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 24 00:33:09.673812 systemd-logind[1972]: New session 1 of user core. Jan 24 00:33:09.685013 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 24 00:33:09.697980 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 24 00:33:09.722891 (systemd)[2214]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 24 00:33:09.853409 systemd[2214]: Queued start job for default target default.target. Jan 24 00:33:09.860881 systemd[2214]: Created slice app.slice - User Application Slice. Jan 24 00:33:09.860918 systemd[2214]: Reached target paths.target - Paths. Jan 24 00:33:09.860933 systemd[2214]: Reached target timers.target - Timers. Jan 24 00:33:09.862363 systemd[2214]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 24 00:33:09.884884 systemd[2214]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 24 00:33:09.885015 systemd[2214]: Reached target sockets.target - Sockets. Jan 24 00:33:09.885031 systemd[2214]: Reached target basic.target - Basic System. Jan 24 00:33:09.885072 systemd[2214]: Reached target default.target - Main User Target. Jan 24 00:33:09.885103 systemd[2214]: Startup finished in 154ms. Jan 24 00:33:09.887049 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 24 00:33:09.891869 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 24 00:33:09.896105 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 24 00:33:09.900453 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 24 00:33:09.901426 systemd[1]: Startup finished in 584ms (kernel) + 7.188s (initrd) + 8.058s (userspace) = 15.831s. Jan 24 00:33:09.909082 (kubelet)[2226]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 24 00:33:10.269141 systemd[1]: Started sshd@1-172.31.18.176:22-4.153.228.146:45084.service - OpenSSH per-connection server daemon (4.153.228.146:45084). Jan 24 00:33:10.748382 sshd[2239]: Accepted publickey for core from 4.153.228.146 port 45084 ssh2: RSA SHA256:AB5yEUNjI0c4eTJKXs1/JdwdYMHfwCUf7HtUTiqLxAY Jan 24 00:33:10.749060 sshd[2239]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:33:10.753811 systemd-logind[1972]: New session 2 of user core. Jan 24 00:33:10.758971 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 24 00:33:11.027035 kubelet[2226]: E0124 00:33:11.026907 2226 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 24 00:33:11.030221 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 24 00:33:11.030423 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 24 00:33:11.030997 systemd[1]: kubelet.service: Consumed 1.081s CPU time. Jan 24 00:33:11.099197 sshd[2239]: pam_unix(sshd:session): session closed for user core Jan 24 00:33:11.102666 systemd[1]: sshd@1-172.31.18.176:22-4.153.228.146:45084.service: Deactivated successfully. Jan 24 00:33:11.104607 systemd[1]: session-2.scope: Deactivated successfully. Jan 24 00:33:11.106335 systemd-logind[1972]: Session 2 logged out. Waiting for processes to exit. Jan 24 00:33:11.107612 systemd-logind[1972]: Removed session 2. Jan 24 00:33:11.183645 systemd[1]: Started sshd@2-172.31.18.176:22-4.153.228.146:45090.service - OpenSSH per-connection server daemon (4.153.228.146:45090). Jan 24 00:33:11.664140 sshd[2249]: Accepted publickey for core from 4.153.228.146 port 45090 ssh2: RSA SHA256:AB5yEUNjI0c4eTJKXs1/JdwdYMHfwCUf7HtUTiqLxAY Jan 24 00:33:11.665727 sshd[2249]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:33:11.670411 systemd-logind[1972]: New session 3 of user core. Jan 24 00:33:11.676015 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 24 00:33:12.007319 sshd[2249]: pam_unix(sshd:session): session closed for user core Jan 24 00:33:12.010108 systemd[1]: sshd@2-172.31.18.176:22-4.153.228.146:45090.service: Deactivated successfully. Jan 24 00:33:12.011670 systemd[1]: session-3.scope: Deactivated successfully. Jan 24 00:33:12.012824 systemd-logind[1972]: Session 3 logged out. Waiting for processes to exit. Jan 24 00:33:12.013979 systemd-logind[1972]: Removed session 3. Jan 24 00:33:12.094450 systemd[1]: Started sshd@3-172.31.18.176:22-4.153.228.146:45092.service - OpenSSH per-connection server daemon (4.153.228.146:45092). 
Jan 24 00:33:12.581255 sshd[2256]: Accepted publickey for core from 4.153.228.146 port 45092 ssh2: RSA SHA256:AB5yEUNjI0c4eTJKXs1/JdwdYMHfwCUf7HtUTiqLxAY Jan 24 00:33:12.582800 sshd[2256]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:33:12.588532 systemd-logind[1972]: New session 4 of user core. Jan 24 00:33:12.592958 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 24 00:33:12.934359 sshd[2256]: pam_unix(sshd:session): session closed for user core Jan 24 00:33:12.937303 systemd[1]: sshd@3-172.31.18.176:22-4.153.228.146:45092.service: Deactivated successfully. Jan 24 00:33:12.938877 systemd[1]: session-4.scope: Deactivated successfully. Jan 24 00:33:12.940135 systemd-logind[1972]: Session 4 logged out. Waiting for processes to exit. Jan 24 00:33:12.941056 systemd-logind[1972]: Removed session 4. Jan 24 00:33:13.019673 systemd[1]: Started sshd@4-172.31.18.176:22-4.153.228.146:59642.service - OpenSSH per-connection server daemon (4.153.228.146:59642). Jan 24 00:33:13.504522 sshd[2263]: Accepted publickey for core from 4.153.228.146 port 59642 ssh2: RSA SHA256:AB5yEUNjI0c4eTJKXs1/JdwdYMHfwCUf7HtUTiqLxAY Jan 24 00:33:13.506119 sshd[2263]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:33:13.510425 systemd-logind[1972]: New session 5 of user core. Jan 24 00:33:13.521011 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 24 00:33:15.150423 systemd-resolved[1919]: Clock change detected. Flushing caches. Jan 24 00:33:15.377325 sudo[2266]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 24 00:33:15.377628 sudo[2266]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 24 00:33:15.388755 sudo[2266]: pam_unix(sudo:session): session closed for user root Jan 24 00:33:15.465799 sshd[2263]: pam_unix(sshd:session): session closed for user core Jan 24 00:33:15.469479 systemd[1]: sshd@4-172.31.18.176:22-4.153.228.146:59642.service: Deactivated successfully. Jan 24 00:33:15.471051 systemd[1]: session-5.scope: Deactivated successfully. Jan 24 00:33:15.472006 systemd-logind[1972]: Session 5 logged out. Waiting for processes to exit. Jan 24 00:33:15.473045 systemd-logind[1972]: Removed session 5. Jan 24 00:33:15.563256 systemd[1]: Started sshd@5-172.31.18.176:22-4.153.228.146:59654.service - OpenSSH per-connection server daemon (4.153.228.146:59654). Jan 24 00:33:16.088855 sshd[2271]: Accepted publickey for core from 4.153.228.146 port 59654 ssh2: RSA SHA256:AB5yEUNjI0c4eTJKXs1/JdwdYMHfwCUf7HtUTiqLxAY Jan 24 00:33:16.090353 sshd[2271]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:33:16.095642 systemd-logind[1972]: New session 6 of user core. Jan 24 00:33:16.101447 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 24 00:33:16.381650 sudo[2275]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 24 00:33:16.381943 sudo[2275]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 24 00:33:16.385808 sudo[2275]: pam_unix(sudo:session): session closed for user root Jan 24 00:33:16.391512 sudo[2274]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 24 00:33:16.391812 sudo[2274]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 24 00:33:16.405496 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... 
Jan 24 00:33:16.408430 auditctl[2278]: No rules Jan 24 00:33:16.408785 systemd[1]: audit-rules.service: Deactivated successfully. Jan 24 00:33:16.408980 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 24 00:33:16.414562 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 24 00:33:16.443408 augenrules[2296]: No rules Jan 24 00:33:16.444910 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 24 00:33:16.446801 sudo[2274]: pam_unix(sudo:session): session closed for user root Jan 24 00:33:16.529670 sshd[2271]: pam_unix(sshd:session): session closed for user core Jan 24 00:33:16.532614 systemd[1]: sshd@5-172.31.18.176:22-4.153.228.146:59654.service: Deactivated successfully. Jan 24 00:33:16.534278 systemd[1]: session-6.scope: Deactivated successfully. Jan 24 00:33:16.535798 systemd-logind[1972]: Session 6 logged out. Waiting for processes to exit. Jan 24 00:33:16.536901 systemd-logind[1972]: Removed session 6. Jan 24 00:33:16.610092 systemd[1]: Started sshd@6-172.31.18.176:22-4.153.228.146:59662.service - OpenSSH per-connection server daemon (4.153.228.146:59662). Jan 24 00:33:17.100185 sshd[2304]: Accepted publickey for core from 4.153.228.146 port 59662 ssh2: RSA SHA256:AB5yEUNjI0c4eTJKXs1/JdwdYMHfwCUf7HtUTiqLxAY Jan 24 00:33:17.101681 sshd[2304]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:33:17.107467 systemd-logind[1972]: New session 7 of user core. Jan 24 00:33:17.113481 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 24 00:33:17.377140 sudo[2307]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 24 00:33:17.377459 sudo[2307]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 24 00:33:18.437701 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:33:18.437952 systemd[1]: kubelet.service: Consumed 1.081s CPU time. Jan 24 00:33:18.451269 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:33:18.493832 systemd[1]: Reloading requested from client PID 2342 ('systemctl') (unit session-7.scope)... Jan 24 00:33:18.493853 systemd[1]: Reloading... Jan 24 00:33:18.641251 zram_generator::config[2382]: No configuration found. Jan 24 00:33:18.777371 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 24 00:33:18.863640 systemd[1]: Reloading finished in 369 ms. Jan 24 00:33:18.907185 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 24 00:33:18.907265 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 24 00:33:18.907663 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:33:18.912576 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:33:19.185398 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:33:19.194724 (kubelet)[2445]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 24 00:33:19.238648 kubelet[2445]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 24 00:33:19.238648 kubelet[2445]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 24 00:33:19.238648 kubelet[2445]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 24 00:33:19.239003 kubelet[2445]: I0124 00:33:19.238695 2445 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 24 00:33:19.598257 kubelet[2445]: I0124 00:33:19.597136 2445 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 24 00:33:19.598257 kubelet[2445]: I0124 00:33:19.597180 2445 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 24 00:33:19.598257 kubelet[2445]: I0124 00:33:19.597606 2445 server.go:956] "Client rotation is on, will bootstrap in background" Jan 24 00:33:19.653737 kubelet[2445]: I0124 00:33:19.653708 2445 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 24 00:33:19.666002 kubelet[2445]: E0124 00:33:19.665932 2445 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 24 00:33:19.666002 kubelet[2445]: I0124 00:33:19.665989 2445 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 24 00:33:19.669535 kubelet[2445]: I0124 00:33:19.669298 2445 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 24 00:33:19.669535 kubelet[2445]: I0124 00:33:19.669533 2445 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 24 00:33:19.669779 kubelet[2445]: I0124 00:33:19.669559 2445 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172.31.18.176","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 24 00:33:19.669885 kubelet[2445]: I0124 00:33:19.669783 2445 topology_manager.go:138] "Creating topology manager with none policy" Jan 24 00:33:19.669885 kubelet[2445]: I0124 00:33:19.669794 2445 container_manager_linux.go:303] "Creating device plugin manager" Jan 24 00:33:19.669931 kubelet[2445]: I0124 00:33:19.669922 2445 state_mem.go:36] "Initialized new in-memory state store" Jan 24 00:33:19.673514 kubelet[2445]: I0124 00:33:19.673458 2445 kubelet.go:480] "Attempting to sync node with API server" Jan 24 00:33:19.673514 kubelet[2445]: I0124 00:33:19.673494 2445 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 24 00:33:19.674356 kubelet[2445]: I0124 00:33:19.674329 2445 kubelet.go:386] "Adding apiserver pod source" Jan 24 00:33:19.676684 kubelet[2445]: I0124 00:33:19.676645 2445 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 24 00:33:19.678197 kubelet[2445]: E0124 00:33:19.677928 2445 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:33:19.678197 kubelet[2445]: E0124 00:33:19.677969 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:33:19.681659 kubelet[2445]: I0124 00:33:19.681637 2445 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 24 00:33:19.682373 kubelet[2445]: I0124 00:33:19.682340 2445 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 24 00:33:19.683450 kubelet[2445]: W0124 
00:33:19.683413 2445 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 24 00:33:19.688034 kubelet[2445]: I0124 00:33:19.687994 2445 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 24 00:33:19.688126 kubelet[2445]: I0124 00:33:19.688056 2445 server.go:1289] "Started kubelet" Jan 24 00:33:19.688279 kubelet[2445]: I0124 00:33:19.688230 2445 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 24 00:33:19.689492 kubelet[2445]: I0124 00:33:19.689262 2445 server.go:317] "Adding debug handlers to kubelet server" Jan 24 00:33:19.695188 kubelet[2445]: I0124 00:33:19.693795 2445 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 24 00:33:19.695188 kubelet[2445]: I0124 00:33:19.694086 2445 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 24 00:33:19.695188 kubelet[2445]: E0124 00:33:19.694434 2445 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"172.31.18.176\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 24 00:33:19.695188 kubelet[2445]: E0124 00:33:19.694550 2445 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 24 00:33:19.695188 kubelet[2445]: I0124 00:33:19.694824 2445 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 24 00:33:19.695882 kubelet[2445]: I0124 00:33:19.695856 2445 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 24 00:33:19.697288 kubelet[2445]: I0124 00:33:19.697244 2445 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 24 00:33:19.698612 kubelet[2445]: E0124 00:33:19.697578 2445 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172.31.18.176\" not found" Jan 24 00:33:19.698612 kubelet[2445]: I0124 00:33:19.698227 2445 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 24 00:33:19.698612 kubelet[2445]: I0124 00:33:19.698296 2445 reconciler.go:26] "Reconciler: start to sync state" Jan 24 00:33:19.701533 kubelet[2445]: E0124 00:33:19.699835 2445 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.31.18.176.188d837c92c4b38a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.31.18.176,UID:172.31.18.176,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172.31.18.176,},FirstTimestamp:2026-01-24 00:33:19.688020874 +0000 UTC m=+0.487264925,LastTimestamp:2026-01-24 00:33:19.688020874 +0000 UTC m=+0.487264925,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.18.176,}" Jan 24 00:33:19.703864 kubelet[2445]: I0124 00:33:19.703455 2445 
factory.go:223] Registration of the systemd container factory successfully Jan 24 00:33:19.704564 kubelet[2445]: I0124 00:33:19.704262 2445 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 24 00:33:19.713533 kubelet[2445]: I0124 00:33:19.713416 2445 factory.go:223] Registration of the containerd container factory successfully Jan 24 00:33:19.713533 kubelet[2445]: E0124 00:33:19.713428 2445 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 24 00:33:19.723062 kubelet[2445]: E0124 00:33:19.723011 2445 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"172.31.18.176\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Jan 24 00:33:19.723192 kubelet[2445]: E0124 00:33:19.723116 2445 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 24 00:33:19.732727 kubelet[2445]: I0124 00:33:19.732697 2445 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 24 00:33:19.732727 kubelet[2445]: I0124 00:33:19.732718 2445 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 24 00:33:19.732727 kubelet[2445]: I0124 00:33:19.732734 2445 state_mem.go:36] "Initialized new in-memory state store" Jan 24 00:33:19.734735 kubelet[2445]: I0124 00:33:19.734702 2445 policy_none.go:49] "None policy: Start" Jan 24 00:33:19.734735 kubelet[2445]: I0124 00:33:19.734723 2445 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 24 00:33:19.734735 kubelet[2445]: I0124 00:33:19.734736 2445 state_mem.go:35] "Initializing new in-memory state store" Jan 24 00:33:19.742298 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 24 00:33:19.757845 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 24 00:33:19.763914 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 24 00:33:19.767457 kubelet[2445]: I0124 00:33:19.767421 2445 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 24 00:33:19.771196 kubelet[2445]: E0124 00:33:19.771153 2445 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 24 00:33:19.773590 kubelet[2445]: I0124 00:33:19.773570 2445 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 24 00:33:19.775675 kubelet[2445]: I0124 00:33:19.775315 2445 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 24 00:33:19.776183 kubelet[2445]: I0124 00:33:19.776161 2445 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 24 00:33:19.780325 kubelet[2445]: E0124 00:33:19.780297 2445 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 24 00:33:19.780674 kubelet[2445]: E0124 00:33:19.780662 2445 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.31.18.176\" not found" Jan 24 00:33:19.833537 kubelet[2445]: I0124 00:33:19.833503 2445 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 24 00:33:19.833537 kubelet[2445]: I0124 00:33:19.833536 2445 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 24 00:33:19.833661 kubelet[2445]: I0124 00:33:19.833559 2445 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 24 00:33:19.833661 kubelet[2445]: I0124 00:33:19.833566 2445 kubelet.go:2436] "Starting kubelet main sync loop" Jan 24 00:33:19.833707 kubelet[2445]: E0124 00:33:19.833660 2445 kubelet.go:2460] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Jan 24 00:33:19.880376 kubelet[2445]: I0124 00:33:19.880339 2445 kubelet_node_status.go:75] "Attempting to register node" node="172.31.18.176" Jan 24 00:33:19.884921 kubelet[2445]: I0124 00:33:19.884892 2445 kubelet_node_status.go:78] "Successfully registered node" node="172.31.18.176" Jan 24 00:33:19.884921 kubelet[2445]: E0124 00:33:19.884921 2445 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"172.31.18.176\": node \"172.31.18.176\" not found" Jan 24 00:33:19.905146 kubelet[2445]: E0124 00:33:19.905117 2445 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172.31.18.176\" not found" Jan 24 00:33:20.006037 kubelet[2445]: E0124 00:33:20.005996 2445 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172.31.18.176\" not found" Jan 24 00:33:20.106674 kubelet[2445]: E0124 00:33:20.106625 2445 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172.31.18.176\" not found" Jan 24 00:33:20.206897 kubelet[2445]: E0124 00:33:20.206769 2445 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172.31.18.176\" not found" Jan 24 00:33:20.307684 kubelet[2445]: E0124 00:33:20.307494 2445 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172.31.18.176\" not found" Jan 24 00:33:20.408292 kubelet[2445]: E0124 00:33:20.408216 2445 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172.31.18.176\" not found" Jan 24 00:33:20.509111 kubelet[2445]: E0124 00:33:20.508903 2445 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172.31.18.176\" not found" Jan 24 00:33:20.601555 kubelet[2445]: I0124 00:33:20.601511 2445 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 24 00:33:20.601722 kubelet[2445]: I0124 00:33:20.601698 2445 reflector.go:556] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" err="very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received" Jan 24 00:33:20.609964 kubelet[2445]: E0124 00:33:20.609917 2445 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172.31.18.176\" not found" Jan 24 00:33:20.678473 kubelet[2445]: E0124 00:33:20.678424 2445 file_linux.go:61] "Unable to read config path" err="path 
does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:33:20.710156 kubelet[2445]: E0124 00:33:20.710109 2445 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172.31.18.176\" not found" Jan 24 00:33:20.726017 sudo[2307]: pam_unix(sudo:session): session closed for user root Jan 24 00:33:20.804328 sshd[2304]: pam_unix(sshd:session): session closed for user core Jan 24 00:33:20.807970 systemd[1]: sshd@6-172.31.18.176:22-4.153.228.146:59662.service: Deactivated successfully. Jan 24 00:33:20.809644 systemd[1]: session-7.scope: Deactivated successfully. Jan 24 00:33:20.810485 kubelet[2445]: E0124 00:33:20.810394 2445 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172.31.18.176\" not found" Jan 24 00:33:20.810633 systemd-logind[1972]: Session 7 logged out. Waiting for processes to exit. Jan 24 00:33:20.812045 systemd-logind[1972]: Removed session 7. Jan 24 00:33:20.911408 kubelet[2445]: E0124 00:33:20.911353 2445 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172.31.18.176\" not found" Jan 24 00:33:21.012272 kubelet[2445]: E0124 00:33:21.012231 2445 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172.31.18.176\" not found" Jan 24 00:33:21.114103 kubelet[2445]: I0124 00:33:21.113982 2445 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Jan 24 00:33:21.114866 containerd[1999]: time="2026-01-24T00:33:21.114824502Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 24 00:33:21.116980 kubelet[2445]: I0124 00:33:21.115321 2445 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Jan 24 00:33:21.679620 kubelet[2445]: I0124 00:33:21.679271 2445 apiserver.go:52] "Watching apiserver" Jan 24 00:33:21.679620 kubelet[2445]: E0124 00:33:21.679326 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:33:21.690828 systemd[1]: Created slice kubepods-besteffort-podc4465f6d_6d17_4719_8044_2ffbfb489065.slice - libcontainer container kubepods-besteffort-podc4465f6d_6d17_4719_8044_2ffbfb489065.slice. Jan 24 00:33:21.693727 kubelet[2445]: E0124 00:33:21.693322 2445 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bqx9m" podUID="dc56aca3-e78d-4c4c-9e51-d34a825d2bbf" Jan 24 00:33:21.699366 kubelet[2445]: I0124 00:33:21.699337 2445 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 24 00:33:21.706073 systemd[1]: Created slice kubepods-besteffort-pode7bbbde4_434e_4c33_8c87_df6b5703c0b5.slice - libcontainer container kubepods-besteffort-pode7bbbde4_434e_4c33_8c87_df6b5703c0b5.slice. 
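The recurring "node \"172.31.18.176\" not found" errors above cover the short window between the kubelet registering the node object (00:33:19.884) and the API server's listers/informers catching up; they stop on their own once the Node resource is visible. As a hedged illustration only - assuming the official kubernetes Python client and a reachable kubeconfig, neither of which appears in this log - a reader could poll for that Node like this:

# Minimal sketch: wait until the Node object registered by the kubelet is visible.
# Assumes the `kubernetes` Python client and a kubeconfig with cluster access.
import time
from kubernetes import client, config
from kubernetes.client.rest import ApiException

def wait_for_node(name: str, timeout_s: int = 120) -> bool:
    config.load_kube_config()          # or config.load_incluster_config() inside a pod
    v1 = client.CoreV1Api()
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        try:
            v1.read_node(name)         # GET /api/v1/nodes/{name}
            return True                # node exists; the lister errors should stop
        except ApiException as exc:
            if exc.status != 404:
                raise                  # anything other than "not found" is a real problem
        time.sleep(2)
    return False

if __name__ == "__main__":
    print(wait_for_node("172.31.18.176"))

Run against this cluster's kubeconfig, the sketch returns True as soon as the registration logged at 00:33:19.884 has propagated to the listers.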
Jan 24 00:33:21.711917 kubelet[2445]: I0124 00:33:21.711884 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e7bbbde4-434e-4c33-8c87-df6b5703c0b5-kube-proxy\") pod \"kube-proxy-p2kph\" (UID: \"e7bbbde4-434e-4c33-8c87-df6b5703c0b5\") " pod="kube-system/kube-proxy-p2kph" Jan 24 00:33:21.711917 kubelet[2445]: I0124 00:33:21.711921 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c4465f6d-6d17-4719-8044-2ffbfb489065-tigera-ca-bundle\") pod \"calico-node-s88k7\" (UID: \"c4465f6d-6d17-4719-8044-2ffbfb489065\") " pod="calico-system/calico-node-s88k7" Jan 24 00:33:21.712327 kubelet[2445]: I0124 00:33:21.711938 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/c4465f6d-6d17-4719-8044-2ffbfb489065-var-run-calico\") pod \"calico-node-s88k7\" (UID: \"c4465f6d-6d17-4719-8044-2ffbfb489065\") " pod="calico-system/calico-node-s88k7" Jan 24 00:33:21.712327 kubelet[2445]: I0124 00:33:21.711955 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dc56aca3-e78d-4c4c-9e51-d34a825d2bbf-kubelet-dir\") pod \"csi-node-driver-bqx9m\" (UID: \"dc56aca3-e78d-4c4c-9e51-d34a825d2bbf\") " pod="calico-system/csi-node-driver-bqx9m" Jan 24 00:33:21.712327 kubelet[2445]: I0124 00:33:21.711969 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/dc56aca3-e78d-4c4c-9e51-d34a825d2bbf-socket-dir\") pod \"csi-node-driver-bqx9m\" (UID: \"dc56aca3-e78d-4c4c-9e51-d34a825d2bbf\") " pod="calico-system/csi-node-driver-bqx9m" Jan 24 00:33:21.712327 kubelet[2445]: I0124 00:33:21.711984 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/dc56aca3-e78d-4c4c-9e51-d34a825d2bbf-varrun\") pod \"csi-node-driver-bqx9m\" (UID: \"dc56aca3-e78d-4c4c-9e51-d34a825d2bbf\") " pod="calico-system/csi-node-driver-bqx9m" Jan 24 00:33:21.712327 kubelet[2445]: I0124 00:33:21.712000 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/dc56aca3-e78d-4c4c-9e51-d34a825d2bbf-registration-dir\") pod \"csi-node-driver-bqx9m\" (UID: \"dc56aca3-e78d-4c4c-9e51-d34a825d2bbf\") " pod="calico-system/csi-node-driver-bqx9m" Jan 24 00:33:21.712470 kubelet[2445]: I0124 00:33:21.712024 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/c4465f6d-6d17-4719-8044-2ffbfb489065-cni-net-dir\") pod \"calico-node-s88k7\" (UID: \"c4465f6d-6d17-4719-8044-2ffbfb489065\") " pod="calico-system/calico-node-s88k7" Jan 24 00:33:21.712470 kubelet[2445]: I0124 00:33:21.712039 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c4465f6d-6d17-4719-8044-2ffbfb489065-lib-modules\") pod \"calico-node-s88k7\" (UID: \"c4465f6d-6d17-4719-8044-2ffbfb489065\") " pod="calico-system/calico-node-s88k7" Jan 24 00:33:21.712470 kubelet[2445]: I0124 00:33:21.712056 2445 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c4465f6d-6d17-4719-8044-2ffbfb489065-xtables-lock\") pod \"calico-node-s88k7\" (UID: \"c4465f6d-6d17-4719-8044-2ffbfb489065\") " pod="calico-system/calico-node-s88k7" Jan 24 00:33:21.712470 kubelet[2445]: I0124 00:33:21.712078 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/c4465f6d-6d17-4719-8044-2ffbfb489065-node-certs\") pod \"calico-node-s88k7\" (UID: \"c4465f6d-6d17-4719-8044-2ffbfb489065\") " pod="calico-system/calico-node-s88k7" Jan 24 00:33:21.712470 kubelet[2445]: I0124 00:33:21.712097 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtphm\" (UniqueName: \"kubernetes.io/projected/dc56aca3-e78d-4c4c-9e51-d34a825d2bbf-kube-api-access-qtphm\") pod \"csi-node-driver-bqx9m\" (UID: \"dc56aca3-e78d-4c4c-9e51-d34a825d2bbf\") " pod="calico-system/csi-node-driver-bqx9m" Jan 24 00:33:21.712610 kubelet[2445]: I0124 00:33:21.712111 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e7bbbde4-434e-4c33-8c87-df6b5703c0b5-xtables-lock\") pod \"kube-proxy-p2kph\" (UID: \"e7bbbde4-434e-4c33-8c87-df6b5703c0b5\") " pod="kube-system/kube-proxy-p2kph" Jan 24 00:33:21.712610 kubelet[2445]: I0124 00:33:21.712124 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e7bbbde4-434e-4c33-8c87-df6b5703c0b5-lib-modules\") pod \"kube-proxy-p2kph\" (UID: \"e7bbbde4-434e-4c33-8c87-df6b5703c0b5\") " pod="kube-system/kube-proxy-p2kph" Jan 24 00:33:21.712610 kubelet[2445]: I0124 00:33:21.712139 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kc8dc\" (UniqueName: \"kubernetes.io/projected/e7bbbde4-434e-4c33-8c87-df6b5703c0b5-kube-api-access-kc8dc\") pod \"kube-proxy-p2kph\" (UID: \"e7bbbde4-434e-4c33-8c87-df6b5703c0b5\") " pod="kube-system/kube-proxy-p2kph" Jan 24 00:33:21.712610 kubelet[2445]: I0124 00:33:21.712153 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpkbp\" (UniqueName: \"kubernetes.io/projected/c4465f6d-6d17-4719-8044-2ffbfb489065-kube-api-access-xpkbp\") pod \"calico-node-s88k7\" (UID: \"c4465f6d-6d17-4719-8044-2ffbfb489065\") " pod="calico-system/calico-node-s88k7" Jan 24 00:33:21.712610 kubelet[2445]: I0124 00:33:21.712187 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/c4465f6d-6d17-4719-8044-2ffbfb489065-cni-bin-dir\") pod \"calico-node-s88k7\" (UID: \"c4465f6d-6d17-4719-8044-2ffbfb489065\") " pod="calico-system/calico-node-s88k7" Jan 24 00:33:21.712733 kubelet[2445]: I0124 00:33:21.712202 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/c4465f6d-6d17-4719-8044-2ffbfb489065-cni-log-dir\") pod \"calico-node-s88k7\" (UID: \"c4465f6d-6d17-4719-8044-2ffbfb489065\") " pod="calico-system/calico-node-s88k7" Jan 24 00:33:21.712733 kubelet[2445]: I0124 00:33:21.712216 2445 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/c4465f6d-6d17-4719-8044-2ffbfb489065-flexvol-driver-host\") pod \"calico-node-s88k7\" (UID: \"c4465f6d-6d17-4719-8044-2ffbfb489065\") " pod="calico-system/calico-node-s88k7" Jan 24 00:33:21.712733 kubelet[2445]: I0124 00:33:21.712231 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/c4465f6d-6d17-4719-8044-2ffbfb489065-policysync\") pod \"calico-node-s88k7\" (UID: \"c4465f6d-6d17-4719-8044-2ffbfb489065\") " pod="calico-system/calico-node-s88k7" Jan 24 00:33:21.712733 kubelet[2445]: I0124 00:33:21.712247 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c4465f6d-6d17-4719-8044-2ffbfb489065-var-lib-calico\") pod \"calico-node-s88k7\" (UID: \"c4465f6d-6d17-4719-8044-2ffbfb489065\") " pod="calico-system/calico-node-s88k7" Jan 24 00:33:21.815147 kubelet[2445]: E0124 00:33:21.814647 2445 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:33:21.815147 kubelet[2445]: W0124 00:33:21.814682 2445 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:33:21.815147 kubelet[2445]: E0124 00:33:21.814719 2445 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:33:21.815147 kubelet[2445]: E0124 00:33:21.814966 2445 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:33:21.815147 kubelet[2445]: W0124 00:33:21.814987 2445 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:33:21.815147 kubelet[2445]: E0124 00:33:21.814997 2445 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:33:21.815637 kubelet[2445]: E0124 00:33:21.815466 2445 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:33:21.815637 kubelet[2445]: W0124 00:33:21.815477 2445 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:33:21.815637 kubelet[2445]: E0124 00:33:21.815503 2445 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:33:21.816060 kubelet[2445]: E0124 00:33:21.815990 2445 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:33:21.816060 kubelet[2445]: W0124 00:33:21.816001 2445 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:33:21.816060 kubelet[2445]: E0124 00:33:21.816010 2445 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:33:21.818582 kubelet[2445]: E0124 00:33:21.818472 2445 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:33:21.818582 kubelet[2445]: W0124 00:33:21.818486 2445 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:33:21.818582 kubelet[2445]: E0124 00:33:21.818498 2445 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:33:21.819068 kubelet[2445]: E0124 00:33:21.818900 2445 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:33:21.819068 kubelet[2445]: W0124 00:33:21.818910 2445 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:33:21.819068 kubelet[2445]: E0124 00:33:21.818932 2445 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:33:21.819356 kubelet[2445]: E0124 00:33:21.819250 2445 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:33:21.819356 kubelet[2445]: W0124 00:33:21.819260 2445 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:33:21.819356 kubelet[2445]: E0124 00:33:21.819269 2445 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:33:21.820720 kubelet[2445]: E0124 00:33:21.820619 2445 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:33:21.820720 kubelet[2445]: W0124 00:33:21.820631 2445 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:33:21.820720 kubelet[2445]: E0124 00:33:21.820643 2445 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:33:21.821012 kubelet[2445]: E0124 00:33:21.820961 2445 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:33:21.821012 kubelet[2445]: W0124 00:33:21.820970 2445 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:33:21.821012 kubelet[2445]: E0124 00:33:21.820980 2445 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:33:21.821543 kubelet[2445]: E0124 00:33:21.821487 2445 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:33:21.821658 kubelet[2445]: W0124 00:33:21.821590 2445 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:33:21.821828 kubelet[2445]: E0124 00:33:21.821603 2445 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:33:21.822101 kubelet[2445]: E0124 00:33:21.822055 2445 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:33:21.822101 kubelet[2445]: W0124 00:33:21.822066 2445 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:33:21.822101 kubelet[2445]: E0124 00:33:21.822085 2445 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:33:21.822422 kubelet[2445]: E0124 00:33:21.822413 2445 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:33:21.822492 kubelet[2445]: W0124 00:33:21.822483 2445 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:33:21.822609 kubelet[2445]: E0124 00:33:21.822533 2445 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:33:21.822771 kubelet[2445]: E0124 00:33:21.822763 2445 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:33:21.822824 kubelet[2445]: W0124 00:33:21.822816 2445 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:33:21.822942 kubelet[2445]: E0124 00:33:21.822870 2445 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:33:21.823106 kubelet[2445]: E0124 00:33:21.823098 2445 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:33:21.823225 kubelet[2445]: W0124 00:33:21.823156 2445 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:33:21.823388 kubelet[2445]: E0124 00:33:21.823216 2445 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:33:21.823844 kubelet[2445]: E0124 00:33:21.823832 2445 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:33:21.823924 kubelet[2445]: W0124 00:33:21.823914 2445 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:33:21.826680 kubelet[2445]: E0124 00:33:21.826526 2445 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:33:21.826939 kubelet[2445]: E0124 00:33:21.826928 2445 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:33:21.828190 kubelet[2445]: W0124 00:33:21.827074 2445 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:33:21.828190 kubelet[2445]: E0124 00:33:21.827100 2445 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:33:21.828276 kubelet[2445]: E0124 00:33:21.828217 2445 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:33:21.828276 kubelet[2445]: W0124 00:33:21.828227 2445 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:33:21.828276 kubelet[2445]: E0124 00:33:21.828239 2445 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:33:21.828660 kubelet[2445]: E0124 00:33:21.828466 2445 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:33:21.828660 kubelet[2445]: W0124 00:33:21.828476 2445 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:33:21.828660 kubelet[2445]: E0124 00:33:21.828484 2445 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:33:21.831322 kubelet[2445]: E0124 00:33:21.831295 2445 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:33:21.831322 kubelet[2445]: W0124 00:33:21.831313 2445 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:33:21.831442 kubelet[2445]: E0124 00:33:21.831328 2445 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:33:21.831584 kubelet[2445]: E0124 00:33:21.831569 2445 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:33:21.831625 kubelet[2445]: W0124 00:33:21.831586 2445 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:33:21.831625 kubelet[2445]: E0124 00:33:21.831598 2445 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:33:21.840296 kubelet[2445]: E0124 00:33:21.840266 2445 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:33:21.840296 kubelet[2445]: W0124 00:33:21.840289 2445 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:33:21.840545 kubelet[2445]: E0124 00:33:21.840314 2445 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:33:21.842364 kubelet[2445]: E0124 00:33:21.841543 2445 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:33:21.842364 kubelet[2445]: W0124 00:33:21.841559 2445 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:33:21.842364 kubelet[2445]: E0124 00:33:21.841577 2445 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:33:21.842954 kubelet[2445]: E0124 00:33:21.842822 2445 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:33:21.842954 kubelet[2445]: W0124 00:33:21.842837 2445 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:33:21.842954 kubelet[2445]: E0124 00:33:21.842853 2445 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:33:21.843282 kubelet[2445]: E0124 00:33:21.843255 2445 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:33:21.843282 kubelet[2445]: W0124 00:33:21.843270 2445 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:33:21.843665 kubelet[2445]: E0124 00:33:21.843287 2445 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:33:21.843775 kubelet[2445]: E0124 00:33:21.843739 2445 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:33:21.843775 kubelet[2445]: W0124 00:33:21.843750 2445 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:33:21.843775 kubelet[2445]: E0124 00:33:21.843764 2445 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:33:21.844205 kubelet[2445]: E0124 00:33:21.844148 2445 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:33:21.844205 kubelet[2445]: W0124 00:33:21.844163 2445 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:33:21.844205 kubelet[2445]: E0124 00:33:21.844199 2445 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:33:21.844475 kubelet[2445]: E0124 00:33:21.844459 2445 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:33:21.844475 kubelet[2445]: W0124 00:33:21.844474 2445 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:33:21.844595 kubelet[2445]: E0124 00:33:21.844487 2445 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:33:21.844763 kubelet[2445]: E0124 00:33:21.844746 2445 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:33:21.844833 kubelet[2445]: W0124 00:33:21.844771 2445 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:33:21.844833 kubelet[2445]: E0124 00:33:21.844785 2445 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:33:21.845062 kubelet[2445]: E0124 00:33:21.845045 2445 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:33:21.845062 kubelet[2445]: W0124 00:33:21.845059 2445 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:33:21.845156 kubelet[2445]: E0124 00:33:21.845090 2445 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:33:21.845385 kubelet[2445]: E0124 00:33:21.845368 2445 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:33:21.845447 kubelet[2445]: W0124 00:33:21.845390 2445 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:33:21.845447 kubelet[2445]: E0124 00:33:21.845421 2445 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:33:21.845749 kubelet[2445]: E0124 00:33:21.845720 2445 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:33:21.845749 kubelet[2445]: W0124 00:33:21.845742 2445 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:33:21.845858 kubelet[2445]: E0124 00:33:21.845754 2445 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:33:21.846421 kubelet[2445]: E0124 00:33:21.846253 2445 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:33:21.846421 kubelet[2445]: W0124 00:33:21.846271 2445 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:33:21.846421 kubelet[2445]: E0124 00:33:21.846287 2445 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:33:21.846638 kubelet[2445]: E0124 00:33:21.846608 2445 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:33:21.846638 kubelet[2445]: W0124 00:33:21.846620 2445 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:33:21.846638 kubelet[2445]: E0124 00:33:21.846633 2445 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:33:21.846934 kubelet[2445]: E0124 00:33:21.846879 2445 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:33:21.846934 kubelet[2445]: W0124 00:33:21.846890 2445 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:33:21.846934 kubelet[2445]: E0124 00:33:21.846902 2445 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:33:22.004524 containerd[1999]: time="2026-01-24T00:33:22.004400506Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-s88k7,Uid:c4465f6d-6d17-4719-8044-2ffbfb489065,Namespace:calico-system,Attempt:0,}" Jan 24 00:33:22.010379 containerd[1999]: time="2026-01-24T00:33:22.010342263Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-p2kph,Uid:e7bbbde4-434e-4c33-8c87-df6b5703c0b5,Namespace:kube-system,Attempt:0,}" Jan 24 00:33:22.574026 containerd[1999]: time="2026-01-24T00:33:22.573971765Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 00:33:22.579198 containerd[1999]: time="2026-01-24T00:33:22.579114364Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 00:33:22.581854 containerd[1999]: time="2026-01-24T00:33:22.581807287Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 24 00:33:22.584097 containerd[1999]: time="2026-01-24T00:33:22.584053018Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 24 00:33:22.585712 containerd[1999]: time="2026-01-24T00:33:22.585635147Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 00:33:22.589202 containerd[1999]: time="2026-01-24T00:33:22.588822989Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 00:33:22.591206 containerd[1999]: time="2026-01-24T00:33:22.589723989Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 585.23325ms" Jan 24 00:33:22.592000 containerd[1999]: time="2026-01-24T00:33:22.591958753Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 581.550727ms" Jan 24 
00:33:22.679766 kubelet[2445]: E0124 00:33:22.679684 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:33:22.819914 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1774410152.mount: Deactivated successfully. Jan 24 00:33:22.859481 containerd[1999]: time="2026-01-24T00:33:22.858626334Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:33:22.859481 containerd[1999]: time="2026-01-24T00:33:22.858700438Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:33:22.859481 containerd[1999]: time="2026-01-24T00:33:22.858740703Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:33:22.860178 containerd[1999]: time="2026-01-24T00:33:22.859910171Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:33:22.860900 containerd[1999]: time="2026-01-24T00:33:22.860585251Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:33:22.860900 containerd[1999]: time="2026-01-24T00:33:22.860647637Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:33:22.860900 containerd[1999]: time="2026-01-24T00:33:22.860671927Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:33:22.860900 containerd[1999]: time="2026-01-24T00:33:22.860782191Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:33:23.016600 systemd[1]: Started cri-containerd-bd055bafa6cc707b41c849034fad5e11a6433cd0278331b2d793a579c192beaa.scope - libcontainer container bd055bafa6cc707b41c849034fad5e11a6433cd0278331b2d793a579c192beaa. Jan 24 00:33:23.045469 systemd[1]: Started cri-containerd-0fced2c133c25610bcb496203e9513d79bcb5cd71be08aff9c034d22dad9df5f.scope - libcontainer container 0fced2c133c25610bcb496203e9513d79bcb5cd71be08aff9c034d22dad9df5f. 
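The FlexVolume probe failures that repeat throughout this section come from the kubelet scanning its plugin directory and invoking /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the init argument; the executable is not present, so the call produces no output, the JSON unmarshal fails, and the plugin is skipped. The volumes listed above still attach because none of them is a FlexVolume. Per the FlexVolume contract a driver answers init with a small JSON status object; the stub below is only an illustrative sketch of that response shape, not the real nodeagent~uds driver, written in Python for brevity:

#!/usr/bin/env python3
# Illustrative FlexVolume driver stub: answers "init" with the JSON status the
# kubelet expects and reports every other call as unsupported. A sketch of the
# call/response shape only, not the actual nodeagent~uds driver.
import json
import sys

def main() -> int:
    cmd = sys.argv[1] if len(sys.argv) > 1 else ""
    if cmd == "init":
        # Capabilities tell the kubelet this driver does not implement attach/detach.
        print(json.dumps({"status": "Success", "capabilities": {"attach": False}}))
    else:
        print(json.dumps({"status": "Not supported",
                          "message": f"call {cmd!r} not implemented"}))
    return 0

if __name__ == "__main__":
    sys.exit(main())

Installed as an executable at the probed path, a stub like this would satisfy the init probe and silence the unmarshal errors, though the component that normally ships this driver must also implement the real mount/unmount calls.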
Jan 24 00:33:23.095819 containerd[1999]: time="2026-01-24T00:33:23.095774862Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-p2kph,Uid:e7bbbde4-434e-4c33-8c87-df6b5703c0b5,Namespace:kube-system,Attempt:0,} returns sandbox id \"bd055bafa6cc707b41c849034fad5e11a6433cd0278331b2d793a579c192beaa\"" Jan 24 00:33:23.103024 containerd[1999]: time="2026-01-24T00:33:23.102977801Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\"" Jan 24 00:33:23.112251 containerd[1999]: time="2026-01-24T00:33:23.112108985Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-s88k7,Uid:c4465f6d-6d17-4719-8044-2ffbfb489065,Namespace:calico-system,Attempt:0,} returns sandbox id \"0fced2c133c25610bcb496203e9513d79bcb5cd71be08aff9c034d22dad9df5f\"" Jan 24 00:33:23.680199 kubelet[2445]: E0124 00:33:23.680140 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:33:23.835212 kubelet[2445]: E0124 00:33:23.834937 2445 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bqx9m" podUID="dc56aca3-e78d-4c4c-9e51-d34a825d2bbf" Jan 24 00:33:24.192907 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2321944063.mount: Deactivated successfully. Jan 24 00:33:24.680430 kubelet[2445]: E0124 00:33:24.680394 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:33:24.901679 containerd[1999]: time="2026-01-24T00:33:24.901624707Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:33:24.919303 containerd[1999]: time="2026-01-24T00:33:24.919257903Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=31930096" Jan 24 00:33:24.947972 containerd[1999]: time="2026-01-24T00:33:24.947842744Z" level=info msg="ImageCreate event name:\"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:33:24.996569 containerd[1999]: time="2026-01-24T00:33:24.996514325Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:33:24.997353 containerd[1999]: time="2026-01-24T00:33:24.997303237Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"31929115\" in 1.89387094s" Jan 24 00:33:24.997353 containerd[1999]: time="2026-01-24T00:33:24.997341021Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\"" Jan 24 00:33:24.999685 containerd[1999]: time="2026-01-24T00:33:24.999451889Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 24 00:33:25.034010 containerd[1999]: time="2026-01-24T00:33:25.033970762Z" level=info msg="CreateContainer 
within sandbox \"bd055bafa6cc707b41c849034fad5e11a6433cd0278331b2d793a579c192beaa\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 24 00:33:25.166799 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2283460456.mount: Deactivated successfully. Jan 24 00:33:25.169877 containerd[1999]: time="2026-01-24T00:33:25.169830974Z" level=info msg="CreateContainer within sandbox \"bd055bafa6cc707b41c849034fad5e11a6433cd0278331b2d793a579c192beaa\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"94487e6016442c15d024ad3fc1cde4f16cbb03a67234e9db828abdde3c8e860f\"" Jan 24 00:33:25.172112 containerd[1999]: time="2026-01-24T00:33:25.170564665Z" level=info msg="StartContainer for \"94487e6016442c15d024ad3fc1cde4f16cbb03a67234e9db828abdde3c8e860f\"" Jan 24 00:33:25.215386 systemd[1]: Started cri-containerd-94487e6016442c15d024ad3fc1cde4f16cbb03a67234e9db828abdde3c8e860f.scope - libcontainer container 94487e6016442c15d024ad3fc1cde4f16cbb03a67234e9db828abdde3c8e860f. Jan 24 00:33:25.249314 containerd[1999]: time="2026-01-24T00:33:25.249265329Z" level=info msg="StartContainer for \"94487e6016442c15d024ad3fc1cde4f16cbb03a67234e9db828abdde3c8e860f\" returns successfully" Jan 24 00:33:25.680840 kubelet[2445]: E0124 00:33:25.680785 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:33:25.836777 kubelet[2445]: E0124 00:33:25.834849 2445 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bqx9m" podUID="dc56aca3-e78d-4c4c-9e51-d34a825d2bbf" Jan 24 00:33:25.866800 kubelet[2445]: I0124 00:33:25.866721 2445 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-p2kph" podStartSLOduration=4.969472028 podStartE2EDuration="6.866705266s" podCreationTimestamp="2026-01-24 00:33:19 +0000 UTC" firstStartedPulling="2026-01-24 00:33:23.1014209 +0000 UTC m=+3.900664940" lastFinishedPulling="2026-01-24 00:33:24.998654107 +0000 UTC m=+5.797898178" observedRunningTime="2026-01-24 00:33:25.866071085 +0000 UTC m=+6.665315158" watchObservedRunningTime="2026-01-24 00:33:25.866705266 +0000 UTC m=+6.665949325" Jan 24 00:33:25.929485 kubelet[2445]: E0124 00:33:25.929428 2445 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:33:25.929485 kubelet[2445]: W0124 00:33:25.929467 2445 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:33:25.929485 kubelet[2445]: E0124 00:33:25.929489 2445 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:33:25.929726 kubelet[2445]: E0124 00:33:25.929712 2445 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:33:25.929726 kubelet[2445]: W0124 00:33:25.929723 2445 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:33:25.929815 kubelet[2445]: E0124 00:33:25.929731 2445 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:33:25.929899 kubelet[2445]: E0124 00:33:25.929886 2445 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:33:25.929899 kubelet[2445]: W0124 00:33:25.929896 2445 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:33:25.929982 kubelet[2445]: E0124 00:33:25.929904 2445 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:33:25.930164 kubelet[2445]: E0124 00:33:25.930149 2445 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:33:25.930330 kubelet[2445]: W0124 00:33:25.930251 2445 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:33:25.930330 kubelet[2445]: E0124 00:33:25.930269 2445 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:33:25.930624 kubelet[2445]: E0124 00:33:25.930606 2445 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:33:25.930624 kubelet[2445]: W0124 00:33:25.930620 2445 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:33:25.930694 kubelet[2445]: E0124 00:33:25.930632 2445 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:33:25.931309 kubelet[2445]: E0124 00:33:25.930827 2445 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:33:25.931309 kubelet[2445]: W0124 00:33:25.930837 2445 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:33:25.931309 kubelet[2445]: E0124 00:33:25.930845 2445 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:33:25.931309 kubelet[2445]: E0124 00:33:25.930996 2445 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:33:25.931309 kubelet[2445]: W0124 00:33:25.931002 2445 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:33:25.931309 kubelet[2445]: E0124 00:33:25.931009 2445 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:33:25.931309 kubelet[2445]: E0124 00:33:25.931160 2445 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:33:25.931309 kubelet[2445]: W0124 00:33:25.931189 2445 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:33:25.931309 kubelet[2445]: E0124 00:33:25.931197 2445 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:33:25.932652 kubelet[2445]: E0124 00:33:25.931618 2445 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:33:25.932652 kubelet[2445]: W0124 00:33:25.931628 2445 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:33:25.932652 kubelet[2445]: E0124 00:33:25.931638 2445 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:33:25.932652 kubelet[2445]: E0124 00:33:25.931801 2445 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:33:25.932652 kubelet[2445]: W0124 00:33:25.931816 2445 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:33:25.932652 kubelet[2445]: E0124 00:33:25.931824 2445 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:33:25.932652 kubelet[2445]: E0124 00:33:25.931965 2445 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:33:25.932652 kubelet[2445]: W0124 00:33:25.931970 2445 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:33:25.932652 kubelet[2445]: E0124 00:33:25.931976 2445 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:33:25.932652 kubelet[2445]: E0124 00:33:25.932561 2445 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:33:25.932982 kubelet[2445]: W0124 00:33:25.932570 2445 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:33:25.932982 kubelet[2445]: E0124 00:33:25.932579 2445 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:33:25.932982 kubelet[2445]: E0124 00:33:25.932765 2445 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:33:25.932982 kubelet[2445]: W0124 00:33:25.932771 2445 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:33:25.932982 kubelet[2445]: E0124 00:33:25.932779 2445 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:33:25.932982 kubelet[2445]: E0124 00:33:25.932929 2445 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:33:25.932982 kubelet[2445]: W0124 00:33:25.932934 2445 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:33:25.932982 kubelet[2445]: E0124 00:33:25.932941 2445 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:33:25.933252 kubelet[2445]: E0124 00:33:25.933114 2445 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:33:25.933252 kubelet[2445]: W0124 00:33:25.933121 2445 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:33:25.933252 kubelet[2445]: E0124 00:33:25.933127 2445 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:33:25.933677 kubelet[2445]: E0124 00:33:25.933645 2445 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:33:25.933677 kubelet[2445]: W0124 00:33:25.933667 2445 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:33:25.933779 kubelet[2445]: E0124 00:33:25.933680 2445 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:33:25.935036 kubelet[2445]: E0124 00:33:25.934999 2445 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:33:25.935036 kubelet[2445]: W0124 00:33:25.935027 2445 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:33:25.935036 kubelet[2445]: E0124 00:33:25.935040 2445 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:33:25.935326 kubelet[2445]: E0124 00:33:25.935281 2445 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:33:25.935326 kubelet[2445]: W0124 00:33:25.935321 2445 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:33:25.935450 kubelet[2445]: E0124 00:33:25.935335 2445 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:33:25.937203 kubelet[2445]: E0124 00:33:25.935714 2445 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:33:25.937203 kubelet[2445]: W0124 00:33:25.935727 2445 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:33:25.937203 kubelet[2445]: E0124 00:33:25.935738 2445 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:33:25.937203 kubelet[2445]: E0124 00:33:25.935948 2445 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:33:25.937203 kubelet[2445]: W0124 00:33:25.935966 2445 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:33:25.937203 kubelet[2445]: E0124 00:33:25.935985 2445 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:33:25.941928 kubelet[2445]: E0124 00:33:25.941899 2445 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:33:25.941928 kubelet[2445]: W0124 00:33:25.941919 2445 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:33:25.942068 kubelet[2445]: E0124 00:33:25.941937 2445 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:33:25.942253 kubelet[2445]: E0124 00:33:25.942231 2445 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:33:25.942253 kubelet[2445]: W0124 00:33:25.942246 2445 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:33:25.942755 kubelet[2445]: E0124 00:33:25.942260 2445 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:33:25.942755 kubelet[2445]: E0124 00:33:25.942576 2445 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:33:25.942755 kubelet[2445]: W0124 00:33:25.942585 2445 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:33:25.942755 kubelet[2445]: E0124 00:33:25.942595 2445 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:33:25.943129 kubelet[2445]: E0124 00:33:25.943111 2445 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:33:25.943129 kubelet[2445]: W0124 00:33:25.943125 2445 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:33:25.943219 kubelet[2445]: E0124 00:33:25.943135 2445 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:33:25.943400 kubelet[2445]: E0124 00:33:25.943375 2445 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:33:25.943400 kubelet[2445]: W0124 00:33:25.943391 2445 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:33:25.943584 kubelet[2445]: E0124 00:33:25.943403 2445 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:33:25.943705 kubelet[2445]: E0124 00:33:25.943689 2445 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:33:25.943705 kubelet[2445]: W0124 00:33:25.943702 2445 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:33:25.943766 kubelet[2445]: E0124 00:33:25.943712 2445 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:33:25.943908 kubelet[2445]: E0124 00:33:25.943895 2445 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:33:25.943908 kubelet[2445]: W0124 00:33:25.943905 2445 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:33:25.943980 kubelet[2445]: E0124 00:33:25.943913 2445 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:33:25.944108 kubelet[2445]: E0124 00:33:25.944088 2445 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:33:25.944108 kubelet[2445]: W0124 00:33:25.944103 2445 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:33:25.944206 kubelet[2445]: E0124 00:33:25.944117 2445 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:33:25.944386 kubelet[2445]: E0124 00:33:25.944369 2445 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:33:25.944386 kubelet[2445]: W0124 00:33:25.944381 2445 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:33:25.944461 kubelet[2445]: E0124 00:33:25.944395 2445 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:33:25.944785 kubelet[2445]: E0124 00:33:25.944771 2445 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:33:25.944785 kubelet[2445]: W0124 00:33:25.944782 2445 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:33:25.944845 kubelet[2445]: E0124 00:33:25.944792 2445 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:33:25.945017 kubelet[2445]: E0124 00:33:25.944998 2445 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:33:25.945017 kubelet[2445]: W0124 00:33:25.945011 2445 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:33:25.945098 kubelet[2445]: E0124 00:33:25.945022 2445 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:33:25.945388 kubelet[2445]: E0124 00:33:25.945373 2445 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:33:25.945388 kubelet[2445]: W0124 00:33:25.945385 2445 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:33:25.945456 kubelet[2445]: E0124 00:33:25.945393 2445 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:33:26.168325 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1109229488.mount: Deactivated successfully. Jan 24 00:33:26.256388 containerd[1999]: time="2026-01-24T00:33:26.256061879Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:33:26.258881 containerd[1999]: time="2026-01-24T00:33:26.258729661Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=5941492" Jan 24 00:33:26.260198 containerd[1999]: time="2026-01-24T00:33:26.259996291Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:33:26.265783 containerd[1999]: time="2026-01-24T00:33:26.264851442Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:33:26.265783 containerd[1999]: time="2026-01-24T00:33:26.265729886Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.26549083s" Jan 24 00:33:26.266078 containerd[1999]: time="2026-01-24T00:33:26.266052101Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Jan 24 00:33:26.272532 containerd[1999]: time="2026-01-24T00:33:26.272488770Z" level=info msg="CreateContainer within sandbox \"0fced2c133c25610bcb496203e9513d79bcb5cd71be08aff9c034d22dad9df5f\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 24 00:33:26.291139 containerd[1999]: time="2026-01-24T00:33:26.291057621Z" level=info msg="CreateContainer within sandbox \"0fced2c133c25610bcb496203e9513d79bcb5cd71be08aff9c034d22dad9df5f\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"a716ae2874916fd18f54d31ec85a4bc591ef4bcc2e4607ecc50037e069f0d031\"" Jan 24 00:33:26.292196 containerd[1999]: time="2026-01-24T00:33:26.291934482Z" level=info msg="StartContainer for \"a716ae2874916fd18f54d31ec85a4bc591ef4bcc2e4607ecc50037e069f0d031\"" Jan 24 00:33:26.328431 systemd[1]: Started cri-containerd-a716ae2874916fd18f54d31ec85a4bc591ef4bcc2e4607ecc50037e069f0d031.scope - libcontainer container 
a716ae2874916fd18f54d31ec85a4bc591ef4bcc2e4607ecc50037e069f0d031. Jan 24 00:33:26.360643 containerd[1999]: time="2026-01-24T00:33:26.360478456Z" level=info msg="StartContainer for \"a716ae2874916fd18f54d31ec85a4bc591ef4bcc2e4607ecc50037e069f0d031\" returns successfully" Jan 24 00:33:26.371812 systemd[1]: cri-containerd-a716ae2874916fd18f54d31ec85a4bc591ef4bcc2e4607ecc50037e069f0d031.scope: Deactivated successfully. Jan 24 00:33:26.397817 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a716ae2874916fd18f54d31ec85a4bc591ef4bcc2e4607ecc50037e069f0d031-rootfs.mount: Deactivated successfully. Jan 24 00:33:26.681938 kubelet[2445]: E0124 00:33:26.681902 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:33:26.756376 containerd[1999]: time="2026-01-24T00:33:26.756306803Z" level=info msg="shim disconnected" id=a716ae2874916fd18f54d31ec85a4bc591ef4bcc2e4607ecc50037e069f0d031 namespace=k8s.io Jan 24 00:33:26.756376 containerd[1999]: time="2026-01-24T00:33:26.756356078Z" level=warning msg="cleaning up after shim disconnected" id=a716ae2874916fd18f54d31ec85a4bc591ef4bcc2e4607ecc50037e069f0d031 namespace=k8s.io Jan 24 00:33:26.756376 containerd[1999]: time="2026-01-24T00:33:26.756364767Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 24 00:33:26.858887 containerd[1999]: time="2026-01-24T00:33:26.858848956Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 24 00:33:27.683095 kubelet[2445]: E0124 00:33:27.683055 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:33:27.835705 kubelet[2445]: E0124 00:33:27.834765 2445 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bqx9m" podUID="dc56aca3-e78d-4c4c-9e51-d34a825d2bbf" Jan 24 00:33:28.683489 kubelet[2445]: E0124 00:33:28.683415 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:33:29.684306 kubelet[2445]: E0124 00:33:29.684253 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:33:29.713638 containerd[1999]: time="2026-01-24T00:33:29.713588429Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:33:29.715328 containerd[1999]: time="2026-01-24T00:33:29.715190511Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Jan 24 00:33:29.717715 containerd[1999]: time="2026-01-24T00:33:29.716860845Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:33:29.719898 containerd[1999]: time="2026-01-24T00:33:29.719866836Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:33:29.720512 containerd[1999]: time="2026-01-24T00:33:29.720478792Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id 
\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 2.861590923s" Jan 24 00:33:29.720582 containerd[1999]: time="2026-01-24T00:33:29.720516832Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Jan 24 00:33:29.724954 containerd[1999]: time="2026-01-24T00:33:29.724921512Z" level=info msg="CreateContainer within sandbox \"0fced2c133c25610bcb496203e9513d79bcb5cd71be08aff9c034d22dad9df5f\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 24 00:33:29.745903 containerd[1999]: time="2026-01-24T00:33:29.745859106Z" level=info msg="CreateContainer within sandbox \"0fced2c133c25610bcb496203e9513d79bcb5cd71be08aff9c034d22dad9df5f\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"0d704e19c977e9257aa9f870719bede73bf67f6d7673120e551f6d8b445efab7\"" Jan 24 00:33:29.746625 containerd[1999]: time="2026-01-24T00:33:29.746588009Z" level=info msg="StartContainer for \"0d704e19c977e9257aa9f870719bede73bf67f6d7673120e551f6d8b445efab7\"" Jan 24 00:33:29.780385 systemd[1]: Started cri-containerd-0d704e19c977e9257aa9f870719bede73bf67f6d7673120e551f6d8b445efab7.scope - libcontainer container 0d704e19c977e9257aa9f870719bede73bf67f6d7673120e551f6d8b445efab7. Jan 24 00:33:29.820957 containerd[1999]: time="2026-01-24T00:33:29.820905479Z" level=info msg="StartContainer for \"0d704e19c977e9257aa9f870719bede73bf67f6d7673120e551f6d8b445efab7\" returns successfully" Jan 24 00:33:29.838487 kubelet[2445]: E0124 00:33:29.837046 2445 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bqx9m" podUID="dc56aca3-e78d-4c4c-9e51-d34a825d2bbf" Jan 24 00:33:30.685422 kubelet[2445]: E0124 00:33:30.685268 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:33:31.103287 containerd[1999]: time="2026-01-24T00:33:31.103074531Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 24 00:33:31.105765 systemd[1]: cri-containerd-0d704e19c977e9257aa9f870719bede73bf67f6d7673120e551f6d8b445efab7.scope: Deactivated successfully. Jan 24 00:33:31.131733 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0d704e19c977e9257aa9f870719bede73bf67f6d7673120e551f6d8b445efab7-rootfs.mount: Deactivated successfully. 
Jan 24 00:33:31.183334 kubelet[2445]: I0124 00:33:31.183306 2445 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 24 00:33:31.538940 containerd[1999]: time="2026-01-24T00:33:31.538878660Z" level=info msg="shim disconnected" id=0d704e19c977e9257aa9f870719bede73bf67f6d7673120e551f6d8b445efab7 namespace=k8s.io Jan 24 00:33:31.538940 containerd[1999]: time="2026-01-24T00:33:31.538931993Z" level=warning msg="cleaning up after shim disconnected" id=0d704e19c977e9257aa9f870719bede73bf67f6d7673120e551f6d8b445efab7 namespace=k8s.io Jan 24 00:33:31.539285 containerd[1999]: time="2026-01-24T00:33:31.538958508Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 24 00:33:31.553669 containerd[1999]: time="2026-01-24T00:33:31.553617921Z" level=warning msg="cleanup warnings time=\"2026-01-24T00:33:31Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 24 00:33:31.686373 kubelet[2445]: E0124 00:33:31.686258 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:33:31.840564 systemd[1]: Created slice kubepods-besteffort-poddc56aca3_e78d_4c4c_9e51_d34a825d2bbf.slice - libcontainer container kubepods-besteffort-poddc56aca3_e78d_4c4c_9e51_d34a825d2bbf.slice. Jan 24 00:33:31.843581 containerd[1999]: time="2026-01-24T00:33:31.843533632Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bqx9m,Uid:dc56aca3-e78d-4c4c-9e51-d34a825d2bbf,Namespace:calico-system,Attempt:0,}" Jan 24 00:33:31.879700 containerd[1999]: time="2026-01-24T00:33:31.879589552Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 24 00:33:31.921639 containerd[1999]: time="2026-01-24T00:33:31.921578067Z" level=error msg="Failed to destroy network for sandbox \"296eb00d4ebed619003079d93c638b78c49fa8a01f026e8278f0231104382f86\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:33:31.922108 containerd[1999]: time="2026-01-24T00:33:31.921948305Z" level=error msg="encountered an error cleaning up failed sandbox \"296eb00d4ebed619003079d93c638b78c49fa8a01f026e8278f0231104382f86\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:33:31.922108 containerd[1999]: time="2026-01-24T00:33:31.922022622Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bqx9m,Uid:dc56aca3-e78d-4c4c-9e51-d34a825d2bbf,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"296eb00d4ebed619003079d93c638b78c49fa8a01f026e8278f0231104382f86\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:33:31.924422 kubelet[2445]: E0124 00:33:31.922272 2445 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"296eb00d4ebed619003079d93c638b78c49fa8a01f026e8278f0231104382f86\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Jan 24 00:33:31.924422 kubelet[2445]: E0124 00:33:31.922359 2445 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"296eb00d4ebed619003079d93c638b78c49fa8a01f026e8278f0231104382f86\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bqx9m" Jan 24 00:33:31.924422 kubelet[2445]: E0124 00:33:31.922390 2445 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"296eb00d4ebed619003079d93c638b78c49fa8a01f026e8278f0231104382f86\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bqx9m" Jan 24 00:33:31.924123 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-296eb00d4ebed619003079d93c638b78c49fa8a01f026e8278f0231104382f86-shm.mount: Deactivated successfully. Jan 24 00:33:31.924678 kubelet[2445]: E0124 00:33:31.922456 2445 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-bqx9m_calico-system(dc56aca3-e78d-4c4c-9e51-d34a825d2bbf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-bqx9m_calico-system(dc56aca3-e78d-4c4c-9e51-d34a825d2bbf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"296eb00d4ebed619003079d93c638b78c49fa8a01f026e8278f0231104382f86\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-bqx9m" podUID="dc56aca3-e78d-4c4c-9e51-d34a825d2bbf" Jan 24 00:33:32.687359 kubelet[2445]: E0124 00:33:32.687307 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:33:32.882815 kubelet[2445]: I0124 00:33:32.881928 2445 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="296eb00d4ebed619003079d93c638b78c49fa8a01f026e8278f0231104382f86" Jan 24 00:33:32.883506 containerd[1999]: time="2026-01-24T00:33:32.883414850Z" level=info msg="StopPodSandbox for \"296eb00d4ebed619003079d93c638b78c49fa8a01f026e8278f0231104382f86\"" Jan 24 00:33:32.885625 containerd[1999]: time="2026-01-24T00:33:32.884946904Z" level=info msg="Ensure that sandbox 296eb00d4ebed619003079d93c638b78c49fa8a01f026e8278f0231104382f86 in task-service has been cleanup successfully" Jan 24 00:33:32.942141 containerd[1999]: time="2026-01-24T00:33:32.942014118Z" level=error msg="StopPodSandbox for \"296eb00d4ebed619003079d93c638b78c49fa8a01f026e8278f0231104382f86\" failed" error="failed to destroy network for sandbox \"296eb00d4ebed619003079d93c638b78c49fa8a01f026e8278f0231104382f86\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:33:32.943135 kubelet[2445]: E0124 00:33:32.942726 2445 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"296eb00d4ebed619003079d93c638b78c49fa8a01f026e8278f0231104382f86\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="296eb00d4ebed619003079d93c638b78c49fa8a01f026e8278f0231104382f86" Jan 24 00:33:32.943135 kubelet[2445]: E0124 00:33:32.942799 2445 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"296eb00d4ebed619003079d93c638b78c49fa8a01f026e8278f0231104382f86"} Jan 24 00:33:32.943135 kubelet[2445]: E0124 00:33:32.942868 2445 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"dc56aca3-e78d-4c4c-9e51-d34a825d2bbf\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"296eb00d4ebed619003079d93c638b78c49fa8a01f026e8278f0231104382f86\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 00:33:32.943135 kubelet[2445]: E0124 00:33:32.942965 2445 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"dc56aca3-e78d-4c4c-9e51-d34a825d2bbf\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"296eb00d4ebed619003079d93c638b78c49fa8a01f026e8278f0231104382f86\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-bqx9m" podUID="dc56aca3-e78d-4c4c-9e51-d34a825d2bbf" Jan 24 00:33:33.687854 kubelet[2445]: E0124 00:33:33.687794 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:33:34.688732 kubelet[2445]: E0124 00:33:34.688673 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:33:35.689915 kubelet[2445]: E0124 00:33:35.689808 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:33:36.337717 systemd[1]: Created slice kubepods-besteffort-pode455284b_9286_4ee0_9ecc_254a7e2e56a0.slice - libcontainer container kubepods-besteffort-pode455284b_9286_4ee0_9ecc_254a7e2e56a0.slice. 
Jan 24 00:33:36.414703 kubelet[2445]: I0124 00:33:36.414297 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjhg9\" (UniqueName: \"kubernetes.io/projected/e455284b-9286-4ee0-9ecc-254a7e2e56a0-kube-api-access-jjhg9\") pod \"nginx-deployment-7fcdb87857-bx8b5\" (UID: \"e455284b-9286-4ee0-9ecc-254a7e2e56a0\") " pod="default/nginx-deployment-7fcdb87857-bx8b5" Jan 24 00:33:36.643215 containerd[1999]: time="2026-01-24T00:33:36.643064188Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-bx8b5,Uid:e455284b-9286-4ee0-9ecc-254a7e2e56a0,Namespace:default,Attempt:0,}" Jan 24 00:33:36.690691 kubelet[2445]: E0124 00:33:36.690325 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:33:37.341420 containerd[1999]: time="2026-01-24T00:33:37.341356642Z" level=error msg="Failed to destroy network for sandbox \"469809a51e23a961dabda4e7f122dcc9c6d9e489ad6e9a6071528ee9d8d9b1d9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:33:37.344884 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-469809a51e23a961dabda4e7f122dcc9c6d9e489ad6e9a6071528ee9d8d9b1d9-shm.mount: Deactivated successfully. Jan 24 00:33:37.347374 containerd[1999]: time="2026-01-24T00:33:37.346628632Z" level=error msg="encountered an error cleaning up failed sandbox \"469809a51e23a961dabda4e7f122dcc9c6d9e489ad6e9a6071528ee9d8d9b1d9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:33:37.347374 containerd[1999]: time="2026-01-24T00:33:37.346719059Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-bx8b5,Uid:e455284b-9286-4ee0-9ecc-254a7e2e56a0,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"469809a51e23a961dabda4e7f122dcc9c6d9e489ad6e9a6071528ee9d8d9b1d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:33:37.347584 kubelet[2445]: E0124 00:33:37.347084 2445 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"469809a51e23a961dabda4e7f122dcc9c6d9e489ad6e9a6071528ee9d8d9b1d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:33:37.347584 kubelet[2445]: E0124 00:33:37.347159 2445 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"469809a51e23a961dabda4e7f122dcc9c6d9e489ad6e9a6071528ee9d8d9b1d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-bx8b5" Jan 24 00:33:37.347584 kubelet[2445]: E0124 00:33:37.347227 2445 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"469809a51e23a961dabda4e7f122dcc9c6d9e489ad6e9a6071528ee9d8d9b1d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-bx8b5" Jan 24 00:33:37.347882 kubelet[2445]: E0124 00:33:37.347311 2445 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-7fcdb87857-bx8b5_default(e455284b-9286-4ee0-9ecc-254a7e2e56a0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-7fcdb87857-bx8b5_default(e455284b-9286-4ee0-9ecc-254a7e2e56a0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"469809a51e23a961dabda4e7f122dcc9c6d9e489ad6e9a6071528ee9d8d9b1d9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-7fcdb87857-bx8b5" podUID="e455284b-9286-4ee0-9ecc-254a7e2e56a0" Jan 24 00:33:37.690707 kubelet[2445]: E0124 00:33:37.690558 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:33:37.866653 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1050818400.mount: Deactivated successfully. Jan 24 00:33:37.892701 kubelet[2445]: I0124 00:33:37.892669 2445 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="469809a51e23a961dabda4e7f122dcc9c6d9e489ad6e9a6071528ee9d8d9b1d9" Jan 24 00:33:37.893445 containerd[1999]: time="2026-01-24T00:33:37.893408092Z" level=info msg="StopPodSandbox for \"469809a51e23a961dabda4e7f122dcc9c6d9e489ad6e9a6071528ee9d8d9b1d9\"" Jan 24 00:33:37.893884 containerd[1999]: time="2026-01-24T00:33:37.893663846Z" level=info msg="Ensure that sandbox 469809a51e23a961dabda4e7f122dcc9c6d9e489ad6e9a6071528ee9d8d9b1d9 in task-service has been cleanup successfully" Jan 24 00:33:37.926303 containerd[1999]: time="2026-01-24T00:33:37.926222481Z" level=error msg="StopPodSandbox for \"469809a51e23a961dabda4e7f122dcc9c6d9e489ad6e9a6071528ee9d8d9b1d9\" failed" error="failed to destroy network for sandbox \"469809a51e23a961dabda4e7f122dcc9c6d9e489ad6e9a6071528ee9d8d9b1d9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:33:37.926564 kubelet[2445]: E0124 00:33:37.926458 2445 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"469809a51e23a961dabda4e7f122dcc9c6d9e489ad6e9a6071528ee9d8d9b1d9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="469809a51e23a961dabda4e7f122dcc9c6d9e489ad6e9a6071528ee9d8d9b1d9" Jan 24 00:33:37.926564 kubelet[2445]: E0124 00:33:37.926505 2445 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"469809a51e23a961dabda4e7f122dcc9c6d9e489ad6e9a6071528ee9d8d9b1d9"} Jan 24 00:33:37.926564 kubelet[2445]: E0124 00:33:37.926540 2445 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e455284b-9286-4ee0-9ecc-254a7e2e56a0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = 
failed to destroy network for sandbox \\\"469809a51e23a961dabda4e7f122dcc9c6d9e489ad6e9a6071528ee9d8d9b1d9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 00:33:37.926715 kubelet[2445]: E0124 00:33:37.926562 2445 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e455284b-9286-4ee0-9ecc-254a7e2e56a0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"469809a51e23a961dabda4e7f122dcc9c6d9e489ad6e9a6071528ee9d8d9b1d9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-7fcdb87857-bx8b5" podUID="e455284b-9286-4ee0-9ecc-254a7e2e56a0" Jan 24 00:33:37.984805 containerd[1999]: time="2026-01-24T00:33:37.984675136Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:33:37.993788 containerd[1999]: time="2026-01-24T00:33:37.993653743Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Jan 24 00:33:38.005256 containerd[1999]: time="2026-01-24T00:33:38.005160259Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:33:38.018056 containerd[1999]: time="2026-01-24T00:33:38.017922400Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:33:38.020267 containerd[1999]: time="2026-01-24T00:33:38.020219914Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 6.14058094s" Jan 24 00:33:38.020267 containerd[1999]: time="2026-01-24T00:33:38.020267138Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 24 00:33:38.053411 containerd[1999]: time="2026-01-24T00:33:38.053355088Z" level=info msg="CreateContainer within sandbox \"0fced2c133c25610bcb496203e9513d79bcb5cd71be08aff9c034d22dad9df5f\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 24 00:33:38.441713 containerd[1999]: time="2026-01-24T00:33:38.441668435Z" level=info msg="CreateContainer within sandbox \"0fced2c133c25610bcb496203e9513d79bcb5cd71be08aff9c034d22dad9df5f\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"5c1252abd2943e1dd2108c8d4ac869af83e215289e7af4b3ea89c0badd821dde\"" Jan 24 00:33:38.442451 containerd[1999]: time="2026-01-24T00:33:38.442398059Z" level=info msg="StartContainer for \"5c1252abd2943e1dd2108c8d4ac869af83e215289e7af4b3ea89c0badd821dde\"" Jan 24 00:33:38.560412 systemd[1]: Started cri-containerd-5c1252abd2943e1dd2108c8d4ac869af83e215289e7af4b3ea89c0badd821dde.scope - libcontainer container 
5c1252abd2943e1dd2108c8d4ac869af83e215289e7af4b3ea89c0badd821dde. Jan 24 00:33:38.614210 containerd[1999]: time="2026-01-24T00:33:38.614139311Z" level=info msg="StartContainer for \"5c1252abd2943e1dd2108c8d4ac869af83e215289e7af4b3ea89c0badd821dde\" returns successfully" Jan 24 00:33:38.691508 kubelet[2445]: E0124 00:33:38.691297 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:33:38.727134 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jan 24 00:33:38.765218 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 24 00:33:38.765337 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jan 24 00:33:39.674492 kubelet[2445]: E0124 00:33:39.674432 2445 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:33:39.692095 kubelet[2445]: E0124 00:33:39.692058 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:33:40.344255 kernel: bpftool[3250]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 24 00:33:40.577305 (udev-worker)[3267]: Network interface NamePolicy= disabled on kernel command line. Jan 24 00:33:40.583470 systemd-networkd[1621]: vxlan.calico: Link UP Jan 24 00:33:40.583481 systemd-networkd[1621]: vxlan.calico: Gained carrier Jan 24 00:33:40.617773 (udev-worker)[3102]: Network interface NamePolicy= disabled on kernel command line. Jan 24 00:33:40.693069 kubelet[2445]: E0124 00:33:40.693020 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:33:41.693812 kubelet[2445]: E0124 00:33:41.693758 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:33:41.963662 systemd-networkd[1621]: vxlan.calico: Gained IPv6LL Jan 24 00:33:42.694117 kubelet[2445]: E0124 00:33:42.694075 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:33:43.694414 kubelet[2445]: E0124 00:33:43.694362 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:33:44.149862 ntpd[1965]: Listen normally on 7 vxlan.calico 192.168.30.192:123 Jan 24 00:33:44.149938 ntpd[1965]: Listen normally on 8 vxlan.calico [fe80::6408:a0ff:fe01:c175%3]:123 Jan 24 00:33:44.150323 ntpd[1965]: 24 Jan 00:33:44 ntpd[1965]: Listen normally on 7 vxlan.calico 192.168.30.192:123 Jan 24 00:33:44.150323 ntpd[1965]: 24 Jan 00:33:44 ntpd[1965]: Listen normally on 8 vxlan.calico [fe80::6408:a0ff:fe01:c175%3]:123 Jan 24 00:33:44.694993 kubelet[2445]: E0124 00:33:44.694936 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:33:45.695539 kubelet[2445]: E0124 00:33:45.695487 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:33:45.839634 containerd[1999]: time="2026-01-24T00:33:45.838332865Z" level=info msg="StopPodSandbox for \"296eb00d4ebed619003079d93c638b78c49fa8a01f026e8278f0231104382f86\"" Jan 24 00:33:45.862485 kubelet[2445]: I0124 00:33:45.862438 2445 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 24 00:33:45.895552 systemd[1]: 
run-containerd-runc-k8s.io-5c1252abd2943e1dd2108c8d4ac869af83e215289e7af4b3ea89c0badd821dde-runc.NzhWKX.mount: Deactivated successfully. Jan 24 00:33:45.985510 kubelet[2445]: I0124 00:33:45.985370 2445 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-s88k7" podStartSLOduration=12.076465099 podStartE2EDuration="26.985326077s" podCreationTimestamp="2026-01-24 00:33:19 +0000 UTC" firstStartedPulling="2026-01-24 00:33:23.114018634 +0000 UTC m=+3.913262672" lastFinishedPulling="2026-01-24 00:33:38.02287961 +0000 UTC m=+18.822123650" observedRunningTime="2026-01-24 00:33:38.919050696 +0000 UTC m=+19.718294773" watchObservedRunningTime="2026-01-24 00:33:45.985326077 +0000 UTC m=+26.784570132" Jan 24 00:33:46.161854 containerd[1999]: 2026-01-24 00:33:45.985 [INFO][3331] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="296eb00d4ebed619003079d93c638b78c49fa8a01f026e8278f0231104382f86" Jan 24 00:33:46.161854 containerd[1999]: 2026-01-24 00:33:45.993 [INFO][3331] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="296eb00d4ebed619003079d93c638b78c49fa8a01f026e8278f0231104382f86" iface="eth0" netns="/var/run/netns/cni-04b382d8-f8ce-e585-47d0-caaf9312b508" Jan 24 00:33:46.161854 containerd[1999]: 2026-01-24 00:33:45.995 [INFO][3331] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="296eb00d4ebed619003079d93c638b78c49fa8a01f026e8278f0231104382f86" iface="eth0" netns="/var/run/netns/cni-04b382d8-f8ce-e585-47d0-caaf9312b508" Jan 24 00:33:46.161854 containerd[1999]: 2026-01-24 00:33:45.997 [INFO][3331] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="296eb00d4ebed619003079d93c638b78c49fa8a01f026e8278f0231104382f86" iface="eth0" netns="/var/run/netns/cni-04b382d8-f8ce-e585-47d0-caaf9312b508" Jan 24 00:33:46.161854 containerd[1999]: 2026-01-24 00:33:45.997 [INFO][3331] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="296eb00d4ebed619003079d93c638b78c49fa8a01f026e8278f0231104382f86" Jan 24 00:33:46.161854 containerd[1999]: 2026-01-24 00:33:45.997 [INFO][3331] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="296eb00d4ebed619003079d93c638b78c49fa8a01f026e8278f0231104382f86" Jan 24 00:33:46.161854 containerd[1999]: 2026-01-24 00:33:46.144 [INFO][3367] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="296eb00d4ebed619003079d93c638b78c49fa8a01f026e8278f0231104382f86" HandleID="k8s-pod-network.296eb00d4ebed619003079d93c638b78c49fa8a01f026e8278f0231104382f86" Workload="172.31.18.176-k8s-csi--node--driver--bqx9m-eth0" Jan 24 00:33:46.161854 containerd[1999]: 2026-01-24 00:33:46.144 [INFO][3367] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:33:46.161854 containerd[1999]: 2026-01-24 00:33:46.144 [INFO][3367] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:33:46.161854 containerd[1999]: 2026-01-24 00:33:46.156 [WARNING][3367] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="296eb00d4ebed619003079d93c638b78c49fa8a01f026e8278f0231104382f86" HandleID="k8s-pod-network.296eb00d4ebed619003079d93c638b78c49fa8a01f026e8278f0231104382f86" Workload="172.31.18.176-k8s-csi--node--driver--bqx9m-eth0" Jan 24 00:33:46.161854 containerd[1999]: 2026-01-24 00:33:46.156 [INFO][3367] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="296eb00d4ebed619003079d93c638b78c49fa8a01f026e8278f0231104382f86" HandleID="k8s-pod-network.296eb00d4ebed619003079d93c638b78c49fa8a01f026e8278f0231104382f86" Workload="172.31.18.176-k8s-csi--node--driver--bqx9m-eth0" Jan 24 00:33:46.161854 containerd[1999]: 2026-01-24 00:33:46.157 [INFO][3367] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:33:46.161854 containerd[1999]: 2026-01-24 00:33:46.160 [INFO][3331] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="296eb00d4ebed619003079d93c638b78c49fa8a01f026e8278f0231104382f86" Jan 24 00:33:46.164381 containerd[1999]: time="2026-01-24T00:33:46.161988832Z" level=info msg="TearDown network for sandbox \"296eb00d4ebed619003079d93c638b78c49fa8a01f026e8278f0231104382f86\" successfully" Jan 24 00:33:46.164381 containerd[1999]: time="2026-01-24T00:33:46.162017746Z" level=info msg="StopPodSandbox for \"296eb00d4ebed619003079d93c638b78c49fa8a01f026e8278f0231104382f86\" returns successfully" Jan 24 00:33:46.164805 systemd[1]: run-netns-cni\x2d04b382d8\x2df8ce\x2de585\x2d47d0\x2dcaaf9312b508.mount: Deactivated successfully. Jan 24 00:33:46.165302 containerd[1999]: time="2026-01-24T00:33:46.164817539Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bqx9m,Uid:dc56aca3-e78d-4c4c-9e51-d34a825d2bbf,Namespace:calico-system,Attempt:1,}" Jan 24 00:33:46.297437 systemd-networkd[1621]: calif031b7a370a: Link UP Jan 24 00:33:46.299569 (udev-worker)[3414]: Network interface NamePolicy= disabled on kernel command line. 
Jan 24 00:33:46.299985 systemd-networkd[1621]: calif031b7a370a: Gained carrier Jan 24 00:33:46.317332 containerd[1999]: 2026-01-24 00:33:46.215 [INFO][3395] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.18.176-k8s-csi--node--driver--bqx9m-eth0 csi-node-driver- calico-system dc56aca3-e78d-4c4c-9e51-d34a825d2bbf 1250 0 2026-01-24 00:33:19 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 172.31.18.176 csi-node-driver-bqx9m eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calif031b7a370a [] [] }} ContainerID="cfc9eac2ee37d856996d068bd7f4e41bd4838d56c1d652d6ded6d2e099246192" Namespace="calico-system" Pod="csi-node-driver-bqx9m" WorkloadEndpoint="172.31.18.176-k8s-csi--node--driver--bqx9m-" Jan 24 00:33:46.317332 containerd[1999]: 2026-01-24 00:33:46.216 [INFO][3395] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cfc9eac2ee37d856996d068bd7f4e41bd4838d56c1d652d6ded6d2e099246192" Namespace="calico-system" Pod="csi-node-driver-bqx9m" WorkloadEndpoint="172.31.18.176-k8s-csi--node--driver--bqx9m-eth0" Jan 24 00:33:46.317332 containerd[1999]: 2026-01-24 00:33:46.243 [INFO][3407] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cfc9eac2ee37d856996d068bd7f4e41bd4838d56c1d652d6ded6d2e099246192" HandleID="k8s-pod-network.cfc9eac2ee37d856996d068bd7f4e41bd4838d56c1d652d6ded6d2e099246192" Workload="172.31.18.176-k8s-csi--node--driver--bqx9m-eth0" Jan 24 00:33:46.317332 containerd[1999]: 2026-01-24 00:33:46.243 [INFO][3407] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="cfc9eac2ee37d856996d068bd7f4e41bd4838d56c1d652d6ded6d2e099246192" HandleID="k8s-pod-network.cfc9eac2ee37d856996d068bd7f4e41bd4838d56c1d652d6ded6d2e099246192" Workload="172.31.18.176-k8s-csi--node--driver--bqx9m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f220), Attrs:map[string]string{"namespace":"calico-system", "node":"172.31.18.176", "pod":"csi-node-driver-bqx9m", "timestamp":"2026-01-24 00:33:46.243810495 +0000 UTC"}, Hostname:"172.31.18.176", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:33:46.317332 containerd[1999]: 2026-01-24 00:33:46.244 [INFO][3407] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:33:46.317332 containerd[1999]: 2026-01-24 00:33:46.244 [INFO][3407] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:33:46.317332 containerd[1999]: 2026-01-24 00:33:46.244 [INFO][3407] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.18.176' Jan 24 00:33:46.317332 containerd[1999]: 2026-01-24 00:33:46.252 [INFO][3407] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cfc9eac2ee37d856996d068bd7f4e41bd4838d56c1d652d6ded6d2e099246192" host="172.31.18.176" Jan 24 00:33:46.317332 containerd[1999]: 2026-01-24 00:33:46.262 [INFO][3407] ipam/ipam.go 394: Looking up existing affinities for host host="172.31.18.176" Jan 24 00:33:46.317332 containerd[1999]: 2026-01-24 00:33:46.267 [INFO][3407] ipam/ipam.go 511: Trying affinity for 192.168.30.192/26 host="172.31.18.176" Jan 24 00:33:46.317332 containerd[1999]: 2026-01-24 00:33:46.269 [INFO][3407] ipam/ipam.go 158: Attempting to load block cidr=192.168.30.192/26 host="172.31.18.176" Jan 24 00:33:46.317332 containerd[1999]: 2026-01-24 00:33:46.272 [INFO][3407] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.30.192/26 host="172.31.18.176" Jan 24 00:33:46.317332 containerd[1999]: 2026-01-24 00:33:46.272 [INFO][3407] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.30.192/26 handle="k8s-pod-network.cfc9eac2ee37d856996d068bd7f4e41bd4838d56c1d652d6ded6d2e099246192" host="172.31.18.176" Jan 24 00:33:46.317332 containerd[1999]: 2026-01-24 00:33:46.274 [INFO][3407] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.cfc9eac2ee37d856996d068bd7f4e41bd4838d56c1d652d6ded6d2e099246192 Jan 24 00:33:46.317332 containerd[1999]: 2026-01-24 00:33:46.279 [INFO][3407] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.30.192/26 handle="k8s-pod-network.cfc9eac2ee37d856996d068bd7f4e41bd4838d56c1d652d6ded6d2e099246192" host="172.31.18.176" Jan 24 00:33:46.317332 containerd[1999]: 2026-01-24 00:33:46.286 [INFO][3407] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.30.193/26] block=192.168.30.192/26 handle="k8s-pod-network.cfc9eac2ee37d856996d068bd7f4e41bd4838d56c1d652d6ded6d2e099246192" host="172.31.18.176" Jan 24 00:33:46.317332 containerd[1999]: 2026-01-24 00:33:46.287 [INFO][3407] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.30.193/26] handle="k8s-pod-network.cfc9eac2ee37d856996d068bd7f4e41bd4838d56c1d652d6ded6d2e099246192" host="172.31.18.176" Jan 24 00:33:46.317332 containerd[1999]: 2026-01-24 00:33:46.287 [INFO][3407] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
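The IPAM exchange above shows host 172.31.18.176 confirming affinity for the block 192.168.30.192/26 (Calico's default IPAM block size is a /26, i.e. 64 addresses) and claiming 192.168.30.193 for the new endpoint. The block arithmetic is easy to sanity-check with the standard library:

    import ipaddress

    # Block the host holds an affinity for, as reported by ipam/ipam.go above.
    block = ipaddress.ip_network("192.168.30.192/26")

    print(block.num_addresses)      # 64 addresses in a /26 block
    print(block.network_address)    # 192.168.30.192
    print(next(block.hosts()))      # 192.168.30.193 -- the address claimed above
    print(block.broadcast_address)  # 192.168.30.255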
Jan 24 00:33:46.317332 containerd[1999]: 2026-01-24 00:33:46.287 [INFO][3407] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.30.193/26] IPv6=[] ContainerID="cfc9eac2ee37d856996d068bd7f4e41bd4838d56c1d652d6ded6d2e099246192" HandleID="k8s-pod-network.cfc9eac2ee37d856996d068bd7f4e41bd4838d56c1d652d6ded6d2e099246192" Workload="172.31.18.176-k8s-csi--node--driver--bqx9m-eth0" Jan 24 00:33:46.318981 containerd[1999]: 2026-01-24 00:33:46.289 [INFO][3395] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cfc9eac2ee37d856996d068bd7f4e41bd4838d56c1d652d6ded6d2e099246192" Namespace="calico-system" Pod="csi-node-driver-bqx9m" WorkloadEndpoint="172.31.18.176-k8s-csi--node--driver--bqx9m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.18.176-k8s-csi--node--driver--bqx9m-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"dc56aca3-e78d-4c4c-9e51-d34a825d2bbf", ResourceVersion:"1250", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 33, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.18.176", ContainerID:"", Pod:"csi-node-driver-bqx9m", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.30.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif031b7a370a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:33:46.318981 containerd[1999]: 2026-01-24 00:33:46.290 [INFO][3395] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.30.193/32] ContainerID="cfc9eac2ee37d856996d068bd7f4e41bd4838d56c1d652d6ded6d2e099246192" Namespace="calico-system" Pod="csi-node-driver-bqx9m" WorkloadEndpoint="172.31.18.176-k8s-csi--node--driver--bqx9m-eth0" Jan 24 00:33:46.318981 containerd[1999]: 2026-01-24 00:33:46.290 [INFO][3395] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif031b7a370a ContainerID="cfc9eac2ee37d856996d068bd7f4e41bd4838d56c1d652d6ded6d2e099246192" Namespace="calico-system" Pod="csi-node-driver-bqx9m" WorkloadEndpoint="172.31.18.176-k8s-csi--node--driver--bqx9m-eth0" Jan 24 00:33:46.318981 containerd[1999]: 2026-01-24 00:33:46.302 [INFO][3395] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cfc9eac2ee37d856996d068bd7f4e41bd4838d56c1d652d6ded6d2e099246192" Namespace="calico-system" Pod="csi-node-driver-bqx9m" WorkloadEndpoint="172.31.18.176-k8s-csi--node--driver--bqx9m-eth0" Jan 24 00:33:46.318981 containerd[1999]: 2026-01-24 00:33:46.303 [INFO][3395] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="cfc9eac2ee37d856996d068bd7f4e41bd4838d56c1d652d6ded6d2e099246192" Namespace="calico-system" 
Pod="csi-node-driver-bqx9m" WorkloadEndpoint="172.31.18.176-k8s-csi--node--driver--bqx9m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.18.176-k8s-csi--node--driver--bqx9m-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"dc56aca3-e78d-4c4c-9e51-d34a825d2bbf", ResourceVersion:"1250", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 33, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.18.176", ContainerID:"cfc9eac2ee37d856996d068bd7f4e41bd4838d56c1d652d6ded6d2e099246192", Pod:"csi-node-driver-bqx9m", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.30.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif031b7a370a", MAC:"e6:f0:95:38:c0:4d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:33:46.318981 containerd[1999]: 2026-01-24 00:33:46.314 [INFO][3395] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cfc9eac2ee37d856996d068bd7f4e41bd4838d56c1d652d6ded6d2e099246192" Namespace="calico-system" Pod="csi-node-driver-bqx9m" WorkloadEndpoint="172.31.18.176-k8s-csi--node--driver--bqx9m-eth0" Jan 24 00:33:46.341039 containerd[1999]: time="2026-01-24T00:33:46.340950553Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:33:46.341039 containerd[1999]: time="2026-01-24T00:33:46.341002767Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:33:46.341879 containerd[1999]: time="2026-01-24T00:33:46.341014096Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:33:46.341879 containerd[1999]: time="2026-01-24T00:33:46.341325940Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:33:46.363565 systemd[1]: Started cri-containerd-cfc9eac2ee37d856996d068bd7f4e41bd4838d56c1d652d6ded6d2e099246192.scope - libcontainer container cfc9eac2ee37d856996d068bd7f4e41bd4838d56c1d652d6ded6d2e099246192. 
Jan 24 00:33:46.392328 containerd[1999]: time="2026-01-24T00:33:46.392146946Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bqx9m,Uid:dc56aca3-e78d-4c4c-9e51-d34a825d2bbf,Namespace:calico-system,Attempt:1,} returns sandbox id \"cfc9eac2ee37d856996d068bd7f4e41bd4838d56c1d652d6ded6d2e099246192\"" Jan 24 00:33:46.395101 containerd[1999]: time="2026-01-24T00:33:46.395062241Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 24 00:33:46.696422 kubelet[2445]: E0124 00:33:46.696381 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:33:46.833697 containerd[1999]: time="2026-01-24T00:33:46.833641524Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:33:46.834810 containerd[1999]: time="2026-01-24T00:33:46.834762615Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 24 00:33:46.834915 containerd[1999]: time="2026-01-24T00:33:46.834769614Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 24 00:33:46.835054 kubelet[2445]: E0124 00:33:46.835016 2445 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:33:46.835115 kubelet[2445]: E0124 00:33:46.835068 2445 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:33:46.835435 kubelet[2445]: E0124 00:33:46.835234 2445 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qtphm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-bqx9m_calico-system(dc56aca3-e78d-4c4c-9e51-d34a825d2bbf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 24 00:33:46.837543 containerd[1999]: time="2026-01-24T00:33:46.837510870Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 24 00:33:47.107547 containerd[1999]: time="2026-01-24T00:33:47.107402816Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:33:47.108622 containerd[1999]: time="2026-01-24T00:33:47.108581086Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 24 00:33:47.108738 containerd[1999]: time="2026-01-24T00:33:47.108683988Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 24 00:33:47.108914 kubelet[2445]: E0124 00:33:47.108878 2445 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:33:47.108993 kubelet[2445]: E0124 00:33:47.108928 2445 
kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:33:47.109132 kubelet[2445]: E0124 00:33:47.109081 2445 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qtphm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-bqx9m_calico-system(dc56aca3-e78d-4c4c-9e51-d34a825d2bbf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 24 00:33:47.110535 kubelet[2445]: E0124 00:33:47.110492 2445 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: 
not found\"]" pod="calico-system/csi-node-driver-bqx9m" podUID="dc56aca3-e78d-4c4c-9e51-d34a825d2bbf" Jan 24 00:33:47.697250 kubelet[2445]: E0124 00:33:47.697189 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:33:47.921466 kubelet[2445]: E0124 00:33:47.921425 2445 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-bqx9m" podUID="dc56aca3-e78d-4c4c-9e51-d34a825d2bbf" Jan 24 00:33:48.363686 systemd-networkd[1621]: calif031b7a370a: Gained IPv6LL Jan 24 00:33:48.697981 kubelet[2445]: E0124 00:33:48.697912 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:33:49.699127 kubelet[2445]: E0124 00:33:49.699085 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:33:50.700079 kubelet[2445]: E0124 00:33:50.700027 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:33:51.149977 ntpd[1965]: Listen normally on 9 calif031b7a370a [fe80::ecee:eeff:feee:eeee%6]:123 Jan 24 00:33:51.150446 ntpd[1965]: 24 Jan 00:33:51 ntpd[1965]: Listen normally on 9 calif031b7a370a [fe80::ecee:eeff:feee:eeee%6]:123 Jan 24 00:33:51.700337 kubelet[2445]: E0124 00:33:51.700296 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:33:51.835896 containerd[1999]: time="2026-01-24T00:33:51.835456540Z" level=info msg="StopPodSandbox for \"469809a51e23a961dabda4e7f122dcc9c6d9e489ad6e9a6071528ee9d8d9b1d9\"" Jan 24 00:33:51.920542 containerd[1999]: 2026-01-24 00:33:51.882 [INFO][3484] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="469809a51e23a961dabda4e7f122dcc9c6d9e489ad6e9a6071528ee9d8d9b1d9" Jan 24 00:33:51.920542 containerd[1999]: 2026-01-24 00:33:51.882 [INFO][3484] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="469809a51e23a961dabda4e7f122dcc9c6d9e489ad6e9a6071528ee9d8d9b1d9" iface="eth0" netns="/var/run/netns/cni-95fe4585-c108-b049-85aa-518f1611fa34" Jan 24 00:33:51.920542 containerd[1999]: 2026-01-24 00:33:51.883 [INFO][3484] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="469809a51e23a961dabda4e7f122dcc9c6d9e489ad6e9a6071528ee9d8d9b1d9" iface="eth0" netns="/var/run/netns/cni-95fe4585-c108-b049-85aa-518f1611fa34" Jan 24 00:33:51.920542 containerd[1999]: 2026-01-24 00:33:51.884 [INFO][3484] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="469809a51e23a961dabda4e7f122dcc9c6d9e489ad6e9a6071528ee9d8d9b1d9" iface="eth0" netns="/var/run/netns/cni-95fe4585-c108-b049-85aa-518f1611fa34" Jan 24 00:33:51.920542 containerd[1999]: 2026-01-24 00:33:51.884 [INFO][3484] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="469809a51e23a961dabda4e7f122dcc9c6d9e489ad6e9a6071528ee9d8d9b1d9" Jan 24 00:33:51.920542 containerd[1999]: 2026-01-24 00:33:51.884 [INFO][3484] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="469809a51e23a961dabda4e7f122dcc9c6d9e489ad6e9a6071528ee9d8d9b1d9" Jan 24 00:33:51.920542 containerd[1999]: 2026-01-24 00:33:51.908 [INFO][3491] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="469809a51e23a961dabda4e7f122dcc9c6d9e489ad6e9a6071528ee9d8d9b1d9" HandleID="k8s-pod-network.469809a51e23a961dabda4e7f122dcc9c6d9e489ad6e9a6071528ee9d8d9b1d9" Workload="172.31.18.176-k8s-nginx--deployment--7fcdb87857--bx8b5-eth0" Jan 24 00:33:51.920542 containerd[1999]: 2026-01-24 00:33:51.908 [INFO][3491] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:33:51.920542 containerd[1999]: 2026-01-24 00:33:51.908 [INFO][3491] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:33:51.920542 containerd[1999]: 2026-01-24 00:33:51.914 [WARNING][3491] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="469809a51e23a961dabda4e7f122dcc9c6d9e489ad6e9a6071528ee9d8d9b1d9" HandleID="k8s-pod-network.469809a51e23a961dabda4e7f122dcc9c6d9e489ad6e9a6071528ee9d8d9b1d9" Workload="172.31.18.176-k8s-nginx--deployment--7fcdb87857--bx8b5-eth0" Jan 24 00:33:51.920542 containerd[1999]: 2026-01-24 00:33:51.915 [INFO][3491] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="469809a51e23a961dabda4e7f122dcc9c6d9e489ad6e9a6071528ee9d8d9b1d9" HandleID="k8s-pod-network.469809a51e23a961dabda4e7f122dcc9c6d9e489ad6e9a6071528ee9d8d9b1d9" Workload="172.31.18.176-k8s-nginx--deployment--7fcdb87857--bx8b5-eth0" Jan 24 00:33:51.920542 containerd[1999]: 2026-01-24 00:33:51.917 [INFO][3491] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:33:51.920542 containerd[1999]: 2026-01-24 00:33:51.918 [INFO][3484] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="469809a51e23a961dabda4e7f122dcc9c6d9e489ad6e9a6071528ee9d8d9b1d9" Jan 24 00:33:51.922316 containerd[1999]: time="2026-01-24T00:33:51.922281384Z" level=info msg="TearDown network for sandbox \"469809a51e23a961dabda4e7f122dcc9c6d9e489ad6e9a6071528ee9d8d9b1d9\" successfully" Jan 24 00:33:51.922316 containerd[1999]: time="2026-01-24T00:33:51.922313406Z" level=info msg="StopPodSandbox for \"469809a51e23a961dabda4e7f122dcc9c6d9e489ad6e9a6071528ee9d8d9b1d9\" returns successfully" Jan 24 00:33:51.923496 containerd[1999]: time="2026-01-24T00:33:51.923471783Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-bx8b5,Uid:e455284b-9286-4ee0-9ecc-254a7e2e56a0,Namespace:default,Attempt:1,}" Jan 24 00:33:51.924278 systemd[1]: run-netns-cni\x2d95fe4585\x2dc108\x2db049\x2d85aa\x2d518f1611fa34.mount: Deactivated successfully. Jan 24 00:33:52.053952 systemd-networkd[1621]: cali7630f124611: Link UP Jan 24 00:33:52.056037 systemd-networkd[1621]: cali7630f124611: Gained carrier Jan 24 00:33:52.057077 (udev-worker)[3516]: Network interface NamePolicy= disabled on kernel command line. 
Jan 24 00:33:52.069018 containerd[1999]: 2026-01-24 00:33:51.974 [INFO][3497] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.18.176-k8s-nginx--deployment--7fcdb87857--bx8b5-eth0 nginx-deployment-7fcdb87857- default e455284b-9286-4ee0-9ecc-254a7e2e56a0 1298 0 2026-01-24 00:33:36 +0000 UTC map[app:nginx pod-template-hash:7fcdb87857 projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 172.31.18.176 nginx-deployment-7fcdb87857-bx8b5 eth0 default [] [] [kns.default ksa.default.default] cali7630f124611 [] [] }} ContainerID="b4b75aaca2751d0d96ac7c3b9a9efa8410b6f38cbd29153115916defcfab8f96" Namespace="default" Pod="nginx-deployment-7fcdb87857-bx8b5" WorkloadEndpoint="172.31.18.176-k8s-nginx--deployment--7fcdb87857--bx8b5-" Jan 24 00:33:52.069018 containerd[1999]: 2026-01-24 00:33:51.974 [INFO][3497] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b4b75aaca2751d0d96ac7c3b9a9efa8410b6f38cbd29153115916defcfab8f96" Namespace="default" Pod="nginx-deployment-7fcdb87857-bx8b5" WorkloadEndpoint="172.31.18.176-k8s-nginx--deployment--7fcdb87857--bx8b5-eth0" Jan 24 00:33:52.069018 containerd[1999]: 2026-01-24 00:33:51.999 [INFO][3509] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b4b75aaca2751d0d96ac7c3b9a9efa8410b6f38cbd29153115916defcfab8f96" HandleID="k8s-pod-network.b4b75aaca2751d0d96ac7c3b9a9efa8410b6f38cbd29153115916defcfab8f96" Workload="172.31.18.176-k8s-nginx--deployment--7fcdb87857--bx8b5-eth0" Jan 24 00:33:52.069018 containerd[1999]: 2026-01-24 00:33:52.000 [INFO][3509] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b4b75aaca2751d0d96ac7c3b9a9efa8410b6f38cbd29153115916defcfab8f96" HandleID="k8s-pod-network.b4b75aaca2751d0d96ac7c3b9a9efa8410b6f38cbd29153115916defcfab8f96" Workload="172.31.18.176-k8s-nginx--deployment--7fcdb87857--bx8b5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4fe0), Attrs:map[string]string{"namespace":"default", "node":"172.31.18.176", "pod":"nginx-deployment-7fcdb87857-bx8b5", "timestamp":"2026-01-24 00:33:51.999898565 +0000 UTC"}, Hostname:"172.31.18.176", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:33:52.069018 containerd[1999]: 2026-01-24 00:33:52.000 [INFO][3509] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:33:52.069018 containerd[1999]: 2026-01-24 00:33:52.000 [INFO][3509] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:33:52.069018 containerd[1999]: 2026-01-24 00:33:52.000 [INFO][3509] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.18.176' Jan 24 00:33:52.069018 containerd[1999]: 2026-01-24 00:33:52.007 [INFO][3509] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b4b75aaca2751d0d96ac7c3b9a9efa8410b6f38cbd29153115916defcfab8f96" host="172.31.18.176" Jan 24 00:33:52.069018 containerd[1999]: 2026-01-24 00:33:52.012 [INFO][3509] ipam/ipam.go 394: Looking up existing affinities for host host="172.31.18.176" Jan 24 00:33:52.069018 containerd[1999]: 2026-01-24 00:33:52.026 [INFO][3509] ipam/ipam.go 511: Trying affinity for 192.168.30.192/26 host="172.31.18.176" Jan 24 00:33:52.069018 containerd[1999]: 2026-01-24 00:33:52.028 [INFO][3509] ipam/ipam.go 158: Attempting to load block cidr=192.168.30.192/26 host="172.31.18.176" Jan 24 00:33:52.069018 containerd[1999]: 2026-01-24 00:33:52.031 [INFO][3509] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.30.192/26 host="172.31.18.176" Jan 24 00:33:52.069018 containerd[1999]: 2026-01-24 00:33:52.031 [INFO][3509] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.30.192/26 handle="k8s-pod-network.b4b75aaca2751d0d96ac7c3b9a9efa8410b6f38cbd29153115916defcfab8f96" host="172.31.18.176" Jan 24 00:33:52.069018 containerd[1999]: 2026-01-24 00:33:52.034 [INFO][3509] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b4b75aaca2751d0d96ac7c3b9a9efa8410b6f38cbd29153115916defcfab8f96 Jan 24 00:33:52.069018 containerd[1999]: 2026-01-24 00:33:52.039 [INFO][3509] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.30.192/26 handle="k8s-pod-network.b4b75aaca2751d0d96ac7c3b9a9efa8410b6f38cbd29153115916defcfab8f96" host="172.31.18.176" Jan 24 00:33:52.069018 containerd[1999]: 2026-01-24 00:33:52.047 [INFO][3509] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.30.194/26] block=192.168.30.192/26 handle="k8s-pod-network.b4b75aaca2751d0d96ac7c3b9a9efa8410b6f38cbd29153115916defcfab8f96" host="172.31.18.176" Jan 24 00:33:52.069018 containerd[1999]: 2026-01-24 00:33:52.047 [INFO][3509] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.30.194/26] handle="k8s-pod-network.b4b75aaca2751d0d96ac7c3b9a9efa8410b6f38cbd29153115916defcfab8f96" host="172.31.18.176" Jan 24 00:33:52.069018 containerd[1999]: 2026-01-24 00:33:52.047 [INFO][3509] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 24 00:33:52.069018 containerd[1999]: 2026-01-24 00:33:52.047 [INFO][3509] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.30.194/26] IPv6=[] ContainerID="b4b75aaca2751d0d96ac7c3b9a9efa8410b6f38cbd29153115916defcfab8f96" HandleID="k8s-pod-network.b4b75aaca2751d0d96ac7c3b9a9efa8410b6f38cbd29153115916defcfab8f96" Workload="172.31.18.176-k8s-nginx--deployment--7fcdb87857--bx8b5-eth0" Jan 24 00:33:52.069608 containerd[1999]: 2026-01-24 00:33:52.049 [INFO][3497] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b4b75aaca2751d0d96ac7c3b9a9efa8410b6f38cbd29153115916defcfab8f96" Namespace="default" Pod="nginx-deployment-7fcdb87857-bx8b5" WorkloadEndpoint="172.31.18.176-k8s-nginx--deployment--7fcdb87857--bx8b5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.18.176-k8s-nginx--deployment--7fcdb87857--bx8b5-eth0", GenerateName:"nginx-deployment-7fcdb87857-", Namespace:"default", SelfLink:"", UID:"e455284b-9286-4ee0-9ecc-254a7e2e56a0", ResourceVersion:"1298", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 33, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"7fcdb87857", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.18.176", ContainerID:"", Pod:"nginx-deployment-7fcdb87857-bx8b5", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.30.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali7630f124611", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:33:52.069608 containerd[1999]: 2026-01-24 00:33:52.049 [INFO][3497] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.30.194/32] ContainerID="b4b75aaca2751d0d96ac7c3b9a9efa8410b6f38cbd29153115916defcfab8f96" Namespace="default" Pod="nginx-deployment-7fcdb87857-bx8b5" WorkloadEndpoint="172.31.18.176-k8s-nginx--deployment--7fcdb87857--bx8b5-eth0" Jan 24 00:33:52.069608 containerd[1999]: 2026-01-24 00:33:52.049 [INFO][3497] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7630f124611 ContainerID="b4b75aaca2751d0d96ac7c3b9a9efa8410b6f38cbd29153115916defcfab8f96" Namespace="default" Pod="nginx-deployment-7fcdb87857-bx8b5" WorkloadEndpoint="172.31.18.176-k8s-nginx--deployment--7fcdb87857--bx8b5-eth0" Jan 24 00:33:52.069608 containerd[1999]: 2026-01-24 00:33:52.055 [INFO][3497] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b4b75aaca2751d0d96ac7c3b9a9efa8410b6f38cbd29153115916defcfab8f96" Namespace="default" Pod="nginx-deployment-7fcdb87857-bx8b5" WorkloadEndpoint="172.31.18.176-k8s-nginx--deployment--7fcdb87857--bx8b5-eth0" Jan 24 00:33:52.069608 containerd[1999]: 2026-01-24 00:33:52.058 [INFO][3497] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b4b75aaca2751d0d96ac7c3b9a9efa8410b6f38cbd29153115916defcfab8f96" Namespace="default" Pod="nginx-deployment-7fcdb87857-bx8b5" 
WorkloadEndpoint="172.31.18.176-k8s-nginx--deployment--7fcdb87857--bx8b5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.18.176-k8s-nginx--deployment--7fcdb87857--bx8b5-eth0", GenerateName:"nginx-deployment-7fcdb87857-", Namespace:"default", SelfLink:"", UID:"e455284b-9286-4ee0-9ecc-254a7e2e56a0", ResourceVersion:"1298", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 33, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"7fcdb87857", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.18.176", ContainerID:"b4b75aaca2751d0d96ac7c3b9a9efa8410b6f38cbd29153115916defcfab8f96", Pod:"nginx-deployment-7fcdb87857-bx8b5", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.30.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali7630f124611", MAC:"de:64:10:10:bc:70", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:33:52.069608 containerd[1999]: 2026-01-24 00:33:52.066 [INFO][3497] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b4b75aaca2751d0d96ac7c3b9a9efa8410b6f38cbd29153115916defcfab8f96" Namespace="default" Pod="nginx-deployment-7fcdb87857-bx8b5" WorkloadEndpoint="172.31.18.176-k8s-nginx--deployment--7fcdb87857--bx8b5-eth0" Jan 24 00:33:52.091771 containerd[1999]: time="2026-01-24T00:33:52.091502271Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:33:52.091936 containerd[1999]: time="2026-01-24T00:33:52.091573824Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:33:52.091936 containerd[1999]: time="2026-01-24T00:33:52.091855481Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:33:52.092118 containerd[1999]: time="2026-01-24T00:33:52.092046166Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:33:52.118388 systemd[1]: Started cri-containerd-b4b75aaca2751d0d96ac7c3b9a9efa8410b6f38cbd29153115916defcfab8f96.scope - libcontainer container b4b75aaca2751d0d96ac7c3b9a9efa8410b6f38cbd29153115916defcfab8f96. 
Jan 24 00:33:52.170290 containerd[1999]: time="2026-01-24T00:33:52.170242966Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-bx8b5,Uid:e455284b-9286-4ee0-9ecc-254a7e2e56a0,Namespace:default,Attempt:1,} returns sandbox id \"b4b75aaca2751d0d96ac7c3b9a9efa8410b6f38cbd29153115916defcfab8f96\"" Jan 24 00:33:52.172316 containerd[1999]: time="2026-01-24T00:33:52.172270042Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 24 00:33:52.701037 kubelet[2445]: E0124 00:33:52.700966 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:33:52.923103 systemd[1]: run-containerd-runc-k8s.io-b4b75aaca2751d0d96ac7c3b9a9efa8410b6f38cbd29153115916defcfab8f96-runc.hxHxIo.mount: Deactivated successfully. Jan 24 00:33:53.634024 update_engine[1973]: I20260124 00:33:53.633224 1973 update_attempter.cc:509] Updating boot flags... Jan 24 00:33:53.702045 kubelet[2445]: E0124 00:33:53.701986 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:33:53.729914 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 33 scanned by (udev-worker) (3587) Jan 24 00:33:53.996077 systemd-networkd[1621]: cali7630f124611: Gained IPv6LL Jan 24 00:33:54.016458 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 33 scanned by (udev-worker) (3518) Jan 24 00:33:54.702406 kubelet[2445]: E0124 00:33:54.702342 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:33:55.121227 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount765982851.mount: Deactivated successfully. Jan 24 00:33:55.702861 kubelet[2445]: E0124 00:33:55.702826 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:33:56.149868 ntpd[1965]: Listen normally on 10 cali7630f124611 [fe80::ecee:eeff:feee:eeee%7]:123 Jan 24 00:33:56.150251 ntpd[1965]: 24 Jan 00:33:56 ntpd[1965]: Listen normally on 10 cali7630f124611 [fe80::ecee:eeff:feee:eeee%7]:123 Jan 24 00:33:56.176931 containerd[1999]: time="2026-01-24T00:33:56.176876821Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:33:56.178847 containerd[1999]: time="2026-01-24T00:33:56.178677443Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=63836480" Jan 24 00:33:56.180952 containerd[1999]: time="2026-01-24T00:33:56.180592827Z" level=info msg="ImageCreate event name:\"sha256:94a30ed18e45363486a55379bd1fbb8c479f36524ae816779f3553afe6b787ed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:33:56.184303 containerd[1999]: time="2026-01-24T00:33:56.184247800Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:32c5137cb8c7cf61e75836f150e983b9be21fecc642ada89fd936c8cd6c0faa0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:33:56.185258 containerd[1999]: time="2026-01-24T00:33:56.185110142Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:94a30ed18e45363486a55379bd1fbb8c479f36524ae816779f3553afe6b787ed\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:32c5137cb8c7cf61e75836f150e983b9be21fecc642ada89fd936c8cd6c0faa0\", size \"63836358\" in 4.012802621s" Jan 24 
00:33:56.185258 containerd[1999]: time="2026-01-24T00:33:56.185141414Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:94a30ed18e45363486a55379bd1fbb8c479f36524ae816779f3553afe6b787ed\"" Jan 24 00:33:56.200611 containerd[1999]: time="2026-01-24T00:33:56.200567436Z" level=info msg="CreateContainer within sandbox \"b4b75aaca2751d0d96ac7c3b9a9efa8410b6f38cbd29153115916defcfab8f96\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Jan 24 00:33:56.221830 containerd[1999]: time="2026-01-24T00:33:56.221784726Z" level=info msg="CreateContainer within sandbox \"b4b75aaca2751d0d96ac7c3b9a9efa8410b6f38cbd29153115916defcfab8f96\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"f71791e3b990fe1b0285ec585d6403a182f79c3b4eec3c2852a4109f8fbe8172\"" Jan 24 00:33:56.222641 containerd[1999]: time="2026-01-24T00:33:56.222604089Z" level=info msg="StartContainer for \"f71791e3b990fe1b0285ec585d6403a182f79c3b4eec3c2852a4109f8fbe8172\"" Jan 24 00:33:56.252409 systemd[1]: Started cri-containerd-f71791e3b990fe1b0285ec585d6403a182f79c3b4eec3c2852a4109f8fbe8172.scope - libcontainer container f71791e3b990fe1b0285ec585d6403a182f79c3b4eec3c2852a4109f8fbe8172. Jan 24 00:33:56.284921 containerd[1999]: time="2026-01-24T00:33:56.284867687Z" level=info msg="StartContainer for \"f71791e3b990fe1b0285ec585d6403a182f79c3b4eec3c2852a4109f8fbe8172\" returns successfully" Jan 24 00:33:56.703612 kubelet[2445]: E0124 00:33:56.703546 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:33:57.704129 kubelet[2445]: E0124 00:33:57.704080 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:33:58.704580 kubelet[2445]: E0124 00:33:58.704521 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:33:58.836243 containerd[1999]: time="2026-01-24T00:33:58.836056540Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 24 00:33:58.847473 kubelet[2445]: I0124 00:33:58.847405 2445 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-7fcdb87857-bx8b5" podStartSLOduration=18.833074388 podStartE2EDuration="22.847386645s" podCreationTimestamp="2026-01-24 00:33:36 +0000 UTC" firstStartedPulling="2026-01-24 00:33:52.171672127 +0000 UTC m=+32.970916167" lastFinishedPulling="2026-01-24 00:33:56.185984385 +0000 UTC m=+36.985228424" observedRunningTime="2026-01-24 00:33:56.954047835 +0000 UTC m=+37.753291910" watchObservedRunningTime="2026-01-24 00:33:58.847386645 +0000 UTC m=+39.646630705" Jan 24 00:33:59.102121 containerd[1999]: time="2026-01-24T00:33:59.101989865Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:33:59.103834 containerd[1999]: time="2026-01-24T00:33:59.103742729Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 24 00:33:59.103834 containerd[1999]: time="2026-01-24T00:33:59.103783977Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 24 00:33:59.104000 kubelet[2445]: E0124 00:33:59.103959 2445 log.go:32] "PullImage from 
image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:33:59.104051 kubelet[2445]: E0124 00:33:59.104002 2445 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:33:59.104176 kubelet[2445]: E0124 00:33:59.104132 2445 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qtphm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-bqx9m_calico-system(dc56aca3-e78d-4c4c-9e51-d34a825d2bbf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 24 00:33:59.106393 containerd[1999]: time="2026-01-24T00:33:59.106364656Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 24 00:33:59.348290 containerd[1999]: time="2026-01-24T00:33:59.348245408Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:33:59.350292 containerd[1999]: time="2026-01-24T00:33:59.350157789Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 24 00:33:59.350292 containerd[1999]: time="2026-01-24T00:33:59.350189007Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 24 00:33:59.350436 kubelet[2445]: E0124 00:33:59.350392 2445 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:33:59.350478 kubelet[2445]: E0124 00:33:59.350447 2445 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:33:59.350616 kubelet[2445]: E0124 00:33:59.350574 2445 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qtphm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-bqx9m_calico-system(dc56aca3-e78d-4c4c-9e51-d34a825d2bbf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve 
reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 24 00:33:59.351888 kubelet[2445]: E0124 00:33:59.351849 2445 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-bqx9m" podUID="dc56aca3-e78d-4c4c-9e51-d34a825d2bbf" Jan 24 00:33:59.675482 kubelet[2445]: E0124 00:33:59.675419 2445 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:33:59.704946 kubelet[2445]: E0124 00:33:59.704879 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:34:00.705411 kubelet[2445]: E0124 00:34:00.705370 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:34:01.706631 kubelet[2445]: E0124 00:34:01.706313 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:34:02.707453 kubelet[2445]: E0124 00:34:02.707375 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:34:03.707822 kubelet[2445]: E0124 00:34:03.707762 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:34:04.707938 kubelet[2445]: E0124 00:34:04.707882 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:34:05.708830 kubelet[2445]: E0124 00:34:05.708772 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:34:06.709344 kubelet[2445]: E0124 00:34:06.709288 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:34:07.710355 kubelet[2445]: E0124 00:34:07.710308 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:34:08.711143 kubelet[2445]: E0124 00:34:08.711007 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:34:09.711317 kubelet[2445]: E0124 00:34:09.711272 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:34:10.712304 kubelet[2445]: E0124 00:34:10.712253 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:34:10.722503 systemd[1]: Created slice kubepods-besteffort-poda8685474_b834_4c57_9f90_93cb8b408805.slice - libcontainer container 
kubepods-besteffort-poda8685474_b834_4c57_9f90_93cb8b408805.slice. Jan 24 00:34:10.744201 kubelet[2445]: I0124 00:34:10.744147 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/a8685474-b834-4c57-9f90-93cb8b408805-data\") pod \"nfs-server-provisioner-0\" (UID: \"a8685474-b834-4c57-9f90-93cb8b408805\") " pod="default/nfs-server-provisioner-0" Jan 24 00:34:10.744377 kubelet[2445]: I0124 00:34:10.744211 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckfbz\" (UniqueName: \"kubernetes.io/projected/a8685474-b834-4c57-9f90-93cb8b408805-kube-api-access-ckfbz\") pod \"nfs-server-provisioner-0\" (UID: \"a8685474-b834-4c57-9f90-93cb8b408805\") " pod="default/nfs-server-provisioner-0" Jan 24 00:34:11.026798 containerd[1999]: time="2026-01-24T00:34:11.026674837Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:a8685474-b834-4c57-9f90-93cb8b408805,Namespace:default,Attempt:0,}" Jan 24 00:34:11.239919 systemd-networkd[1621]: cali60e51b789ff: Link UP Jan 24 00:34:11.240265 systemd-networkd[1621]: cali60e51b789ff: Gained carrier Jan 24 00:34:11.243632 (udev-worker)[3872]: Network interface NamePolicy= disabled on kernel command line. Jan 24 00:34:11.260656 containerd[1999]: 2026-01-24 00:34:11.092 [INFO][3853] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.18.176-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default a8685474-b834-4c57-9f90-93cb8b408805 1410 0 2026-01-24 00:34:10 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 172.31.18.176 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] [] }} ContainerID="1edc5a2ebb463c89a46c3abddf39a2cc2b002de9415e898cd13d5d3ac118d598" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.18.176-k8s-nfs--server--provisioner--0-" Jan 24 00:34:11.260656 containerd[1999]: 2026-01-24 00:34:11.092 [INFO][3853] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1edc5a2ebb463c89a46c3abddf39a2cc2b002de9415e898cd13d5d3ac118d598" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.18.176-k8s-nfs--server--provisioner--0-eth0" Jan 24 00:34:11.260656 containerd[1999]: 2026-01-24 00:34:11.140 [INFO][3864] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1edc5a2ebb463c89a46c3abddf39a2cc2b002de9415e898cd13d5d3ac118d598" HandleID="k8s-pod-network.1edc5a2ebb463c89a46c3abddf39a2cc2b002de9415e898cd13d5d3ac118d598" Workload="172.31.18.176-k8s-nfs--server--provisioner--0-eth0" Jan 24 00:34:11.260656 containerd[1999]: 2026-01-24 00:34:11.140 [INFO][3864] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="1edc5a2ebb463c89a46c3abddf39a2cc2b002de9415e898cd13d5d3ac118d598" HandleID="k8s-pod-network.1edc5a2ebb463c89a46c3abddf39a2cc2b002de9415e898cd13d5d3ac118d598" Workload="172.31.18.176-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5000), Attrs:map[string]string{"namespace":"default", "node":"172.31.18.176", "pod":"nfs-server-provisioner-0", "timestamp":"2026-01-24 00:34:11.140560297 +0000 UTC"}, Hostname:"172.31.18.176", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:34:11.260656 containerd[1999]: 2026-01-24 00:34:11.140 [INFO][3864] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:34:11.260656 containerd[1999]: 2026-01-24 00:34:11.140 [INFO][3864] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:34:11.260656 containerd[1999]: 2026-01-24 00:34:11.140 [INFO][3864] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.18.176' Jan 24 00:34:11.260656 containerd[1999]: 2026-01-24 00:34:11.147 [INFO][3864] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1edc5a2ebb463c89a46c3abddf39a2cc2b002de9415e898cd13d5d3ac118d598" host="172.31.18.176" Jan 24 00:34:11.260656 containerd[1999]: 2026-01-24 00:34:11.152 [INFO][3864] ipam/ipam.go 394: Looking up existing affinities for host host="172.31.18.176" Jan 24 00:34:11.260656 containerd[1999]: 2026-01-24 00:34:11.186 [INFO][3864] ipam/ipam.go 511: Trying affinity for 192.168.30.192/26 host="172.31.18.176" Jan 24 00:34:11.260656 containerd[1999]: 2026-01-24 00:34:11.194 [INFO][3864] ipam/ipam.go 158: Attempting to load block cidr=192.168.30.192/26 host="172.31.18.176" Jan 24 00:34:11.260656 containerd[1999]: 2026-01-24 00:34:11.201 [INFO][3864] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.30.192/26 host="172.31.18.176" Jan 24 00:34:11.260656 containerd[1999]: 2026-01-24 00:34:11.201 [INFO][3864] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.30.192/26 handle="k8s-pod-network.1edc5a2ebb463c89a46c3abddf39a2cc2b002de9415e898cd13d5d3ac118d598" host="172.31.18.176" Jan 24 00:34:11.260656 containerd[1999]: 2026-01-24 00:34:11.209 [INFO][3864] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.1edc5a2ebb463c89a46c3abddf39a2cc2b002de9415e898cd13d5d3ac118d598 Jan 24 00:34:11.260656 containerd[1999]: 2026-01-24 00:34:11.215 [INFO][3864] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.30.192/26 handle="k8s-pod-network.1edc5a2ebb463c89a46c3abddf39a2cc2b002de9415e898cd13d5d3ac118d598" host="172.31.18.176" Jan 24 00:34:11.260656 containerd[1999]: 2026-01-24 00:34:11.234 [INFO][3864] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.30.195/26] block=192.168.30.192/26 handle="k8s-pod-network.1edc5a2ebb463c89a46c3abddf39a2cc2b002de9415e898cd13d5d3ac118d598" host="172.31.18.176" Jan 24 00:34:11.260656 containerd[1999]: 2026-01-24 00:34:11.234 [INFO][3864] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.30.195/26] handle="k8s-pod-network.1edc5a2ebb463c89a46c3abddf39a2cc2b002de9415e898cd13d5d3ac118d598" host="172.31.18.176" Jan 24 00:34:11.260656 containerd[1999]: 2026-01-24 00:34:11.234 [INFO][3864] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 24 00:34:11.260656 containerd[1999]: 2026-01-24 00:34:11.234 [INFO][3864] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.30.195/26] IPv6=[] ContainerID="1edc5a2ebb463c89a46c3abddf39a2cc2b002de9415e898cd13d5d3ac118d598" HandleID="k8s-pod-network.1edc5a2ebb463c89a46c3abddf39a2cc2b002de9415e898cd13d5d3ac118d598" Workload="172.31.18.176-k8s-nfs--server--provisioner--0-eth0" Jan 24 00:34:11.262153 containerd[1999]: 2026-01-24 00:34:11.235 [INFO][3853] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1edc5a2ebb463c89a46c3abddf39a2cc2b002de9415e898cd13d5d3ac118d598" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.18.176-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.18.176-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"a8685474-b834-4c57-9f90-93cb8b408805", ResourceVersion:"1410", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 34, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.18.176", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.30.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, 
HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:34:11.262153 containerd[1999]: 2026-01-24 00:34:11.236 [INFO][3853] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.30.195/32] ContainerID="1edc5a2ebb463c89a46c3abddf39a2cc2b002de9415e898cd13d5d3ac118d598" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.18.176-k8s-nfs--server--provisioner--0-eth0" Jan 24 00:34:11.262153 containerd[1999]: 2026-01-24 00:34:11.236 [INFO][3853] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="1edc5a2ebb463c89a46c3abddf39a2cc2b002de9415e898cd13d5d3ac118d598" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.18.176-k8s-nfs--server--provisioner--0-eth0" Jan 24 00:34:11.262153 containerd[1999]: 2026-01-24 00:34:11.238 [INFO][3853] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1edc5a2ebb463c89a46c3abddf39a2cc2b002de9415e898cd13d5d3ac118d598" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.18.176-k8s-nfs--server--provisioner--0-eth0" Jan 24 00:34:11.262818 containerd[1999]: 2026-01-24 00:34:11.239 [INFO][3853] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1edc5a2ebb463c89a46c3abddf39a2cc2b002de9415e898cd13d5d3ac118d598" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.18.176-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.18.176-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"a8685474-b834-4c57-9f90-93cb8b408805", ResourceVersion:"1410", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 34, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.18.176", ContainerID:"1edc5a2ebb463c89a46c3abddf39a2cc2b002de9415e898cd13d5d3ac118d598", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.30.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"76:81:71:f3:3a:7d", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:34:11.262818 containerd[1999]: 2026-01-24 00:34:11.258 [INFO][3853] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1edc5a2ebb463c89a46c3abddf39a2cc2b002de9415e898cd13d5d3ac118d598" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.18.176-k8s-nfs--server--provisioner--0-eth0" Jan 24 00:34:11.287845 containerd[1999]: time="2026-01-24T00:34:11.287479316Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:34:11.287845 containerd[1999]: time="2026-01-24T00:34:11.287558273Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:34:11.287845 containerd[1999]: time="2026-01-24T00:34:11.287582718Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:34:11.288068 containerd[1999]: time="2026-01-24T00:34:11.287697650Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:34:11.325409 systemd[1]: Started cri-containerd-1edc5a2ebb463c89a46c3abddf39a2cc2b002de9415e898cd13d5d3ac118d598.scope - libcontainer container 1edc5a2ebb463c89a46c3abddf39a2cc2b002de9415e898cd13d5d3ac118d598. 
Jan 24 00:34:11.372757 containerd[1999]: time="2026-01-24T00:34:11.372704243Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:a8685474-b834-4c57-9f90-93cb8b408805,Namespace:default,Attempt:0,} returns sandbox id \"1edc5a2ebb463c89a46c3abddf39a2cc2b002de9415e898cd13d5d3ac118d598\"" Jan 24 00:34:11.375102 containerd[1999]: time="2026-01-24T00:34:11.374794288Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Jan 24 00:34:11.712622 kubelet[2445]: E0124 00:34:11.712577 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:34:11.835854 kubelet[2445]: E0124 00:34:11.835799 2445 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-bqx9m" podUID="dc56aca3-e78d-4c4c-9e51-d34a825d2bbf" Jan 24 00:34:12.435960 systemd-networkd[1621]: cali60e51b789ff: Gained IPv6LL Jan 24 00:34:12.713650 kubelet[2445]: E0124 00:34:12.713525 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:34:13.714354 kubelet[2445]: E0124 00:34:13.714313 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:34:13.969453 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2027151041.mount: Deactivated successfully. 
Jan 24 00:34:14.714982 kubelet[2445]: E0124 00:34:14.714946 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:34:15.150341 ntpd[1965]: Listen normally on 11 cali60e51b789ff [fe80::ecee:eeff:feee:eeee%8]:123 Jan 24 00:34:15.150795 ntpd[1965]: 24 Jan 00:34:15 ntpd[1965]: Listen normally on 11 cali60e51b789ff [fe80::ecee:eeff:feee:eeee%8]:123 Jan 24 00:34:15.716634 kubelet[2445]: E0124 00:34:15.716427 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:34:16.088489 containerd[1999]: time="2026-01-24T00:34:16.088115551Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:34:16.089524 containerd[1999]: time="2026-01-24T00:34:16.089304078Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039406" Jan 24 00:34:16.091603 containerd[1999]: time="2026-01-24T00:34:16.090398804Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:34:16.093116 containerd[1999]: time="2026-01-24T00:34:16.093089013Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:34:16.093933 containerd[1999]: time="2026-01-24T00:34:16.093903151Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 4.719067774s" Jan 24 00:34:16.093988 containerd[1999]: time="2026-01-24T00:34:16.093939047Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Jan 24 00:34:16.173617 containerd[1999]: time="2026-01-24T00:34:16.173573822Z" level=info msg="CreateContainer within sandbox \"1edc5a2ebb463c89a46c3abddf39a2cc2b002de9415e898cd13d5d3ac118d598\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Jan 24 00:34:16.186953 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3896459465.mount: Deactivated successfully. Jan 24 00:34:16.190434 containerd[1999]: time="2026-01-24T00:34:16.190379428Z" level=info msg="CreateContainer within sandbox \"1edc5a2ebb463c89a46c3abddf39a2cc2b002de9415e898cd13d5d3ac118d598\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"e80190032717610a78067828268b23bb77a303890644149cde1915a0bc1169b7\"" Jan 24 00:34:16.191203 containerd[1999]: time="2026-01-24T00:34:16.191158131Z" level=info msg="StartContainer for \"e80190032717610a78067828268b23bb77a303890644149cde1915a0bc1169b7\"" Jan 24 00:34:16.236405 systemd[1]: Started cri-containerd-e80190032717610a78067828268b23bb77a303890644149cde1915a0bc1169b7.scope - libcontainer container e80190032717610a78067828268b23bb77a303890644149cde1915a0bc1169b7. 
Jan 24 00:34:16.282845 containerd[1999]: time="2026-01-24T00:34:16.282707225Z" level=info msg="StartContainer for \"e80190032717610a78067828268b23bb77a303890644149cde1915a0bc1169b7\" returns successfully" Jan 24 00:34:16.717113 kubelet[2445]: E0124 00:34:16.717062 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:34:17.053426 systemd[1]: run-containerd-runc-k8s.io-e80190032717610a78067828268b23bb77a303890644149cde1915a0bc1169b7-runc.VDqTxq.mount: Deactivated successfully. Jan 24 00:34:17.717550 kubelet[2445]: E0124 00:34:17.717498 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:34:18.717723 kubelet[2445]: E0124 00:34:18.717647 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:34:19.675262 kubelet[2445]: E0124 00:34:19.675198 2445 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:34:19.720081 kubelet[2445]: E0124 00:34:19.720029 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:34:19.743921 containerd[1999]: time="2026-01-24T00:34:19.743877691Z" level=info msg="StopPodSandbox for \"296eb00d4ebed619003079d93c638b78c49fa8a01f026e8278f0231104382f86\"" Jan 24 00:34:19.831111 containerd[1999]: 2026-01-24 00:34:19.781 [WARNING][4042] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="296eb00d4ebed619003079d93c638b78c49fa8a01f026e8278f0231104382f86" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.18.176-k8s-csi--node--driver--bqx9m-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"dc56aca3-e78d-4c4c-9e51-d34a825d2bbf", ResourceVersion:"1424", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 33, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.18.176", ContainerID:"cfc9eac2ee37d856996d068bd7f4e41bd4838d56c1d652d6ded6d2e099246192", Pod:"csi-node-driver-bqx9m", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.30.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif031b7a370a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:34:19.831111 containerd[1999]: 2026-01-24 00:34:19.782 [INFO][4042] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="296eb00d4ebed619003079d93c638b78c49fa8a01f026e8278f0231104382f86" Jan 24 00:34:19.831111 containerd[1999]: 2026-01-24 00:34:19.782 
[INFO][4042] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="296eb00d4ebed619003079d93c638b78c49fa8a01f026e8278f0231104382f86" iface="eth0" netns="" Jan 24 00:34:19.831111 containerd[1999]: 2026-01-24 00:34:19.782 [INFO][4042] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="296eb00d4ebed619003079d93c638b78c49fa8a01f026e8278f0231104382f86" Jan 24 00:34:19.831111 containerd[1999]: 2026-01-24 00:34:19.782 [INFO][4042] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="296eb00d4ebed619003079d93c638b78c49fa8a01f026e8278f0231104382f86" Jan 24 00:34:19.831111 containerd[1999]: 2026-01-24 00:34:19.818 [INFO][4049] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="296eb00d4ebed619003079d93c638b78c49fa8a01f026e8278f0231104382f86" HandleID="k8s-pod-network.296eb00d4ebed619003079d93c638b78c49fa8a01f026e8278f0231104382f86" Workload="172.31.18.176-k8s-csi--node--driver--bqx9m-eth0" Jan 24 00:34:19.831111 containerd[1999]: 2026-01-24 00:34:19.819 [INFO][4049] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:34:19.831111 containerd[1999]: 2026-01-24 00:34:19.819 [INFO][4049] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:34:19.831111 containerd[1999]: 2026-01-24 00:34:19.825 [WARNING][4049] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="296eb00d4ebed619003079d93c638b78c49fa8a01f026e8278f0231104382f86" HandleID="k8s-pod-network.296eb00d4ebed619003079d93c638b78c49fa8a01f026e8278f0231104382f86" Workload="172.31.18.176-k8s-csi--node--driver--bqx9m-eth0" Jan 24 00:34:19.831111 containerd[1999]: 2026-01-24 00:34:19.825 [INFO][4049] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="296eb00d4ebed619003079d93c638b78c49fa8a01f026e8278f0231104382f86" HandleID="k8s-pod-network.296eb00d4ebed619003079d93c638b78c49fa8a01f026e8278f0231104382f86" Workload="172.31.18.176-k8s-csi--node--driver--bqx9m-eth0" Jan 24 00:34:19.831111 containerd[1999]: 2026-01-24 00:34:19.827 [INFO][4049] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:34:19.831111 containerd[1999]: 2026-01-24 00:34:19.829 [INFO][4042] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="296eb00d4ebed619003079d93c638b78c49fa8a01f026e8278f0231104382f86" Jan 24 00:34:19.832038 containerd[1999]: time="2026-01-24T00:34:19.831267488Z" level=info msg="TearDown network for sandbox \"296eb00d4ebed619003079d93c638b78c49fa8a01f026e8278f0231104382f86\" successfully" Jan 24 00:34:19.832038 containerd[1999]: time="2026-01-24T00:34:19.831298459Z" level=info msg="StopPodSandbox for \"296eb00d4ebed619003079d93c638b78c49fa8a01f026e8278f0231104382f86\" returns successfully" Jan 24 00:34:19.858810 containerd[1999]: time="2026-01-24T00:34:19.858742909Z" level=info msg="RemovePodSandbox for \"296eb00d4ebed619003079d93c638b78c49fa8a01f026e8278f0231104382f86\"" Jan 24 00:34:19.858810 containerd[1999]: time="2026-01-24T00:34:19.858801186Z" level=info msg="Forcibly stopping sandbox \"296eb00d4ebed619003079d93c638b78c49fa8a01f026e8278f0231104382f86\"" Jan 24 00:34:19.966706 containerd[1999]: 2026-01-24 00:34:19.928 [WARNING][4065] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="296eb00d4ebed619003079d93c638b78c49fa8a01f026e8278f0231104382f86" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.18.176-k8s-csi--node--driver--bqx9m-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"dc56aca3-e78d-4c4c-9e51-d34a825d2bbf", ResourceVersion:"1424", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 33, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.18.176", ContainerID:"cfc9eac2ee37d856996d068bd7f4e41bd4838d56c1d652d6ded6d2e099246192", Pod:"csi-node-driver-bqx9m", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.30.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif031b7a370a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:34:19.966706 containerd[1999]: 2026-01-24 00:34:19.929 [INFO][4065] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="296eb00d4ebed619003079d93c638b78c49fa8a01f026e8278f0231104382f86" Jan 24 00:34:19.966706 containerd[1999]: 2026-01-24 00:34:19.929 [INFO][4065] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="296eb00d4ebed619003079d93c638b78c49fa8a01f026e8278f0231104382f86" iface="eth0" netns="" Jan 24 00:34:19.966706 containerd[1999]: 2026-01-24 00:34:19.929 [INFO][4065] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="296eb00d4ebed619003079d93c638b78c49fa8a01f026e8278f0231104382f86" Jan 24 00:34:19.966706 containerd[1999]: 2026-01-24 00:34:19.929 [INFO][4065] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="296eb00d4ebed619003079d93c638b78c49fa8a01f026e8278f0231104382f86" Jan 24 00:34:19.966706 containerd[1999]: 2026-01-24 00:34:19.953 [INFO][4073] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="296eb00d4ebed619003079d93c638b78c49fa8a01f026e8278f0231104382f86" HandleID="k8s-pod-network.296eb00d4ebed619003079d93c638b78c49fa8a01f026e8278f0231104382f86" Workload="172.31.18.176-k8s-csi--node--driver--bqx9m-eth0" Jan 24 00:34:19.966706 containerd[1999]: 2026-01-24 00:34:19.953 [INFO][4073] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:34:19.966706 containerd[1999]: 2026-01-24 00:34:19.953 [INFO][4073] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:34:19.966706 containerd[1999]: 2026-01-24 00:34:19.962 [WARNING][4073] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="296eb00d4ebed619003079d93c638b78c49fa8a01f026e8278f0231104382f86" HandleID="k8s-pod-network.296eb00d4ebed619003079d93c638b78c49fa8a01f026e8278f0231104382f86" Workload="172.31.18.176-k8s-csi--node--driver--bqx9m-eth0" Jan 24 00:34:19.966706 containerd[1999]: 2026-01-24 00:34:19.962 [INFO][4073] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="296eb00d4ebed619003079d93c638b78c49fa8a01f026e8278f0231104382f86" HandleID="k8s-pod-network.296eb00d4ebed619003079d93c638b78c49fa8a01f026e8278f0231104382f86" Workload="172.31.18.176-k8s-csi--node--driver--bqx9m-eth0" Jan 24 00:34:19.966706 containerd[1999]: 2026-01-24 00:34:19.963 [INFO][4073] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:34:19.966706 containerd[1999]: 2026-01-24 00:34:19.965 [INFO][4065] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="296eb00d4ebed619003079d93c638b78c49fa8a01f026e8278f0231104382f86" Jan 24 00:34:19.966706 containerd[1999]: time="2026-01-24T00:34:19.966651702Z" level=info msg="TearDown network for sandbox \"296eb00d4ebed619003079d93c638b78c49fa8a01f026e8278f0231104382f86\" successfully" Jan 24 00:34:20.010178 containerd[1999]: time="2026-01-24T00:34:20.010100757Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"296eb00d4ebed619003079d93c638b78c49fa8a01f026e8278f0231104382f86\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 00:34:20.010340 containerd[1999]: time="2026-01-24T00:34:20.010210645Z" level=info msg="RemovePodSandbox \"296eb00d4ebed619003079d93c638b78c49fa8a01f026e8278f0231104382f86\" returns successfully" Jan 24 00:34:20.010836 containerd[1999]: time="2026-01-24T00:34:20.010812964Z" level=info msg="StopPodSandbox for \"469809a51e23a961dabda4e7f122dcc9c6d9e489ad6e9a6071528ee9d8d9b1d9\"" Jan 24 00:34:20.110632 containerd[1999]: 2026-01-24 00:34:20.072 [WARNING][4087] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="469809a51e23a961dabda4e7f122dcc9c6d9e489ad6e9a6071528ee9d8d9b1d9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.18.176-k8s-nginx--deployment--7fcdb87857--bx8b5-eth0", GenerateName:"nginx-deployment-7fcdb87857-", Namespace:"default", SelfLink:"", UID:"e455284b-9286-4ee0-9ecc-254a7e2e56a0", ResourceVersion:"1320", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 33, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"7fcdb87857", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.18.176", ContainerID:"b4b75aaca2751d0d96ac7c3b9a9efa8410b6f38cbd29153115916defcfab8f96", Pod:"nginx-deployment-7fcdb87857-bx8b5", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.30.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali7630f124611", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:34:20.110632 containerd[1999]: 2026-01-24 00:34:20.072 [INFO][4087] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="469809a51e23a961dabda4e7f122dcc9c6d9e489ad6e9a6071528ee9d8d9b1d9" Jan 24 00:34:20.110632 containerd[1999]: 2026-01-24 00:34:20.072 [INFO][4087] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="469809a51e23a961dabda4e7f122dcc9c6d9e489ad6e9a6071528ee9d8d9b1d9" iface="eth0" netns="" Jan 24 00:34:20.110632 containerd[1999]: 2026-01-24 00:34:20.072 [INFO][4087] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="469809a51e23a961dabda4e7f122dcc9c6d9e489ad6e9a6071528ee9d8d9b1d9" Jan 24 00:34:20.110632 containerd[1999]: 2026-01-24 00:34:20.072 [INFO][4087] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="469809a51e23a961dabda4e7f122dcc9c6d9e489ad6e9a6071528ee9d8d9b1d9" Jan 24 00:34:20.110632 containerd[1999]: 2026-01-24 00:34:20.095 [INFO][4094] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="469809a51e23a961dabda4e7f122dcc9c6d9e489ad6e9a6071528ee9d8d9b1d9" HandleID="k8s-pod-network.469809a51e23a961dabda4e7f122dcc9c6d9e489ad6e9a6071528ee9d8d9b1d9" Workload="172.31.18.176-k8s-nginx--deployment--7fcdb87857--bx8b5-eth0" Jan 24 00:34:20.110632 containerd[1999]: 2026-01-24 00:34:20.095 [INFO][4094] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:34:20.110632 containerd[1999]: 2026-01-24 00:34:20.095 [INFO][4094] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:34:20.110632 containerd[1999]: 2026-01-24 00:34:20.104 [WARNING][4094] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="469809a51e23a961dabda4e7f122dcc9c6d9e489ad6e9a6071528ee9d8d9b1d9" HandleID="k8s-pod-network.469809a51e23a961dabda4e7f122dcc9c6d9e489ad6e9a6071528ee9d8d9b1d9" Workload="172.31.18.176-k8s-nginx--deployment--7fcdb87857--bx8b5-eth0" Jan 24 00:34:20.110632 containerd[1999]: 2026-01-24 00:34:20.104 [INFO][4094] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="469809a51e23a961dabda4e7f122dcc9c6d9e489ad6e9a6071528ee9d8d9b1d9" HandleID="k8s-pod-network.469809a51e23a961dabda4e7f122dcc9c6d9e489ad6e9a6071528ee9d8d9b1d9" Workload="172.31.18.176-k8s-nginx--deployment--7fcdb87857--bx8b5-eth0" Jan 24 00:34:20.110632 containerd[1999]: 2026-01-24 00:34:20.108 [INFO][4094] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:34:20.110632 containerd[1999]: 2026-01-24 00:34:20.109 [INFO][4087] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="469809a51e23a961dabda4e7f122dcc9c6d9e489ad6e9a6071528ee9d8d9b1d9" Jan 24 00:34:20.111531 containerd[1999]: time="2026-01-24T00:34:20.110673780Z" level=info msg="TearDown network for sandbox \"469809a51e23a961dabda4e7f122dcc9c6d9e489ad6e9a6071528ee9d8d9b1d9\" successfully" Jan 24 00:34:20.111531 containerd[1999]: time="2026-01-24T00:34:20.110696021Z" level=info msg="StopPodSandbox for \"469809a51e23a961dabda4e7f122dcc9c6d9e489ad6e9a6071528ee9d8d9b1d9\" returns successfully" Jan 24 00:34:20.111531 containerd[1999]: time="2026-01-24T00:34:20.111324160Z" level=info msg="RemovePodSandbox for \"469809a51e23a961dabda4e7f122dcc9c6d9e489ad6e9a6071528ee9d8d9b1d9\"" Jan 24 00:34:20.111531 containerd[1999]: time="2026-01-24T00:34:20.111352559Z" level=info msg="Forcibly stopping sandbox \"469809a51e23a961dabda4e7f122dcc9c6d9e489ad6e9a6071528ee9d8d9b1d9\"" Jan 24 00:34:20.232589 containerd[1999]: 2026-01-24 00:34:20.145 [WARNING][4108] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="469809a51e23a961dabda4e7f122dcc9c6d9e489ad6e9a6071528ee9d8d9b1d9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.18.176-k8s-nginx--deployment--7fcdb87857--bx8b5-eth0", GenerateName:"nginx-deployment-7fcdb87857-", Namespace:"default", SelfLink:"", UID:"e455284b-9286-4ee0-9ecc-254a7e2e56a0", ResourceVersion:"1320", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 33, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"7fcdb87857", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.18.176", ContainerID:"b4b75aaca2751d0d96ac7c3b9a9efa8410b6f38cbd29153115916defcfab8f96", Pod:"nginx-deployment-7fcdb87857-bx8b5", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.30.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali7630f124611", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:34:20.232589 containerd[1999]: 2026-01-24 00:34:20.146 [INFO][4108] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="469809a51e23a961dabda4e7f122dcc9c6d9e489ad6e9a6071528ee9d8d9b1d9" Jan 24 00:34:20.232589 containerd[1999]: 2026-01-24 00:34:20.146 [INFO][4108] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="469809a51e23a961dabda4e7f122dcc9c6d9e489ad6e9a6071528ee9d8d9b1d9" iface="eth0" netns="" Jan 24 00:34:20.232589 containerd[1999]: 2026-01-24 00:34:20.146 [INFO][4108] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="469809a51e23a961dabda4e7f122dcc9c6d9e489ad6e9a6071528ee9d8d9b1d9" Jan 24 00:34:20.232589 containerd[1999]: 2026-01-24 00:34:20.146 [INFO][4108] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="469809a51e23a961dabda4e7f122dcc9c6d9e489ad6e9a6071528ee9d8d9b1d9" Jan 24 00:34:20.232589 containerd[1999]: 2026-01-24 00:34:20.206 [INFO][4115] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="469809a51e23a961dabda4e7f122dcc9c6d9e489ad6e9a6071528ee9d8d9b1d9" HandleID="k8s-pod-network.469809a51e23a961dabda4e7f122dcc9c6d9e489ad6e9a6071528ee9d8d9b1d9" Workload="172.31.18.176-k8s-nginx--deployment--7fcdb87857--bx8b5-eth0" Jan 24 00:34:20.232589 containerd[1999]: 2026-01-24 00:34:20.206 [INFO][4115] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:34:20.232589 containerd[1999]: 2026-01-24 00:34:20.206 [INFO][4115] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:34:20.232589 containerd[1999]: 2026-01-24 00:34:20.218 [WARNING][4115] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="469809a51e23a961dabda4e7f122dcc9c6d9e489ad6e9a6071528ee9d8d9b1d9" HandleID="k8s-pod-network.469809a51e23a961dabda4e7f122dcc9c6d9e489ad6e9a6071528ee9d8d9b1d9" Workload="172.31.18.176-k8s-nginx--deployment--7fcdb87857--bx8b5-eth0" Jan 24 00:34:20.232589 containerd[1999]: 2026-01-24 00:34:20.226 [INFO][4115] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="469809a51e23a961dabda4e7f122dcc9c6d9e489ad6e9a6071528ee9d8d9b1d9" HandleID="k8s-pod-network.469809a51e23a961dabda4e7f122dcc9c6d9e489ad6e9a6071528ee9d8d9b1d9" Workload="172.31.18.176-k8s-nginx--deployment--7fcdb87857--bx8b5-eth0" Jan 24 00:34:20.232589 containerd[1999]: 2026-01-24 00:34:20.229 [INFO][4115] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:34:20.232589 containerd[1999]: 2026-01-24 00:34:20.231 [INFO][4108] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="469809a51e23a961dabda4e7f122dcc9c6d9e489ad6e9a6071528ee9d8d9b1d9" Jan 24 00:34:20.233282 containerd[1999]: time="2026-01-24T00:34:20.232589201Z" level=info msg="TearDown network for sandbox \"469809a51e23a961dabda4e7f122dcc9c6d9e489ad6e9a6071528ee9d8d9b1d9\" successfully" Jan 24 00:34:20.235244 containerd[1999]: time="2026-01-24T00:34:20.235137908Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"469809a51e23a961dabda4e7f122dcc9c6d9e489ad6e9a6071528ee9d8d9b1d9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 00:34:20.235360 containerd[1999]: time="2026-01-24T00:34:20.235251863Z" level=info msg="RemovePodSandbox \"469809a51e23a961dabda4e7f122dcc9c6d9e489ad6e9a6071528ee9d8d9b1d9\" returns successfully" Jan 24 00:34:20.720519 kubelet[2445]: E0124 00:34:20.720476 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:34:21.720804 kubelet[2445]: E0124 00:34:21.720768 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:34:22.721925 kubelet[2445]: E0124 00:34:22.721884 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:34:23.722854 kubelet[2445]: E0124 00:34:23.722799 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:34:24.723749 kubelet[2445]: E0124 00:34:24.723706 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:34:24.836070 containerd[1999]: time="2026-01-24T00:34:24.836032282Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 24 00:34:24.848581 kubelet[2445]: I0124 00:34:24.848326 2445 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=10.107530071 podStartE2EDuration="14.848308984s" podCreationTimestamp="2026-01-24 00:34:10 +0000 UTC" firstStartedPulling="2026-01-24 00:34:11.374509566 +0000 UTC m=+52.173753606" lastFinishedPulling="2026-01-24 00:34:16.115288482 +0000 UTC m=+56.914532519" observedRunningTime="2026-01-24 00:34:17.05592423 +0000 UTC m=+57.855168315" watchObservedRunningTime="2026-01-24 00:34:24.848308984 +0000 UTC m=+65.647553043" Jan 24 00:34:25.097137 containerd[1999]: time="2026-01-24T00:34:25.097008797Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 
00:34:25.098302 containerd[1999]: time="2026-01-24T00:34:25.098226024Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 24 00:34:25.098437 containerd[1999]: time="2026-01-24T00:34:25.098320047Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 24 00:34:25.098530 kubelet[2445]: E0124 00:34:25.098455 2445 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:34:25.098530 kubelet[2445]: E0124 00:34:25.098504 2445 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:34:25.098680 kubelet[2445]: E0124 00:34:25.098627 2445 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qtphm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-bqx9m_calico-system(dc56aca3-e78d-4c4c-9e51-d34a825d2bbf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 24 00:34:25.100881 containerd[1999]: time="2026-01-24T00:34:25.100839106Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 24 00:34:25.368837 containerd[1999]: time="2026-01-24T00:34:25.368790713Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:34:25.369837 containerd[1999]: time="2026-01-24T00:34:25.369796094Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 24 00:34:25.370077 containerd[1999]: time="2026-01-24T00:34:25.369821513Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 24 00:34:25.370122 kubelet[2445]: E0124 00:34:25.370044 2445 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:34:25.370122 kubelet[2445]: E0124 00:34:25.370093 2445 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:34:25.370319 kubelet[2445]: E0124 00:34:25.370262 2445 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qtphm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-bqx9m_calico-system(dc56aca3-e78d-4c4c-9e51-d34a825d2bbf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 24 00:34:25.371756 kubelet[2445]: E0124 00:34:25.371713 2445 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-bqx9m" podUID="dc56aca3-e78d-4c4c-9e51-d34a825d2bbf" Jan 24 00:34:25.724426 kubelet[2445]: E0124 00:34:25.724298 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:34:26.724601 kubelet[2445]: E0124 00:34:26.724555 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:34:27.724939 kubelet[2445]: E0124 00:34:27.724873 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 
00:34:28.725474 kubelet[2445]: E0124 00:34:28.725421 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:34:29.725578 kubelet[2445]: E0124 00:34:29.725533 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:34:30.725779 kubelet[2445]: E0124 00:34:30.725724 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:34:31.725916 kubelet[2445]: E0124 00:34:31.725867 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:34:32.726483 kubelet[2445]: E0124 00:34:32.726440 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:34:33.727515 kubelet[2445]: E0124 00:34:33.727457 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:34:34.727870 kubelet[2445]: E0124 00:34:34.727778 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:34:35.728384 kubelet[2445]: E0124 00:34:35.728324 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:34:35.835856 kubelet[2445]: E0124 00:34:35.835807 2445 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-bqx9m" podUID="dc56aca3-e78d-4c4c-9e51-d34a825d2bbf" Jan 24 00:34:36.585547 systemd[1]: Created slice kubepods-besteffort-poda5cfcee8_1bcf_40bb_9e30_fe28a55e9cee.slice - libcontainer container kubepods-besteffort-poda5cfcee8_1bcf_40bb_9e30_fe28a55e9cee.slice. 
Jan 24 00:34:36.641203 kubelet[2445]: I0124 00:34:36.641134 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-148cc2e6-54e8-4417-a5e1-15259e2a96d3\" (UniqueName: \"kubernetes.io/nfs/a5cfcee8-1bcf-40bb-9e30-fe28a55e9cee-pvc-148cc2e6-54e8-4417-a5e1-15259e2a96d3\") pod \"test-pod-1\" (UID: \"a5cfcee8-1bcf-40bb-9e30-fe28a55e9cee\") " pod="default/test-pod-1" Jan 24 00:34:36.641324 kubelet[2445]: I0124 00:34:36.641217 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xczg\" (UniqueName: \"kubernetes.io/projected/a5cfcee8-1bcf-40bb-9e30-fe28a55e9cee-kube-api-access-6xczg\") pod \"test-pod-1\" (UID: \"a5cfcee8-1bcf-40bb-9e30-fe28a55e9cee\") " pod="default/test-pod-1" Jan 24 00:34:36.728693 kubelet[2445]: E0124 00:34:36.728625 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:34:36.839228 kernel: FS-Cache: Loaded Jan 24 00:34:36.941344 kernel: RPC: Registered named UNIX socket transport module. Jan 24 00:34:36.941507 kernel: RPC: Registered udp transport module. Jan 24 00:34:36.941545 kernel: RPC: Registered tcp transport module. Jan 24 00:34:36.942402 kernel: RPC: Registered tcp-with-tls transport module. Jan 24 00:34:36.943430 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Jan 24 00:34:37.266495 kernel: NFS: Registering the id_resolver key type Jan 24 00:34:37.266632 kernel: Key type id_resolver registered Jan 24 00:34:37.266664 kernel: Key type id_legacy registered Jan 24 00:34:37.304810 nfsidmap[4163]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal' Jan 24 00:34:37.309254 nfsidmap[4164]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal' Jan 24 00:34:37.489191 containerd[1999]: time="2026-01-24T00:34:37.489122421Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:a5cfcee8-1bcf-40bb-9e30-fe28a55e9cee,Namespace:default,Attempt:0,}" Jan 24 00:34:37.688379 (udev-worker)[4162]: Network interface NamePolicy= disabled on kernel command line. 
Jan 24 00:34:37.689753 systemd-networkd[1621]: cali5ec59c6bf6e: Link UP Jan 24 00:34:37.691257 systemd-networkd[1621]: cali5ec59c6bf6e: Gained carrier Jan 24 00:34:37.709000 containerd[1999]: 2026-01-24 00:34:37.543 [INFO][4166] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.18.176-k8s-test--pod--1-eth0 default a5cfcee8-1bcf-40bb-9e30-fe28a55e9cee 1553 0 2026-01-24 00:34:12 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 172.31.18.176 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] [] }} ContainerID="b1a98a49e8c60e0cd9cbacaad2cd3ff9b77456fa6f23062f8c7726223c153c97" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.18.176-k8s-test--pod--1-" Jan 24 00:34:37.709000 containerd[1999]: 2026-01-24 00:34:37.543 [INFO][4166] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b1a98a49e8c60e0cd9cbacaad2cd3ff9b77456fa6f23062f8c7726223c153c97" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.18.176-k8s-test--pod--1-eth0" Jan 24 00:34:37.709000 containerd[1999]: 2026-01-24 00:34:37.605 [INFO][4177] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b1a98a49e8c60e0cd9cbacaad2cd3ff9b77456fa6f23062f8c7726223c153c97" HandleID="k8s-pod-network.b1a98a49e8c60e0cd9cbacaad2cd3ff9b77456fa6f23062f8c7726223c153c97" Workload="172.31.18.176-k8s-test--pod--1-eth0" Jan 24 00:34:37.709000 containerd[1999]: 2026-01-24 00:34:37.605 [INFO][4177] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b1a98a49e8c60e0cd9cbacaad2cd3ff9b77456fa6f23062f8c7726223c153c97" HandleID="k8s-pod-network.b1a98a49e8c60e0cd9cbacaad2cd3ff9b77456fa6f23062f8c7726223c153c97" Workload="172.31.18.176-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cefe0), Attrs:map[string]string{"namespace":"default", "node":"172.31.18.176", "pod":"test-pod-1", "timestamp":"2026-01-24 00:34:37.605598021 +0000 UTC"}, Hostname:"172.31.18.176", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:34:37.709000 containerd[1999]: 2026-01-24 00:34:37.605 [INFO][4177] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:34:37.709000 containerd[1999]: 2026-01-24 00:34:37.605 [INFO][4177] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:34:37.709000 containerd[1999]: 2026-01-24 00:34:37.605 [INFO][4177] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.18.176' Jan 24 00:34:37.709000 containerd[1999]: 2026-01-24 00:34:37.619 [INFO][4177] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b1a98a49e8c60e0cd9cbacaad2cd3ff9b77456fa6f23062f8c7726223c153c97" host="172.31.18.176" Jan 24 00:34:37.709000 containerd[1999]: 2026-01-24 00:34:37.632 [INFO][4177] ipam/ipam.go 394: Looking up existing affinities for host host="172.31.18.176" Jan 24 00:34:37.709000 containerd[1999]: 2026-01-24 00:34:37.638 [INFO][4177] ipam/ipam.go 511: Trying affinity for 192.168.30.192/26 host="172.31.18.176" Jan 24 00:34:37.709000 containerd[1999]: 2026-01-24 00:34:37.644 [INFO][4177] ipam/ipam.go 158: Attempting to load block cidr=192.168.30.192/26 host="172.31.18.176" Jan 24 00:34:37.709000 containerd[1999]: 2026-01-24 00:34:37.658 [INFO][4177] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.30.192/26 host="172.31.18.176" Jan 24 00:34:37.709000 containerd[1999]: 2026-01-24 00:34:37.658 [INFO][4177] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.30.192/26 handle="k8s-pod-network.b1a98a49e8c60e0cd9cbacaad2cd3ff9b77456fa6f23062f8c7726223c153c97" host="172.31.18.176" Jan 24 00:34:37.709000 containerd[1999]: 2026-01-24 00:34:37.660 [INFO][4177] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b1a98a49e8c60e0cd9cbacaad2cd3ff9b77456fa6f23062f8c7726223c153c97 Jan 24 00:34:37.709000 containerd[1999]: 2026-01-24 00:34:37.667 [INFO][4177] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.30.192/26 handle="k8s-pod-network.b1a98a49e8c60e0cd9cbacaad2cd3ff9b77456fa6f23062f8c7726223c153c97" host="172.31.18.176" Jan 24 00:34:37.709000 containerd[1999]: 2026-01-24 00:34:37.683 [INFO][4177] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.30.196/26] block=192.168.30.192/26 handle="k8s-pod-network.b1a98a49e8c60e0cd9cbacaad2cd3ff9b77456fa6f23062f8c7726223c153c97" host="172.31.18.176" Jan 24 00:34:37.709000 containerd[1999]: 2026-01-24 00:34:37.683 [INFO][4177] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.30.196/26] handle="k8s-pod-network.b1a98a49e8c60e0cd9cbacaad2cd3ff9b77456fa6f23062f8c7726223c153c97" host="172.31.18.176" Jan 24 00:34:37.709000 containerd[1999]: 2026-01-24 00:34:37.683 [INFO][4177] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 24 00:34:37.709000 containerd[1999]: 2026-01-24 00:34:37.683 [INFO][4177] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.30.196/26] IPv6=[] ContainerID="b1a98a49e8c60e0cd9cbacaad2cd3ff9b77456fa6f23062f8c7726223c153c97" HandleID="k8s-pod-network.b1a98a49e8c60e0cd9cbacaad2cd3ff9b77456fa6f23062f8c7726223c153c97" Workload="172.31.18.176-k8s-test--pod--1-eth0" Jan 24 00:34:37.709000 containerd[1999]: 2026-01-24 00:34:37.685 [INFO][4166] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b1a98a49e8c60e0cd9cbacaad2cd3ff9b77456fa6f23062f8c7726223c153c97" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.18.176-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.18.176-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"a5cfcee8-1bcf-40bb-9e30-fe28a55e9cee", ResourceVersion:"1553", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 34, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.18.176", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.30.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:34:37.709943 containerd[1999]: 2026-01-24 00:34:37.685 [INFO][4166] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.30.196/32] ContainerID="b1a98a49e8c60e0cd9cbacaad2cd3ff9b77456fa6f23062f8c7726223c153c97" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.18.176-k8s-test--pod--1-eth0" Jan 24 00:34:37.709943 containerd[1999]: 2026-01-24 00:34:37.685 [INFO][4166] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="b1a98a49e8c60e0cd9cbacaad2cd3ff9b77456fa6f23062f8c7726223c153c97" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.18.176-k8s-test--pod--1-eth0" Jan 24 00:34:37.709943 containerd[1999]: 2026-01-24 00:34:37.691 [INFO][4166] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b1a98a49e8c60e0cd9cbacaad2cd3ff9b77456fa6f23062f8c7726223c153c97" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.18.176-k8s-test--pod--1-eth0" Jan 24 00:34:37.709943 containerd[1999]: 2026-01-24 00:34:37.692 [INFO][4166] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b1a98a49e8c60e0cd9cbacaad2cd3ff9b77456fa6f23062f8c7726223c153c97" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.18.176-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.18.176-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"a5cfcee8-1bcf-40bb-9e30-fe28a55e9cee", ResourceVersion:"1553", Generation:0, 
CreationTimestamp:time.Date(2026, time.January, 24, 0, 34, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.18.176", ContainerID:"b1a98a49e8c60e0cd9cbacaad2cd3ff9b77456fa6f23062f8c7726223c153c97", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.30.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"5a:5f:84:0c:b1:58", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:34:37.709943 containerd[1999]: 2026-01-24 00:34:37.707 [INFO][4166] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b1a98a49e8c60e0cd9cbacaad2cd3ff9b77456fa6f23062f8c7726223c153c97" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.18.176-k8s-test--pod--1-eth0" Jan 24 00:34:37.729270 kubelet[2445]: E0124 00:34:37.729164 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:34:37.734489 containerd[1999]: time="2026-01-24T00:34:37.734359873Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:34:37.734489 containerd[1999]: time="2026-01-24T00:34:37.734439621Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:34:37.734489 containerd[1999]: time="2026-01-24T00:34:37.734462194Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:34:37.734739 containerd[1999]: time="2026-01-24T00:34:37.734569669Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:34:37.768372 systemd[1]: Started cri-containerd-b1a98a49e8c60e0cd9cbacaad2cd3ff9b77456fa6f23062f8c7726223c153c97.scope - libcontainer container b1a98a49e8c60e0cd9cbacaad2cd3ff9b77456fa6f23062f8c7726223c153c97. 
Jan 24 00:34:37.818825 containerd[1999]: time="2026-01-24T00:34:37.818719331Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:a5cfcee8-1bcf-40bb-9e30-fe28a55e9cee,Namespace:default,Attempt:0,} returns sandbox id \"b1a98a49e8c60e0cd9cbacaad2cd3ff9b77456fa6f23062f8c7726223c153c97\"" Jan 24 00:34:37.827414 containerd[1999]: time="2026-01-24T00:34:37.827356271Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 24 00:34:38.141115 containerd[1999]: time="2026-01-24T00:34:38.141059581Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:34:38.143064 containerd[1999]: time="2026-01-24T00:34:38.142992996Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Jan 24 00:34:38.160966 containerd[1999]: time="2026-01-24T00:34:38.160895999Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:94a30ed18e45363486a55379bd1fbb8c479f36524ae816779f3553afe6b787ed\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:32c5137cb8c7cf61e75836f150e983b9be21fecc642ada89fd936c8cd6c0faa0\", size \"63836358\" in 318.434342ms" Jan 24 00:34:38.160966 containerd[1999]: time="2026-01-24T00:34:38.160965099Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:94a30ed18e45363486a55379bd1fbb8c479f36524ae816779f3553afe6b787ed\"" Jan 24 00:34:38.166395 containerd[1999]: time="2026-01-24T00:34:38.166274541Z" level=info msg="CreateContainer within sandbox \"b1a98a49e8c60e0cd9cbacaad2cd3ff9b77456fa6f23062f8c7726223c153c97\" for container &ContainerMetadata{Name:test,Attempt:0,}" Jan 24 00:34:38.181518 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1089201001.mount: Deactivated successfully. Jan 24 00:34:38.184958 containerd[1999]: time="2026-01-24T00:34:38.184896122Z" level=info msg="CreateContainer within sandbox \"b1a98a49e8c60e0cd9cbacaad2cd3ff9b77456fa6f23062f8c7726223c153c97\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"9704b0173aec804e8c4364873d7eb63f517b6827b5bc566d318a6efe6d62aad8\"" Jan 24 00:34:38.186394 containerd[1999]: time="2026-01-24T00:34:38.185442258Z" level=info msg="StartContainer for \"9704b0173aec804e8c4364873d7eb63f517b6827b5bc566d318a6efe6d62aad8\"" Jan 24 00:34:38.213404 systemd[1]: Started cri-containerd-9704b0173aec804e8c4364873d7eb63f517b6827b5bc566d318a6efe6d62aad8.scope - libcontainer container 9704b0173aec804e8c4364873d7eb63f517b6827b5bc566d318a6efe6d62aad8. 
Jan 24 00:34:38.250319 containerd[1999]: time="2026-01-24T00:34:38.248564184Z" level=info msg="StartContainer for \"9704b0173aec804e8c4364873d7eb63f517b6827b5bc566d318a6efe6d62aad8\" returns successfully" Jan 24 00:34:38.729885 kubelet[2445]: E0124 00:34:38.729828 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:34:38.987495 systemd-networkd[1621]: cali5ec59c6bf6e: Gained IPv6LL Jan 24 00:34:39.675343 kubelet[2445]: E0124 00:34:39.675301 2445 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:34:39.730982 kubelet[2445]: E0124 00:34:39.730938 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:34:40.731745 kubelet[2445]: E0124 00:34:40.731692 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:34:41.149930 ntpd[1965]: Listen normally on 12 cali5ec59c6bf6e [fe80::ecee:eeff:feee:eeee%9]:123 Jan 24 00:34:41.150324 ntpd[1965]: 24 Jan 00:34:41 ntpd[1965]: Listen normally on 12 cali5ec59c6bf6e [fe80::ecee:eeff:feee:eeee%9]:123 Jan 24 00:34:41.732298 kubelet[2445]: E0124 00:34:41.732245 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:34:42.732496 kubelet[2445]: E0124 00:34:42.732439 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:34:43.732657 kubelet[2445]: E0124 00:34:43.732596 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:34:44.733439 kubelet[2445]: E0124 00:34:44.733396 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:34:45.734593 kubelet[2445]: E0124 00:34:45.734544 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:34:46.735256 kubelet[2445]: E0124 00:34:46.735215 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:34:47.736436 kubelet[2445]: E0124 00:34:47.736378 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:34:48.736858 kubelet[2445]: E0124 00:34:48.736785 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:34:49.737201 kubelet[2445]: E0124 00:34:49.737113 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:34:49.836697 kubelet[2445]: E0124 00:34:49.836619 2445 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-bqx9m" podUID="dc56aca3-e78d-4c4c-9e51-d34a825d2bbf" Jan 24 00:34:50.737764 kubelet[2445]: E0124 00:34:50.737692 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:34:51.738950 kubelet[2445]: E0124 00:34:51.738905 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:34:52.739463 kubelet[2445]: E0124 00:34:52.739407 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:34:53.740604 kubelet[2445]: E0124 00:34:53.740538 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:34:54.741439 kubelet[2445]: E0124 00:34:54.741397 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:34:55.742072 kubelet[2445]: E0124 00:34:55.742007 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:34:56.742714 kubelet[2445]: E0124 00:34:56.742544 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:34:57.743510 kubelet[2445]: E0124 00:34:57.743471 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:34:58.744613 kubelet[2445]: E0124 00:34:58.744564 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:34:59.675329 kubelet[2445]: E0124 00:34:59.675278 2445 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:34:59.745228 kubelet[2445]: E0124 00:34:59.745162 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:35:00.745940 kubelet[2445]: E0124 00:35:00.745898 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:35:00.836415 kubelet[2445]: E0124 00:35:00.836360 2445 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" 
pod="calico-system/csi-node-driver-bqx9m" podUID="dc56aca3-e78d-4c4c-9e51-d34a825d2bbf" Jan 24 00:35:01.747849 kubelet[2445]: E0124 00:35:01.747795 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:35:02.037534 kubelet[2445]: E0124 00:35:02.037248 2445 controller.go:195] "Failed to update lease" err="Put \"https://172.31.16.136:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.18.176?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 24 00:35:02.748474 kubelet[2445]: E0124 00:35:02.748417 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:35:03.749386 kubelet[2445]: E0124 00:35:03.749339 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:35:04.749792 kubelet[2445]: E0124 00:35:04.749737 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:35:05.750008 kubelet[2445]: E0124 00:35:05.749934 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:35:06.750185 kubelet[2445]: E0124 00:35:06.750125 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:35:07.750564 kubelet[2445]: E0124 00:35:07.750436 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:35:08.751522 kubelet[2445]: E0124 00:35:08.751481 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:35:09.752489 kubelet[2445]: E0124 00:35:09.752449 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:35:10.753324 kubelet[2445]: E0124 00:35:10.753267 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:35:11.753684 kubelet[2445]: E0124 00:35:11.753634 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:35:11.836118 containerd[1999]: time="2026-01-24T00:35:11.835915264Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 24 00:35:12.038304 kubelet[2445]: E0124 00:35:12.038103 2445 controller.go:195] "Failed to update lease" err="Put \"https://172.31.16.136:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.18.176?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 24 00:35:12.079430 containerd[1999]: time="2026-01-24T00:35:12.079386543Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:35:12.080362 containerd[1999]: time="2026-01-24T00:35:12.080318539Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 24 00:35:12.080497 containerd[1999]: time="2026-01-24T00:35:12.080397647Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active 
requests=0, bytes read=69" Jan 24 00:35:12.080576 kubelet[2445]: E0124 00:35:12.080540 2445 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:35:12.082041 kubelet[2445]: E0124 00:35:12.080586 2445 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:35:12.082225 kubelet[2445]: E0124 00:35:12.082184 2445 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qtphm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-bqx9m_calico-system(dc56aca3-e78d-4c4c-9e51-d34a825d2bbf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 24 00:35:12.084306 containerd[1999]: time="2026-01-24T00:35:12.084263316Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 24 00:35:12.362505 containerd[1999]: time="2026-01-24T00:35:12.362297844Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:35:12.363693 containerd[1999]: time="2026-01-24T00:35:12.363578692Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 24 00:35:12.363693 containerd[1999]: time="2026-01-24T00:35:12.363645197Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 24 00:35:12.363844 kubelet[2445]: E0124 00:35:12.363796 2445 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:35:12.363898 kubelet[2445]: E0124 00:35:12.363842 2445 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:35:12.364007 kubelet[2445]: E0124 00:35:12.363964 2445 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qtphm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-bqx9m_calico-system(dc56aca3-e78d-4c4c-9e51-d34a825d2bbf): ErrImagePull: rpc 
error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 24 00:35:12.365250 kubelet[2445]: E0124 00:35:12.365182 2445 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-bqx9m" podUID="dc56aca3-e78d-4c4c-9e51-d34a825d2bbf" Jan 24 00:35:12.753850 kubelet[2445]: E0124 00:35:12.753794 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:35:13.754684 kubelet[2445]: E0124 00:35:13.754638 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:35:14.754869 kubelet[2445]: E0124 00:35:14.754810 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:35:15.755336 kubelet[2445]: E0124 00:35:15.755285 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:35:16.032421 systemd[1]: run-containerd-runc-k8s.io-5c1252abd2943e1dd2108c8d4ac869af83e215289e7af4b3ea89c0badd821dde-runc.axCvDw.mount: Deactivated successfully. 
Jan 24 00:35:16.756254 kubelet[2445]: E0124 00:35:16.756202 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:35:17.756669 kubelet[2445]: E0124 00:35:17.756585 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:35:18.757811 kubelet[2445]: E0124 00:35:18.757742 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:35:19.675424 kubelet[2445]: E0124 00:35:19.675343 2445 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:35:19.758710 kubelet[2445]: E0124 00:35:19.758665 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:35:20.759716 kubelet[2445]: E0124 00:35:20.759658 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:35:21.672904 kubelet[2445]: E0124 00:35:21.672117 2445 controller.go:195] "Failed to update lease" err="Put \"https://172.31.16.136:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.18.176?timeout=10s\": unexpected EOF" Jan 24 00:35:21.675983 kubelet[2445]: E0124 00:35:21.675880 2445 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://172.31.16.136:6443/api/v1/namespaces/calico-system/events/csi-node-driver-bqx9m.188d83832591f16f\": unexpected EOF" event="&Event{ObjectMeta:{csi-node-driver-bqx9m.188d83832591f16f calico-system 1537 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:calico-system,Name:csi-node-driver-bqx9m,UID:dc56aca3-e78d-4c4c-9e51-d34a825d2bbf,APIVersion:v1,ResourceVersion:971,FieldPath:spec.containers{calico-csi},},Reason:BackOff,Message:Back-off pulling image \"ghcr.io/flatcar/calico/csi:v3.30.4\",Source:EventSource{Component:kubelet,Host:172.31.18.176,},FirstTimestamp:2026-01-24 00:33:47 +0000 UTC,LastTimestamp:2026-01-24 00:34:49.835919048 +0000 UTC m=+90.635163089,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.18.176,}" Jan 24 00:35:21.677153 kubelet[2445]: E0124 00:35:21.677117 2445 controller.go:195] "Failed to update lease" err="Put \"https://172.31.16.136:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.18.176?timeout=10s\": dial tcp 172.31.16.136:6443: connect: connection reset by peer" Jan 24 00:35:21.678510 kubelet[2445]: E0124 00:35:21.677627 2445 controller.go:195] "Failed to update lease" err="Put \"https://172.31.16.136:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.18.176?timeout=10s\": dial tcp 172.31.16.136:6443: connect: connection refused" Jan 24 00:35:21.678510 kubelet[2445]: I0124 00:35:21.677655 2445 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 24 00:35:21.678510 kubelet[2445]: E0124 00:35:21.677987 2445 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.136:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.18.176?timeout=10s\": dial tcp 172.31.16.136:6443: connect: connection refused" interval="200ms" Jan 24 00:35:21.759847 kubelet[2445]: E0124 00:35:21.759787 2445 
file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:35:21.878795 kubelet[2445]: E0124 00:35:21.878758 2445 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.136:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.18.176?timeout=10s\": dial tcp 172.31.16.136:6443: connect: connection refused" interval="400ms" Jan 24 00:35:22.280392 kubelet[2445]: E0124 00:35:22.280351 2445 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.136:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.18.176?timeout=10s\": dial tcp 172.31.16.136:6443: connect: connection refused" interval="800ms" Jan 24 00:35:22.678055 kubelet[2445]: I0124 00:35:22.677785 2445 status_manager.go:895] "Failed to get status for pod" podUID="dc56aca3-e78d-4c4c-9e51-d34a825d2bbf" pod="calico-system/csi-node-driver-bqx9m" err="Get \"https://172.31.16.136:6443/api/v1/namespaces/calico-system/pods/csi-node-driver-bqx9m\": dial tcp 172.31.16.136:6443: connect: connection refused - error from a previous attempt: unexpected EOF" Jan 24 00:35:22.679128 kubelet[2445]: I0124 00:35:22.679085 2445 status_manager.go:895] "Failed to get status for pod" podUID="dc56aca3-e78d-4c4c-9e51-d34a825d2bbf" pod="calico-system/csi-node-driver-bqx9m" err="Get \"https://172.31.16.136:6443/api/v1/namespaces/calico-system/pods/csi-node-driver-bqx9m\": dial tcp 172.31.16.136:6443: connect: connection refused" Jan 24 00:35:22.760686 kubelet[2445]: E0124 00:35:22.760640 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:35:23.760851 kubelet[2445]: E0124 00:35:23.760809 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:35:24.761517 kubelet[2445]: E0124 00:35:24.761458 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:35:24.835833 kubelet[2445]: E0124 00:35:24.835780 2445 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-bqx9m" podUID="dc56aca3-e78d-4c4c-9e51-d34a825d2bbf" Jan 24 00:35:25.762574 kubelet[2445]: E0124 00:35:25.762509 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:35:26.762980 kubelet[2445]: E0124 00:35:26.762910 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:35:27.763693 
kubelet[2445]: E0124 00:35:27.763640 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:35:28.764440 kubelet[2445]: E0124 00:35:28.764341 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:35:29.765565 kubelet[2445]: E0124 00:35:29.765503 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:35:30.765760 kubelet[2445]: E0124 00:35:30.765702 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:35:31.766592 kubelet[2445]: E0124 00:35:31.766540 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:35:32.767712 kubelet[2445]: E0124 00:35:32.767639 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:35:33.081985 kubelet[2445]: E0124 00:35:33.081863 2445 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.136:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.18.176?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="1.6s" Jan 24 00:35:33.768686 kubelet[2445]: E0124 00:35:33.768636 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:35:34.769325 kubelet[2445]: E0124 00:35:34.769270 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:35:35.770490 kubelet[2445]: E0124 00:35:35.770416 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:35:36.771520 kubelet[2445]: E0124 00:35:36.771473 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:35:37.772421 kubelet[2445]: E0124 00:35:37.772364 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:35:38.773071 kubelet[2445]: E0124 00:35:38.773016 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:35:39.674452 kubelet[2445]: E0124 00:35:39.674406 2445 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:35:39.773641 kubelet[2445]: E0124 00:35:39.773572 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:35:39.835737 kubelet[2445]: E0124 00:35:39.835671 2445 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-bqx9m" podUID="dc56aca3-e78d-4c4c-9e51-d34a825d2bbf" Jan 24 00:35:40.773947 kubelet[2445]: E0124 00:35:40.773904 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:35:41.774736 kubelet[2445]: E0124 00:35:41.774687 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:35:42.775202 kubelet[2445]: E0124 00:35:42.775134 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:35:43.776399 kubelet[2445]: E0124 00:35:43.776309 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:35:44.682925 kubelet[2445]: E0124 00:35:44.682870 2445 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.136:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.18.176?timeout=10s\": context deadline exceeded" interval="3.2s" Jan 24 00:35:44.776522 kubelet[2445]: E0124 00:35:44.776466 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:35:45.776715 kubelet[2445]: E0124 00:35:45.776638 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:35:46.777135 kubelet[2445]: E0124 00:35:46.777060 2445 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"