Jan 23 01:10:17.885693 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Thu Jan 22 22:22:03 -00 2026 Jan 23 01:10:17.885727 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=e8d7116310bea9a494780b8becdce41e7cc03ed509d8e2363e08981a47b3edc6 Jan 23 01:10:17.885743 kernel: BIOS-provided physical RAM map: Jan 23 01:10:17.885753 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Jan 23 01:10:17.885762 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable Jan 23 01:10:17.885772 kernel: BIOS-e820: [mem 0x00000000786ce000-0x000000007894dfff] reserved Jan 23 01:10:17.885784 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data Jan 23 01:10:17.885795 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS Jan 23 01:10:17.885805 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable Jan 23 01:10:17.885815 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved Jan 23 01:10:17.885826 kernel: NX (Execute Disable) protection: active Jan 23 01:10:17.885838 kernel: APIC: Static calls initialized Jan 23 01:10:17.885848 kernel: e820: update [mem 0x768c0018-0x768c8e57] usable ==> usable Jan 23 01:10:17.885859 kernel: extended physical RAM map: Jan 23 01:10:17.885872 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable Jan 23 01:10:17.885884 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000768c0017] usable Jan 23 01:10:17.885898 kernel: reserve setup_data: [mem 0x00000000768c0018-0x00000000768c8e57] usable Jan 23 01:10:17.885910 kernel: reserve setup_data: [mem 0x00000000768c8e58-0x00000000786cdfff] usable Jan 23 01:10:17.885921 kernel: reserve setup_data: [mem 0x00000000786ce000-0x000000007894dfff] reserved Jan 23 01:10:17.885932 kernel: reserve setup_data: [mem 0x000000007894e000-0x000000007895dfff] ACPI data Jan 23 01:10:17.885944 kernel: reserve setup_data: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS Jan 23 01:10:17.885955 kernel: reserve setup_data: [mem 0x00000000789de000-0x000000007c97bfff] usable Jan 23 01:10:17.885977 kernel: reserve setup_data: [mem 0x000000007c97c000-0x000000007c9fffff] reserved Jan 23 01:10:17.885988 kernel: efi: EFI v2.7 by EDK II Jan 23 01:10:17.886016 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x77015518 Jan 23 01:10:17.886029 kernel: secureboot: Secure boot disabled Jan 23 01:10:17.886040 kernel: SMBIOS 2.7 present. 
Jan 23 01:10:17.886056 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Jan 23 01:10:17.886068 kernel: DMI: Memory slots populated: 1/1
Jan 23 01:10:17.886080 kernel: Hypervisor detected: KVM
Jan 23 01:10:17.886091 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000
Jan 23 01:10:17.886103 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 23 01:10:17.886115 kernel: kvm-clock: using sched offset of 5050490079 cycles
Jan 23 01:10:17.886128 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 23 01:10:17.886140 kernel: tsc: Detected 2499.996 MHz processor
Jan 23 01:10:17.886152 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 23 01:10:17.886164 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 23 01:10:17.886178 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000
Jan 23 01:10:17.886191 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jan 23 01:10:17.886204 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 23 01:10:17.886221 kernel: Using GB pages for direct mapping
Jan 23 01:10:17.886234 kernel: ACPI: Early table checksum verification disabled
Jan 23 01:10:17.886247 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON)
Jan 23 01:10:17.886260 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013)
Jan 23 01:10:17.886275 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Jan 23 01:10:17.886288 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Jan 23 01:10:17.886300 kernel: ACPI: FACS 0x00000000789D0000 000040
Jan 23 01:10:17.886313 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Jan 23 01:10:17.886326 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Jan 23 01:10:17.886339 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Jan 23 01:10:17.886351 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Jan 23 01:10:17.886364 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Jan 23 01:10:17.886380 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Jan 23 01:10:17.886392 kernel: ACPI: SSDT 0x0000000078952000 00007F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Jan 23 01:10:17.886405 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013)
Jan 23 01:10:17.886418 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113]
Jan 23 01:10:17.886431 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159]
Jan 23 01:10:17.886443 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f]
Jan 23 01:10:17.886456 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027]
Jan 23 01:10:17.886469 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b]
Jan 23 01:10:17.886484 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075]
Jan 23 01:10:17.886496 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f]
Jan 23 01:10:17.886509 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037]
Jan 23 01:10:17.886522 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758]
Jan 23 01:10:17.886534 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x7895207e]
Jan 23 01:10:17.886547 kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037]
Jan 23 01:10:17.886560 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Jan 23 01:10:17.886612 kernel: NUMA: Initialized distance table, cnt=1
Jan 23 01:10:17.886624 kernel: NODE_DATA(0) allocated [mem 0x7a8eedc0-0x7a8f5fff]
Jan 23 01:10:17.886637 kernel: Zone ranges:
Jan 23 01:10:17.886653 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 23 01:10:17.886665 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff]
Jan 23 01:10:17.886678 kernel: Normal empty
Jan 23 01:10:17.886690 kernel: Device empty
Jan 23 01:10:17.886702 kernel: Movable zone start for each node
Jan 23 01:10:17.886715 kernel: Early memory node ranges
Jan 23 01:10:17.886728 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jan 23 01:10:17.886740 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff]
Jan 23 01:10:17.886753 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff]
Jan 23 01:10:17.886769 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff]
Jan 23 01:10:17.886781 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 23 01:10:17.886794 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jan 23 01:10:17.886807 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Jan 23 01:10:17.886820 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges
Jan 23 01:10:17.886833 kernel: ACPI: PM-Timer IO Port: 0xb008
Jan 23 01:10:17.886845 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 23 01:10:17.886858 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Jan 23 01:10:17.886871 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 23 01:10:17.886886 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 23 01:10:17.886899 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 23 01:10:17.886912 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 23 01:10:17.886924 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 23 01:10:17.886938 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 23 01:10:17.886951 kernel: TSC deadline timer available
Jan 23 01:10:17.886963 kernel: CPU topo: Max. logical packages: 1
Jan 23 01:10:17.886976 kernel: CPU topo: Max. logical dies: 1
Jan 23 01:10:17.886989 kernel: CPU topo: Max. dies per package: 1
Jan 23 01:10:17.887001 kernel: CPU topo: Max. threads per core: 2
Jan 23 01:10:17.887016 kernel: CPU topo: Num. cores per package: 1
Jan 23 01:10:17.887029 kernel: CPU topo: Num. threads per package: 2
Jan 23 01:10:17.887041 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Jan 23 01:10:17.887054 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 23 01:10:17.887068 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices
Jan 23 01:10:17.887080 kernel: Booting paravirtualized kernel on KVM
Jan 23 01:10:17.887093 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 23 01:10:17.887106 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 23 01:10:17.887119 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Jan 23 01:10:17.887135 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Jan 23 01:10:17.887148 kernel: pcpu-alloc: [0] 0 1
Jan 23 01:10:17.887161 kernel: kvm-guest: PV spinlocks enabled
Jan 23 01:10:17.887174 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 23 01:10:17.887188 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=e8d7116310bea9a494780b8becdce41e7cc03ed509d8e2363e08981a47b3edc6
Jan 23 01:10:17.887201 kernel: random: crng init done
Jan 23 01:10:17.887214 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 23 01:10:17.887238 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 23 01:10:17.887253 kernel: Fallback order for Node 0: 0
Jan 23 01:10:17.887272 kernel: Built 1 zonelists, mobility grouping on. Total pages: 509451
Jan 23 01:10:17.887284 kernel: Policy zone: DMA32
Jan 23 01:10:17.887305 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 23 01:10:17.887320 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 23 01:10:17.887332 kernel: Kernel/User page tables isolation: enabled
Jan 23 01:10:17.887384 kernel: ftrace: allocating 40097 entries in 157 pages
Jan 23 01:10:17.887399 kernel: ftrace: allocated 157 pages with 5 groups
Jan 23 01:10:17.887410 kernel: Dynamic Preempt: voluntary
Jan 23 01:10:17.887422 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 23 01:10:17.887437 kernel: rcu: RCU event tracing is enabled.
Jan 23 01:10:17.887451 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 23 01:10:17.887471 kernel: Trampoline variant of Tasks RCU enabled.
Jan 23 01:10:17.887486 kernel: Rude variant of Tasks RCU enabled.
Jan 23 01:10:17.887502 kernel: Tracing variant of Tasks RCU enabled.
Jan 23 01:10:17.887517 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 23 01:10:17.887532 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 23 01:10:17.887552 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 23 01:10:17.887596 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 23 01:10:17.887613 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 23 01:10:17.887628 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jan 23 01:10:17.887644 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 23 01:10:17.887658 kernel: Console: colour dummy device 80x25
Jan 23 01:10:17.887672 kernel: printk: legacy console [tty0] enabled
Jan 23 01:10:17.887687 kernel: printk: legacy console [ttyS0] enabled
Jan 23 01:10:17.887702 kernel: ACPI: Core revision 20240827
Jan 23 01:10:17.887720 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Jan 23 01:10:17.887735 kernel: APIC: Switch to symmetric I/O mode setup
Jan 23 01:10:17.887750 kernel: x2apic enabled
Jan 23 01:10:17.887764 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 23 01:10:17.887777 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Jan 23 01:10:17.887791 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499996)
Jan 23 01:10:17.887804 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Jan 23 01:10:17.887818 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Jan 23 01:10:17.887832 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 23 01:10:17.887847 kernel: Spectre V2 : Mitigation: Retpolines
Jan 23 01:10:17.887863 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 23 01:10:17.887877 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Jan 23 01:10:17.887892 kernel: RETBleed: Vulnerable
Jan 23 01:10:17.887905 kernel: Speculative Store Bypass: Vulnerable
Jan 23 01:10:17.887918 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 23 01:10:17.887932 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 23 01:10:17.887946 kernel: GDS: Unknown: Dependent on hypervisor status
Jan 23 01:10:17.887959 kernel: active return thunk: its_return_thunk
Jan 23 01:10:17.887973 kernel: ITS: Mitigation: Aligned branch/return thunks
Jan 23 01:10:17.887986 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 23 01:10:17.888004 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 23 01:10:17.888018 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 23 01:10:17.888032 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Jan 23 01:10:17.888047 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Jan 23 01:10:17.888063 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Jan 23 01:10:17.888079 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Jan 23 01:10:17.888095 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Jan 23 01:10:17.888110 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Jan 23 01:10:17.888126 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 23 01:10:17.888141 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Jan 23 01:10:17.888158 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Jan 23 01:10:17.888176 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Jan 23 01:10:17.888192 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Jan 23 01:10:17.888208 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Jan 23 01:10:17.888225 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Jan 23 01:10:17.888241 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Jan 23 01:10:17.888257 kernel: Freeing SMP alternatives memory: 32K
Jan 23 01:10:17.888273 kernel: pid_max: default: 32768 minimum: 301
Jan 23 01:10:17.888289 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jan 23 01:10:17.888305 kernel: landlock: Up and running.
Jan 23 01:10:17.888320 kernel: SELinux: Initializing.
Jan 23 01:10:17.888336 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 23 01:10:17.888355 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 23 01:10:17.888371 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Jan 23 01:10:17.888386 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Jan 23 01:10:17.888402 kernel: signal: max sigframe size: 3632
Jan 23 01:10:17.888418 kernel: rcu: Hierarchical SRCU implementation.
Jan 23 01:10:17.888432 kernel: rcu: Max phase no-delay instances is 400.
Jan 23 01:10:17.888446 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jan 23 01:10:17.888462 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 23 01:10:17.888476 kernel: smp: Bringing up secondary CPUs ...
Jan 23 01:10:17.888490 kernel: smpboot: x86: Booting SMP configuration:
Jan 23 01:10:17.888507 kernel: .... node #0, CPUs: #1
Jan 23 01:10:17.888522 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Jan 23 01:10:17.888537 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Jan 23 01:10:17.888552 kernel: smp: Brought up 1 node, 2 CPUs
Jan 23 01:10:17.888580 kernel: smpboot: Total of 2 processors activated (9999.98 BogoMIPS)
Jan 23 01:10:17.888594 kernel: Memory: 1899860K/2037804K available (14336K kernel code, 2445K rwdata, 26064K rodata, 46196K init, 2564K bss, 133380K reserved, 0K cma-reserved)
Jan 23 01:10:17.888606 kernel: devtmpfs: initialized
Jan 23 01:10:17.888618 kernel: x86/mm: Memory block size: 128MB
Jan 23 01:10:17.888634 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes)
Jan 23 01:10:17.888648 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 23 01:10:17.888663 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 23 01:10:17.888677 kernel: pinctrl core: initialized pinctrl subsystem
Jan 23 01:10:17.888693 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 23 01:10:17.888706 kernel: audit: initializing netlink subsys (disabled)
Jan 23 01:10:17.888724 kernel: audit: type=2000 audit(1769130615.606:1): state=initialized audit_enabled=0 res=1
Jan 23 01:10:17.888738 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 23 01:10:17.888751 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 23 01:10:17.888767 kernel: cpuidle: using governor menu
Jan 23 01:10:17.888781 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 23 01:10:17.888795 kernel: dca service started, version 1.12.1
Jan 23 01:10:17.888808 kernel: PCI: Using configuration type 1 for base access
Jan 23 01:10:17.888821 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 23 01:10:17.888838 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 23 01:10:17.888857 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 23 01:10:17.888869 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 23 01:10:17.888882 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 23 01:10:17.888898 kernel: ACPI: Added _OSI(Module Device)
Jan 23 01:10:17.888911 kernel: ACPI: Added _OSI(Processor Device)
Jan 23 01:10:17.888922 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 23 01:10:17.888934 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Jan 23 01:10:17.888946 kernel: ACPI: Interpreter enabled
Jan 23 01:10:17.888959 kernel: ACPI: PM: (supports S0 S5)
Jan 23 01:10:17.888971 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 23 01:10:17.888984 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 23 01:10:17.888999 kernel: PCI: Using E820 reservations for host bridge windows
Jan 23 01:10:17.889017 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jan 23 01:10:17.889031 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 23 01:10:17.889245 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jan 23 01:10:17.889390 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Jan 23 01:10:17.889520 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Jan 23 01:10:17.889538 kernel: acpiphp: Slot [3] registered
Jan 23 01:10:17.889553 kernel: acpiphp: Slot [4] registered
Jan 23 01:10:17.890625 kernel: acpiphp: Slot [5] registered
Jan 23 01:10:17.890644 kernel: acpiphp: Slot [6] registered
Jan 23 01:10:17.890659 kernel: acpiphp: Slot [7] registered
Jan 23 01:10:17.890671 kernel: acpiphp: Slot [8] registered
Jan 23 01:10:17.890684 kernel: acpiphp: Slot [9] registered
Jan 23 01:10:17.890697 kernel: acpiphp: Slot [10] registered
Jan 23 01:10:17.890710 kernel: acpiphp: Slot [11] registered
Jan 23 01:10:17.890723 kernel: acpiphp: Slot [12] registered
Jan 23 01:10:17.890737 kernel: acpiphp: Slot [13] registered
Jan 23 01:10:17.890749 kernel: acpiphp: Slot [14] registered
Jan 23 01:10:17.890770 kernel: acpiphp: Slot [15] registered
Jan 23 01:10:17.890783 kernel: acpiphp: Slot [16] registered
Jan 23 01:10:17.890797 kernel: acpiphp: Slot [17] registered
Jan 23 01:10:17.890811 kernel: acpiphp: Slot [18] registered
Jan 23 01:10:17.890828 kernel: acpiphp: Slot [19] registered
Jan 23 01:10:17.890842 kernel: acpiphp: Slot [20] registered
Jan 23 01:10:17.890856 kernel: acpiphp: Slot [21] registered
Jan 23 01:10:17.890869 kernel: acpiphp: Slot [22] registered
Jan 23 01:10:17.890882 kernel: acpiphp: Slot [23] registered
Jan 23 01:10:17.890900 kernel: acpiphp: Slot [24] registered
Jan 23 01:10:17.890915 kernel: acpiphp: Slot [25] registered
Jan 23 01:10:17.890929 kernel: acpiphp: Slot [26] registered
Jan 23 01:10:17.890943 kernel: acpiphp: Slot [27] registered
Jan 23 01:10:17.890957 kernel: acpiphp: Slot [28] registered
Jan 23 01:10:17.891602 kernel: acpiphp: Slot [29] registered
Jan 23 01:10:17.891620 kernel: acpiphp: Slot [30] registered
Jan 23 01:10:17.891632 kernel: acpiphp: Slot [31] registered
Jan 23 01:10:17.891646 kernel: PCI host bridge to bus 0000:00
Jan 23 01:10:17.891821 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 23 01:10:17.891944 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 23 01:10:17.892055 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 23 01:10:17.892164 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Jan 23 01:10:17.892274 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window]
Jan 23 01:10:17.892386 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 23 01:10:17.892531 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Jan 23 01:10:17.893626 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Jan 23 01:10:17.893773 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000 conventional PCI endpoint
Jan 23 01:10:17.893901 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Jan 23 01:10:17.894060 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Jan 23 01:10:17.894195 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Jan 23 01:10:17.894323 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Jan 23 01:10:17.894466 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Jan 23 01:10:17.895689 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Jan 23 01:10:17.895836 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Jan 23 01:10:17.896010 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000 conventional PCI endpoint
Jan 23 01:10:17.896146 kernel: pci 0000:00:03.0: BAR 0 [mem 0x80000000-0x803fffff pref]
Jan 23 01:10:17.896273 kernel: pci 0000:00:03.0: ROM [mem 0xffff0000-0xffffffff pref]
Jan 23 01:10:17.896399 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 23 01:10:17.896539 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 PCIe Endpoint
Jan 23 01:10:17.897461 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80404000-0x80407fff]
Jan 23 01:10:17.897619 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 PCIe Endpoint
Jan 23 01:10:17.898224 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80400000-0x80403fff]
Jan 23 01:10:17.898254 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 23 01:10:17.898272 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 23 01:10:17.898289 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 23 01:10:17.898314 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 23 01:10:17.898330 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 23 01:10:17.898346 kernel: iommu: Default domain type: Translated
Jan 23 01:10:17.898362 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 23 01:10:17.898379 kernel: efivars: Registered efivars operations
Jan 23 01:10:17.898395 kernel: PCI: Using ACPI for IRQ routing
Jan 23 01:10:17.898410 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 23 01:10:17.898426 kernel: e820: reserve RAM buffer [mem 0x768c0018-0x77ffffff]
Jan 23 01:10:17.898444 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff]
Jan 23 01:10:17.898466 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff]
Jan 23 01:10:17.899665 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Jan 23 01:10:17.899810 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Jan 23 01:10:17.899944 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 23 01:10:17.899964 kernel: vgaarb: loaded
Jan 23 01:10:17.899981 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Jan 23 01:10:17.899996 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Jan 23 01:10:17.900012 kernel: clocksource: Switched to clocksource kvm-clock
Jan 23 01:10:17.900027 kernel: VFS: Disk quotas dquot_6.6.0
Jan 23 01:10:17.900046 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 23 01:10:17.900061 kernel: pnp: PnP ACPI init
Jan 23 01:10:17.900077 kernel: pnp: PnP ACPI: found 5 devices
Jan 23 01:10:17.900093 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 23 01:10:17.900108 kernel: NET: Registered PF_INET protocol family
Jan 23 01:10:17.900124 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 23 01:10:17.900140 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jan 23 01:10:17.900155 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 23 01:10:17.900173 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 23 01:10:17.900189 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jan 23 01:10:17.900204 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jan 23 01:10:17.900220 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 23 01:10:17.900235 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 23 01:10:17.900251 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 23 01:10:17.900266 kernel: NET: Registered PF_XDP protocol family
Jan 23 01:10:17.900391 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 23 01:10:17.900510 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 23 01:10:17.901493 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 23 01:10:17.901640 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Jan 23 01:10:17.901753 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window]
Jan 23 01:10:17.901889 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 23 01:10:17.901911 kernel: PCI: CLS 0 bytes, default 64
Jan 23 01:10:17.901928 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jan 23 01:10:17.901943 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Jan 23 01:10:17.901960 kernel: clocksource: Switched to clocksource tsc
Jan 23 01:10:17.901991 kernel: Initialise system trusted keyrings
Jan 23 01:10:17.902022 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Jan 23 01:10:17.902035 kernel: Key type asymmetric registered
Jan 23 01:10:17.902048 kernel: Asymmetric key parser 'x509' registered
Jan 23 01:10:17.902061 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 23 01:10:17.902075 kernel: io scheduler mq-deadline registered
Jan 23 01:10:17.902089 kernel: io scheduler kyber registered
Jan 23 01:10:17.902103 kernel: io scheduler bfq registered
Jan 23 01:10:17.902118 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 23 01:10:17.902136 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 23 01:10:17.902151 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 23 01:10:17.902164 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 23 01:10:17.902177 kernel: i8042: Warning: Keylock active
Jan 23 01:10:17.904608 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 23 01:10:17.904639 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 23 01:10:17.904820 kernel: rtc_cmos 00:00: RTC can wake from S4
Jan 23 01:10:17.904964 kernel: rtc_cmos 00:00: registered as rtc0
Jan 23 01:10:17.905092 kernel: rtc_cmos 00:00: setting system clock to 2026-01-23T01:10:17 UTC (1769130617)
Jan 23 01:10:17.905209 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Jan 23 01:10:17.905245 kernel: intel_pstate: CPU model not supported
Jan 23 01:10:17.905262 kernel: efifb: probing for efifb
Jan 23 01:10:17.905277 kernel: efifb: framebuffer at 0x80000000, using 1876k, total 1875k
Jan 23 01:10:17.905291 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1
Jan 23 01:10:17.905306 kernel: efifb: scrolling: redraw
Jan 23 01:10:17.905320 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jan 23 01:10:17.905334 kernel: Console: switching to colour frame buffer device 100x37
Jan 23 01:10:17.905350 kernel: fb0: EFI VGA frame buffer device
Jan 23 01:10:17.905364 kernel: pstore: Using crash dump compression: deflate
Jan 23 01:10:17.905378 kernel: pstore: Registered efi_pstore as persistent store backend
Jan 23 01:10:17.905392 kernel: NET: Registered PF_INET6 protocol family
Jan 23 01:10:17.905406 kernel: Segment Routing with IPv6
Jan 23 01:10:17.905421 kernel: In-situ OAM (IOAM) with IPv6
Jan 23 01:10:17.905435 kernel: NET: Registered PF_PACKET protocol family
Jan 23 01:10:17.905449 kernel: Key type dns_resolver registered
Jan 23 01:10:17.905463 kernel: IPI shorthand broadcast: enabled
Jan 23 01:10:17.905480 kernel: sched_clock: Marking stable (2564002008, 173563621)->(2820121559, -82555930)
Jan 23 01:10:17.905495 kernel: registered taskstats version 1
Jan 23 01:10:17.905509 kernel: Loading compiled-in X.509 certificates
Jan 23 01:10:17.905523 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: ed54f39d0282729985c39b8ffa9938cacff38d8a'
Jan 23 01:10:17.905537 kernel: Demotion targets for Node 0: null
Jan 23 01:10:17.905551 kernel: Key type .fscrypt registered
Jan 23 01:10:17.905627 kernel: Key type fscrypt-provisioning registered
Jan 23 01:10:17.905641 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 23 01:10:17.905655 kernel: ima: Allocated hash algorithm: sha1
Jan 23 01:10:17.905672 kernel: ima: No architecture policies found
Jan 23 01:10:17.905687 kernel: clk: Disabling unused clocks
Jan 23 01:10:17.905701 kernel: Warning: unable to open an initial console.
Jan 23 01:10:17.905715 kernel: Freeing unused kernel image (initmem) memory: 46196K
Jan 23 01:10:17.905730 kernel: Write protecting the kernel read-only data: 40960k
Jan 23 01:10:17.905748 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K
Jan 23 01:10:17.905765 kernel: Run /init as init process
Jan 23 01:10:17.905779 kernel: with arguments:
Jan 23 01:10:17.905794 kernel: /init
Jan 23 01:10:17.905807 kernel: with environment:
Jan 23 01:10:17.905821 kernel: HOME=/
Jan 23 01:10:17.905837 kernel: TERM=linux
Jan 23 01:10:17.905853 systemd[1]: Successfully made /usr/ read-only.
Jan 23 01:10:17.905871 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 23 01:10:17.905890 systemd[1]: Detected virtualization amazon. Jan 23 01:10:17.905904 systemd[1]: Detected architecture x86-64. Jan 23 01:10:17.905918 systemd[1]: Running in initrd. Jan 23 01:10:17.905932 systemd[1]: No hostname configured, using default hostname. Jan 23 01:10:17.905946 systemd[1]: Hostname set to . Jan 23 01:10:17.905970 systemd[1]: Initializing machine ID from VM UUID. Jan 23 01:10:17.905985 systemd[1]: Queued start job for default target initrd.target. Jan 23 01:10:17.906020 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 01:10:17.906035 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 01:10:17.906051 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 23 01:10:17.906067 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 23 01:10:17.906082 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 23 01:10:17.906098 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 23 01:10:17.906115 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 23 01:10:17.906133 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 23 01:10:17.906148 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 01:10:17.906163 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 23 01:10:17.906178 systemd[1]: Reached target paths.target - Path Units. Jan 23 01:10:17.906193 systemd[1]: Reached target slices.target - Slice Units. Jan 23 01:10:17.906208 systemd[1]: Reached target swap.target - Swaps. Jan 23 01:10:17.906224 systemd[1]: Reached target timers.target - Timer Units. Jan 23 01:10:17.906239 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 23 01:10:17.906254 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 23 01:10:17.906273 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 23 01:10:17.906288 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jan 23 01:10:17.906303 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 23 01:10:17.906319 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 23 01:10:17.906334 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 01:10:17.906350 systemd[1]: Reached target sockets.target - Socket Units. Jan 23 01:10:17.906365 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 23 01:10:17.906380 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 23 01:10:17.906399 systemd[1]: Finished network-cleanup.service - Network Cleanup. 
Jan 23 01:10:17.906415 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jan 23 01:10:17.906431 systemd[1]: Starting systemd-fsck-usr.service... Jan 23 01:10:17.906446 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 23 01:10:17.906461 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 23 01:10:17.906476 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 01:10:17.906518 systemd-journald[188]: Collecting audit messages is disabled. Jan 23 01:10:17.906555 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 23 01:10:17.909614 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 01:10:17.909642 systemd[1]: Finished systemd-fsck-usr.service. Jan 23 01:10:17.909661 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 23 01:10:17.909682 systemd-journald[188]: Journal started Jan 23 01:10:17.909720 systemd-journald[188]: Runtime Journal (/run/log/journal/ec28898ebd7ba67bc893337a6075f5b3) is 4.7M, max 38.1M, 33.3M free. Jan 23 01:10:17.921009 systemd[1]: Started systemd-journald.service - Journal Service. Jan 23 01:10:17.923621 systemd-modules-load[189]: Inserted module 'overlay' Jan 23 01:10:17.928132 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 01:10:17.940485 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 23 01:10:17.948725 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 23 01:10:17.956850 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 23 01:10:17.961239 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 23 01:10:17.970679 systemd-tmpfiles[206]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jan 23 01:10:17.974878 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 23 01:10:17.981590 kernel: Bridge firewalling registered Jan 23 01:10:17.981897 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 01:10:17.983651 systemd-modules-load[189]: Inserted module 'br_netfilter' Jan 23 01:10:17.985270 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 23 01:10:17.990653 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 23 01:10:17.992964 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 23 01:10:17.997745 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 23 01:10:18.002600 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 23 01:10:18.009025 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 01:10:18.019664 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 23 01:10:18.024729 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Jan 23 01:10:18.027911 dracut-cmdline[225]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=e8d7116310bea9a494780b8becdce41e7cc03ed509d8e2363e08981a47b3edc6 Jan 23 01:10:18.087221 systemd-resolved[237]: Positive Trust Anchors: Jan 23 01:10:18.088138 systemd-resolved[237]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 23 01:10:18.088201 systemd-resolved[237]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 23 01:10:18.095248 systemd-resolved[237]: Defaulting to hostname 'linux'. Jan 23 01:10:18.098429 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 23 01:10:18.099136 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 23 01:10:18.130614 kernel: SCSI subsystem initialized Jan 23 01:10:18.139591 kernel: Loading iSCSI transport class v2.0-870. Jan 23 01:10:18.150599 kernel: iscsi: registered transport (tcp) Jan 23 01:10:18.172733 kernel: iscsi: registered transport (qla4xxx) Jan 23 01:10:18.172812 kernel: QLogic iSCSI HBA Driver Jan 23 01:10:18.191865 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 23 01:10:18.213003 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 23 01:10:18.215496 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 23 01:10:18.261361 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 23 01:10:18.263717 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 23 01:10:18.312617 kernel: raid6: avx512x4 gen() 17717 MB/s Jan 23 01:10:18.330600 kernel: raid6: avx512x2 gen() 17786 MB/s Jan 23 01:10:18.348606 kernel: raid6: avx512x1 gen() 17466 MB/s Jan 23 01:10:18.366602 kernel: raid6: avx2x4 gen() 17626 MB/s Jan 23 01:10:18.384600 kernel: raid6: avx2x2 gen() 17626 MB/s Jan 23 01:10:18.402835 kernel: raid6: avx2x1 gen() 13486 MB/s Jan 23 01:10:18.402904 kernel: raid6: using algorithm avx512x2 gen() 17786 MB/s Jan 23 01:10:18.421791 kernel: raid6: .... xor() 24730 MB/s, rmw enabled Jan 23 01:10:18.421872 kernel: raid6: using avx512x2 recovery algorithm Jan 23 01:10:18.442599 kernel: xor: automatically using best checksumming function avx Jan 23 01:10:18.608601 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 23 01:10:18.615430 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 23 01:10:18.617482 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 01:10:18.648261 systemd-udevd[439]: Using default interface naming scheme 'v255'. 
Jan 23 01:10:18.654530 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 01:10:18.657082 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 23 01:10:18.685651 dracut-pre-trigger[448]: rd.md=0: removing MD RAID activation Jan 23 01:10:18.712806 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 23 01:10:18.714985 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 23 01:10:18.772367 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 01:10:18.777272 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 23 01:10:18.876720 kernel: cryptd: max_cpu_qlen set to 1000 Jan 23 01:10:18.880963 kernel: ena 0000:00:05.0: ENA device version: 0.10 Jan 23 01:10:18.881239 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Jan 23 01:10:18.891586 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy. Jan 23 01:10:18.896629 kernel: nvme nvme0: pci function 0000:00:04.0 Jan 23 01:10:18.898592 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Jan 23 01:10:18.901609 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input2 Jan 23 01:10:18.911421 kernel: nvme nvme0: 2/0/0 default/read/poll queues Jan 23 01:10:18.911641 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 01:10:18.911796 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 01:10:18.916221 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:ae:12:4e:fa:b5 Jan 23 01:10:18.916817 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 01:10:18.918296 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 01:10:18.943833 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 23 01:10:18.943873 kernel: GPT:9289727 != 33554431 Jan 23 01:10:18.943891 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 23 01:10:18.943910 kernel: GPT:9289727 != 33554431 Jan 23 01:10:18.943927 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 23 01:10:18.943945 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 23 01:10:18.943967 kernel: AES CTR mode by8 optimization enabled Jan 23 01:10:18.922634 (udev-worker)[483]: Network interface NamePolicy= disabled on kernel command line. Jan 23 01:10:18.943534 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jan 23 01:10:18.970866 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 01:10:18.971691 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 01:10:18.974547 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jan 23 01:10:18.978949 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 01:10:18.982599 kernel: nvme nvme0: using unchecked data buffer Jan 23 01:10:19.014998 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 01:10:19.112653 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Jan 23 01:10:19.137837 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Jan 23 01:10:19.138790 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. 
Jan 23 01:10:19.151222 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jan 23 01:10:19.161327 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Jan 23 01:10:19.162006 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Jan 23 01:10:19.163452 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 23 01:10:19.164468 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 01:10:19.165629 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 23 01:10:19.167440 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 23 01:10:19.172744 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 23 01:10:19.189543 disk-uuid[680]: Primary Header is updated. Jan 23 01:10:19.189543 disk-uuid[680]: Secondary Entries is updated. Jan 23 01:10:19.189543 disk-uuid[680]: Secondary Header is updated. Jan 23 01:10:19.195300 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 23 01:10:19.198641 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 23 01:10:19.211636 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 23 01:10:20.221305 disk-uuid[683]: The operation has completed successfully. Jan 23 01:10:20.222073 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 23 01:10:20.374265 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 23 01:10:20.374396 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 23 01:10:20.411969 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 23 01:10:20.426398 sh[948]: Success Jan 23 01:10:20.447191 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 23 01:10:20.447281 kernel: device-mapper: uevent: version 1.0.3 Jan 23 01:10:20.447303 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jan 23 01:10:20.459585 kernel: device-mapper: verity: sha256 using shash "sha256-avx2" Jan 23 01:10:20.567093 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 23 01:10:20.571688 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 23 01:10:20.586855 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 23 01:10:20.607591 kernel: BTRFS: device fsid f8eb2396-46b8-49a3-a8e7-cd8ad10a3ce4 devid 1 transid 37 /dev/mapper/usr (254:0) scanned by mount (971) Jan 23 01:10:20.611400 kernel: BTRFS info (device dm-0): first mount of filesystem f8eb2396-46b8-49a3-a8e7-cd8ad10a3ce4 Jan 23 01:10:20.611471 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 23 01:10:20.716042 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 23 01:10:20.716115 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 23 01:10:20.716129 kernel: BTRFS info (device dm-0): enabling free space tree Jan 23 01:10:20.720403 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 23 01:10:20.721309 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jan 23 01:10:20.722253 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. 
Jan 23 01:10:20.723030 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 23 01:10:20.724436 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 23 01:10:20.757621 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1006) Jan 23 01:10:20.762760 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem a3ccc207-e674-4ba2-b6d8-404b4581ba01 Jan 23 01:10:20.762845 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jan 23 01:10:20.782576 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 23 01:10:20.782659 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Jan 23 01:10:20.790695 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem a3ccc207-e674-4ba2-b6d8-404b4581ba01 Jan 23 01:10:20.791859 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 23 01:10:20.794775 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 23 01:10:20.828374 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 23 01:10:20.830933 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 23 01:10:20.884940 systemd-networkd[1140]: lo: Link UP Jan 23 01:10:20.884953 systemd-networkd[1140]: lo: Gained carrier Jan 23 01:10:20.886172 systemd-networkd[1140]: Enumeration completed Jan 23 01:10:20.886752 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 23 01:10:20.886866 systemd-networkd[1140]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 01:10:20.886871 systemd-networkd[1140]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 23 01:10:20.887845 systemd[1]: Reached target network.target - Network. Jan 23 01:10:20.890166 systemd-networkd[1140]: eth0: Link UP Jan 23 01:10:20.890176 systemd-networkd[1140]: eth0: Gained carrier Jan 23 01:10:20.890190 systemd-networkd[1140]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 01:10:20.907663 systemd-networkd[1140]: eth0: DHCPv4 address 172.31.20.229/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jan 23 01:10:21.084504 ignition[1097]: Ignition 2.22.0 Jan 23 01:10:21.084521 ignition[1097]: Stage: fetch-offline Jan 23 01:10:21.084726 ignition[1097]: no configs at "/usr/lib/ignition/base.d" Jan 23 01:10:21.084734 ignition[1097]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 23 01:10:21.085011 ignition[1097]: Ignition finished successfully Jan 23 01:10:21.088226 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 23 01:10:21.089624 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jan 23 01:10:21.115973 ignition[1150]: Ignition 2.22.0 Jan 23 01:10:21.115989 ignition[1150]: Stage: fetch Jan 23 01:10:21.116351 ignition[1150]: no configs at "/usr/lib/ignition/base.d" Jan 23 01:10:21.116363 ignition[1150]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 23 01:10:21.116462 ignition[1150]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 23 01:10:21.152659 ignition[1150]: PUT result: OK Jan 23 01:10:21.156406 ignition[1150]: parsed url from cmdline: "" Jan 23 01:10:21.156418 ignition[1150]: no config URL provided Jan 23 01:10:21.156429 ignition[1150]: reading system config file "/usr/lib/ignition/user.ign" Jan 23 01:10:21.156456 ignition[1150]: no config at "/usr/lib/ignition/user.ign" Jan 23 01:10:21.156487 ignition[1150]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 23 01:10:21.158920 ignition[1150]: PUT result: OK Jan 23 01:10:21.158998 ignition[1150]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Jan 23 01:10:21.160662 ignition[1150]: GET result: OK Jan 23 01:10:21.160733 ignition[1150]: parsing config with SHA512: 7ed2753c59b8bce434f3bf2453a63c49dcce55593440e95257de90c0d160f0341d8a36dd9306775c58fd28a9a1eff26864dc0b4d94910ef5a7408ce1d555af1b Jan 23 01:10:21.164087 unknown[1150]: fetched base config from "system" Jan 23 01:10:21.164102 unknown[1150]: fetched base config from "system" Jan 23 01:10:21.164431 ignition[1150]: fetch: fetch complete Jan 23 01:10:21.164109 unknown[1150]: fetched user config from "aws" Jan 23 01:10:21.164439 ignition[1150]: fetch: fetch passed Jan 23 01:10:21.164500 ignition[1150]: Ignition finished successfully Jan 23 01:10:21.167825 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 23 01:10:21.169319 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 23 01:10:21.202676 ignition[1156]: Ignition 2.22.0 Jan 23 01:10:21.202689 ignition[1156]: Stage: kargs Jan 23 01:10:21.203107 ignition[1156]: no configs at "/usr/lib/ignition/base.d" Jan 23 01:10:21.203120 ignition[1156]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 23 01:10:21.203235 ignition[1156]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 23 01:10:21.204169 ignition[1156]: PUT result: OK Jan 23 01:10:21.207091 ignition[1156]: kargs: kargs passed Jan 23 01:10:21.207154 ignition[1156]: Ignition finished successfully Jan 23 01:10:21.208947 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 23 01:10:21.210761 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 23 01:10:21.243774 ignition[1163]: Ignition 2.22.0 Jan 23 01:10:21.243789 ignition[1163]: Stage: disks Jan 23 01:10:21.244170 ignition[1163]: no configs at "/usr/lib/ignition/base.d" Jan 23 01:10:21.244183 ignition[1163]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 23 01:10:21.244297 ignition[1163]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 23 01:10:21.246321 ignition[1163]: PUT result: OK Jan 23 01:10:21.248830 ignition[1163]: disks: disks passed Jan 23 01:10:21.248886 ignition[1163]: Ignition finished successfully Jan 23 01:10:21.251391 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 23 01:10:21.252264 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 23 01:10:21.252681 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 23 01:10:21.253220 systemd[1]: Reached target local-fs.target - Local File Systems. 
Jan 23 01:10:21.253715 systemd[1]: Reached target sysinit.target - System Initialization. Jan 23 01:10:21.254418 systemd[1]: Reached target basic.target - Basic System. Jan 23 01:10:21.255972 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 23 01:10:21.292809 systemd-fsck[1171]: ROOT: clean, 15/553520 files, 52789/553472 blocks Jan 23 01:10:21.295638 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 23 01:10:21.297510 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 23 01:10:21.457616 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 2036722e-4586-420e-8dc7-a3b65e840c36 r/w with ordered data mode. Quota mode: none. Jan 23 01:10:21.458054 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 23 01:10:21.459024 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 23 01:10:21.461116 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 23 01:10:21.463652 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 23 01:10:21.465841 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 23 01:10:21.465896 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 23 01:10:21.465922 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 23 01:10:21.472011 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 23 01:10:21.474707 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 23 01:10:21.488593 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1190) Jan 23 01:10:21.491856 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem a3ccc207-e674-4ba2-b6d8-404b4581ba01 Jan 23 01:10:21.491925 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jan 23 01:10:21.500925 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 23 01:10:21.500999 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Jan 23 01:10:21.503881 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 23 01:10:21.667239 initrd-setup-root[1214]: cut: /sysroot/etc/passwd: No such file or directory Jan 23 01:10:21.685820 initrd-setup-root[1221]: cut: /sysroot/etc/group: No such file or directory Jan 23 01:10:21.691809 initrd-setup-root[1228]: cut: /sysroot/etc/shadow: No such file or directory Jan 23 01:10:21.696916 initrd-setup-root[1235]: cut: /sysroot/etc/gshadow: No such file or directory Jan 23 01:10:21.927304 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 23 01:10:21.929340 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 23 01:10:21.932707 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 23 01:10:21.947756 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 23 01:10:21.949871 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem a3ccc207-e674-4ba2-b6d8-404b4581ba01 Jan 23 01:10:21.977618 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Jan 23 01:10:21.988588 ignition[1302]: INFO : Ignition 2.22.0 Jan 23 01:10:21.988588 ignition[1302]: INFO : Stage: mount Jan 23 01:10:21.988588 ignition[1302]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 01:10:21.988588 ignition[1302]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 23 01:10:21.991241 ignition[1302]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 23 01:10:21.991241 ignition[1302]: INFO : PUT result: OK Jan 23 01:10:21.993452 ignition[1302]: INFO : mount: mount passed Jan 23 01:10:21.994087 ignition[1302]: INFO : Ignition finished successfully Jan 23 01:10:21.995637 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 23 01:10:21.997090 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 23 01:10:22.018483 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 23 01:10:22.056082 systemd-networkd[1140]: eth0: Gained IPv6LL Jan 23 01:10:22.057227 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1315) Jan 23 01:10:22.060610 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem a3ccc207-e674-4ba2-b6d8-404b4581ba01 Jan 23 01:10:22.060685 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jan 23 01:10:22.068789 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 23 01:10:22.068861 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Jan 23 01:10:22.072099 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 23 01:10:22.109719 ignition[1331]: INFO : Ignition 2.22.0 Jan 23 01:10:22.109719 ignition[1331]: INFO : Stage: files Jan 23 01:10:22.111150 ignition[1331]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 01:10:22.111150 ignition[1331]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 23 01:10:22.111150 ignition[1331]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 23 01:10:22.112285 ignition[1331]: INFO : PUT result: OK Jan 23 01:10:22.114112 ignition[1331]: DEBUG : files: compiled without relabeling support, skipping Jan 23 01:10:22.115165 ignition[1331]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 23 01:10:22.115165 ignition[1331]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 23 01:10:22.127488 ignition[1331]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 23 01:10:22.128259 ignition[1331]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 23 01:10:22.128259 ignition[1331]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 23 01:10:22.128088 unknown[1331]: wrote ssh authorized keys file for user: core Jan 23 01:10:22.132548 ignition[1331]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Jan 23 01:10:22.133327 ignition[1331]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Jan 23 01:10:22.137509 ignition[1331]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 23 01:10:22.138540 ignition[1331]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 23 01:10:22.138540 ignition[1331]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 23 01:10:22.140249 ignition[1331]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 23 01:10:22.141364 ignition[1331]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 23 01:10:22.141364 ignition[1331]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1 Jan 23 01:10:22.442219 ignition[1331]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Jan 23 01:10:22.916318 ignition[1331]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 23 01:10:22.917348 ignition[1331]: INFO : files: createResultFile: createFiles: op(7): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 23 01:10:22.917348 ignition[1331]: INFO : files: createResultFile: createFiles: op(7): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 23 01:10:22.917348 ignition[1331]: INFO : files: files passed Jan 23 01:10:22.917348 ignition[1331]: INFO : Ignition finished successfully Jan 23 01:10:22.919090 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 23 01:10:22.921056 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 23 01:10:22.922673 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 23 01:10:22.930666 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 23 01:10:22.931253 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 23 01:10:22.943413 initrd-setup-root-after-ignition[1361]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 23 01:10:22.943413 initrd-setup-root-after-ignition[1361]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 23 01:10:22.947469 initrd-setup-root-after-ignition[1365]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 23 01:10:22.949457 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 23 01:10:22.951125 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 23 01:10:22.952475 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 23 01:10:23.016023 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 23 01:10:23.016156 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 23 01:10:23.017818 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 23 01:10:23.018531 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 23 01:10:23.019338 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 23 01:10:23.020264 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 23 01:10:23.045709 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 23 01:10:23.048830 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... 
Jan 23 01:10:23.072703 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 23 01:10:23.073383 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 01:10:23.074676 systemd[1]: Stopped target timers.target - Timer Units. Jan 23 01:10:23.075527 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 23 01:10:23.075782 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 23 01:10:23.076876 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 23 01:10:23.077772 systemd[1]: Stopped target basic.target - Basic System. Jan 23 01:10:23.078736 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 23 01:10:23.079480 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 23 01:10:23.080204 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 23 01:10:23.080991 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jan 23 01:10:23.081753 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 23 01:10:23.082718 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 23 01:10:23.083485 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 23 01:10:23.084668 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 23 01:10:23.085417 systemd[1]: Stopped target swap.target - Swaps. Jan 23 01:10:23.086287 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 23 01:10:23.086511 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 23 01:10:23.087524 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 23 01:10:23.088341 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 01:10:23.088994 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 23 01:10:23.089143 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 01:10:23.089800 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 23 01:10:23.090108 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 23 01:10:23.091474 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 23 01:10:23.091725 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 23 01:10:23.092428 systemd[1]: ignition-files.service: Deactivated successfully. Jan 23 01:10:23.092649 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 23 01:10:23.095696 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 23 01:10:23.099747 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 23 01:10:23.100483 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 23 01:10:23.100756 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 01:10:23.103930 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 23 01:10:23.104150 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 23 01:10:23.110304 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 23 01:10:23.112775 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Jan 23 01:10:23.141108 ignition[1385]: INFO : Ignition 2.22.0 Jan 23 01:10:23.141108 ignition[1385]: INFO : Stage: umount Jan 23 01:10:23.141108 ignition[1385]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 01:10:23.141108 ignition[1385]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 23 01:10:23.141108 ignition[1385]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 23 01:10:23.145407 ignition[1385]: INFO : PUT result: OK Jan 23 01:10:23.143081 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 23 01:10:23.146931 ignition[1385]: INFO : umount: umount passed Jan 23 01:10:23.147535 ignition[1385]: INFO : Ignition finished successfully Jan 23 01:10:23.151050 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 23 01:10:23.151204 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 23 01:10:23.152403 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 23 01:10:23.152539 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 23 01:10:23.153680 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 23 01:10:23.153796 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 23 01:10:23.154673 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 23 01:10:23.154736 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 23 01:10:23.155313 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 23 01:10:23.155371 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 23 01:10:23.155981 systemd[1]: Stopped target network.target - Network. Jan 23 01:10:23.156560 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 23 01:10:23.156771 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 23 01:10:23.157272 systemd[1]: Stopped target paths.target - Path Units. Jan 23 01:10:23.157861 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 23 01:10:23.161736 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 01:10:23.162998 systemd[1]: Stopped target slices.target - Slice Units. Jan 23 01:10:23.163425 systemd[1]: Stopped target sockets.target - Socket Units. Jan 23 01:10:23.164132 systemd[1]: iscsid.socket: Deactivated successfully. Jan 23 01:10:23.164192 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 23 01:10:23.164825 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 23 01:10:23.164880 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 23 01:10:23.165449 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 23 01:10:23.165535 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 23 01:10:23.166306 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 23 01:10:23.166370 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 23 01:10:23.166983 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 23 01:10:23.167050 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 23 01:10:23.167821 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 23 01:10:23.168412 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 23 01:10:23.172810 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 23 01:10:23.172965 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. 
Jan 23 01:10:23.177034 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jan 23 01:10:23.177366 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 23 01:10:23.177503 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 23 01:10:23.180011 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jan 23 01:10:23.180867 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 23 01:10:23.181511 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 23 01:10:23.181629 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 23 01:10:23.183416 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 23 01:10:23.184021 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 23 01:10:23.184089 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 23 01:10:23.184718 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 23 01:10:23.184770 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 23 01:10:23.187702 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 23 01:10:23.187774 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 23 01:10:23.188487 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 23 01:10:23.188545 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 01:10:23.190824 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 01:10:23.197615 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jan 23 01:10:23.197719 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jan 23 01:10:23.206197 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 23 01:10:23.211028 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 01:10:23.212925 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 23 01:10:23.213035 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 23 01:10:23.215270 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 23 01:10:23.215308 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 01:10:23.215780 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 23 01:10:23.215849 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 23 01:10:23.216944 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 23 01:10:23.217006 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 23 01:10:23.218126 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 23 01:10:23.218197 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 23 01:10:23.220411 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 23 01:10:23.221192 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 23 01:10:23.221260 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 23 01:10:23.224318 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. 
Jan 23 01:10:23.224385 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 01:10:23.226780 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 23 01:10:23.226843 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 23 01:10:23.227673 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 23 01:10:23.227731 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 01:10:23.228851 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 01:10:23.228909 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 01:10:23.231617 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Jan 23 01:10:23.231692 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. Jan 23 01:10:23.231742 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jan 23 01:10:23.231793 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jan 23 01:10:23.232253 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 23 01:10:23.232384 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 23 01:10:23.240335 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 23 01:10:23.240435 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 23 01:10:23.241266 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 23 01:10:23.243087 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 23 01:10:23.265387 systemd[1]: Switching root. Jan 23 01:10:23.301745 systemd-journald[188]: Received SIGTERM from PID 1 (systemd). Jan 23 01:10:23.301812 systemd-journald[188]: Journal stopped Jan 23 01:10:24.792261 kernel: SELinux: policy capability network_peer_controls=1 Jan 23 01:10:24.792343 kernel: SELinux: policy capability open_perms=1 Jan 23 01:10:24.792369 kernel: SELinux: policy capability extended_socket_class=1 Jan 23 01:10:24.792392 kernel: SELinux: policy capability always_check_network=0 Jan 23 01:10:24.792411 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 23 01:10:24.792429 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 23 01:10:24.792449 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 23 01:10:24.792469 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 23 01:10:24.792494 kernel: SELinux: policy capability userspace_initial_context=0 Jan 23 01:10:24.792513 kernel: audit: type=1403 audit(1769130623.583:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 23 01:10:24.792539 systemd[1]: Successfully loaded SELinux policy in 68.865ms. Jan 23 01:10:24.794656 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 6.248ms. Jan 23 01:10:24.794696 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 23 01:10:24.794715 systemd[1]: Detected virtualization amazon. Jan 23 01:10:24.794735 systemd[1]: Detected architecture x86-64. 
Jan 23 01:10:24.794754 systemd[1]: Detected first boot. Jan 23 01:10:24.794775 systemd[1]: Initializing machine ID from VM UUID. Jan 23 01:10:24.794794 kernel: Guest personality initialized and is inactive Jan 23 01:10:24.794816 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Jan 23 01:10:24.794841 kernel: Initialized host personality Jan 23 01:10:24.794863 zram_generator::config[1429]: No configuration found. Jan 23 01:10:24.794885 kernel: NET: Registered PF_VSOCK protocol family Jan 23 01:10:24.794904 systemd[1]: Populated /etc with preset unit settings. Jan 23 01:10:24.794927 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jan 23 01:10:24.794946 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 23 01:10:24.794966 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 23 01:10:24.794992 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 23 01:10:24.795020 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 23 01:10:24.795041 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 23 01:10:24.795061 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 23 01:10:24.795083 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 23 01:10:24.795103 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 23 01:10:24.795124 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 23 01:10:24.795155 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 23 01:10:24.795174 systemd[1]: Created slice user.slice - User and Session Slice. Jan 23 01:10:24.795197 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 01:10:24.795217 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 01:10:24.795237 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 23 01:10:24.795255 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 23 01:10:24.795273 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 23 01:10:24.795290 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 23 01:10:24.795308 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 23 01:10:24.795325 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 01:10:24.795348 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 23 01:10:24.795366 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 23 01:10:24.795385 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 23 01:10:24.795403 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 23 01:10:24.795420 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 23 01:10:24.795436 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 01:10:24.795453 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 23 01:10:24.795471 systemd[1]: Reached target slices.target - Slice Units. 
Jan 23 01:10:24.795488 systemd[1]: Reached target swap.target - Swaps. Jan 23 01:10:24.795510 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 23 01:10:24.795526 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 23 01:10:24.795546 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 23 01:10:24.796620 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 23 01:10:24.796656 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 23 01:10:24.796677 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 01:10:24.796699 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 23 01:10:24.796720 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 23 01:10:24.796743 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 23 01:10:24.796769 systemd[1]: Mounting media.mount - External Media Directory... Jan 23 01:10:24.796790 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 01:10:24.796812 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 23 01:10:24.796833 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 23 01:10:24.796855 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 23 01:10:24.796880 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 23 01:10:24.796903 systemd[1]: Reached target machines.target - Containers. Jan 23 01:10:24.796921 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 23 01:10:24.796940 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 01:10:24.796963 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 23 01:10:24.796982 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 23 01:10:24.797001 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 23 01:10:24.797021 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 23 01:10:24.797041 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 23 01:10:24.797061 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 23 01:10:24.797080 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 23 01:10:24.797100 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 23 01:10:24.797123 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 23 01:10:24.797143 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 23 01:10:24.797162 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 23 01:10:24.797181 systemd[1]: Stopped systemd-fsck-usr.service. Jan 23 01:10:24.797202 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). 
Jan 23 01:10:24.797222 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 23 01:10:24.797246 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 23 01:10:24.797272 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 23 01:10:24.797292 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 23 01:10:24.797311 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 23 01:10:24.797331 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 23 01:10:24.797354 systemd[1]: verity-setup.service: Deactivated successfully. Jan 23 01:10:24.797373 systemd[1]: Stopped verity-setup.service. Jan 23 01:10:24.797396 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 01:10:24.797417 kernel: loop: module loaded Jan 23 01:10:24.797436 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 23 01:10:24.797454 kernel: fuse: init (API version 7.41) Jan 23 01:10:24.797473 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 23 01:10:24.797490 systemd[1]: Mounted media.mount - External Media Directory. Jan 23 01:10:24.797509 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 23 01:10:24.797526 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 23 01:10:24.797544 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 23 01:10:24.798184 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 01:10:24.798214 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 23 01:10:24.798276 systemd-journald[1512]: Collecting audit messages is disabled. Jan 23 01:10:24.798319 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 23 01:10:24.798337 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 23 01:10:24.798355 systemd-journald[1512]: Journal started Jan 23 01:10:24.798389 systemd-journald[1512]: Runtime Journal (/run/log/journal/ec28898ebd7ba67bc893337a6075f5b3) is 4.7M, max 38.1M, 33.3M free. Jan 23 01:10:24.465462 systemd[1]: Queued start job for default target multi-user.target. Jan 23 01:10:24.798754 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 23 01:10:24.475148 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jan 23 01:10:24.475659 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 23 01:10:24.803956 systemd[1]: Started systemd-journald.service - Journal Service. Jan 23 01:10:24.806476 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 23 01:10:24.810880 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 23 01:10:24.815125 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 23 01:10:24.815369 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 23 01:10:24.817066 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 23 01:10:24.817268 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 23 01:10:24.819483 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 23 01:10:24.821162 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. 
Jan 23 01:10:24.837048 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 23 01:10:24.841551 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 23 01:10:24.850713 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 23 01:10:24.860345 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 23 01:10:24.861675 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 23 01:10:24.861729 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 23 01:10:24.871481 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 23 01:10:24.886653 kernel: ACPI: bus type drm_connector registered Jan 23 01:10:24.887063 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 23 01:10:24.888764 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 01:10:24.893281 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 23 01:10:24.897796 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 23 01:10:24.899696 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 23 01:10:24.906775 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 23 01:10:24.908720 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 23 01:10:24.911759 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 23 01:10:24.917534 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 23 01:10:24.924483 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 23 01:10:24.929853 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 23 01:10:24.931311 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 23 01:10:24.933629 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 23 01:10:24.934990 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 23 01:10:24.941145 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 23 01:10:24.941991 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 23 01:10:24.957870 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 23 01:10:24.960927 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 23 01:10:24.972855 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 23 01:10:24.990858 systemd-journald[1512]: Time spent on flushing to /var/log/journal/ec28898ebd7ba67bc893337a6075f5b3 is 126.897ms for 1010 entries. Jan 23 01:10:24.990858 systemd-journald[1512]: System Journal (/var/log/journal/ec28898ebd7ba67bc893337a6075f5b3) is 8M, max 195.6M, 187.6M free. Jan 23 01:10:25.161847 systemd-journald[1512]: Received client request to flush runtime journal. 
Jan 23 01:10:25.161943 kernel: loop0: detected capacity change from 0 to 110984 Jan 23 01:10:25.161979 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 23 01:10:25.162006 kernel: loop1: detected capacity change from 0 to 128560 Jan 23 01:10:25.020184 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 01:10:25.055180 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 23 01:10:25.073014 systemd-tmpfiles[1561]: ACLs are not supported, ignoring. Jan 23 01:10:25.073037 systemd-tmpfiles[1561]: ACLs are not supported, ignoring. Jan 23 01:10:25.094411 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 23 01:10:25.098729 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 23 01:10:25.165789 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 23 01:10:25.168713 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 23 01:10:25.196759 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 23 01:10:25.199829 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 23 01:10:25.210598 kernel: loop2: detected capacity change from 0 to 219144 Jan 23 01:10:25.242654 systemd-tmpfiles[1586]: ACLs are not supported, ignoring. Jan 23 01:10:25.243079 systemd-tmpfiles[1586]: ACLs are not supported, ignoring. Jan 23 01:10:25.248529 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 01:10:25.261635 kernel: loop3: detected capacity change from 0 to 72368 Jan 23 01:10:25.347599 kernel: loop4: detected capacity change from 0 to 110984 Jan 23 01:10:25.378637 kernel: loop5: detected capacity change from 0 to 128560 Jan 23 01:10:25.402819 kernel: loop6: detected capacity change from 0 to 219144 Jan 23 01:10:25.454595 kernel: loop7: detected capacity change from 0 to 72368 Jan 23 01:10:25.478555 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 23 01:10:25.491277 (sd-merge)[1593]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Jan 23 01:10:25.498007 (sd-merge)[1593]: Merged extensions into '/usr'. Jan 23 01:10:25.503879 systemd[1]: Reload requested from client PID 1560 ('systemd-sysext') (unit systemd-sysext.service)... Jan 23 01:10:25.504047 systemd[1]: Reloading... Jan 23 01:10:25.658595 zram_generator::config[1619]: No configuration found. Jan 23 01:10:26.015051 systemd[1]: Reloading finished in 510 ms. Jan 23 01:10:26.034094 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 23 01:10:26.041588 systemd[1]: Starting ensure-sysext.service... Jan 23 01:10:26.043719 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 23 01:10:26.073140 systemd-tmpfiles[1671]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 23 01:10:26.073496 systemd[1]: Reload requested from client PID 1670 ('systemctl') (unit ensure-sysext.service)... Jan 23 01:10:26.073509 systemd[1]: Reloading... Jan 23 01:10:26.075212 systemd-tmpfiles[1671]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 23 01:10:26.076628 systemd-tmpfiles[1671]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
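[Editor's note] The (sd-merge) lines above are systemd-sysext discovering the four extension images ('containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami') and overlaying them onto /usr. The sketch below covers only the discovery half, listing *.raw images in directories systemd-sysext scans; the search-path list is my recollection of the systemd defaults (/etc/extensions is the one the Ignition files stage populated earlier), so treat it as an approximation rather than the exact merge logic.

```python
from pathlib import Path

# Approximate set of directories consulted for extension images;
# /etc/extensions is confirmed by the Ignition-written link above.
SEARCH_PATHS = [
    Path("/etc/extensions"),
    Path("/run/extensions"),
    Path("/var/lib/extensions"),
]

def discover_extensions() -> dict[str, Path]:
    found: dict[str, Path] = {}
    for directory in SEARCH_PATHS:
        if not directory.is_dir():
            continue
        for image in sorted(directory.glob("*.raw")):
            # First directory providing a name wins in this sketch; the stem
            # matches the extension names reported in "Using extensions ...".
            found.setdefault(image.stem, image.resolve())
    return found

if __name__ == "__main__":
    for name, path in discover_extensions().items():
        print(f"{name}: {path}")
```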
Jan 23 01:10:26.076984 systemd-tmpfiles[1671]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 23 01:10:26.079557 systemd-tmpfiles[1671]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 23 01:10:26.079829 systemd-tmpfiles[1671]: ACLs are not supported, ignoring. Jan 23 01:10:26.079881 systemd-tmpfiles[1671]: ACLs are not supported, ignoring. Jan 23 01:10:26.084737 systemd-tmpfiles[1671]: Detected autofs mount point /boot during canonicalization of boot. Jan 23 01:10:26.085266 systemd-tmpfiles[1671]: Skipping /boot Jan 23 01:10:26.095351 systemd-tmpfiles[1671]: Detected autofs mount point /boot during canonicalization of boot. Jan 23 01:10:26.095373 systemd-tmpfiles[1671]: Skipping /boot Jan 23 01:10:26.155594 zram_generator::config[1700]: No configuration found. Jan 23 01:10:26.191735 ldconfig[1555]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 23 01:10:26.361191 systemd[1]: Reloading finished in 287 ms. Jan 23 01:10:26.383998 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 23 01:10:26.384881 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 23 01:10:26.402352 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 01:10:26.413118 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 23 01:10:26.419896 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 23 01:10:26.428886 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 23 01:10:26.434883 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 23 01:10:26.437732 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 01:10:26.440806 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 23 01:10:26.447310 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 01:10:26.448164 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 01:10:26.451231 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 23 01:10:26.457138 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 23 01:10:26.464994 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 23 01:10:26.466775 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 01:10:26.466978 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 23 01:10:26.467127 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 01:10:26.478488 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 01:10:26.479397 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Jan 23 01:10:26.480584 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 01:10:26.480864 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 23 01:10:26.481129 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 01:10:26.495915 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 01:10:26.497063 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 01:10:26.505489 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 23 01:10:26.506825 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 01:10:26.507025 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 23 01:10:26.507320 systemd[1]: Reached target time-set.target - System Time Set. Jan 23 01:10:26.509139 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 01:10:26.519343 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 23 01:10:26.521466 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 23 01:10:26.522436 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 23 01:10:26.527986 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 23 01:10:26.528238 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 23 01:10:26.538457 systemd[1]: Finished ensure-sysext.service. Jan 23 01:10:26.540088 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 23 01:10:26.541169 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 23 01:10:26.543731 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 23 01:10:26.543964 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 23 01:10:26.555672 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 23 01:10:26.558792 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 23 01:10:26.558902 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 23 01:10:26.561862 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 23 01:10:26.565796 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 23 01:10:26.573776 systemd-udevd[1758]: Using default interface naming scheme 'v255'. Jan 23 01:10:26.601035 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 23 01:10:26.613712 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. 
Jan 23 01:10:26.614744 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 23 01:10:26.621831 augenrules[1794]: No rules Jan 23 01:10:26.623246 systemd[1]: audit-rules.service: Deactivated successfully. Jan 23 01:10:26.624093 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 23 01:10:26.637403 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 01:10:26.642773 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 23 01:10:26.660388 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 23 01:10:26.801860 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 23 01:10:26.820755 (udev-worker)[1820]: Network interface NamePolicy= disabled on kernel command line. Jan 23 01:10:26.974107 systemd-networkd[1801]: lo: Link UP Jan 23 01:10:26.974131 systemd-networkd[1801]: lo: Gained carrier Jan 23 01:10:26.978151 systemd-networkd[1801]: Enumeration completed Jan 23 01:10:26.980756 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 23 01:10:26.981553 systemd-networkd[1801]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 01:10:26.985056 systemd-networkd[1801]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 23 01:10:26.986274 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 23 01:10:26.992879 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 23 01:10:27.004552 systemd-networkd[1801]: eth0: Link UP Jan 23 01:10:27.004807 systemd-networkd[1801]: eth0: Gained carrier Jan 23 01:10:27.004841 systemd-networkd[1801]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 01:10:27.018755 systemd-networkd[1801]: eth0: DHCPv4 address 172.31.20.229/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jan 23 01:10:27.032829 systemd-resolved[1757]: Positive Trust Anchors: Jan 23 01:10:27.033211 systemd-resolved[1757]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 23 01:10:27.033342 systemd-resolved[1757]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 23 01:10:27.042213 systemd-resolved[1757]: Defaulting to hostname 'linux'. Jan 23 01:10:27.049584 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 23 01:10:27.051125 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 23 01:10:27.052527 systemd[1]: Reached target network.target - Network. Jan 23 01:10:27.053788 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Jan 23 01:10:27.054652 systemd[1]: Reached target sysinit.target - System Initialization. Jan 23 01:10:27.055829 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 23 01:10:27.056374 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 23 01:10:27.057482 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jan 23 01:10:27.058233 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 23 01:10:27.059316 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 23 01:10:27.060647 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 23 01:10:27.061137 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 23 01:10:27.061179 systemd[1]: Reached target paths.target - Path Units. Jan 23 01:10:27.061975 systemd[1]: Reached target timers.target - Timer Units. Jan 23 01:10:27.063877 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 23 01:10:27.067681 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 23 01:10:27.072116 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 23 01:10:27.074093 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 23 01:10:27.074749 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 23 01:10:27.086382 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 23 01:10:27.089162 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 23 01:10:27.092812 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 23 01:10:27.112024 systemd[1]: Reached target sockets.target - Socket Units. Jan 23 01:10:27.112658 systemd[1]: Reached target basic.target - Basic System. Jan 23 01:10:27.113280 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 23 01:10:27.113323 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 23 01:10:27.114765 systemd[1]: Starting containerd.service - containerd container runtime... Jan 23 01:10:27.119748 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 23 01:10:27.122787 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 23 01:10:27.126828 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 23 01:10:27.128757 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 23 01:10:27.133553 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 23 01:10:27.134252 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 23 01:10:27.140264 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jan 23 01:10:27.142421 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 23 01:10:27.154227 systemd[1]: Started ntpd.service - Network Time Service. Jan 23 01:10:27.160778 systemd[1]: Starting setup-oem.service - Setup OEM... 
Jan 23 01:10:27.168587 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 23 01:10:27.180774 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 23 01:10:27.190636 jq[1925]: false Jan 23 01:10:27.194516 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 23 01:10:27.199485 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 23 01:10:27.212042 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 23 01:10:27.226921 systemd[1]: Starting update-engine.service - Update Engine... Jan 23 01:10:27.234166 google_oslogin_nss_cache[1927]: oslogin_cache_refresh[1927]: Refreshing passwd entry cache Jan 23 01:10:27.240731 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 23 01:10:27.245829 oslogin_cache_refresh[1927]: Refreshing passwd entry cache Jan 23 01:10:27.250645 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 23 01:10:27.251694 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 23 01:10:27.251978 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 23 01:10:27.271272 google_oslogin_nss_cache[1927]: oslogin_cache_refresh[1927]: Failure getting users, quitting Jan 23 01:10:27.271272 google_oslogin_nss_cache[1927]: oslogin_cache_refresh[1927]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 23 01:10:27.271272 google_oslogin_nss_cache[1927]: oslogin_cache_refresh[1927]: Refreshing group entry cache Jan 23 01:10:27.270809 oslogin_cache_refresh[1927]: Failure getting users, quitting Jan 23 01:10:27.270835 oslogin_cache_refresh[1927]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 23 01:10:27.270897 oslogin_cache_refresh[1927]: Refreshing group entry cache Jan 23 01:10:27.281070 google_oslogin_nss_cache[1927]: oslogin_cache_refresh[1927]: Failure getting groups, quitting Jan 23 01:10:27.281070 google_oslogin_nss_cache[1927]: oslogin_cache_refresh[1927]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 23 01:10:27.280802 oslogin_cache_refresh[1927]: Failure getting groups, quitting Jan 23 01:10:27.280818 oslogin_cache_refresh[1927]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 23 01:10:27.283723 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jan 23 01:10:27.284941 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jan 23 01:10:27.286088 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 23 01:10:27.286377 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Jan 23 01:10:27.297679 extend-filesystems[1926]: Found /dev/nvme0n1p6 Jan 23 01:10:27.318383 update_engine[1938]: I20260123 01:10:27.317800 1938 main.cc:92] Flatcar Update Engine starting Jan 23 01:10:27.334540 extend-filesystems[1926]: Found /dev/nvme0n1p9 Jan 23 01:10:27.345392 jq[1942]: true Jan 23 01:10:27.346749 ntpd[1930]: 23 Jan 01:10:27 ntpd[1930]: ntpd 4.2.8p18@1.4062-o Thu Jan 22 21:35:52 UTC 2026 (1): Starting Jan 23 01:10:27.346749 ntpd[1930]: 23 Jan 01:10:27 ntpd[1930]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 23 01:10:27.346749 ntpd[1930]: 23 Jan 01:10:27 ntpd[1930]: ---------------------------------------------------- Jan 23 01:10:27.346749 ntpd[1930]: 23 Jan 01:10:27 ntpd[1930]: ntp-4 is maintained by Network Time Foundation, Jan 23 01:10:27.346749 ntpd[1930]: 23 Jan 01:10:27 ntpd[1930]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 23 01:10:27.346749 ntpd[1930]: 23 Jan 01:10:27 ntpd[1930]: corporation. Support and training for ntp-4 are Jan 23 01:10:27.346749 ntpd[1930]: 23 Jan 01:10:27 ntpd[1930]: available at https://www.nwtime.org/support Jan 23 01:10:27.346749 ntpd[1930]: 23 Jan 01:10:27 ntpd[1930]: ---------------------------------------------------- Jan 23 01:10:27.347213 extend-filesystems[1926]: Checking size of /dev/nvme0n1p9 Jan 23 01:10:27.374236 kernel: ntpd[1930]: segfault at 24 ip 000055bb03879aeb sp 00007ffd2e448fe0 error 4 in ntpd[68aeb,55bb03817000+80000] likely on CPU 1 (core 0, socket 0) Jan 23 01:10:27.374278 kernel: Code: 0f 1e fa 41 56 41 55 41 54 55 53 48 89 fb e8 8c eb f9 ff 44 8b 28 49 89 c4 e8 51 6b ff ff 48 89 c5 48 85 db 0f 84 a5 00 00 00 <0f> b7 0b 66 83 f9 02 0f 84 c0 00 00 00 66 83 f9 0a 74 32 66 85 c9 Jan 23 01:10:27.345269 ntpd[1930]: ntpd 4.2.8p18@1.4062-o Thu Jan 22 21:35:52 UTC 2026 (1): Starting Jan 23 01:10:27.364528 systemd-coredump[1974]: Process 1930 (ntpd) of user 0 terminated abnormally with signal 11/SEGV, processing... Jan 23 01:10:27.375298 ntpd[1930]: 23 Jan 01:10:27 ntpd[1930]: proto: precision = 0.075 usec (-24) Jan 23 01:10:27.375298 ntpd[1930]: 23 Jan 01:10:27 ntpd[1930]: basedate set to 2026-01-10 Jan 23 01:10:27.375298 ntpd[1930]: 23 Jan 01:10:27 ntpd[1930]: gps base set to 2026-01-11 (week 2401) Jan 23 01:10:27.375298 ntpd[1930]: 23 Jan 01:10:27 ntpd[1930]: Listen and drop on 0 v6wildcard [::]:123 Jan 23 01:10:27.375298 ntpd[1930]: 23 Jan 01:10:27 ntpd[1930]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 23 01:10:27.375298 ntpd[1930]: 23 Jan 01:10:27 ntpd[1930]: Listen normally on 2 lo 127.0.0.1:123 Jan 23 01:10:27.375298 ntpd[1930]: 23 Jan 01:10:27 ntpd[1930]: Listen normally on 3 eth0 172.31.20.229:123 Jan 23 01:10:27.375298 ntpd[1930]: 23 Jan 01:10:27 ntpd[1930]: Listen normally on 4 lo [::1]:123 Jan 23 01:10:27.375298 ntpd[1930]: 23 Jan 01:10:27 ntpd[1930]: bind(21) AF_INET6 [fe80::4ae:12ff:fe4e:fab5%2]:123 flags 0x811 failed: Cannot assign requested address Jan 23 01:10:27.375298 ntpd[1930]: 23 Jan 01:10:27 ntpd[1930]: unable to create socket on eth0 (5) for [fe80::4ae:12ff:fe4e:fab5%2]:123 Jan 23 01:10:27.345335 ntpd[1930]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 23 01:10:27.375752 systemd[1]: Created slice system-systemd\x2dcoredump.slice - Slice /system/systemd-coredump. Jan 23 01:10:27.345346 ntpd[1930]: ---------------------------------------------------- Jan 23 01:10:27.345356 ntpd[1930]: ntp-4 is maintained by Network Time Foundation, Jan 23 01:10:27.345364 ntpd[1930]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 23 01:10:27.345372 ntpd[1930]: corporation. 
Support and training for ntp-4 are Jan 23 01:10:27.345381 ntpd[1930]: available at https://www.nwtime.org/support Jan 23 01:10:27.345390 ntpd[1930]: ---------------------------------------------------- Jan 23 01:10:27.352202 ntpd[1930]: proto: precision = 0.075 usec (-24) Jan 23 01:10:27.355182 ntpd[1930]: basedate set to 2026-01-10 Jan 23 01:10:27.355251 ntpd[1930]: gps base set to 2026-01-11 (week 2401) Jan 23 01:10:27.381843 systemd[1]: Started systemd-coredump@0-1974-0.service - Process Core Dump (PID 1974/UID 0). Jan 23 01:10:27.401967 extend-filesystems[1926]: Resized partition /dev/nvme0n1p9 Jan 23 01:10:27.355398 ntpd[1930]: Listen and drop on 0 v6wildcard [::]:123 Jan 23 01:10:27.383120 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 23 01:10:27.406762 extend-filesystems[1979]: resize2fs 1.47.3 (8-Jul-2025) Jan 23 01:10:27.418608 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 3587067 blocks Jan 23 01:10:27.355430 ntpd[1930]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 23 01:10:27.390915 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 23 01:10:27.355666 ntpd[1930]: Listen normally on 2 lo 127.0.0.1:123 Jan 23 01:10:27.390950 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 23 01:10:27.355696 ntpd[1930]: Listen normally on 3 eth0 172.31.20.229:123 Jan 23 01:10:27.392274 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 23 01:10:27.355728 ntpd[1930]: Listen normally on 4 lo [::1]:123 Jan 23 01:10:27.392300 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 23 01:10:27.355766 ntpd[1930]: bind(21) AF_INET6 [fe80::4ae:12ff:fe4e:fab5%2]:123 flags 0x811 failed: Cannot assign requested address Jan 23 01:10:27.420054 (ntainerd)[1969]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 23 01:10:27.355789 ntpd[1930]: unable to create socket on eth0 (5) for [fe80::4ae:12ff:fe4e:fab5%2]:123 Jan 23 01:10:27.380265 dbus-daemon[1923]: [system] SELinux support is enabled Jan 23 01:10:27.440319 systemd[1]: motdgen.service: Deactivated successfully. Jan 23 01:10:27.441513 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 23 01:10:27.443996 dbus-daemon[1923]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1801 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 23 01:10:27.452767 kernel: mousedev: PS/2 mouse device common for all mice Jan 23 01:10:27.454887 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jan 23 01:10:27.461496 update_engine[1938]: I20260123 01:10:27.461428 1938 update_check_scheduler.cc:74] Next update check in 11m0s Jan 23 01:10:27.462632 systemd[1]: Started update-engine.service - Update Engine. Jan 23 01:10:27.467868 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 23 01:10:27.504102 jq[1971]: true Jan 23 01:10:27.511775 systemd[1]: Finished setup-oem.service - Setup OEM. 
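The extend-filesystems step above is an on-line ext4 grow: resize2fs 1.47.3 expands the root filesystem on nvme0n1p9 from 553472 to 3587067 blocks while it stays mounted. A quick sketch of the size arithmetic (Python, using the 4 KiB ext4 block size reported for this filesystem; illustration only):

    # Convert the block counts from the resize2fs/kernel messages into GiB.
    BLOCK_SIZE = 4096  # ext4 block size reported for nvme0n1p9

    def blocks_to_gib(blocks: int) -> float:
        return blocks * BLOCK_SIZE / 2**30

    for label, blocks in (("before", 553_472), ("after", 3_587_067)):
        print(f"{label}: {blocks_to_gib(blocks):.1f} GiB")

That is roughly 2.1 GiB growing to about 13.7 GiB during first boot.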
Jan 23 01:10:27.554293 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jan 23 01:10:27.557783 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 23 01:10:27.609417 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 3587067 Jan 23 01:10:27.609504 coreos-metadata[1922]: Jan 23 01:10:27.604 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 23 01:10:27.609504 coreos-metadata[1922]: Jan 23 01:10:27.605 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jan 23 01:10:27.659085 coreos-metadata[1922]: Jan 23 01:10:27.614 INFO Fetch successful Jan 23 01:10:27.659085 coreos-metadata[1922]: Jan 23 01:10:27.614 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jan 23 01:10:27.659085 coreos-metadata[1922]: Jan 23 01:10:27.618 INFO Fetch successful Jan 23 01:10:27.659085 coreos-metadata[1922]: Jan 23 01:10:27.618 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jan 23 01:10:27.659085 coreos-metadata[1922]: Jan 23 01:10:27.624 INFO Fetch successful Jan 23 01:10:27.659085 coreos-metadata[1922]: Jan 23 01:10:27.625 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jan 23 01:10:27.659085 coreos-metadata[1922]: Jan 23 01:10:27.639 INFO Fetch successful Jan 23 01:10:27.659085 coreos-metadata[1922]: Jan 23 01:10:27.639 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jan 23 01:10:27.659085 coreos-metadata[1922]: Jan 23 01:10:27.651 INFO Fetch failed with 404: resource not found Jan 23 01:10:27.659085 coreos-metadata[1922]: Jan 23 01:10:27.651 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jan 23 01:10:27.659085 coreos-metadata[1922]: Jan 23 01:10:27.652 INFO Fetch successful Jan 23 01:10:27.659085 coreos-metadata[1922]: Jan 23 01:10:27.652 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jan 23 01:10:27.659085 coreos-metadata[1922]: Jan 23 01:10:27.657 INFO Fetch successful Jan 23 01:10:27.659085 coreos-metadata[1922]: Jan 23 01:10:27.658 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jan 23 01:10:27.660199 coreos-metadata[1922]: Jan 23 01:10:27.659 INFO Fetch successful Jan 23 01:10:27.660199 coreos-metadata[1922]: Jan 23 01:10:27.659 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jan 23 01:10:27.661996 coreos-metadata[1922]: Jan 23 01:10:27.661 INFO Fetch successful Jan 23 01:10:27.661996 coreos-metadata[1922]: Jan 23 01:10:27.661 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jan 23 01:10:27.665597 coreos-metadata[1922]: Jan 23 01:10:27.662 INFO Fetch successful Jan 23 01:10:27.665347 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 23 01:10:27.665778 extend-filesystems[1979]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jan 23 01:10:27.665778 extend-filesystems[1979]: old_desc_blocks = 1, new_desc_blocks = 2 Jan 23 01:10:27.665778 extend-filesystems[1979]: The filesystem on /dev/nvme0n1p9 is now 3587067 (4k) blocks long. Jan 23 01:10:27.681065 extend-filesystems[1926]: Resized filesystem in /dev/nvme0n1p9 Jan 23 01:10:27.666328 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
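The coreos-metadata fetches above follow the EC2 IMDSv2 pattern: PUT to /latest/api/token to obtain a session token, then send that token as a header on every metadata GET (the 404 for meta-data/ipv6 is expected when the instance has no IPv6 address). A minimal sketch of the same token-then-fetch sequence using only the Python standard library; this is an illustration, not the agent's actual code:

    import urllib.request

    IMDS = "http://169.254.169.254"

    def imds_get(path: str) -> str:
        # Step 1: PUT for a session token (IMDSv2), as in the log's
        # "Putting http://169.254.169.254/latest/api/token" line.
        tok_req = urllib.request.Request(
            f"{IMDS}/latest/api/token", method="PUT",
            headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
        )
        token = urllib.request.urlopen(tok_req, timeout=2).read().decode()
        # Step 2: GET the metadata key, presenting the token.
        req = urllib.request.Request(
            f"{IMDS}/{path}",
            headers={"X-aws-ec2-metadata-token": token},
        )
        return urllib.request.urlopen(req, timeout=2).read().decode()

    for key in ("meta-data/instance-id", "meta-data/local-ipv4"):
        print(key, "=>", imds_get(f"2021-01-03/{key}"))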
Jan 23 01:10:27.709907 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 23 01:10:27.757666 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 23 01:10:27.762896 kernel: ACPI: button: Power Button [PWRF] Jan 23 01:10:27.762987 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4 Jan 23 01:10:27.763011 kernel: ACPI: button: Sleep Button [SLPF] Jan 23 01:10:27.797629 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Jan 23 01:10:27.842133 bash[2024]: Updated "/home/core/.ssh/authorized_keys" Jan 23 01:10:27.842400 systemd-logind[1934]: New seat seat0. Jan 23 01:10:27.844383 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 23 01:10:27.845653 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 23 01:10:27.851304 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 23 01:10:27.855707 systemd[1]: Starting sshkeys.service... Jan 23 01:10:27.856551 systemd[1]: Started systemd-logind.service - User Login Management. Jan 23 01:10:27.886264 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 23 01:10:27.890831 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 23 01:10:27.980590 coreos-metadata[2035]: Jan 23 01:10:27.980 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 23 01:10:27.981100 coreos-metadata[2035]: Jan 23 01:10:27.981 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jan 23 01:10:27.981936 coreos-metadata[2035]: Jan 23 01:10:27.981 INFO Fetch successful Jan 23 01:10:27.982013 coreos-metadata[2035]: Jan 23 01:10:27.981 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 23 01:10:27.982611 coreos-metadata[2035]: Jan 23 01:10:27.982 INFO Fetch successful Jan 23 01:10:27.985672 unknown[2035]: wrote ssh authorized keys file for user: core Jan 23 01:10:28.017752 locksmithd[1984]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 23 01:10:28.054832 update-ssh-keys[2043]: Updated "/home/core/.ssh/authorized_keys" Jan 23 01:10:28.054188 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 23 01:10:28.063642 systemd[1]: Finished sshkeys.service. Jan 23 01:10:28.116641 systemd-coredump[1976]: Process 1930 (ntpd) of user 0 dumped core. Module libnss_usrfiles.so.2 without build-id. Module libgcc_s.so.1 without build-id. Module ld-linux-x86-64.so.2 without build-id. Module libc.so.6 without build-id. Module libcrypto.so.3 without build-id. Module libm.so.6 without build-id. Module libcap.so.2 without build-id. Module ntpd without build-id. 
Stack trace of thread 1930: #0 0x000055bb03879aeb n/a (ntpd + 0x68aeb) #1 0x000055bb03822cdf n/a (ntpd + 0x11cdf) #2 0x000055bb03823575 n/a (ntpd + 0x12575) #3 0x000055bb0381ed8a n/a (ntpd + 0xdd8a) #4 0x000055bb038205d3 n/a (ntpd + 0xf5d3) #5 0x000055bb03828fd1 n/a (ntpd + 0x17fd1) #6 0x000055bb03819c2d n/a (ntpd + 0x8c2d) #7 0x00007f8b1b84a16c n/a (libc.so.6 + 0x2716c) #8 0x00007f8b1b84a229 __libc_start_main (libc.so.6 + 0x27229) #9 0x000055bb03819c55 n/a (ntpd + 0x8c55) ELF object binary architecture: AMD x86-64 Jan 23 01:10:28.118525 systemd[1]: ntpd.service: Main process exited, code=dumped, status=11/SEGV Jan 23 01:10:28.118747 systemd[1]: ntpd.service: Failed with result 'core-dump'. Jan 23 01:10:28.125364 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 23 01:10:28.133718 dbus-daemon[1923]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 23 01:10:28.134705 dbus-daemon[1923]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=1983 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 23 01:10:28.139602 systemd[1]: systemd-coredump@0-1974-0.service: Deactivated successfully. Jan 23 01:10:28.161169 systemd[1]: Starting polkit.service - Authorization Manager... Jan 23 01:10:28.196722 containerd[1969]: time="2026-01-23T01:10:28Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 23 01:10:28.203745 containerd[1969]: time="2026-01-23T01:10:28.203620954Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Jan 23 01:10:28.210334 sshd_keygen[1963]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 23 01:10:28.224757 systemd[1]: ntpd.service: Scheduled restart job, restart counter is at 1. Jan 23 01:10:28.229778 systemd[1]: Started ntpd.service - Network Time Service. Jan 23 01:10:28.242398 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
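At this point ntpd has crashed with SIGSEGV, systemd-coredump has recorded the core (module list and stack trace above), and systemd schedules a restart (counter 1). A minimal sketch, assuming systemd's coredumpctl tool is present on the host, of pulling that crash record back out for inspection; illustration only:

    import subprocess

    def ntpd_coredump_summary() -> str:
        # Ask systemd-coredump's CLI for the most recent ntpd crash,
        # i.e. the dump captured by systemd-coredump@0-1974-0.service above.
        result = subprocess.run(
            ["coredumpctl", "info", "ntpd"],
            capture_output=True, text=True, check=False,
        )
        return result.stdout or "no ntpd core dumps recorded"

    print(ntpd_coredump_summary())

If a debugger is installed, coredumpctl debug ntpd would open the same dump interactively.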
Jan 23 01:10:28.244049 containerd[1969]: time="2026-01-23T01:10:28.243998587Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="12.7µs" Jan 23 01:10:28.247935 containerd[1969]: time="2026-01-23T01:10:28.247880353Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 23 01:10:28.248067 containerd[1969]: time="2026-01-23T01:10:28.248047807Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 23 01:10:28.248505 containerd[1969]: time="2026-01-23T01:10:28.248476507Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 23 01:10:28.248898 containerd[1969]: time="2026-01-23T01:10:28.248871875Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 23 01:10:28.249089 containerd[1969]: time="2026-01-23T01:10:28.249068255Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 23 01:10:28.249268 containerd[1969]: time="2026-01-23T01:10:28.249246811Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 23 01:10:28.249445 containerd[1969]: time="2026-01-23T01:10:28.249427482Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 23 01:10:28.272454 containerd[1969]: time="2026-01-23T01:10:28.271288283Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 23 01:10:28.272454 containerd[1969]: time="2026-01-23T01:10:28.271335429Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 23 01:10:28.272454 containerd[1969]: time="2026-01-23T01:10:28.271358392Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 23 01:10:28.272454 containerd[1969]: time="2026-01-23T01:10:28.271373616Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 23 01:10:28.272454 containerd[1969]: time="2026-01-23T01:10:28.271509133Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 23 01:10:28.272454 containerd[1969]: time="2026-01-23T01:10:28.271771388Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 23 01:10:28.272454 containerd[1969]: time="2026-01-23T01:10:28.271810989Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 23 01:10:28.272454 containerd[1969]: time="2026-01-23T01:10:28.271825872Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 23 01:10:28.272454 containerd[1969]: time="2026-01-23T01:10:28.271870491Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 23 01:10:28.272454 containerd[1969]: 
time="2026-01-23T01:10:28.272182623Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 23 01:10:28.272454 containerd[1969]: time="2026-01-23T01:10:28.272253437Z" level=info msg="metadata content store policy set" policy=shared Jan 23 01:10:28.280998 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 01:10:28.281280 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 01:10:28.284993 containerd[1969]: time="2026-01-23T01:10:28.283596908Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 23 01:10:28.284993 containerd[1969]: time="2026-01-23T01:10:28.283699329Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 23 01:10:28.284993 containerd[1969]: time="2026-01-23T01:10:28.283727456Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 23 01:10:28.284993 containerd[1969]: time="2026-01-23T01:10:28.283792592Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 23 01:10:28.284993 containerd[1969]: time="2026-01-23T01:10:28.283810602Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 23 01:10:28.284993 containerd[1969]: time="2026-01-23T01:10:28.283828832Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 23 01:10:28.284993 containerd[1969]: time="2026-01-23T01:10:28.283848714Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 23 01:10:28.284993 containerd[1969]: time="2026-01-23T01:10:28.283865522Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 23 01:10:28.284993 containerd[1969]: time="2026-01-23T01:10:28.283882702Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 23 01:10:28.284993 containerd[1969]: time="2026-01-23T01:10:28.283897527Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 23 01:10:28.284993 containerd[1969]: time="2026-01-23T01:10:28.283913462Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 23 01:10:28.284993 containerd[1969]: time="2026-01-23T01:10:28.283934231Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 23 01:10:28.284993 containerd[1969]: time="2026-01-23T01:10:28.284087761Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 23 01:10:28.284993 containerd[1969]: time="2026-01-23T01:10:28.284118885Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 23 01:10:28.285517 containerd[1969]: time="2026-01-23T01:10:28.284142281Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 23 01:10:28.285517 containerd[1969]: time="2026-01-23T01:10:28.284158684Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 23 01:10:28.285517 containerd[1969]: time="2026-01-23T01:10:28.284175689Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 23 01:10:28.285517 containerd[1969]: 
time="2026-01-23T01:10:28.284190316Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 23 01:10:28.285517 containerd[1969]: time="2026-01-23T01:10:28.284206629Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 23 01:10:28.285517 containerd[1969]: time="2026-01-23T01:10:28.284222065Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 23 01:10:28.285517 containerd[1969]: time="2026-01-23T01:10:28.284239589Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 23 01:10:28.285517 containerd[1969]: time="2026-01-23T01:10:28.284255288Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 23 01:10:28.285517 containerd[1969]: time="2026-01-23T01:10:28.284270289Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 23 01:10:28.285517 containerd[1969]: time="2026-01-23T01:10:28.284331749Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 23 01:10:28.285517 containerd[1969]: time="2026-01-23T01:10:28.284350854Z" level=info msg="Start snapshots syncer" Jan 23 01:10:28.285517 containerd[1969]: time="2026-01-23T01:10:28.284404435Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 23 01:10:28.286070 containerd[1969]: time="2026-01-23T01:10:28.284847323Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 23 01:10:28.286070 containerd[1969]: time="2026-01-23T01:10:28.284923088Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 23 01:10:28.287178 
containerd[1969]: time="2026-01-23T01:10:28.286257653Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 23 01:10:28.287178 containerd[1969]: time="2026-01-23T01:10:28.286460751Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 23 01:10:28.287178 containerd[1969]: time="2026-01-23T01:10:28.286499149Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 23 01:10:28.287178 containerd[1969]: time="2026-01-23T01:10:28.286518218Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 23 01:10:28.287178 containerd[1969]: time="2026-01-23T01:10:28.286538614Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 23 01:10:28.287178 containerd[1969]: time="2026-01-23T01:10:28.286556819Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 23 01:10:28.287178 containerd[1969]: time="2026-01-23T01:10:28.286588958Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 23 01:10:28.287178 containerd[1969]: time="2026-01-23T01:10:28.286604655Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 23 01:10:28.287178 containerd[1969]: time="2026-01-23T01:10:28.286638928Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 23 01:10:28.287178 containerd[1969]: time="2026-01-23T01:10:28.286654932Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 23 01:10:28.287178 containerd[1969]: time="2026-01-23T01:10:28.286670875Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 23 01:10:28.287178 containerd[1969]: time="2026-01-23T01:10:28.286727427Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 23 01:10:28.287178 containerd[1969]: time="2026-01-23T01:10:28.286752716Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 23 01:10:28.287178 containerd[1969]: time="2026-01-23T01:10:28.286822766Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 23 01:10:28.286831 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Jan 23 01:10:28.288380 containerd[1969]: time="2026-01-23T01:10:28.286839157Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 23 01:10:28.288380 containerd[1969]: time="2026-01-23T01:10:28.286851355Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 23 01:10:28.288380 containerd[1969]: time="2026-01-23T01:10:28.286865596Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 23 01:10:28.288380 containerd[1969]: time="2026-01-23T01:10:28.286896265Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 23 01:10:28.288380 containerd[1969]: time="2026-01-23T01:10:28.286919525Z" level=info msg="runtime interface created" Jan 23 01:10:28.288380 containerd[1969]: time="2026-01-23T01:10:28.286927740Z" level=info msg="created NRI interface" Jan 23 01:10:28.288380 containerd[1969]: time="2026-01-23T01:10:28.286939869Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 23 01:10:28.288380 containerd[1969]: time="2026-01-23T01:10:28.286957115Z" level=info msg="Connect containerd service" Jan 23 01:10:28.288380 containerd[1969]: time="2026-01-23T01:10:28.286988113Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 23 01:10:28.289190 containerd[1969]: time="2026-01-23T01:10:28.289161616Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 23 01:10:28.294060 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 23 01:10:28.328253 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 23 01:10:28.353719 systemd-logind[1934]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 23 01:10:28.361127 systemd-logind[1934]: Watching system buttons on /dev/input/event2 (Power Button) Jan 23 01:10:28.362154 systemd-logind[1934]: Watching system buttons on /dev/input/event3 (Sleep Button) Jan 23 01:10:28.370365 systemd[1]: issuegen.service: Deactivated successfully. Jan 23 01:10:28.370717 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 23 01:10:28.375370 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 23 01:10:28.451975 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 23 01:10:28.455083 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 23 01:10:28.460680 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 23 01:10:28.461540 systemd[1]: Reached target getty.target - Login Prompts. Jan 23 01:10:28.515336 ntpd[2065]: ntpd 4.2.8p18@1.4062-o Thu Jan 22 21:35:52 UTC 2026 (1): Starting Jan 23 01:10:28.516174 ntpd[2065]: 23 Jan 01:10:28 ntpd[2065]: ntpd 4.2.8p18@1.4062-o Thu Jan 22 21:35:52 UTC 2026 (1): Starting Jan 23 01:10:28.516174 ntpd[2065]: 23 Jan 01:10:28 ntpd[2065]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 23 01:10:28.516174 ntpd[2065]: 23 Jan 01:10:28 ntpd[2065]: ---------------------------------------------------- Jan 23 01:10:28.516174 ntpd[2065]: 23 Jan 01:10:28 ntpd[2065]: ntp-4 is maintained by Network Time Foundation, Jan 23 01:10:28.516174 ntpd[2065]: 23 Jan 01:10:28 ntpd[2065]: Inc. 
(NTF), a non-profit 501(c)(3) public-benefit Jan 23 01:10:28.516174 ntpd[2065]: 23 Jan 01:10:28 ntpd[2065]: corporation. Support and training for ntp-4 are Jan 23 01:10:28.516174 ntpd[2065]: 23 Jan 01:10:28 ntpd[2065]: available at https://www.nwtime.org/support Jan 23 01:10:28.516174 ntpd[2065]: 23 Jan 01:10:28 ntpd[2065]: ---------------------------------------------------- Jan 23 01:10:28.515413 ntpd[2065]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 23 01:10:28.515423 ntpd[2065]: ---------------------------------------------------- Jan 23 01:10:28.515433 ntpd[2065]: ntp-4 is maintained by Network Time Foundation, Jan 23 01:10:28.515442 ntpd[2065]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 23 01:10:28.515451 ntpd[2065]: corporation. Support and training for ntp-4 are Jan 23 01:10:28.515460 ntpd[2065]: available at https://www.nwtime.org/support Jan 23 01:10:28.515469 ntpd[2065]: ---------------------------------------------------- Jan 23 01:10:28.517112 ntpd[2065]: proto: precision = 0.064 usec (-24) Jan 23 01:10:28.517209 ntpd[2065]: 23 Jan 01:10:28 ntpd[2065]: proto: precision = 0.064 usec (-24) Jan 23 01:10:28.517371 ntpd[2065]: basedate set to 2026-01-10 Jan 23 01:10:28.517698 ntpd[2065]: 23 Jan 01:10:28 ntpd[2065]: basedate set to 2026-01-10 Jan 23 01:10:28.517698 ntpd[2065]: 23 Jan 01:10:28 ntpd[2065]: gps base set to 2026-01-11 (week 2401) Jan 23 01:10:28.517388 ntpd[2065]: gps base set to 2026-01-11 (week 2401) Jan 23 01:10:28.518501 ntpd[2065]: Listen and drop on 0 v6wildcard [::]:123 Jan 23 01:10:28.519003 ntpd[2065]: 23 Jan 01:10:28 ntpd[2065]: Listen and drop on 0 v6wildcard [::]:123 Jan 23 01:10:28.519003 ntpd[2065]: 23 Jan 01:10:28 ntpd[2065]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 23 01:10:28.519003 ntpd[2065]: 23 Jan 01:10:28 ntpd[2065]: Listen normally on 2 lo 127.0.0.1:123 Jan 23 01:10:28.519003 ntpd[2065]: 23 Jan 01:10:28 ntpd[2065]: Listen normally on 3 eth0 172.31.20.229:123 Jan 23 01:10:28.519003 ntpd[2065]: 23 Jan 01:10:28 ntpd[2065]: Listen normally on 4 lo [::1]:123 Jan 23 01:10:28.519003 ntpd[2065]: 23 Jan 01:10:28 ntpd[2065]: bind(21) AF_INET6 [fe80::4ae:12ff:fe4e:fab5%2]:123 flags 0x811 failed: Cannot assign requested address Jan 23 01:10:28.519003 ntpd[2065]: 23 Jan 01:10:28 ntpd[2065]: unable to create socket on eth0 (5) for [fe80::4ae:12ff:fe4e:fab5%2]:123 Jan 23 01:10:28.520180 kernel: ntpd[2065]: segfault at 24 ip 0000560abb683aeb sp 00007ffce60d5210 error 4 in ntpd[68aeb,560abb621000+80000] likely on CPU 0 (core 0, socket 0) Jan 23 01:10:28.518547 ntpd[2065]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 23 01:10:28.518767 ntpd[2065]: Listen normally on 2 lo 127.0.0.1:123 Jan 23 01:10:28.518797 ntpd[2065]: Listen normally on 3 eth0 172.31.20.229:123 Jan 23 01:10:28.518830 ntpd[2065]: Listen normally on 4 lo [::1]:123 Jan 23 01:10:28.518861 ntpd[2065]: bind(21) AF_INET6 [fe80::4ae:12ff:fe4e:fab5%2]:123 flags 0x811 failed: Cannot assign requested address Jan 23 01:10:28.518883 ntpd[2065]: unable to create socket on eth0 (5) for [fe80::4ae:12ff:fe4e:fab5%2]:123 Jan 23 01:10:28.524417 kernel: Code: 0f 1e fa 41 56 41 55 41 54 55 53 48 89 fb e8 8c eb f9 ff 44 8b 28 49 89 c4 e8 51 6b ff ff 48 89 c5 48 85 db 0f 84 a5 00 00 00 <0f> b7 0b 66 83 f9 02 0f 84 c0 00 00 00 66 83 f9 0a 74 32 66 85 c9 Jan 23 01:10:28.540329 systemd-coredump[2136]: Process 2065 (ntpd) of user 0 terminated abnormally with signal 11/SEGV, processing... 
Jan 23 01:10:28.548862 systemd[1]: Started systemd-coredump@1-2136-0.service - Process Core Dump (PID 2136/UID 0). Jan 23 01:10:28.563591 containerd[1969]: time="2026-01-23T01:10:28.562977891Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 23 01:10:28.563794 containerd[1969]: time="2026-01-23T01:10:28.563770961Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 23 01:10:28.563974 containerd[1969]: time="2026-01-23T01:10:28.563876160Z" level=info msg="Start subscribing containerd event" Jan 23 01:10:28.566038 containerd[1969]: time="2026-01-23T01:10:28.565625113Z" level=info msg="Start recovering state" Jan 23 01:10:28.566038 containerd[1969]: time="2026-01-23T01:10:28.565765217Z" level=info msg="Start event monitor" Jan 23 01:10:28.566038 containerd[1969]: time="2026-01-23T01:10:28.565786077Z" level=info msg="Start cni network conf syncer for default" Jan 23 01:10:28.566038 containerd[1969]: time="2026-01-23T01:10:28.565818959Z" level=info msg="Start streaming server" Jan 23 01:10:28.566038 containerd[1969]: time="2026-01-23T01:10:28.565838644Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 23 01:10:28.566038 containerd[1969]: time="2026-01-23T01:10:28.565849145Z" level=info msg="runtime interface starting up..." Jan 23 01:10:28.566038 containerd[1969]: time="2026-01-23T01:10:28.565858180Z" level=info msg="starting plugins..." Jan 23 01:10:28.566038 containerd[1969]: time="2026-01-23T01:10:28.565884181Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 23 01:10:28.568515 systemd[1]: Started containerd.service - containerd container runtime. Jan 23 01:10:28.570630 containerd[1969]: time="2026-01-23T01:10:28.570106607Z" level=info msg="containerd successfully booted in 0.373992s" Jan 23 01:10:28.575657 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 01:10:28.775710 systemd-networkd[1801]: eth0: Gained IPv6LL Jan 23 01:10:28.790882 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 23 01:10:28.796360 systemd[1]: Reached target network-online.target - Network is Online. Jan 23 01:10:28.800866 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Jan 23 01:10:28.805898 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 01:10:28.810957 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 23 01:10:28.844474 polkitd[2057]: Started polkitd version 126 Jan 23 01:10:28.876265 polkitd[2057]: Loading rules from directory /etc/polkit-1/rules.d Jan 23 01:10:28.886078 polkitd[2057]: Loading rules from directory /run/polkit-1/rules.d Jan 23 01:10:28.886156 polkitd[2057]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jan 23 01:10:28.886646 polkitd[2057]: Loading rules from directory /usr/local/share/polkit-1/rules.d Jan 23 01:10:28.886685 polkitd[2057]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jan 23 01:10:28.886730 polkitd[2057]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 23 01:10:28.895005 polkitd[2057]: Finished loading, compiling and executing 2 rules Jan 23 01:10:28.895343 systemd[1]: Started polkit.service - Authorization Manager. 
Jan 23 01:10:28.899170 dbus-daemon[1923]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 23 01:10:28.899875 polkitd[2057]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 23 01:10:28.930063 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 23 01:10:28.936823 systemd-hostnamed[1983]: Hostname set to <ip-172-31-20-229> (transient) Jan 23 01:10:28.937452 systemd-resolved[1757]: System hostname changed to 'ip-172-31-20-229'. Jan 23 01:10:28.941225 systemd-coredump[2138]: Process 2065 (ntpd) of user 0 dumped core. Module libnss_usrfiles.so.2 without build-id. Module libgcc_s.so.1 without build-id. Module ld-linux-x86-64.so.2 without build-id. Module libc.so.6 without build-id. Module libcrypto.so.3 without build-id. Module libm.so.6 without build-id. Module libcap.so.2 without build-id. Module ntpd without build-id. Stack trace of thread 2065: #0 0x0000560abb683aeb n/a (ntpd + 0x68aeb) #1 0x0000560abb62ccdf n/a (ntpd + 0x11cdf) #2 0x0000560abb62d575 n/a (ntpd + 0x12575) #3 0x0000560abb628d8a n/a (ntpd + 0xdd8a) #4 0x0000560abb62a5d3 n/a (ntpd + 0xf5d3) #5 0x0000560abb632fd1 n/a (ntpd + 0x17fd1) #6 0x0000560abb623c2d n/a (ntpd + 0x8c2d) #7 0x00007fe69a11316c n/a (libc.so.6 + 0x2716c) #8 0x00007fe69a113229 __libc_start_main (libc.so.6 + 0x27229) #9 0x0000560abb623c55 n/a (ntpd + 0x8c55) ELF object binary architecture: AMD x86-64 Jan 23 01:10:28.943180 systemd[1]: ntpd.service: Main process exited, code=dumped, status=11/SEGV Jan 23 01:10:28.943364 systemd[1]: ntpd.service: Failed with result 'core-dump'. Jan 23 01:10:28.954401 systemd[1]: systemd-coredump@1-2136-0.service: Deactivated successfully. Jan 23 01:10:28.969043 amazon-ssm-agent[2187]: Initializing new seelog logger Jan 23 01:10:28.969449 amazon-ssm-agent[2187]: New Seelog Logger Creation Complete Jan 23 01:10:28.969449 amazon-ssm-agent[2187]: 2026/01/23 01:10:28 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 01:10:28.969449 amazon-ssm-agent[2187]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 01:10:28.969823 amazon-ssm-agent[2187]: 2026/01/23 01:10:28 processing appconfig overrides Jan 23 01:10:28.970365 amazon-ssm-agent[2187]: 2026/01/23 01:10:28 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 01:10:28.970365 amazon-ssm-agent[2187]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 01:10:28.970475 amazon-ssm-agent[2187]: 2026/01/23 01:10:28 processing appconfig overrides Jan 23 01:10:28.970781 amazon-ssm-agent[2187]: 2026/01/23 01:10:28 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 01:10:28.970781 amazon-ssm-agent[2187]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 01:10:28.970888 amazon-ssm-agent[2187]: 2026/01/23 01:10:28 processing appconfig overrides Jan 23 01:10:28.971376 amazon-ssm-agent[2187]: 2026-01-23 01:10:28.9702 INFO Proxy environment variables: Jan 23 01:10:28.973712 amazon-ssm-agent[2187]: 2026/01/23 01:10:28 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 01:10:28.973712 amazon-ssm-agent[2187]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Jan 23 01:10:28.974154 amazon-ssm-agent[2187]: 2026/01/23 01:10:28 processing appconfig overrides Jan 23 01:10:29.070945 amazon-ssm-agent[2187]: 2026-01-23 01:10:28.9702 INFO https_proxy: Jan 23 01:10:29.171064 amazon-ssm-agent[2187]: 2026-01-23 01:10:28.9702 INFO http_proxy: Jan 23 01:10:29.269632 amazon-ssm-agent[2187]: 2026-01-23 01:10:28.9702 INFO no_proxy: Jan 23 01:10:29.294400 systemd[1]: ntpd.service: Scheduled restart job, restart counter is at 2. Jan 23 01:10:29.298865 systemd[1]: Started ntpd.service - Network Time Service. Jan 23 01:10:29.336679 ntpd[2223]: ntpd 4.2.8p18@1.4062-o Thu Jan 22 21:35:52 UTC 2026 (1): Starting Jan 23 01:10:29.336758 ntpd[2223]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 23 01:10:29.337189 ntpd[2223]: 23 Jan 01:10:29 ntpd[2223]: ntpd 4.2.8p18@1.4062-o Thu Jan 22 21:35:52 UTC 2026 (1): Starting Jan 23 01:10:29.337189 ntpd[2223]: 23 Jan 01:10:29 ntpd[2223]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 23 01:10:29.337189 ntpd[2223]: 23 Jan 01:10:29 ntpd[2223]: ---------------------------------------------------- Jan 23 01:10:29.337189 ntpd[2223]: 23 Jan 01:10:29 ntpd[2223]: ntp-4 is maintained by Network Time Foundation, Jan 23 01:10:29.337189 ntpd[2223]: 23 Jan 01:10:29 ntpd[2223]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 23 01:10:29.337189 ntpd[2223]: 23 Jan 01:10:29 ntpd[2223]: corporation. Support and training for ntp-4 are Jan 23 01:10:29.337189 ntpd[2223]: 23 Jan 01:10:29 ntpd[2223]: available at https://www.nwtime.org/support Jan 23 01:10:29.337189 ntpd[2223]: 23 Jan 01:10:29 ntpd[2223]: ---------------------------------------------------- Jan 23 01:10:29.336768 ntpd[2223]: ---------------------------------------------------- Jan 23 01:10:29.338855 ntpd[2223]: 23 Jan 01:10:29 ntpd[2223]: proto: precision = 0.073 usec (-24) Jan 23 01:10:29.336777 ntpd[2223]: ntp-4 is maintained by Network Time Foundation, Jan 23 01:10:29.336785 ntpd[2223]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 23 01:10:29.336793 ntpd[2223]: corporation. 
Support and training for ntp-4 are Jan 23 01:10:29.336801 ntpd[2223]: available at https://www.nwtime.org/support Jan 23 01:10:29.339131 ntpd[2223]: 23 Jan 01:10:29 ntpd[2223]: basedate set to 2026-01-10 Jan 23 01:10:29.339131 ntpd[2223]: 23 Jan 01:10:29 ntpd[2223]: gps base set to 2026-01-11 (week 2401) Jan 23 01:10:29.336810 ntpd[2223]: ---------------------------------------------------- Jan 23 01:10:29.339263 ntpd[2223]: 23 Jan 01:10:29 ntpd[2223]: Listen and drop on 0 v6wildcard [::]:123 Jan 23 01:10:29.339263 ntpd[2223]: 23 Jan 01:10:29 ntpd[2223]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 23 01:10:29.337542 ntpd[2223]: proto: precision = 0.073 usec (-24) Jan 23 01:10:29.339058 ntpd[2223]: basedate set to 2026-01-10 Jan 23 01:10:29.339074 ntpd[2223]: gps base set to 2026-01-11 (week 2401) Jan 23 01:10:29.339466 ntpd[2223]: 23 Jan 01:10:29 ntpd[2223]: Listen normally on 2 lo 127.0.0.1:123 Jan 23 01:10:29.339466 ntpd[2223]: 23 Jan 01:10:29 ntpd[2223]: Listen normally on 3 eth0 172.31.20.229:123 Jan 23 01:10:29.339188 ntpd[2223]: Listen and drop on 0 v6wildcard [::]:123 Jan 23 01:10:29.339217 ntpd[2223]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 23 01:10:29.339406 ntpd[2223]: Listen normally on 2 lo 127.0.0.1:123 Jan 23 01:10:29.340719 ntpd[2223]: 23 Jan 01:10:29 ntpd[2223]: Listen normally on 4 lo [::1]:123 Jan 23 01:10:29.340719 ntpd[2223]: 23 Jan 01:10:29 ntpd[2223]: Listen normally on 5 eth0 [fe80::4ae:12ff:fe4e:fab5%2]:123 Jan 23 01:10:29.339434 ntpd[2223]: Listen normally on 3 eth0 172.31.20.229:123 Jan 23 01:10:29.340840 ntpd[2223]: 23 Jan 01:10:29 ntpd[2223]: Listening on routing socket on fd #22 for interface updates Jan 23 01:10:29.340651 ntpd[2223]: Listen normally on 4 lo [::1]:123 Jan 23 01:10:29.340698 ntpd[2223]: Listen normally on 5 eth0 [fe80::4ae:12ff:fe4e:fab5%2]:123 Jan 23 01:10:29.340729 ntpd[2223]: Listening on routing socket on fd #22 for interface updates Jan 23 01:10:29.344020 ntpd[2223]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 23 01:10:29.344719 ntpd[2223]: 23 Jan 01:10:29 ntpd[2223]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 23 01:10:29.344719 ntpd[2223]: 23 Jan 01:10:29 ntpd[2223]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 23 01:10:29.344061 ntpd[2223]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 23 01:10:29.368641 amazon-ssm-agent[2187]: 2026-01-23 01:10:28.9704 INFO Checking if agent identity type OnPrem can be assumed Jan 23 01:10:29.467580 amazon-ssm-agent[2187]: 2026-01-23 01:10:28.9706 INFO Checking if agent identity type EC2 can be assumed Jan 23 01:10:29.538592 amazon-ssm-agent[2187]: 2026/01/23 01:10:29 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 01:10:29.538744 amazon-ssm-agent[2187]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 01:10:29.538873 amazon-ssm-agent[2187]: 2026/01/23 01:10:29 processing appconfig overrides Jan 23 01:10:29.566317 amazon-ssm-agent[2187]: 2026-01-23 01:10:29.0198 INFO Agent will take identity from EC2 Jan 23 01:10:29.578505 amazon-ssm-agent[2187]: 2026-01-23 01:10:29.0216 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.3.0.0 Jan 23 01:10:29.578505 amazon-ssm-agent[2187]: 2026-01-23 01:10:29.0216 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Jan 23 01:10:29.578505 amazon-ssm-agent[2187]: 2026-01-23 01:10:29.0216 INFO [amazon-ssm-agent] Starting Core Agent Jan 23 01:10:29.578505 amazon-ssm-agent[2187]: 2026-01-23 01:10:29.0216 INFO [amazon-ssm-agent] Registrar detected. 
Attempting registration Jan 23 01:10:29.578505 amazon-ssm-agent[2187]: 2026-01-23 01:10:29.0216 INFO [Registrar] Starting registrar module Jan 23 01:10:29.578505 amazon-ssm-agent[2187]: 2026-01-23 01:10:29.0229 INFO [EC2Identity] Checking disk for registration info Jan 23 01:10:29.578505 amazon-ssm-agent[2187]: 2026-01-23 01:10:29.0229 INFO [EC2Identity] No registration info found for ec2 instance, attempting registration Jan 23 01:10:29.578505 amazon-ssm-agent[2187]: 2026-01-23 01:10:29.0229 INFO [EC2Identity] Generating registration keypair Jan 23 01:10:29.579128 amazon-ssm-agent[2187]: 2026-01-23 01:10:29.4930 INFO [EC2Identity] Checking write access before registering Jan 23 01:10:29.579128 amazon-ssm-agent[2187]: 2026-01-23 01:10:29.4935 INFO [EC2Identity] Registering EC2 instance with Systems Manager Jan 23 01:10:29.579128 amazon-ssm-agent[2187]: 2026-01-23 01:10:29.5384 INFO [EC2Identity] EC2 registration was successful. Jan 23 01:10:29.579128 amazon-ssm-agent[2187]: 2026-01-23 01:10:29.5384 INFO [amazon-ssm-agent] Registration attempted. Resuming core agent startup. Jan 23 01:10:29.579128 amazon-ssm-agent[2187]: 2026-01-23 01:10:29.5385 INFO [CredentialRefresher] credentialRefresher has started Jan 23 01:10:29.579128 amazon-ssm-agent[2187]: 2026-01-23 01:10:29.5385 INFO [CredentialRefresher] Starting credentials refresher loop Jan 23 01:10:29.579128 amazon-ssm-agent[2187]: 2026-01-23 01:10:29.5781 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jan 23 01:10:29.579128 amazon-ssm-agent[2187]: 2026-01-23 01:10:29.5784 INFO [CredentialRefresher] Credentials ready Jan 23 01:10:29.664409 amazon-ssm-agent[2187]: 2026-01-23 01:10:29.5786 INFO [CredentialRefresher] Next credential rotation will be in 29.999991829283335 minutes Jan 23 01:10:30.235719 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 01:10:30.236589 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 23 01:10:30.238636 systemd[1]: Startup finished in 2.626s (kernel) + 5.936s (initrd) + 6.721s (userspace) = 15.285s. Jan 23 01:10:30.245209 (kubelet)[2231]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 01:10:30.592776 amazon-ssm-agent[2187]: 2026-01-23 01:10:30.5923 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jan 23 01:10:30.694070 amazon-ssm-agent[2187]: 2026-01-23 01:10:30.5940 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2242) started Jan 23 01:10:30.794955 amazon-ssm-agent[2187]: 2026-01-23 01:10:30.5941 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jan 23 01:10:30.954058 kubelet[2231]: E0123 01:10:30.953764 2231 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 01:10:30.956278 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 01:10:30.956419 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 01:10:30.956914 systemd[1]: kubelet.service: Consumed 1.002s CPU time, 258.4M memory peak. 
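The kubelet failure above is the expected state for a node that has not yet joined a cluster: kubelet.service starts, finds no /var/lib/kubelet/config.yaml (the file kubeadm normally writes during init/join), and exits. A small pre-flight check in the same spirit; this is a sketch, not part of Flatcar or kubeadm:

    import sys
    from pathlib import Path

    KUBELET_CONFIG = Path("/var/lib/kubelet/config.yaml")

    def main() -> int:
        # Distinguish "node not yet joined" from a genuine kubelet problem.
        if not KUBELET_CONFIG.is_file():
            print(f"{KUBELET_CONFIG} missing: run kubeadm init/join first")
            return 1
        print(f"{KUBELET_CONFIG} present, {KUBELET_CONFIG.stat().st_size} bytes")
        return 0

    sys.exit(main())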
Jan 23 01:10:31.888350 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 23 01:10:31.889730 systemd[1]: Started sshd@0-172.31.20.229:22-68.220.241.50:57878.service - OpenSSH per-connection server daemon (68.220.241.50:57878). Jan 23 01:10:32.395487 sshd[2257]: Accepted publickey for core from 68.220.241.50 port 57878 ssh2: RSA SHA256:TjRK9JlVbt43cjCH9yNUnU6Xa0awhPYO1lN4GVbk/WA Jan 23 01:10:32.396160 sshd-session[2257]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:10:32.404537 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 23 01:10:32.406483 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 23 01:10:32.416311 systemd-logind[1934]: New session 1 of user core. Jan 23 01:10:32.430241 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 23 01:10:32.433547 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 23 01:10:32.448663 (systemd)[2262]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 23 01:10:32.451841 systemd-logind[1934]: New session c1 of user core. Jan 23 01:10:32.610583 systemd[2262]: Queued start job for default target default.target. Jan 23 01:10:32.626107 systemd[2262]: Created slice app.slice - User Application Slice. Jan 23 01:10:32.626303 systemd[2262]: Reached target paths.target - Paths. Jan 23 01:10:32.626378 systemd[2262]: Reached target timers.target - Timers. Jan 23 01:10:32.627792 systemd[2262]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 23 01:10:32.640858 systemd[2262]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 23 01:10:32.640950 systemd[2262]: Reached target sockets.target - Sockets. Jan 23 01:10:32.641008 systemd[2262]: Reached target basic.target - Basic System. Jan 23 01:10:32.641056 systemd[2262]: Reached target default.target - Main User Target. Jan 23 01:10:32.641097 systemd[2262]: Startup finished in 181ms. Jan 23 01:10:32.641299 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 23 01:10:32.649824 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 23 01:10:33.008396 systemd[1]: Started sshd@1-172.31.20.229:22-68.220.241.50:57892.service - OpenSSH per-connection server daemon (68.220.241.50:57892). Jan 23 01:10:33.503863 sshd[2273]: Accepted publickey for core from 68.220.241.50 port 57892 ssh2: RSA SHA256:TjRK9JlVbt43cjCH9yNUnU6Xa0awhPYO1lN4GVbk/WA Jan 23 01:10:33.505279 sshd-session[2273]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:10:33.511977 systemd-logind[1934]: New session 2 of user core. Jan 23 01:10:33.521820 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 23 01:10:33.853970 sshd[2276]: Connection closed by 68.220.241.50 port 57892 Jan 23 01:10:33.855660 sshd-session[2273]: pam_unix(sshd:session): session closed for user core Jan 23 01:10:33.859400 systemd[1]: sshd@1-172.31.20.229:22-68.220.241.50:57892.service: Deactivated successfully. Jan 23 01:10:33.861299 systemd[1]: session-2.scope: Deactivated successfully. Jan 23 01:10:33.863229 systemd-logind[1934]: Session 2 logged out. Waiting for processes to exit. Jan 23 01:10:33.865209 systemd-logind[1934]: Removed session 2. Jan 23 01:10:33.942691 systemd[1]: Started sshd@2-172.31.20.229:22-68.220.241.50:57896.service - OpenSSH per-connection server daemon (68.220.241.50:57896). 
Jan 23 01:10:34.436608 sshd[2282]: Accepted publickey for core from 68.220.241.50 port 57896 ssh2: RSA SHA256:TjRK9JlVbt43cjCH9yNUnU6Xa0awhPYO1lN4GVbk/WA Jan 23 01:10:34.437425 sshd-session[2282]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:10:34.443871 systemd-logind[1934]: New session 3 of user core. Jan 23 01:10:34.448822 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 23 01:10:34.785912 sshd[2285]: Connection closed by 68.220.241.50 port 57896 Jan 23 01:10:34.787771 sshd-session[2282]: pam_unix(sshd:session): session closed for user core Jan 23 01:10:34.792080 systemd[1]: sshd@2-172.31.20.229:22-68.220.241.50:57896.service: Deactivated successfully. Jan 23 01:10:34.792757 systemd-logind[1934]: Session 3 logged out. Waiting for processes to exit. Jan 23 01:10:34.794386 systemd[1]: session-3.scope: Deactivated successfully. Jan 23 01:10:34.796452 systemd-logind[1934]: Removed session 3. Jan 23 01:10:34.898771 systemd[1]: Started sshd@3-172.31.20.229:22-68.220.241.50:57908.service - OpenSSH per-connection server daemon (68.220.241.50:57908). Jan 23 01:10:35.452694 sshd[2291]: Accepted publickey for core from 68.220.241.50 port 57908 ssh2: RSA SHA256:TjRK9JlVbt43cjCH9yNUnU6Xa0awhPYO1lN4GVbk/WA Jan 23 01:10:35.454175 sshd-session[2291]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:10:35.459596 systemd-logind[1934]: New session 4 of user core. Jan 23 01:10:35.464853 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 23 01:10:35.834256 sshd[2294]: Connection closed by 68.220.241.50 port 57908 Jan 23 01:10:35.835933 sshd-session[2291]: pam_unix(sshd:session): session closed for user core Jan 23 01:10:35.839855 systemd[1]: sshd@3-172.31.20.229:22-68.220.241.50:57908.service: Deactivated successfully. Jan 23 01:10:35.842011 systemd[1]: session-4.scope: Deactivated successfully. Jan 23 01:10:35.842925 systemd-logind[1934]: Session 4 logged out. Waiting for processes to exit. Jan 23 01:10:35.844728 systemd-logind[1934]: Removed session 4. Jan 23 01:10:35.919841 systemd[1]: Started sshd@4-172.31.20.229:22-68.220.241.50:57910.service - OpenSSH per-connection server daemon (68.220.241.50:57910). Jan 23 01:10:37.194429 systemd-resolved[1757]: Clock change detected. Flushing caches. Jan 23 01:10:37.275358 sshd[2300]: Accepted publickey for core from 68.220.241.50 port 57910 ssh2: RSA SHA256:TjRK9JlVbt43cjCH9yNUnU6Xa0awhPYO1lN4GVbk/WA Jan 23 01:10:37.277040 sshd-session[2300]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:10:37.283345 systemd-logind[1934]: New session 5 of user core. Jan 23 01:10:37.292537 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 23 01:10:37.577776 sudo[2304]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 23 01:10:37.578069 sudo[2304]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 01:10:37.594339 sudo[2304]: pam_unix(sudo:session): session closed for user root Jan 23 01:10:37.672305 sshd[2303]: Connection closed by 68.220.241.50 port 57910 Jan 23 01:10:37.673633 sshd-session[2300]: pam_unix(sshd:session): session closed for user core Jan 23 01:10:37.678729 systemd[1]: sshd@4-172.31.20.229:22-68.220.241.50:57910.service: Deactivated successfully. Jan 23 01:10:37.680732 systemd[1]: session-5.scope: Deactivated successfully. Jan 23 01:10:37.681854 systemd-logind[1934]: Session 5 logged out. Waiting for processes to exit. 
Jan 23 01:10:37.683693 systemd-logind[1934]: Removed session 5.
Jan 23 01:10:37.770230 systemd[1]: Started sshd@5-172.31.20.229:22-68.220.241.50:57926.service - OpenSSH per-connection server daemon (68.220.241.50:57926).
Jan 23 01:10:38.263874 sshd[2310]: Accepted publickey for core from 68.220.241.50 port 57926 ssh2: RSA SHA256:TjRK9JlVbt43cjCH9yNUnU6Xa0awhPYO1lN4GVbk/WA
Jan 23 01:10:38.265199 sshd-session[2310]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 01:10:38.272189 systemd-logind[1934]: New session 6 of user core.
Jan 23 01:10:38.277519 systemd[1]: Started session-6.scope - Session 6 of User core.
Jan 23 01:10:38.536238 sudo[2315]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jan 23 01:10:38.536537 sudo[2315]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 23 01:10:38.544297 sudo[2315]: pam_unix(sudo:session): session closed for user root
Jan 23 01:10:38.550187 sudo[2314]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jan 23 01:10:38.550592 sudo[2314]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 23 01:10:38.561347 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 23 01:10:38.605834 augenrules[2337]: No rules
Jan 23 01:10:38.607122 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 23 01:10:38.607423 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 23 01:10:38.608789 sudo[2314]: pam_unix(sudo:session): session closed for user root
Jan 23 01:10:38.684841 sshd[2313]: Connection closed by 68.220.241.50 port 57926
Jan 23 01:10:38.686455 sshd-session[2310]: pam_unix(sshd:session): session closed for user core
Jan 23 01:10:38.691213 systemd[1]: sshd@5-172.31.20.229:22-68.220.241.50:57926.service: Deactivated successfully.
Jan 23 01:10:38.692894 systemd[1]: session-6.scope: Deactivated successfully.
Jan 23 01:10:38.693653 systemd-logind[1934]: Session 6 logged out. Waiting for processes to exit.
Jan 23 01:10:38.695048 systemd-logind[1934]: Removed session 6.
Jan 23 01:10:38.776008 systemd[1]: Started sshd@6-172.31.20.229:22-68.220.241.50:57930.service - OpenSSH per-connection server daemon (68.220.241.50:57930).
Jan 23 01:10:39.270712 sshd[2346]: Accepted publickey for core from 68.220.241.50 port 57930 ssh2: RSA SHA256:TjRK9JlVbt43cjCH9yNUnU6Xa0awhPYO1lN4GVbk/WA
Jan 23 01:10:39.272111 sshd-session[2346]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 01:10:39.278063 systemd-logind[1934]: New session 7 of user core.
Jan 23 01:10:39.284512 systemd[1]: Started session-7.scope - Session 7 of User core.
Jan 23 01:10:39.543362 sudo[2350]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jan 23 01:10:39.543745 sudo[2350]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 23 01:10:40.331728 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 01:10:40.332051 systemd[1]: kubelet.service: Consumed 1.002s CPU time, 258.4M memory peak.
Jan 23 01:10:40.335136 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 23 01:10:40.378070 systemd[1]: Reload requested from client PID 2383 ('systemctl') (unit session-7.scope)...
Jan 23 01:10:40.378089 systemd[1]: Reloading...
Jan 23 01:10:40.475307 zram_generator::config[2424]: No configuration found.
Jan 23 01:10:40.768099 systemd[1]: Reloading finished in 389 ms. Jan 23 01:10:40.834994 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 23 01:10:40.835103 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 23 01:10:40.835454 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 01:10:40.835519 systemd[1]: kubelet.service: Consumed 144ms CPU time, 98.2M memory peak. Jan 23 01:10:40.837260 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 01:10:41.052326 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 01:10:41.062964 (kubelet)[2490]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 23 01:10:41.106306 kubelet[2490]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 23 01:10:41.106306 kubelet[2490]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 01:10:41.106306 kubelet[2490]: I0123 01:10:41.105566 2490 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 01:10:41.741063 kubelet[2490]: I0123 01:10:41.741021 2490 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Jan 23 01:10:41.741063 kubelet[2490]: I0123 01:10:41.741051 2490 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 01:10:41.741063 kubelet[2490]: I0123 01:10:41.741079 2490 watchdog_linux.go:95] "Systemd watchdog is not enabled" Jan 23 01:10:41.741370 kubelet[2490]: I0123 01:10:41.741091 2490 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 23 01:10:41.741451 kubelet[2490]: I0123 01:10:41.741431 2490 server.go:956] "Client rotation is on, will bootstrap in background" Jan 23 01:10:41.746740 kubelet[2490]: I0123 01:10:41.746172 2490 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 23 01:10:41.754170 kubelet[2490]: I0123 01:10:41.754134 2490 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 23 01:10:41.757503 kubelet[2490]: I0123 01:10:41.757472 2490 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Jan 23 01:10:41.758649 kubelet[2490]: I0123 01:10:41.758599 2490 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 23 01:10:41.758872 kubelet[2490]: I0123 01:10:41.758643 2490 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172.31.20.229","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 23 01:10:41.759001 kubelet[2490]: I0123 01:10:41.758873 2490 topology_manager.go:138] "Creating topology manager with none policy" Jan 23 01:10:41.759001 kubelet[2490]: I0123 01:10:41.758888 2490 container_manager_linux.go:306] "Creating device plugin manager" Jan 23 01:10:41.759092 kubelet[2490]: I0123 01:10:41.759005 2490 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Jan 23 01:10:41.780007 kubelet[2490]: I0123 01:10:41.779970 2490 state_mem.go:36] "Initialized new in-memory state store" Jan 23 01:10:41.780294 kubelet[2490]: I0123 01:10:41.780164 2490 kubelet.go:475] "Attempting to sync node with API server" Jan 23 01:10:41.780294 kubelet[2490]: I0123 01:10:41.780181 2490 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 23 01:10:41.780294 kubelet[2490]: I0123 01:10:41.780204 2490 kubelet.go:387] "Adding apiserver pod source" Jan 23 01:10:41.780294 kubelet[2490]: I0123 01:10:41.780236 2490 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 23 01:10:41.780831 kubelet[2490]: E0123 01:10:41.780797 2490 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:10:41.780907 kubelet[2490]: E0123 01:10:41.780847 2490 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:10:41.784734 kubelet[2490]: I0123 01:10:41.784704 2490 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Jan 23 01:10:41.785249 kubelet[2490]: I0123 01:10:41.785214 2490 kubelet.go:940] "Not starting ClusterTrustBundle 
informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 23 01:10:41.785397 kubelet[2490]: I0123 01:10:41.785257 2490 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Jan 23 01:10:41.785959 kubelet[2490]: W0123 01:10:41.785922 2490 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 23 01:10:41.790043 kubelet[2490]: E0123 01:10:41.789991 2490 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 23 01:10:41.790436 kubelet[2490]: E0123 01:10:41.790415 2490 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes \"172.31.20.229\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 23 01:10:41.793299 kubelet[2490]: I0123 01:10:41.793151 2490 server.go:1262] "Started kubelet" Jan 23 01:10:41.794774 kubelet[2490]: I0123 01:10:41.794482 2490 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 23 01:10:41.796258 kubelet[2490]: I0123 01:10:41.795753 2490 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 23 01:10:41.798925 kubelet[2490]: I0123 01:10:41.798905 2490 server.go:310] "Adding debug handlers to kubelet server" Jan 23 01:10:41.804027 kubelet[2490]: I0123 01:10:41.803991 2490 factory.go:223] Registration of the systemd container factory successfully Jan 23 01:10:41.804145 kubelet[2490]: I0123 01:10:41.804111 2490 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 23 01:10:41.805424 kubelet[2490]: I0123 01:10:41.805383 2490 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 23 01:10:41.805533 kubelet[2490]: I0123 01:10:41.805436 2490 server_v1.go:49] "podresources" method="list" useActivePods=true Jan 23 01:10:41.805666 kubelet[2490]: I0123 01:10:41.805628 2490 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 23 01:10:41.805915 kubelet[2490]: I0123 01:10:41.805900 2490 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 23 01:10:41.809915 kubelet[2490]: I0123 01:10:41.809875 2490 volume_manager.go:313] "Starting Kubelet Volume Manager" Jan 23 01:10:41.810025 kubelet[2490]: E0123 01:10:41.809998 2490 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172.31.20.229\" not found" Jan 23 01:10:41.811090 kubelet[2490]: I0123 01:10:41.810641 2490 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 23 01:10:41.811090 kubelet[2490]: I0123 01:10:41.810699 2490 reconciler.go:29] "Reconciler: start to sync state" Jan 23 01:10:41.813146 kubelet[2490]: E0123 01:10:41.813094 2490 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 23 01:10:41.813236 kubelet[2490]: I0123 01:10:41.813150 2490 factory.go:223] Registration of the containerd container factory successfully Jan 23 01:10:41.829236 kubelet[2490]: E0123 01:10:41.829178 2490 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 23 01:10:41.831927 kubelet[2490]: E0123 01:10:41.831415 2490 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"172.31.20.229\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Jan 23 01:10:41.832457 kubelet[2490]: E0123 01:10:41.829608 2490 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.31.20.229.188d36f209564403 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.31.20.229,UID:172.31.20.229,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172.31.20.229,},FirstTimestamp:2026-01-23 01:10:41.793106947 +0000 UTC m=+0.726458517,LastTimestamp:2026-01-23 01:10:41.793106947 +0000 UTC m=+0.726458517,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.20.229,}" Jan 23 01:10:41.835875 kubelet[2490]: E0123 01:10:41.835641 2490 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.31.20.229.188d36f20a86f37e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.31.20.229,UID:172.31.20.229,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:172.31.20.229,},FirstTimestamp:2026-01-23 01:10:41.813074814 +0000 UTC m=+0.746426378,LastTimestamp:2026-01-23 01:10:41.813074814 +0000 UTC m=+0.746426378,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.20.229,}" Jan 23 01:10:41.843296 kubelet[2490]: I0123 01:10:41.843132 2490 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 23 01:10:41.843296 kubelet[2490]: I0123 01:10:41.843151 2490 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 23 01:10:41.843296 kubelet[2490]: I0123 01:10:41.843173 2490 state_mem.go:36] "Initialized new in-memory state store" Jan 23 01:10:41.849526 kubelet[2490]: I0123 01:10:41.849205 2490 policy_none.go:49] "None policy: Start" Jan 23 01:10:41.849526 kubelet[2490]: I0123 01:10:41.849234 2490 memory_manager.go:187] "Starting memorymanager" policy="None" Jan 23 01:10:41.849526 kubelet[2490]: I0123 01:10:41.849249 2490 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state 
checkpoint" Jan 23 01:10:41.856872 kubelet[2490]: I0123 01:10:41.856832 2490 policy_none.go:47] "Start" Jan 23 01:10:41.862853 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 23 01:10:41.874883 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 23 01:10:41.880490 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 23 01:10:41.891051 kubelet[2490]: E0123 01:10:41.890563 2490 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 23 01:10:41.891051 kubelet[2490]: I0123 01:10:41.890774 2490 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 01:10:41.891051 kubelet[2490]: I0123 01:10:41.890784 2490 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 01:10:41.892418 kubelet[2490]: I0123 01:10:41.892396 2490 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 01:10:41.893800 kubelet[2490]: E0123 01:10:41.893764 2490 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 23 01:10:41.893887 kubelet[2490]: E0123 01:10:41.893813 2490 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.31.20.229\" not found" Jan 23 01:10:41.924552 kubelet[2490]: I0123 01:10:41.924503 2490 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Jan 23 01:10:41.926180 kubelet[2490]: I0123 01:10:41.926145 2490 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Jan 23 01:10:41.926180 kubelet[2490]: I0123 01:10:41.926170 2490 status_manager.go:244] "Starting to sync pod status with apiserver" Jan 23 01:10:41.926342 kubelet[2490]: I0123 01:10:41.926195 2490 kubelet.go:2427] "Starting kubelet main sync loop" Jan 23 01:10:41.926342 kubelet[2490]: E0123 01:10:41.926243 2490 kubelet.go:2451] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Jan 23 01:10:41.993702 kubelet[2490]: I0123 01:10:41.993577 2490 kubelet_node_status.go:75] "Attempting to register node" node="172.31.20.229" Jan 23 01:10:42.005147 kubelet[2490]: I0123 01:10:42.004981 2490 kubelet_node_status.go:78] "Successfully registered node" node="172.31.20.229" Jan 23 01:10:42.005147 kubelet[2490]: E0123 01:10:42.005020 2490 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"172.31.20.229\": node \"172.31.20.229\" not found" Jan 23 01:10:42.029736 kubelet[2490]: E0123 01:10:42.029704 2490 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172.31.20.229\" not found" Jan 23 01:10:42.068416 sudo[2350]: pam_unix(sudo:session): session closed for user root Jan 23 01:10:42.130171 kubelet[2490]: E0123 01:10:42.130122 2490 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172.31.20.229\" not found" Jan 23 01:10:42.144302 sshd[2349]: Connection closed by 68.220.241.50 port 57930 Jan 23 01:10:42.144852 sshd-session[2346]: pam_unix(sshd:session): session closed for user core Jan 23 01:10:42.149400 systemd[1]: sshd@6-172.31.20.229:22-68.220.241.50:57930.service: Deactivated successfully. Jan 23 01:10:42.151492 systemd[1]: session-7.scope: Deactivated successfully. 
Jan 23 01:10:42.151775 systemd[1]: session-7.scope: Consumed 503ms CPU time, 73.8M memory peak.
Jan 23 01:10:42.153614 systemd-logind[1934]: Session 7 logged out. Waiting for processes to exit.
Jan 23 01:10:42.155669 systemd-logind[1934]: Removed session 7.
Jan 23 01:10:42.230890 kubelet[2490]: E0123 01:10:42.230831 2490 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172.31.20.229\" not found"
Jan 23 01:10:42.331332 kubelet[2490]: E0123 01:10:42.331195 2490 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172.31.20.229\" not found"
Jan 23 01:10:42.431987 kubelet[2490]: E0123 01:10:42.431945 2490 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172.31.20.229\" not found"
Jan 23 01:10:42.533042 kubelet[2490]: E0123 01:10:42.532997 2490 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172.31.20.229\" not found"
Jan 23 01:10:42.634172 kubelet[2490]: E0123 01:10:42.634119 2490 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172.31.20.229\" not found"
Jan 23 01:10:42.735215 kubelet[2490]: E0123 01:10:42.735168 2490 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172.31.20.229\" not found"
Jan 23 01:10:42.743647 kubelet[2490]: I0123 01:10:42.743427 2490 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Jan 23 01:10:42.743647 kubelet[2490]: I0123 01:10:42.743610 2490 reflector.go:568] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" err="very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received"
Jan 23 01:10:42.782074 kubelet[2490]: E0123 01:10:42.782027 2490 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:10:42.835875 kubelet[2490]: E0123 01:10:42.835828 2490 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172.31.20.229\" not found"
Jan 23 01:10:42.936742 kubelet[2490]: E0123 01:10:42.936628 2490 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172.31.20.229\" not found"
Jan 23 01:10:43.037298 kubelet[2490]: E0123 01:10:43.037236 2490 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172.31.20.229\" not found"
Jan 23 01:10:43.139221 kubelet[2490]: I0123 01:10:43.139191 2490 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24"
Jan 23 01:10:43.139683 containerd[1969]: time="2026-01-23T01:10:43.139522432Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jan 23 01:10:43.140039 kubelet[2490]: I0123 01:10:43.139678 2490 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Jan 23 01:10:43.782268 kubelet[2490]: I0123 01:10:43.782216 2490 apiserver.go:52] "Watching apiserver" Jan 23 01:10:43.782630 kubelet[2490]: E0123 01:10:43.782236 2490 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:10:43.797306 kubelet[2490]: E0123 01:10:43.797204 2490 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hgbcs" podUID="4c3cd778-85af-4d2a-a9f4-071f6d9e5f64" Jan 23 01:10:43.809329 systemd[1]: Created slice kubepods-besteffort-poda1ba0092_c6a8_4d92_9fb5_3a6ee16838d6.slice - libcontainer container kubepods-besteffort-poda1ba0092_c6a8_4d92_9fb5_3a6ee16838d6.slice. Jan 23 01:10:43.811329 kubelet[2490]: I0123 01:10:43.811270 2490 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 23 01:10:43.822309 kubelet[2490]: I0123 01:10:43.821689 2490 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a50c5366-bf6e-4623-b937-7340f896a885-lib-modules\") pod \"calico-node-vgcm7\" (UID: \"a50c5366-bf6e-4623-b937-7340f896a885\") " pod="calico-system/calico-node-vgcm7" Jan 23 01:10:43.822309 kubelet[2490]: I0123 01:10:43.821734 2490 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a50c5366-bf6e-4623-b937-7340f896a885-tigera-ca-bundle\") pod \"calico-node-vgcm7\" (UID: \"a50c5366-bf6e-4623-b937-7340f896a885\") " pod="calico-system/calico-node-vgcm7" Jan 23 01:10:43.822309 kubelet[2490]: I0123 01:10:43.821768 2490 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/a50c5366-bf6e-4623-b937-7340f896a885-var-run-calico\") pod \"calico-node-vgcm7\" (UID: \"a50c5366-bf6e-4623-b937-7340f896a885\") " pod="calico-system/calico-node-vgcm7" Jan 23 01:10:43.822309 kubelet[2490]: I0123 01:10:43.821797 2490 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/4c3cd778-85af-4d2a-a9f4-071f6d9e5f64-registration-dir\") pod \"csi-node-driver-hgbcs\" (UID: \"4c3cd778-85af-4d2a-a9f4-071f6d9e5f64\") " pod="calico-system/csi-node-driver-hgbcs" Jan 23 01:10:43.822309 kubelet[2490]: I0123 01:10:43.821828 2490 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9kfxp\" (UniqueName: \"kubernetes.io/projected/a1ba0092-c6a8-4d92-9fb5-3a6ee16838d6-kube-api-access-9kfxp\") pod \"kube-proxy-z55pm\" (UID: \"a1ba0092-c6a8-4d92-9fb5-3a6ee16838d6\") " pod="kube-system/kube-proxy-z55pm" Jan 23 01:10:43.822629 kubelet[2490]: I0123 01:10:43.821856 2490 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/a50c5366-bf6e-4623-b937-7340f896a885-cni-net-dir\") pod \"calico-node-vgcm7\" (UID: \"a50c5366-bf6e-4623-b937-7340f896a885\") " pod="calico-system/calico-node-vgcm7" Jan 23 01:10:43.822629 
kubelet[2490]: I0123 01:10:43.821882 2490 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/4c3cd778-85af-4d2a-a9f4-071f6d9e5f64-socket-dir\") pod \"csi-node-driver-hgbcs\" (UID: \"4c3cd778-85af-4d2a-a9f4-071f6d9e5f64\") " pod="calico-system/csi-node-driver-hgbcs" Jan 23 01:10:43.822629 kubelet[2490]: I0123 01:10:43.821938 2490 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dshp\" (UniqueName: \"kubernetes.io/projected/4c3cd778-85af-4d2a-a9f4-071f6d9e5f64-kube-api-access-2dshp\") pod \"csi-node-driver-hgbcs\" (UID: \"4c3cd778-85af-4d2a-a9f4-071f6d9e5f64\") " pod="calico-system/csi-node-driver-hgbcs" Jan 23 01:10:43.822629 kubelet[2490]: I0123 01:10:43.821974 2490 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a1ba0092-c6a8-4d92-9fb5-3a6ee16838d6-xtables-lock\") pod \"kube-proxy-z55pm\" (UID: \"a1ba0092-c6a8-4d92-9fb5-3a6ee16838d6\") " pod="kube-system/kube-proxy-z55pm" Jan 23 01:10:43.822629 kubelet[2490]: I0123 01:10:43.821997 2490 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a1ba0092-c6a8-4d92-9fb5-3a6ee16838d6-lib-modules\") pod \"kube-proxy-z55pm\" (UID: \"a1ba0092-c6a8-4d92-9fb5-3a6ee16838d6\") " pod="kube-system/kube-proxy-z55pm" Jan 23 01:10:43.822850 kubelet[2490]: I0123 01:10:43.822026 2490 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/a50c5366-bf6e-4623-b937-7340f896a885-cni-bin-dir\") pod \"calico-node-vgcm7\" (UID: \"a50c5366-bf6e-4623-b937-7340f896a885\") " pod="calico-system/calico-node-vgcm7" Jan 23 01:10:43.822850 kubelet[2490]: I0123 01:10:43.822053 2490 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/a50c5366-bf6e-4623-b937-7340f896a885-cni-log-dir\") pod \"calico-node-vgcm7\" (UID: \"a50c5366-bf6e-4623-b937-7340f896a885\") " pod="calico-system/calico-node-vgcm7" Jan 23 01:10:43.822850 kubelet[2490]: I0123 01:10:43.822080 2490 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/a50c5366-bf6e-4623-b937-7340f896a885-node-certs\") pod \"calico-node-vgcm7\" (UID: \"a50c5366-bf6e-4623-b937-7340f896a885\") " pod="calico-system/calico-node-vgcm7" Jan 23 01:10:43.822850 kubelet[2490]: I0123 01:10:43.822109 2490 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/a50c5366-bf6e-4623-b937-7340f896a885-policysync\") pod \"calico-node-vgcm7\" (UID: \"a50c5366-bf6e-4623-b937-7340f896a885\") " pod="calico-system/calico-node-vgcm7" Jan 23 01:10:43.822850 kubelet[2490]: I0123 01:10:43.822130 2490 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a50c5366-bf6e-4623-b937-7340f896a885-xtables-lock\") pod \"calico-node-vgcm7\" (UID: \"a50c5366-bf6e-4623-b937-7340f896a885\") " pod="calico-system/calico-node-vgcm7" Jan 23 01:10:43.823038 kubelet[2490]: I0123 01:10:43.822159 2490 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jx2cs\" (UniqueName: \"kubernetes.io/projected/a50c5366-bf6e-4623-b937-7340f896a885-kube-api-access-jx2cs\") pod \"calico-node-vgcm7\" (UID: \"a50c5366-bf6e-4623-b937-7340f896a885\") " pod="calico-system/calico-node-vgcm7" Jan 23 01:10:43.823038 kubelet[2490]: I0123 01:10:43.822186 2490 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4c3cd778-85af-4d2a-a9f4-071f6d9e5f64-kubelet-dir\") pod \"csi-node-driver-hgbcs\" (UID: \"4c3cd778-85af-4d2a-a9f4-071f6d9e5f64\") " pod="calico-system/csi-node-driver-hgbcs" Jan 23 01:10:43.823038 kubelet[2490]: I0123 01:10:43.822244 2490 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/4c3cd778-85af-4d2a-a9f4-071f6d9e5f64-varrun\") pod \"csi-node-driver-hgbcs\" (UID: \"4c3cd778-85af-4d2a-a9f4-071f6d9e5f64\") " pod="calico-system/csi-node-driver-hgbcs" Jan 23 01:10:43.823038 kubelet[2490]: I0123 01:10:43.822271 2490 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/a50c5366-bf6e-4623-b937-7340f896a885-flexvol-driver-host\") pod \"calico-node-vgcm7\" (UID: \"a50c5366-bf6e-4623-b937-7340f896a885\") " pod="calico-system/calico-node-vgcm7" Jan 23 01:10:43.823038 kubelet[2490]: I0123 01:10:43.822312 2490 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a50c5366-bf6e-4623-b937-7340f896a885-var-lib-calico\") pod \"calico-node-vgcm7\" (UID: \"a50c5366-bf6e-4623-b937-7340f896a885\") " pod="calico-system/calico-node-vgcm7" Jan 23 01:10:43.825072 kubelet[2490]: I0123 01:10:43.822338 2490 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a1ba0092-c6a8-4d92-9fb5-3a6ee16838d6-kube-proxy\") pod \"kube-proxy-z55pm\" (UID: \"a1ba0092-c6a8-4d92-9fb5-3a6ee16838d6\") " pod="kube-system/kube-proxy-z55pm" Jan 23 01:10:43.827425 systemd[1]: Created slice kubepods-besteffort-poda50c5366_bf6e_4623_b937_7340f896a885.slice - libcontainer container kubepods-besteffort-poda50c5366_bf6e_4623_b937_7340f896a885.slice. Jan 23 01:10:43.925298 kubelet[2490]: E0123 01:10:43.925022 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:10:43.925298 kubelet[2490]: W0123 01:10:43.925045 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:10:43.925298 kubelet[2490]: E0123 01:10:43.925064 2490 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:10:43.925298 kubelet[2490]: E0123 01:10:43.925224 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:10:43.925298 kubelet[2490]: W0123 01:10:43.925230 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:10:43.925298 kubelet[2490]: E0123 01:10:43.925238 2490 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:10:43.925543 kubelet[2490]: E0123 01:10:43.925380 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:10:43.925543 kubelet[2490]: W0123 01:10:43.925386 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:10:43.925543 kubelet[2490]: E0123 01:10:43.925393 2490 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:10:43.925615 kubelet[2490]: E0123 01:10:43.925571 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:10:43.925615 kubelet[2490]: W0123 01:10:43.925597 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:10:43.925615 kubelet[2490]: E0123 01:10:43.925605 2490 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:10:43.925813 kubelet[2490]: E0123 01:10:43.925800 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:10:43.925841 kubelet[2490]: W0123 01:10:43.925813 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:10:43.925841 kubelet[2490]: E0123 01:10:43.925822 2490 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:10:43.925966 kubelet[2490]: E0123 01:10:43.925955 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:10:43.925995 kubelet[2490]: W0123 01:10:43.925966 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:10:43.925995 kubelet[2490]: E0123 01:10:43.925974 2490 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:10:43.926120 kubelet[2490]: E0123 01:10:43.926110 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:10:43.926146 kubelet[2490]: W0123 01:10:43.926120 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:10:43.926146 kubelet[2490]: E0123 01:10:43.926127 2490 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:10:43.928299 kubelet[2490]: E0123 01:10:43.926642 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:10:43.928299 kubelet[2490]: W0123 01:10:43.926657 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:10:43.928299 kubelet[2490]: E0123 01:10:43.926758 2490 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:10:43.928299 kubelet[2490]: E0123 01:10:43.927104 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:10:43.928299 kubelet[2490]: W0123 01:10:43.927112 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:10:43.928299 kubelet[2490]: E0123 01:10:43.927121 2490 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:10:43.928501 kubelet[2490]: E0123 01:10:43.928331 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:10:43.928501 kubelet[2490]: W0123 01:10:43.928339 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:10:43.928501 kubelet[2490]: E0123 01:10:43.928349 2490 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:10:43.928614 kubelet[2490]: E0123 01:10:43.928508 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:10:43.928614 kubelet[2490]: W0123 01:10:43.928514 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:10:43.928614 kubelet[2490]: E0123 01:10:43.928521 2490 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:10:43.928683 kubelet[2490]: E0123 01:10:43.928663 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:10:43.928683 kubelet[2490]: W0123 01:10:43.928668 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:10:43.928683 kubelet[2490]: E0123 01:10:43.928675 2490 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:10:43.929003 kubelet[2490]: E0123 01:10:43.928988 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:10:43.929131 kubelet[2490]: W0123 01:10:43.929006 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:10:43.929131 kubelet[2490]: E0123 01:10:43.929127 2490 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:10:43.931678 kubelet[2490]: E0123 01:10:43.931648 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:10:43.931678 kubelet[2490]: W0123 01:10:43.931664 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:10:43.931678 kubelet[2490]: E0123 01:10:43.931676 2490 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:10:43.931879 kubelet[2490]: E0123 01:10:43.931862 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:10:43.931879 kubelet[2490]: W0123 01:10:43.931874 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:10:43.931953 kubelet[2490]: E0123 01:10:43.931882 2490 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:10:43.932476 kubelet[2490]: E0123 01:10:43.932410 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:10:43.932476 kubelet[2490]: W0123 01:10:43.932421 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:10:43.932476 kubelet[2490]: E0123 01:10:43.932430 2490 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:10:43.934750 kubelet[2490]: E0123 01:10:43.934329 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:10:43.934750 kubelet[2490]: W0123 01:10:43.934342 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:10:43.934750 kubelet[2490]: E0123 01:10:43.934353 2490 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:10:43.934750 kubelet[2490]: E0123 01:10:43.934570 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:10:43.934750 kubelet[2490]: W0123 01:10:43.934577 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:10:43.934750 kubelet[2490]: E0123 01:10:43.934585 2490 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:10:43.934750 kubelet[2490]: E0123 01:10:43.934709 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:10:43.934750 kubelet[2490]: W0123 01:10:43.934714 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:10:43.934750 kubelet[2490]: E0123 01:10:43.934720 2490 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:10:43.935098 kubelet[2490]: E0123 01:10:43.934893 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:10:43.935098 kubelet[2490]: W0123 01:10:43.934900 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:10:43.935098 kubelet[2490]: E0123 01:10:43.934908 2490 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:10:43.938297 kubelet[2490]: E0123 01:10:43.936501 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:10:43.938297 kubelet[2490]: W0123 01:10:43.936517 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:10:43.938297 kubelet[2490]: E0123 01:10:43.936527 2490 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:10:43.938297 kubelet[2490]: E0123 01:10:43.936687 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:10:43.938297 kubelet[2490]: W0123 01:10:43.936693 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:10:43.938297 kubelet[2490]: E0123 01:10:43.936700 2490 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:10:43.938533 kubelet[2490]: E0123 01:10:43.938398 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:10:43.938533 kubelet[2490]: W0123 01:10:43.938409 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:10:43.938533 kubelet[2490]: E0123 01:10:43.938423 2490 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:10:43.938629 kubelet[2490]: E0123 01:10:43.938614 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:10:43.938629 kubelet[2490]: W0123 01:10:43.938621 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:10:43.938703 kubelet[2490]: E0123 01:10:43.938630 2490 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:10:43.940403 kubelet[2490]: E0123 01:10:43.940360 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:10:43.940403 kubelet[2490]: W0123 01:10:43.940370 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:10:43.940403 kubelet[2490]: E0123 01:10:43.940380 2490 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:10:43.940603 kubelet[2490]: E0123 01:10:43.940537 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:10:43.940603 kubelet[2490]: W0123 01:10:43.940543 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:10:43.940603 kubelet[2490]: E0123 01:10:43.940550 2490 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:10:43.940885 kubelet[2490]: E0123 01:10:43.940705 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:10:43.940885 kubelet[2490]: W0123 01:10:43.940711 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:10:43.940885 kubelet[2490]: E0123 01:10:43.940718 2490 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:10:43.940885 kubelet[2490]: E0123 01:10:43.940833 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:10:43.940885 kubelet[2490]: W0123 01:10:43.940838 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:10:43.940885 kubelet[2490]: E0123 01:10:43.940844 2490 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:10:43.950133 kubelet[2490]: E0123 01:10:43.940953 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:10:43.950133 kubelet[2490]: W0123 01:10:43.940958 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:10:43.950133 kubelet[2490]: E0123 01:10:43.940964 2490 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:10:43.950133 kubelet[2490]: E0123 01:10:43.941353 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:10:43.950133 kubelet[2490]: W0123 01:10:43.941361 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:10:43.950133 kubelet[2490]: E0123 01:10:43.941369 2490 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:10:43.950133 kubelet[2490]: E0123 01:10:43.941538 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:10:43.950133 kubelet[2490]: W0123 01:10:43.941543 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:10:43.950133 kubelet[2490]: E0123 01:10:43.941550 2490 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:10:43.950133 kubelet[2490]: E0123 01:10:43.941792 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:10:43.950564 kubelet[2490]: W0123 01:10:43.941807 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:10:43.950564 kubelet[2490]: E0123 01:10:43.941824 2490 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:10:43.950564 kubelet[2490]: E0123 01:10:43.942072 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:10:43.950564 kubelet[2490]: W0123 01:10:43.942081 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:10:43.950564 kubelet[2490]: E0123 01:10:43.942092 2490 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:10:43.950564 kubelet[2490]: E0123 01:10:43.942338 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:10:43.950564 kubelet[2490]: W0123 01:10:43.942348 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:10:43.950564 kubelet[2490]: E0123 01:10:43.942359 2490 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:10:43.950564 kubelet[2490]: E0123 01:10:43.942651 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:10:43.950564 kubelet[2490]: W0123 01:10:43.942664 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:10:43.950920 kubelet[2490]: E0123 01:10:43.942678 2490 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:10:43.950920 kubelet[2490]: E0123 01:10:43.942926 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:10:43.950920 kubelet[2490]: W0123 01:10:43.942937 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:10:43.950920 kubelet[2490]: E0123 01:10:43.942950 2490 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:10:43.950920 kubelet[2490]: E0123 01:10:43.943170 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:10:43.950920 kubelet[2490]: W0123 01:10:43.943179 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:10:43.950920 kubelet[2490]: E0123 01:10:43.943190 2490 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:10:43.950920 kubelet[2490]: E0123 01:10:43.943501 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:10:43.950920 kubelet[2490]: W0123 01:10:43.943511 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:10:43.950920 kubelet[2490]: E0123 01:10:43.943522 2490 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:10:43.951300 kubelet[2490]: E0123 01:10:43.943740 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:10:43.951300 kubelet[2490]: W0123 01:10:43.943748 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:10:43.951300 kubelet[2490]: E0123 01:10:43.943759 2490 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:10:43.951300 kubelet[2490]: E0123 01:10:43.943990 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:10:43.951300 kubelet[2490]: W0123 01:10:43.944000 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:10:43.951300 kubelet[2490]: E0123 01:10:43.944010 2490 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:10:43.951300 kubelet[2490]: E0123 01:10:43.944547 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:10:43.951300 kubelet[2490]: W0123 01:10:43.944558 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:10:43.951300 kubelet[2490]: E0123 01:10:43.944570 2490 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:10:43.960067 kubelet[2490]: E0123 01:10:43.959976 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:10:43.960067 kubelet[2490]: W0123 01:10:43.959999 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:10:43.960067 kubelet[2490]: E0123 01:10:43.960022 2490 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:10:44.125468 containerd[1969]: time="2026-01-23T01:10:44.125417150Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-z55pm,Uid:a1ba0092-c6a8-4d92-9fb5-3a6ee16838d6,Namespace:kube-system,Attempt:0,}" Jan 23 01:10:44.133757 containerd[1969]: time="2026-01-23T01:10:44.133654198Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-vgcm7,Uid:a50c5366-bf6e-4623-b937-7340f896a885,Namespace:calico-system,Attempt:0,}" Jan 23 01:10:44.689591 containerd[1969]: time="2026-01-23T01:10:44.689535037Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 01:10:44.694723 containerd[1969]: time="2026-01-23T01:10:44.694661557Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jan 23 01:10:44.696918 containerd[1969]: time="2026-01-23T01:10:44.696847663Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 01:10:44.699458 containerd[1969]: time="2026-01-23T01:10:44.699394182Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 01:10:44.701037 containerd[1969]: time="2026-01-23T01:10:44.700990981Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 23 01:10:44.704084 containerd[1969]: time="2026-01-23T01:10:44.704046614Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 01:10:44.706067 containerd[1969]: time="2026-01-23T01:10:44.706033070Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 577.219052ms" Jan 23 01:10:44.710855 containerd[1969]: time="2026-01-23T01:10:44.710654185Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 573.520447ms" Jan 
23 01:10:44.754982 containerd[1969]: time="2026-01-23T01:10:44.754918340Z" level=info msg="connecting to shim 8df82976aad40aebe10c2f431b2f04ce91f498a7c1f92ab38a02281f4fda9ceb" address="unix:///run/containerd/s/3651dbcf0efe66537fa6a76ebe5d3960a73efea9c3934bc84b9551b88765e2bb" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:10:44.755353 containerd[1969]: time="2026-01-23T01:10:44.755189615Z" level=info msg="connecting to shim 4c1ee7200185ee4992f7f3acd12252380b0268a74d4e57afc4feada3c9e70a51" address="unix:///run/containerd/s/067c36bcb064027ec0dfb6d15d453d581185219822b978d2b56ae45bf281e9e7" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:10:44.782586 kubelet[2490]: E0123 01:10:44.782551 2490 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:10:44.807546 systemd[1]: Started cri-containerd-4c1ee7200185ee4992f7f3acd12252380b0268a74d4e57afc4feada3c9e70a51.scope - libcontainer container 4c1ee7200185ee4992f7f3acd12252380b0268a74d4e57afc4feada3c9e70a51. Jan 23 01:10:44.808814 systemd[1]: Started cri-containerd-8df82976aad40aebe10c2f431b2f04ce91f498a7c1f92ab38a02281f4fda9ceb.scope - libcontainer container 8df82976aad40aebe10c2f431b2f04ce91f498a7c1f92ab38a02281f4fda9ceb. Jan 23 01:10:44.848146 containerd[1969]: time="2026-01-23T01:10:44.848093873Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-vgcm7,Uid:a50c5366-bf6e-4623-b937-7340f896a885,Namespace:calico-system,Attempt:0,} returns sandbox id \"4c1ee7200185ee4992f7f3acd12252380b0268a74d4e57afc4feada3c9e70a51\"" Jan 23 01:10:44.851257 containerd[1969]: time="2026-01-23T01:10:44.851223791Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 23 01:10:44.851593 containerd[1969]: time="2026-01-23T01:10:44.851527368Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-z55pm,Uid:a1ba0092-c6a8-4d92-9fb5-3a6ee16838d6,Namespace:kube-system,Attempt:0,} returns sandbox id \"8df82976aad40aebe10c2f431b2f04ce91f498a7c1f92ab38a02281f4fda9ceb\"" Jan 23 01:10:44.927344 kubelet[2490]: E0123 01:10:44.927304 2490 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hgbcs" podUID="4c3cd778-85af-4d2a-a9f4-071f6d9e5f64" Jan 23 01:10:44.934802 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3450410056.mount: Deactivated successfully. Jan 23 01:10:45.783324 kubelet[2490]: E0123 01:10:45.783220 2490 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:10:46.092039 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount713327102.mount: Deactivated successfully. 
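The repeated kubelet errors above come from FlexVolume plugin probing: the kubelet finds the nodeagent~uds directory under its volume plugin path, tries to execute the not-yet-installed driver binary /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the init argument, gets empty output, and fails to parse it as JSON. The binary is provided later by the calico-node pod's flexvol-driver container (the pod2daemon-flexvol image pulled below), which is consistent with the errors not recurring later in this excerpt. A minimal reproduction of the failing parse, using an illustrative status struct rather than the kubelet's exact type:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// driverStatus is an illustrative stand-in for the JSON a FlexVolume driver is
// expected to print on stdout for the "init" call; field names are assumptions.
type driverStatus struct {
	Status       string          `json:"status"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	var st driverStatus
	// A missing driver binary yields empty output, and unmarshalling an empty
	// string reproduces the exact error seen in the log.
	if err := json.Unmarshal([]byte(""), &st); err != nil {
		fmt.Println("Failed to unmarshal output for command: init:", err)
		// error text: unexpected end of JSON input
	}
}
```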
Jan 23 01:10:46.210995 containerd[1969]: time="2026-01-23T01:10:46.210945805Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:10:46.213073 containerd[1969]: time="2026-01-23T01:10:46.212881005Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=5941492" Jan 23 01:10:46.215489 containerd[1969]: time="2026-01-23T01:10:46.215446597Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:10:46.218678 containerd[1969]: time="2026-01-23T01:10:46.218637719Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:10:46.219199 containerd[1969]: time="2026-01-23T01:10:46.219173300Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.367913757s" Jan 23 01:10:46.219308 containerd[1969]: time="2026-01-23T01:10:46.219294015Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Jan 23 01:10:46.220818 containerd[1969]: time="2026-01-23T01:10:46.220788523Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\"" Jan 23 01:10:46.225969 containerd[1969]: time="2026-01-23T01:10:46.225933565Z" level=info msg="CreateContainer within sandbox \"4c1ee7200185ee4992f7f3acd12252380b0268a74d4e57afc4feada3c9e70a51\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 23 01:10:46.240295 containerd[1969]: time="2026-01-23T01:10:46.240011948Z" level=info msg="Container 30b5fe24cf808d302cea51fddd13c35074d3b3e994afd739d52e6078c789b612: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:10:46.263188 containerd[1969]: time="2026-01-23T01:10:46.263130055Z" level=info msg="CreateContainer within sandbox \"4c1ee7200185ee4992f7f3acd12252380b0268a74d4e57afc4feada3c9e70a51\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"30b5fe24cf808d302cea51fddd13c35074d3b3e994afd739d52e6078c789b612\"" Jan 23 01:10:46.263835 containerd[1969]: time="2026-01-23T01:10:46.263811520Z" level=info msg="StartContainer for \"30b5fe24cf808d302cea51fddd13c35074d3b3e994afd739d52e6078c789b612\"" Jan 23 01:10:46.265355 containerd[1969]: time="2026-01-23T01:10:46.265269753Z" level=info msg="connecting to shim 30b5fe24cf808d302cea51fddd13c35074d3b3e994afd739d52e6078c789b612" address="unix:///run/containerd/s/067c36bcb064027ec0dfb6d15d453d581185219822b978d2b56ae45bf281e9e7" protocol=ttrpc version=3 Jan 23 01:10:46.300522 systemd[1]: Started cri-containerd-30b5fe24cf808d302cea51fddd13c35074d3b3e994afd739d52e6078c789b612.scope - libcontainer container 30b5fe24cf808d302cea51fddd13c35074d3b3e994afd739d52e6078c789b612. 
Jan 23 01:10:46.388437 containerd[1969]: time="2026-01-23T01:10:46.388389192Z" level=info msg="StartContainer for \"30b5fe24cf808d302cea51fddd13c35074d3b3e994afd739d52e6078c789b612\" returns successfully" Jan 23 01:10:46.394920 systemd[1]: cri-containerd-30b5fe24cf808d302cea51fddd13c35074d3b3e994afd739d52e6078c789b612.scope: Deactivated successfully. Jan 23 01:10:46.398230 containerd[1969]: time="2026-01-23T01:10:46.398178541Z" level=info msg="received container exit event container_id:\"30b5fe24cf808d302cea51fddd13c35074d3b3e994afd739d52e6078c789b612\" id:\"30b5fe24cf808d302cea51fddd13c35074d3b3e994afd739d52e6078c789b612\" pid:2696 exited_at:{seconds:1769130646 nanos:397673503}" Jan 23 01:10:46.784370 kubelet[2490]: E0123 01:10:46.783560 2490 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:10:46.927321 kubelet[2490]: E0123 01:10:46.927266 2490 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hgbcs" podUID="4c3cd778-85af-4d2a-a9f4-071f6d9e5f64" Jan 23 01:10:47.064872 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-30b5fe24cf808d302cea51fddd13c35074d3b3e994afd739d52e6078c789b612-rootfs.mount: Deactivated successfully. Jan 23 01:10:47.424120 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount462330623.mount: Deactivated successfully. Jan 23 01:10:47.785005 kubelet[2490]: E0123 01:10:47.784807 2490 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:10:47.845624 containerd[1969]: time="2026-01-23T01:10:47.845572078Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:10:47.847651 containerd[1969]: time="2026-01-23T01:10:47.847595589Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.3: active requests=0, bytes read=25965293" Jan 23 01:10:47.850227 containerd[1969]: time="2026-01-23T01:10:47.850158428Z" level=info msg="ImageCreate event name:\"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:10:47.853681 containerd[1969]: time="2026-01-23T01:10:47.853619692Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:10:47.854988 containerd[1969]: time="2026-01-23T01:10:47.854169028Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.3\" with image id \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\", repo tag \"registry.k8s.io/kube-proxy:v1.34.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\", size \"25964312\" in 1.633348942s" Jan 23 01:10:47.854988 containerd[1969]: time="2026-01-23T01:10:47.854196554Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\" returns image reference \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\"" Jan 23 01:10:47.856345 containerd[1969]: time="2026-01-23T01:10:47.856322170Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 23 01:10:47.860683 
containerd[1969]: time="2026-01-23T01:10:47.860633388Z" level=info msg="CreateContainer within sandbox \"8df82976aad40aebe10c2f431b2f04ce91f498a7c1f92ab38a02281f4fda9ceb\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 23 01:10:47.878318 containerd[1969]: time="2026-01-23T01:10:47.876830953Z" level=info msg="Container e8c00494d0560d19ea8ce10e61cb4320fb094998a307cfedc749d4d53f79cd5a: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:10:47.880996 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3291964004.mount: Deactivated successfully. Jan 23 01:10:47.892508 containerd[1969]: time="2026-01-23T01:10:47.892465853Z" level=info msg="CreateContainer within sandbox \"8df82976aad40aebe10c2f431b2f04ce91f498a7c1f92ab38a02281f4fda9ceb\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e8c00494d0560d19ea8ce10e61cb4320fb094998a307cfedc749d4d53f79cd5a\"" Jan 23 01:10:47.893044 containerd[1969]: time="2026-01-23T01:10:47.893018138Z" level=info msg="StartContainer for \"e8c00494d0560d19ea8ce10e61cb4320fb094998a307cfedc749d4d53f79cd5a\"" Jan 23 01:10:47.894300 containerd[1969]: time="2026-01-23T01:10:47.894255140Z" level=info msg="connecting to shim e8c00494d0560d19ea8ce10e61cb4320fb094998a307cfedc749d4d53f79cd5a" address="unix:///run/containerd/s/3651dbcf0efe66537fa6a76ebe5d3960a73efea9c3934bc84b9551b88765e2bb" protocol=ttrpc version=3 Jan 23 01:10:47.918712 systemd[1]: Started cri-containerd-e8c00494d0560d19ea8ce10e61cb4320fb094998a307cfedc749d4d53f79cd5a.scope - libcontainer container e8c00494d0560d19ea8ce10e61cb4320fb094998a307cfedc749d4d53f79cd5a. Jan 23 01:10:47.998108 containerd[1969]: time="2026-01-23T01:10:47.998068788Z" level=info msg="StartContainer for \"e8c00494d0560d19ea8ce10e61cb4320fb094998a307cfedc749d4d53f79cd5a\" returns successfully" Jan 23 01:10:48.785710 kubelet[2490]: E0123 01:10:48.785668 2490 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:10:48.926916 kubelet[2490]: E0123 01:10:48.926874 2490 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hgbcs" podUID="4c3cd778-85af-4d2a-a9f4-071f6d9e5f64" Jan 23 01:10:49.785813 kubelet[2490]: E0123 01:10:49.785758 2490 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:10:50.681369 containerd[1969]: time="2026-01-23T01:10:50.681317112Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:10:50.685382 containerd[1969]: time="2026-01-23T01:10:50.685104863Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Jan 23 01:10:50.685939 containerd[1969]: time="2026-01-23T01:10:50.685868513Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:10:50.691435 containerd[1969]: time="2026-01-23T01:10:50.691224338Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:10:50.691887 containerd[1969]: 
time="2026-01-23T01:10:50.691854512Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 2.835506379s" Jan 23 01:10:50.691887 containerd[1969]: time="2026-01-23T01:10:50.691889089Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Jan 23 01:10:50.698159 containerd[1969]: time="2026-01-23T01:10:50.698120425Z" level=info msg="CreateContainer within sandbox \"4c1ee7200185ee4992f7f3acd12252380b0268a74d4e57afc4feada3c9e70a51\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 23 01:10:50.715742 containerd[1969]: time="2026-01-23T01:10:50.714642854Z" level=info msg="Container be858232391a39564e5f349c594d6b8ac356078da8fb7ff0533c6da014ea6d63: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:10:50.718156 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2199070329.mount: Deactivated successfully. Jan 23 01:10:50.730345 containerd[1969]: time="2026-01-23T01:10:50.730300915Z" level=info msg="CreateContainer within sandbox \"4c1ee7200185ee4992f7f3acd12252380b0268a74d4e57afc4feada3c9e70a51\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"be858232391a39564e5f349c594d6b8ac356078da8fb7ff0533c6da014ea6d63\"" Jan 23 01:10:50.730978 containerd[1969]: time="2026-01-23T01:10:50.730935458Z" level=info msg="StartContainer for \"be858232391a39564e5f349c594d6b8ac356078da8fb7ff0533c6da014ea6d63\"" Jan 23 01:10:50.732440 containerd[1969]: time="2026-01-23T01:10:50.732399724Z" level=info msg="connecting to shim be858232391a39564e5f349c594d6b8ac356078da8fb7ff0533c6da014ea6d63" address="unix:///run/containerd/s/067c36bcb064027ec0dfb6d15d453d581185219822b978d2b56ae45bf281e9e7" protocol=ttrpc version=3 Jan 23 01:10:50.757500 systemd[1]: Started cri-containerd-be858232391a39564e5f349c594d6b8ac356078da8fb7ff0533c6da014ea6d63.scope - libcontainer container be858232391a39564e5f349c594d6b8ac356078da8fb7ff0533c6da014ea6d63. 
Jan 23 01:10:50.786983 kubelet[2490]: E0123 01:10:50.786885 2490 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:10:50.829029 containerd[1969]: time="2026-01-23T01:10:50.828988450Z" level=info msg="StartContainer for \"be858232391a39564e5f349c594d6b8ac356078da8fb7ff0533c6da014ea6d63\" returns successfully" Jan 23 01:10:50.927687 kubelet[2490]: E0123 01:10:50.927356 2490 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hgbcs" podUID="4c3cd778-85af-4d2a-a9f4-071f6d9e5f64" Jan 23 01:10:51.018817 kubelet[2490]: I0123 01:10:51.018482 2490 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-z55pm" podStartSLOduration=6.015720017 podStartE2EDuration="9.018379577s" podCreationTimestamp="2026-01-23 01:10:42 +0000 UTC" firstStartedPulling="2026-01-23 01:10:44.852549066 +0000 UTC m=+3.785900625" lastFinishedPulling="2026-01-23 01:10:47.855208638 +0000 UTC m=+6.788560185" observedRunningTime="2026-01-23 01:10:49.002338389 +0000 UTC m=+7.935689963" watchObservedRunningTime="2026-01-23 01:10:51.018379577 +0000 UTC m=+9.951731170" Jan 23 01:10:51.373635 systemd[1]: cri-containerd-be858232391a39564e5f349c594d6b8ac356078da8fb7ff0533c6da014ea6d63.scope: Deactivated successfully. Jan 23 01:10:51.374098 systemd[1]: cri-containerd-be858232391a39564e5f349c594d6b8ac356078da8fb7ff0533c6da014ea6d63.scope: Consumed 582ms CPU time, 191.8M memory peak, 171.3M written to disk. Jan 23 01:10:51.378771 containerd[1969]: time="2026-01-23T01:10:51.378599652Z" level=info msg="received container exit event container_id:\"be858232391a39564e5f349c594d6b8ac356078da8fb7ff0533c6da014ea6d63\" id:\"be858232391a39564e5f349c594d6b8ac356078da8fb7ff0533c6da014ea6d63\" pid:2926 exited_at:{seconds:1769130651 nanos:377961446}" Jan 23 01:10:51.403372 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-be858232391a39564e5f349c594d6b8ac356078da8fb7ff0533c6da014ea6d63-rootfs.mount: Deactivated successfully. Jan 23 01:10:51.449984 kubelet[2490]: I0123 01:10:51.449943 2490 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Jan 23 01:10:51.787888 kubelet[2490]: E0123 01:10:51.787730 2490 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:10:52.788729 kubelet[2490]: E0123 01:10:52.788658 2490 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:10:52.933181 systemd[1]: Created slice kubepods-besteffort-pod4c3cd778_85af_4d2a_a9f4_071f6d9e5f64.slice - libcontainer container kubepods-besteffort-pod4c3cd778_85af_4d2a_a9f4_071f6d9e5f64.slice. 
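The pod_startup_latency_tracker entry above for kube-proxy-z55pm reports both an end-to-end duration (9.018379577s) and a shorter SLO duration (6.015720017s); the figures are consistent with the SLO duration excluding the time spent pulling images, which is an inference from the numbers rather than something the log states. A quick check using the monotonic m=+ offsets from that entry:

```go
package main

import "fmt"

// Values copied from the kube-proxy-z55pm startup-latency entry above,
// expressed as monotonic offsets in seconds.
func main() {
	e2e := 9.018379577       // podStartE2EDuration
	pullStart := 3.785900625 // firstStartedPulling (m=+...)
	pullEnd := 6.788560185   // lastFinishedPulling (m=+...)

	pulling := pullEnd - pullStart
	fmt.Printf("image pulling: %.9fs\n", pulling)     // ~3.002659560s
	fmt.Printf("e2e - pulling: %.9fs\n", e2e-pulling) // ~6.015720017s, the logged podStartSLOduration
}
```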
Jan 23 01:10:52.939459 containerd[1969]: time="2026-01-23T01:10:52.939417111Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hgbcs,Uid:4c3cd778-85af-4d2a-a9f4-071f6d9e5f64,Namespace:calico-system,Attempt:0,}" Jan 23 01:10:52.998983 containerd[1969]: time="2026-01-23T01:10:52.998940313Z" level=error msg="Failed to destroy network for sandbox \"fa8578081d1c1f23fa90733e094051758e0c6f1b2605535d39dc157edd2cb092\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:10:53.002658 containerd[1969]: time="2026-01-23T01:10:53.002605344Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hgbcs,Uid:4c3cd778-85af-4d2a-a9f4-071f6d9e5f64,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa8578081d1c1f23fa90733e094051758e0c6f1b2605535d39dc157edd2cb092\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:10:53.003025 systemd[1]: run-netns-cni\x2df3bdae2f\x2d79a0\x2d0afb\x2dff1b\x2d32d7606b9b89.mount: Deactivated successfully. Jan 23 01:10:53.003455 kubelet[2490]: E0123 01:10:53.003391 2490 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa8578081d1c1f23fa90733e094051758e0c6f1b2605535d39dc157edd2cb092\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:10:53.003883 kubelet[2490]: E0123 01:10:53.003811 2490 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa8578081d1c1f23fa90733e094051758e0c6f1b2605535d39dc157edd2cb092\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hgbcs" Jan 23 01:10:53.003883 kubelet[2490]: E0123 01:10:53.003845 2490 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa8578081d1c1f23fa90733e094051758e0c6f1b2605535d39dc157edd2cb092\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hgbcs" Jan 23 01:10:53.004235 kubelet[2490]: E0123 01:10:53.004011 2490 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-hgbcs_calico-system(4c3cd778-85af-4d2a-a9f4-071f6d9e5f64)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-hgbcs_calico-system(4c3cd778-85af-4d2a-a9f4-071f6d9e5f64)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fa8578081d1c1f23fa90733e094051758e0c6f1b2605535d39dc157edd2cb092\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hgbcs" podUID="4c3cd778-85af-4d2a-a9f4-071f6d9e5f64" 
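Both sandbox failures in this stretch (csi-node-driver-hgbcs above and nginx-deployment-bb8f74bfb-dskvs below) fail for the same reason: the Calico CNI plugin cannot find /var/lib/calico/nodename, which calico-node writes only once it is running, and the calico-node container does not start until 01:10:58 in this log. A minimal sketch of the failing check; the real plugin does considerably more than this:

```go
package main

import (
	"fmt"
	"os"
)

// Reproduce the precondition the error message describes: pod networking is
// refused until calico-node has recorded its node name on the host.
func main() {
	const nodenameFile = "/var/lib/calico/nodename"
	if _, err := os.Stat(nodenameFile); err != nil {
		// Matches the log: "stat /var/lib/calico/nodename: no such file or
		// directory: check that the calico/node container is running ..."
		fmt.Println("calico not ready:", err)
		return
	}
	fmt.Println("calico-node has registered; sandbox setup can proceed")
}
```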
Jan 23 01:10:53.010153 containerd[1969]: time="2026-01-23T01:10:53.010103635Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 23 01:10:53.789644 kubelet[2490]: E0123 01:10:53.789591 2490 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:10:54.791105 kubelet[2490]: E0123 01:10:54.790753 2490 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:10:55.791196 kubelet[2490]: E0123 01:10:55.791156 2490 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:10:56.791838 kubelet[2490]: E0123 01:10:56.791795 2490 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:10:57.361143 systemd[1]: Created slice kubepods-besteffort-pod2405efdb_84ab_4289_8edd_5b140fdebe83.slice - libcontainer container kubepods-besteffort-pod2405efdb_84ab_4289_8edd_5b140fdebe83.slice. Jan 23 01:10:57.417180 kubelet[2490]: I0123 01:10:57.417138 2490 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8t4f\" (UniqueName: \"kubernetes.io/projected/2405efdb-84ab-4289-8edd-5b140fdebe83-kube-api-access-r8t4f\") pod \"nginx-deployment-bb8f74bfb-dskvs\" (UID: \"2405efdb-84ab-4289-8edd-5b140fdebe83\") " pod="default/nginx-deployment-bb8f74bfb-dskvs" Jan 23 01:10:57.671529 containerd[1969]: time="2026-01-23T01:10:57.671395527Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-bb8f74bfb-dskvs,Uid:2405efdb-84ab-4289-8edd-5b140fdebe83,Namespace:default,Attempt:0,}" Jan 23 01:10:57.793242 kubelet[2490]: E0123 01:10:57.793137 2490 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:10:57.809420 containerd[1969]: time="2026-01-23T01:10:57.809369834Z" level=error msg="Failed to destroy network for sandbox \"651a232abd11c733cadf4cde673fe1631c5709f4087b1306bb76689c2732533b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:10:57.812997 systemd[1]: run-netns-cni\x2d79d3f9dc\x2deed6\x2d5476\x2d647d\x2d3e5b99838b56.mount: Deactivated successfully. 
Jan 23 01:10:57.814933 containerd[1969]: time="2026-01-23T01:10:57.814881483Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-bb8f74bfb-dskvs,Uid:2405efdb-84ab-4289-8edd-5b140fdebe83,Namespace:default,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"651a232abd11c733cadf4cde673fe1631c5709f4087b1306bb76689c2732533b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:10:57.815394 kubelet[2490]: E0123 01:10:57.815352 2490 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"651a232abd11c733cadf4cde673fe1631c5709f4087b1306bb76689c2732533b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:10:57.815554 kubelet[2490]: E0123 01:10:57.815534 2490 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"651a232abd11c733cadf4cde673fe1631c5709f4087b1306bb76689c2732533b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-bb8f74bfb-dskvs" Jan 23 01:10:57.815698 kubelet[2490]: E0123 01:10:57.815675 2490 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"651a232abd11c733cadf4cde673fe1631c5709f4087b1306bb76689c2732533b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-bb8f74bfb-dskvs" Jan 23 01:10:57.816573 kubelet[2490]: E0123 01:10:57.816256 2490 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-bb8f74bfb-dskvs_default(2405efdb-84ab-4289-8edd-5b140fdebe83)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-bb8f74bfb-dskvs_default(2405efdb-84ab-4289-8edd-5b140fdebe83)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"651a232abd11c733cadf4cde673fe1631c5709f4087b1306bb76689c2732533b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-bb8f74bfb-dskvs" podUID="2405efdb-84ab-4289-8edd-5b140fdebe83" Jan 23 01:10:58.575884 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2724536241.mount: Deactivated successfully. 
Jan 23 01:10:58.622864 containerd[1969]: time="2026-01-23T01:10:58.622797113Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:10:58.624881 containerd[1969]: time="2026-01-23T01:10:58.624821843Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Jan 23 01:10:58.627271 containerd[1969]: time="2026-01-23T01:10:58.627204037Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:10:58.630563 containerd[1969]: time="2026-01-23T01:10:58.630312809Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:10:58.631008 containerd[1969]: time="2026-01-23T01:10:58.630865639Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 5.620719614s" Jan 23 01:10:58.631008 containerd[1969]: time="2026-01-23T01:10:58.630900905Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 23 01:10:58.648020 containerd[1969]: time="2026-01-23T01:10:58.647973592Z" level=info msg="CreateContainer within sandbox \"4c1ee7200185ee4992f7f3acd12252380b0268a74d4e57afc4feada3c9e70a51\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 23 01:10:58.668110 containerd[1969]: time="2026-01-23T01:10:58.664619309Z" level=info msg="Container 5f2db3078fa41ebeda5ae424e4f7866e3440a18ca7563a4d4d3241be8bd5b052: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:10:58.688045 containerd[1969]: time="2026-01-23T01:10:58.687989355Z" level=info msg="CreateContainer within sandbox \"4c1ee7200185ee4992f7f3acd12252380b0268a74d4e57afc4feada3c9e70a51\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"5f2db3078fa41ebeda5ae424e4f7866e3440a18ca7563a4d4d3241be8bd5b052\"" Jan 23 01:10:58.690312 containerd[1969]: time="2026-01-23T01:10:58.688724702Z" level=info msg="StartContainer for \"5f2db3078fa41ebeda5ae424e4f7866e3440a18ca7563a4d4d3241be8bd5b052\"" Jan 23 01:10:58.690312 containerd[1969]: time="2026-01-23T01:10:58.690217008Z" level=info msg="connecting to shim 5f2db3078fa41ebeda5ae424e4f7866e3440a18ca7563a4d4d3241be8bd5b052" address="unix:///run/containerd/s/067c36bcb064027ec0dfb6d15d453d581185219822b978d2b56ae45bf281e9e7" protocol=ttrpc version=3 Jan 23 01:10:58.746619 systemd[1]: Started cri-containerd-5f2db3078fa41ebeda5ae424e4f7866e3440a18ca7563a4d4d3241be8bd5b052.scope - libcontainer container 5f2db3078fa41ebeda5ae424e4f7866e3440a18ca7563a4d4d3241be8bd5b052. 
Jan 23 01:10:58.794222 kubelet[2490]: E0123 01:10:58.794160 2490 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:10:58.830713 containerd[1969]: time="2026-01-23T01:10:58.830571904Z" level=info msg="StartContainer for \"5f2db3078fa41ebeda5ae424e4f7866e3440a18ca7563a4d4d3241be8bd5b052\" returns successfully" Jan 23 01:10:58.918580 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 23 01:10:58.918693 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jan 23 01:10:59.064995 kubelet[2490]: I0123 01:10:59.064878 2490 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-vgcm7" podStartSLOduration=3.283263691 podStartE2EDuration="17.064791623s" podCreationTimestamp="2026-01-23 01:10:42 +0000 UTC" firstStartedPulling="2026-01-23 01:10:44.850205406 +0000 UTC m=+3.783556965" lastFinishedPulling="2026-01-23 01:10:58.63173335 +0000 UTC m=+17.565084897" observedRunningTime="2026-01-23 01:10:59.06422242 +0000 UTC m=+17.997574001" watchObservedRunningTime="2026-01-23 01:10:59.064791623 +0000 UTC m=+17.998143192" Jan 23 01:10:59.795112 kubelet[2490]: E0123 01:10:59.795048 2490 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:10:59.817586 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jan 23 01:11:00.068365 kubelet[2490]: I0123 01:11:00.068212 2490 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 01:11:00.795769 kubelet[2490]: E0123 01:11:00.795703 2490 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:11:00.958211 (udev-worker)[3055]: Network interface NamePolicy= disabled on kernel command line. Jan 23 01:11:00.963712 systemd-networkd[1801]: vxlan.calico: Link UP Jan 23 01:11:00.965325 systemd-networkd[1801]: vxlan.calico: Gained carrier Jan 23 01:11:00.984832 (udev-worker)[3280]: Network interface NamePolicy= disabled on kernel command line. 
Jan 23 01:11:01.780681 kubelet[2490]: E0123 01:11:01.780619 2490 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:11:01.796939 kubelet[2490]: E0123 01:11:01.796813 2490 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:11:02.210501 systemd-networkd[1801]: vxlan.calico: Gained IPv6LL Jan 23 01:11:02.797056 kubelet[2490]: E0123 01:11:02.796996 2490 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:11:03.798220 kubelet[2490]: E0123 01:11:03.798166 2490 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:11:04.798580 kubelet[2490]: E0123 01:11:04.798523 2490 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:11:05.193955 ntpd[2223]: Listen normally on 6 vxlan.calico 192.168.46.0:123 Jan 23 01:11:05.194558 ntpd[2223]: 23 Jan 01:11:05 ntpd[2223]: Listen normally on 6 vxlan.calico 192.168.46.0:123 Jan 23 01:11:05.194558 ntpd[2223]: 23 Jan 01:11:05 ntpd[2223]: Listen normally on 7 vxlan.calico [fe80::649a:e5ff:fe4e:1fc5%3]:123 Jan 23 01:11:05.194010 ntpd[2223]: Listen normally on 7 vxlan.calico [fe80::649a:e5ff:fe4e:1fc5%3]:123 Jan 23 01:11:05.799657 kubelet[2490]: E0123 01:11:05.799602 2490 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:11:06.800399 kubelet[2490]: E0123 01:11:06.800331 2490 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:11:07.801086 kubelet[2490]: E0123 01:11:07.801042 2490 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:11:07.934765 containerd[1969]: time="2026-01-23T01:11:07.934722409Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hgbcs,Uid:4c3cd778-85af-4d2a-a9f4-071f6d9e5f64,Namespace:calico-system,Attempt:0,}" Jan 23 01:11:08.244693 systemd-networkd[1801]: cali5ba8cf88d07: Link UP Jan 23 01:11:08.246958 systemd-networkd[1801]: cali5ba8cf88d07: Gained carrier Jan 23 01:11:08.251898 (udev-worker)[3352]: Network interface NamePolicy= disabled on kernel command line. 
Jan 23 01:11:08.269870 containerd[1969]: 2026-01-23 01:11:08.061 [INFO][3332] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.20.229-k8s-csi--node--driver--hgbcs-eth0 csi-node-driver- calico-system 4c3cd778-85af-4d2a-a9f4-071f6d9e5f64 1090 0 2026-01-23 01:10:42 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 172.31.20.229 csi-node-driver-hgbcs eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali5ba8cf88d07 [] [] }} ContainerID="7ef89f340a133496d2b78584347cfaef54cf7156e493876671b72411dd0006d2" Namespace="calico-system" Pod="csi-node-driver-hgbcs" WorkloadEndpoint="172.31.20.229-k8s-csi--node--driver--hgbcs-" Jan 23 01:11:08.269870 containerd[1969]: 2026-01-23 01:11:08.063 [INFO][3332] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7ef89f340a133496d2b78584347cfaef54cf7156e493876671b72411dd0006d2" Namespace="calico-system" Pod="csi-node-driver-hgbcs" WorkloadEndpoint="172.31.20.229-k8s-csi--node--driver--hgbcs-eth0" Jan 23 01:11:08.269870 containerd[1969]: 2026-01-23 01:11:08.175 [INFO][3344] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7ef89f340a133496d2b78584347cfaef54cf7156e493876671b72411dd0006d2" HandleID="k8s-pod-network.7ef89f340a133496d2b78584347cfaef54cf7156e493876671b72411dd0006d2" Workload="172.31.20.229-k8s-csi--node--driver--hgbcs-eth0" Jan 23 01:11:08.270170 containerd[1969]: 2026-01-23 01:11:08.175 [INFO][3344] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="7ef89f340a133496d2b78584347cfaef54cf7156e493876671b72411dd0006d2" HandleID="k8s-pod-network.7ef89f340a133496d2b78584347cfaef54cf7156e493876671b72411dd0006d2" Workload="172.31.20.229-k8s-csi--node--driver--hgbcs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00011dd10), Attrs:map[string]string{"namespace":"calico-system", "node":"172.31.20.229", "pod":"csi-node-driver-hgbcs", "timestamp":"2026-01-23 01:11:08.175151857 +0000 UTC"}, Hostname:"172.31.20.229", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 01:11:08.270170 containerd[1969]: 2026-01-23 01:11:08.175 [INFO][3344] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 01:11:08.270170 containerd[1969]: 2026-01-23 01:11:08.175 [INFO][3344] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 01:11:08.270170 containerd[1969]: 2026-01-23 01:11:08.175 [INFO][3344] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.20.229' Jan 23 01:11:08.270170 containerd[1969]: 2026-01-23 01:11:08.187 [INFO][3344] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7ef89f340a133496d2b78584347cfaef54cf7156e493876671b72411dd0006d2" host="172.31.20.229" Jan 23 01:11:08.270170 containerd[1969]: 2026-01-23 01:11:08.198 [INFO][3344] ipam/ipam.go 394: Looking up existing affinities for host host="172.31.20.229" Jan 23 01:11:08.270170 containerd[1969]: 2026-01-23 01:11:08.204 [INFO][3344] ipam/ipam.go 511: Trying affinity for 192.168.46.0/26 host="172.31.20.229" Jan 23 01:11:08.270170 containerd[1969]: 2026-01-23 01:11:08.207 [INFO][3344] ipam/ipam.go 158: Attempting to load block cidr=192.168.46.0/26 host="172.31.20.229" Jan 23 01:11:08.270170 containerd[1969]: 2026-01-23 01:11:08.211 [INFO][3344] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.46.0/26 host="172.31.20.229" Jan 23 01:11:08.270170 containerd[1969]: 2026-01-23 01:11:08.211 [INFO][3344] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.46.0/26 handle="k8s-pod-network.7ef89f340a133496d2b78584347cfaef54cf7156e493876671b72411dd0006d2" host="172.31.20.229" Jan 23 01:11:08.271997 containerd[1969]: 2026-01-23 01:11:08.213 [INFO][3344] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.7ef89f340a133496d2b78584347cfaef54cf7156e493876671b72411dd0006d2 Jan 23 01:11:08.271997 containerd[1969]: 2026-01-23 01:11:08.223 [INFO][3344] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.46.0/26 handle="k8s-pod-network.7ef89f340a133496d2b78584347cfaef54cf7156e493876671b72411dd0006d2" host="172.31.20.229" Jan 23 01:11:08.271997 containerd[1969]: 2026-01-23 01:11:08.234 [INFO][3344] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.46.1/26] block=192.168.46.0/26 handle="k8s-pod-network.7ef89f340a133496d2b78584347cfaef54cf7156e493876671b72411dd0006d2" host="172.31.20.229" Jan 23 01:11:08.271997 containerd[1969]: 2026-01-23 01:11:08.235 [INFO][3344] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.46.1/26] handle="k8s-pod-network.7ef89f340a133496d2b78584347cfaef54cf7156e493876671b72411dd0006d2" host="172.31.20.229" Jan 23 01:11:08.271997 containerd[1969]: 2026-01-23 01:11:08.235 [INFO][3344] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 23 01:11:08.271997 containerd[1969]: 2026-01-23 01:11:08.235 [INFO][3344] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.46.1/26] IPv6=[] ContainerID="7ef89f340a133496d2b78584347cfaef54cf7156e493876671b72411dd0006d2" HandleID="k8s-pod-network.7ef89f340a133496d2b78584347cfaef54cf7156e493876671b72411dd0006d2" Workload="172.31.20.229-k8s-csi--node--driver--hgbcs-eth0" Jan 23 01:11:08.272269 containerd[1969]: 2026-01-23 01:11:08.237 [INFO][3332] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7ef89f340a133496d2b78584347cfaef54cf7156e493876671b72411dd0006d2" Namespace="calico-system" Pod="csi-node-driver-hgbcs" WorkloadEndpoint="172.31.20.229-k8s-csi--node--driver--hgbcs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.20.229-k8s-csi--node--driver--hgbcs-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4c3cd778-85af-4d2a-a9f4-071f6d9e5f64", ResourceVersion:"1090", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 10, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.20.229", ContainerID:"", Pod:"csi-node-driver-hgbcs", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.46.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5ba8cf88d07", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:11:08.273356 containerd[1969]: 2026-01-23 01:11:08.237 [INFO][3332] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.46.1/32] ContainerID="7ef89f340a133496d2b78584347cfaef54cf7156e493876671b72411dd0006d2" Namespace="calico-system" Pod="csi-node-driver-hgbcs" WorkloadEndpoint="172.31.20.229-k8s-csi--node--driver--hgbcs-eth0" Jan 23 01:11:08.273356 containerd[1969]: 2026-01-23 01:11:08.237 [INFO][3332] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ba8cf88d07 ContainerID="7ef89f340a133496d2b78584347cfaef54cf7156e493876671b72411dd0006d2" Namespace="calico-system" Pod="csi-node-driver-hgbcs" WorkloadEndpoint="172.31.20.229-k8s-csi--node--driver--hgbcs-eth0" Jan 23 01:11:08.273356 containerd[1969]: 2026-01-23 01:11:08.248 [INFO][3332] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7ef89f340a133496d2b78584347cfaef54cf7156e493876671b72411dd0006d2" Namespace="calico-system" Pod="csi-node-driver-hgbcs" WorkloadEndpoint="172.31.20.229-k8s-csi--node--driver--hgbcs-eth0" Jan 23 01:11:08.273498 containerd[1969]: 2026-01-23 01:11:08.248 [INFO][3332] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7ef89f340a133496d2b78584347cfaef54cf7156e493876671b72411dd0006d2" Namespace="calico-system" Pod="csi-node-driver-hgbcs" 
WorkloadEndpoint="172.31.20.229-k8s-csi--node--driver--hgbcs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.20.229-k8s-csi--node--driver--hgbcs-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4c3cd778-85af-4d2a-a9f4-071f6d9e5f64", ResourceVersion:"1090", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 10, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.20.229", ContainerID:"7ef89f340a133496d2b78584347cfaef54cf7156e493876671b72411dd0006d2", Pod:"csi-node-driver-hgbcs", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.46.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5ba8cf88d07", MAC:"e2:29:0c:b5:8c:a1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:11:08.273595 containerd[1969]: 2026-01-23 01:11:08.265 [INFO][3332] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7ef89f340a133496d2b78584347cfaef54cf7156e493876671b72411dd0006d2" Namespace="calico-system" Pod="csi-node-driver-hgbcs" WorkloadEndpoint="172.31.20.229-k8s-csi--node--driver--hgbcs-eth0" Jan 23 01:11:08.338046 containerd[1969]: time="2026-01-23T01:11:08.338003295Z" level=info msg="connecting to shim 7ef89f340a133496d2b78584347cfaef54cf7156e493876671b72411dd0006d2" address="unix:///run/containerd/s/027e5de0216634307df2e04120af1a9647aab5a6b4f14aca49a977f7c5b8e1bc" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:11:08.365524 systemd[1]: Started cri-containerd-7ef89f340a133496d2b78584347cfaef54cf7156e493876671b72411dd0006d2.scope - libcontainer container 7ef89f340a133496d2b78584347cfaef54cf7156e493876671b72411dd0006d2. 
Jan 23 01:11:08.392615 containerd[1969]: time="2026-01-23T01:11:08.392541782Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hgbcs,Uid:4c3cd778-85af-4d2a-a9f4-071f6d9e5f64,Namespace:calico-system,Attempt:0,} returns sandbox id \"7ef89f340a133496d2b78584347cfaef54cf7156e493876671b72411dd0006d2\"" Jan 23 01:11:08.394582 containerd[1969]: time="2026-01-23T01:11:08.394519101Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 01:11:08.638745 containerd[1969]: time="2026-01-23T01:11:08.638687774Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:11:08.641219 containerd[1969]: time="2026-01-23T01:11:08.641088931Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 01:11:08.641219 containerd[1969]: time="2026-01-23T01:11:08.641157076Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 23 01:11:08.641608 kubelet[2490]: E0123 01:11:08.641562 2490 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 01:11:08.642007 kubelet[2490]: E0123 01:11:08.641612 2490 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 01:11:08.642007 kubelet[2490]: E0123 01:11:08.641716 2490 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-hgbcs_calico-system(4c3cd778-85af-4d2a-a9f4-071f6d9e5f64): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 01:11:08.643144 containerd[1969]: time="2026-01-23T01:11:08.643108956Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 01:11:08.802439 kubelet[2490]: E0123 01:11:08.802211 2490 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:11:08.895255 containerd[1969]: time="2026-01-23T01:11:08.895133394Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:11:08.897412 containerd[1969]: time="2026-01-23T01:11:08.897362206Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 01:11:08.897532 containerd[1969]: time="2026-01-23T01:11:08.897451435Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 23 01:11:08.897774 kubelet[2490]: E0123 01:11:08.897729 2490 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 01:11:08.897855 kubelet[2490]: E0123 01:11:08.897782 2490 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 01:11:08.897912 kubelet[2490]: E0123 01:11:08.897887 2490 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-hgbcs_calico-system(4c3cd778-85af-4d2a-a9f4-071f6d9e5f64): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 01:11:08.898190 kubelet[2490]: E0123 01:11:08.898153 2490 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-hgbcs" podUID="4c3cd778-85af-4d2a-a9f4-071f6d9e5f64" Jan 23 01:11:09.100687 kubelet[2490]: E0123 01:11:09.100641 2490 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-hgbcs" podUID="4c3cd778-85af-4d2a-a9f4-071f6d9e5f64" Jan 23 01:11:09.569197 systemd-networkd[1801]: 
cali5ba8cf88d07: Gained IPv6LL Jan 23 01:11:09.803154 kubelet[2490]: E0123 01:11:09.803097 2490 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:11:09.934124 containerd[1969]: time="2026-01-23T01:11:09.934072888Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-bb8f74bfb-dskvs,Uid:2405efdb-84ab-4289-8edd-5b140fdebe83,Namespace:default,Attempt:0,}" Jan 23 01:11:10.070785 systemd-networkd[1801]: calid0bd8258ea0: Link UP Jan 23 01:11:10.073330 systemd-networkd[1801]: calid0bd8258ea0: Gained carrier Jan 23 01:11:10.092414 containerd[1969]: 2026-01-23 01:11:09.977 [INFO][3412] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.20.229-k8s-nginx--deployment--bb8f74bfb--dskvs-eth0 nginx-deployment-bb8f74bfb- default 2405efdb-84ab-4289-8edd-5b140fdebe83 1196 0 2026-01-23 01:10:57 +0000 UTC map[app:nginx pod-template-hash:bb8f74bfb projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 172.31.20.229 nginx-deployment-bb8f74bfb-dskvs eth0 default [] [] [kns.default ksa.default.default] calid0bd8258ea0 [] [] }} ContainerID="10cae5c8bc0cc98724d5b7ef1bc9b700349b90dbe9e255e42152f3e2f0d4d337" Namespace="default" Pod="nginx-deployment-bb8f74bfb-dskvs" WorkloadEndpoint="172.31.20.229-k8s-nginx--deployment--bb8f74bfb--dskvs-" Jan 23 01:11:10.092414 containerd[1969]: 2026-01-23 01:11:09.977 [INFO][3412] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="10cae5c8bc0cc98724d5b7ef1bc9b700349b90dbe9e255e42152f3e2f0d4d337" Namespace="default" Pod="nginx-deployment-bb8f74bfb-dskvs" WorkloadEndpoint="172.31.20.229-k8s-nginx--deployment--bb8f74bfb--dskvs-eth0" Jan 23 01:11:10.092414 containerd[1969]: 2026-01-23 01:11:10.010 [INFO][3423] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="10cae5c8bc0cc98724d5b7ef1bc9b700349b90dbe9e255e42152f3e2f0d4d337" HandleID="k8s-pod-network.10cae5c8bc0cc98724d5b7ef1bc9b700349b90dbe9e255e42152f3e2f0d4d337" Workload="172.31.20.229-k8s-nginx--deployment--bb8f74bfb--dskvs-eth0" Jan 23 01:11:10.092715 containerd[1969]: 2026-01-23 01:11:10.010 [INFO][3423] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="10cae5c8bc0cc98724d5b7ef1bc9b700349b90dbe9e255e42152f3e2f0d4d337" HandleID="k8s-pod-network.10cae5c8bc0cc98724d5b7ef1bc9b700349b90dbe9e255e42152f3e2f0d4d337" Workload="172.31.20.229-k8s-nginx--deployment--bb8f74bfb--dskvs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5820), Attrs:map[string]string{"namespace":"default", "node":"172.31.20.229", "pod":"nginx-deployment-bb8f74bfb-dskvs", "timestamp":"2026-01-23 01:11:10.010251278 +0000 UTC"}, Hostname:"172.31.20.229", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 01:11:10.092715 containerd[1969]: 2026-01-23 01:11:10.010 [INFO][3423] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 01:11:10.092715 containerd[1969]: 2026-01-23 01:11:10.010 [INFO][3423] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 01:11:10.092715 containerd[1969]: 2026-01-23 01:11:10.010 [INFO][3423] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.20.229' Jan 23 01:11:10.092715 containerd[1969]: 2026-01-23 01:11:10.025 [INFO][3423] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.10cae5c8bc0cc98724d5b7ef1bc9b700349b90dbe9e255e42152f3e2f0d4d337" host="172.31.20.229" Jan 23 01:11:10.092715 containerd[1969]: 2026-01-23 01:11:10.032 [INFO][3423] ipam/ipam.go 394: Looking up existing affinities for host host="172.31.20.229" Jan 23 01:11:10.092715 containerd[1969]: 2026-01-23 01:11:10.039 [INFO][3423] ipam/ipam.go 511: Trying affinity for 192.168.46.0/26 host="172.31.20.229" Jan 23 01:11:10.092715 containerd[1969]: 2026-01-23 01:11:10.042 [INFO][3423] ipam/ipam.go 158: Attempting to load block cidr=192.168.46.0/26 host="172.31.20.229" Jan 23 01:11:10.092715 containerd[1969]: 2026-01-23 01:11:10.045 [INFO][3423] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.46.0/26 host="172.31.20.229" Jan 23 01:11:10.092715 containerd[1969]: 2026-01-23 01:11:10.045 [INFO][3423] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.46.0/26 handle="k8s-pod-network.10cae5c8bc0cc98724d5b7ef1bc9b700349b90dbe9e255e42152f3e2f0d4d337" host="172.31.20.229" Jan 23 01:11:10.093124 containerd[1969]: 2026-01-23 01:11:10.047 [INFO][3423] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.10cae5c8bc0cc98724d5b7ef1bc9b700349b90dbe9e255e42152f3e2f0d4d337 Jan 23 01:11:10.093124 containerd[1969]: 2026-01-23 01:11:10.052 [INFO][3423] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.46.0/26 handle="k8s-pod-network.10cae5c8bc0cc98724d5b7ef1bc9b700349b90dbe9e255e42152f3e2f0d4d337" host="172.31.20.229" Jan 23 01:11:10.093124 containerd[1969]: 2026-01-23 01:11:10.062 [INFO][3423] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.46.2/26] block=192.168.46.0/26 handle="k8s-pod-network.10cae5c8bc0cc98724d5b7ef1bc9b700349b90dbe9e255e42152f3e2f0d4d337" host="172.31.20.229" Jan 23 01:11:10.093124 containerd[1969]: 2026-01-23 01:11:10.062 [INFO][3423] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.46.2/26] handle="k8s-pod-network.10cae5c8bc0cc98724d5b7ef1bc9b700349b90dbe9e255e42152f3e2f0d4d337" host="172.31.20.229" Jan 23 01:11:10.093124 containerd[1969]: 2026-01-23 01:11:10.062 [INFO][3423] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 23 01:11:10.093124 containerd[1969]: 2026-01-23 01:11:10.062 [INFO][3423] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.46.2/26] IPv6=[] ContainerID="10cae5c8bc0cc98724d5b7ef1bc9b700349b90dbe9e255e42152f3e2f0d4d337" HandleID="k8s-pod-network.10cae5c8bc0cc98724d5b7ef1bc9b700349b90dbe9e255e42152f3e2f0d4d337" Workload="172.31.20.229-k8s-nginx--deployment--bb8f74bfb--dskvs-eth0" Jan 23 01:11:10.093361 containerd[1969]: 2026-01-23 01:11:10.064 [INFO][3412] cni-plugin/k8s.go 418: Populated endpoint ContainerID="10cae5c8bc0cc98724d5b7ef1bc9b700349b90dbe9e255e42152f3e2f0d4d337" Namespace="default" Pod="nginx-deployment-bb8f74bfb-dskvs" WorkloadEndpoint="172.31.20.229-k8s-nginx--deployment--bb8f74bfb--dskvs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.20.229-k8s-nginx--deployment--bb8f74bfb--dskvs-eth0", GenerateName:"nginx-deployment-bb8f74bfb-", Namespace:"default", SelfLink:"", UID:"2405efdb-84ab-4289-8edd-5b140fdebe83", ResourceVersion:"1196", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 10, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"bb8f74bfb", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.20.229", ContainerID:"", Pod:"nginx-deployment-bb8f74bfb-dskvs", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.46.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"calid0bd8258ea0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:11:10.093361 containerd[1969]: 2026-01-23 01:11:10.065 [INFO][3412] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.46.2/32] ContainerID="10cae5c8bc0cc98724d5b7ef1bc9b700349b90dbe9e255e42152f3e2f0d4d337" Namespace="default" Pod="nginx-deployment-bb8f74bfb-dskvs" WorkloadEndpoint="172.31.20.229-k8s-nginx--deployment--bb8f74bfb--dskvs-eth0" Jan 23 01:11:10.093488 containerd[1969]: 2026-01-23 01:11:10.065 [INFO][3412] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid0bd8258ea0 ContainerID="10cae5c8bc0cc98724d5b7ef1bc9b700349b90dbe9e255e42152f3e2f0d4d337" Namespace="default" Pod="nginx-deployment-bb8f74bfb-dskvs" WorkloadEndpoint="172.31.20.229-k8s-nginx--deployment--bb8f74bfb--dskvs-eth0" Jan 23 01:11:10.093488 containerd[1969]: 2026-01-23 01:11:10.074 [INFO][3412] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="10cae5c8bc0cc98724d5b7ef1bc9b700349b90dbe9e255e42152f3e2f0d4d337" Namespace="default" Pod="nginx-deployment-bb8f74bfb-dskvs" WorkloadEndpoint="172.31.20.229-k8s-nginx--deployment--bb8f74bfb--dskvs-eth0" Jan 23 01:11:10.093562 containerd[1969]: 2026-01-23 01:11:10.076 [INFO][3412] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="10cae5c8bc0cc98724d5b7ef1bc9b700349b90dbe9e255e42152f3e2f0d4d337" Namespace="default" Pod="nginx-deployment-bb8f74bfb-dskvs" 
WorkloadEndpoint="172.31.20.229-k8s-nginx--deployment--bb8f74bfb--dskvs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.20.229-k8s-nginx--deployment--bb8f74bfb--dskvs-eth0", GenerateName:"nginx-deployment-bb8f74bfb-", Namespace:"default", SelfLink:"", UID:"2405efdb-84ab-4289-8edd-5b140fdebe83", ResourceVersion:"1196", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 10, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"bb8f74bfb", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.20.229", ContainerID:"10cae5c8bc0cc98724d5b7ef1bc9b700349b90dbe9e255e42152f3e2f0d4d337", Pod:"nginx-deployment-bb8f74bfb-dskvs", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.46.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"calid0bd8258ea0", MAC:"76:55:01:b4:2e:33", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:11:10.093645 containerd[1969]: 2026-01-23 01:11:10.086 [INFO][3412] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="10cae5c8bc0cc98724d5b7ef1bc9b700349b90dbe9e255e42152f3e2f0d4d337" Namespace="default" Pod="nginx-deployment-bb8f74bfb-dskvs" WorkloadEndpoint="172.31.20.229-k8s-nginx--deployment--bb8f74bfb--dskvs-eth0" Jan 23 01:11:10.104949 kubelet[2490]: E0123 01:11:10.104794 2490 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-hgbcs" podUID="4c3cd778-85af-4d2a-a9f4-071f6d9e5f64" Jan 23 01:11:10.147015 containerd[1969]: time="2026-01-23T01:11:10.146937061Z" level=info msg="connecting to shim 10cae5c8bc0cc98724d5b7ef1bc9b700349b90dbe9e255e42152f3e2f0d4d337" address="unix:///run/containerd/s/45a64bac464f92bcc176f50003b095e531c40be177f4ff71cc0576372bbb0cd6" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:11:10.180757 systemd[1]: Started cri-containerd-10cae5c8bc0cc98724d5b7ef1bc9b700349b90dbe9e255e42152f3e2f0d4d337.scope - libcontainer container 10cae5c8bc0cc98724d5b7ef1bc9b700349b90dbe9e255e42152f3e2f0d4d337. 
Jan 23 01:11:10.247180 containerd[1969]: time="2026-01-23T01:11:10.247064960Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-bb8f74bfb-dskvs,Uid:2405efdb-84ab-4289-8edd-5b140fdebe83,Namespace:default,Attempt:0,} returns sandbox id \"10cae5c8bc0cc98724d5b7ef1bc9b700349b90dbe9e255e42152f3e2f0d4d337\"" Jan 23 01:11:10.249222 containerd[1969]: time="2026-01-23T01:11:10.249055795Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 23 01:11:10.803676 kubelet[2490]: E0123 01:11:10.803613 2490 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:11:11.169372 systemd-networkd[1801]: calid0bd8258ea0: Gained IPv6LL Jan 23 01:11:11.804114 kubelet[2490]: E0123 01:11:11.803969 2490 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:11:12.805297 kubelet[2490]: E0123 01:11:12.804966 2490 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:11:12.897905 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1363155104.mount: Deactivated successfully. Jan 23 01:11:13.194638 ntpd[2223]: Listen normally on 8 cali5ba8cf88d07 [fe80::ecee:eeff:feee:eeee%6]:123 Jan 23 01:11:13.195226 ntpd[2223]: 23 Jan 01:11:13 ntpd[2223]: Listen normally on 8 cali5ba8cf88d07 [fe80::ecee:eeff:feee:eeee%6]:123 Jan 23 01:11:13.195226 ntpd[2223]: 23 Jan 01:11:13 ntpd[2223]: Listen normally on 9 calid0bd8258ea0 [fe80::ecee:eeff:feee:eeee%7]:123 Jan 23 01:11:13.194697 ntpd[2223]: Listen normally on 9 calid0bd8258ea0 [fe80::ecee:eeff:feee:eeee%7]:123 Jan 23 01:11:13.805965 kubelet[2490]: E0123 01:11:13.805926 2490 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:11:13.814326 update_engine[1938]: I20260123 01:11:13.813783 1938 update_attempter.cc:509] Updating boot flags... 
Jan 23 01:11:14.137640 containerd[1969]: time="2026-01-23T01:11:14.137589482Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:11:14.141746 containerd[1969]: time="2026-01-23T01:11:14.141698173Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=63836480" Jan 23 01:11:14.148333 containerd[1969]: time="2026-01-23T01:11:14.146566137Z" level=info msg="ImageCreate event name:\"sha256:94a30ed18e45363486a55379bd1fbb8c479f36524ae816779f3553afe6b787ed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:11:14.164442 containerd[1969]: time="2026-01-23T01:11:14.164396043Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:32c5137cb8c7cf61e75836f150e983b9be21fecc642ada89fd936c8cd6c0faa0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:11:14.171022 containerd[1969]: time="2026-01-23T01:11:14.169967964Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:94a30ed18e45363486a55379bd1fbb8c479f36524ae816779f3553afe6b787ed\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:32c5137cb8c7cf61e75836f150e983b9be21fecc642ada89fd936c8cd6c0faa0\", size \"63836358\" in 3.920873897s" Jan 23 01:11:14.171503 containerd[1969]: time="2026-01-23T01:11:14.171457080Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:94a30ed18e45363486a55379bd1fbb8c479f36524ae816779f3553afe6b787ed\"" Jan 23 01:11:14.183308 containerd[1969]: time="2026-01-23T01:11:14.182917092Z" level=info msg="CreateContainer within sandbox \"10cae5c8bc0cc98724d5b7ef1bc9b700349b90dbe9e255e42152f3e2f0d4d337\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Jan 23 01:11:14.212348 containerd[1969]: time="2026-01-23T01:11:14.207742294Z" level=info msg="Container 78ea003929ef0ec6a78db2d33e422232dcbf8391609ab2e2ef06d7cb80f99705: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:11:14.214189 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2304773130.mount: Deactivated successfully. Jan 23 01:11:14.229419 containerd[1969]: time="2026-01-23T01:11:14.229370841Z" level=info msg="CreateContainer within sandbox \"10cae5c8bc0cc98724d5b7ef1bc9b700349b90dbe9e255e42152f3e2f0d4d337\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"78ea003929ef0ec6a78db2d33e422232dcbf8391609ab2e2ef06d7cb80f99705\"" Jan 23 01:11:14.230091 containerd[1969]: time="2026-01-23T01:11:14.229922027Z" level=info msg="StartContainer for \"78ea003929ef0ec6a78db2d33e422232dcbf8391609ab2e2ef06d7cb80f99705\"" Jan 23 01:11:14.231801 containerd[1969]: time="2026-01-23T01:11:14.231766228Z" level=info msg="connecting to shim 78ea003929ef0ec6a78db2d33e422232dcbf8391609ab2e2ef06d7cb80f99705" address="unix:///run/containerd/s/45a64bac464f92bcc176f50003b095e531c40be177f4ff71cc0576372bbb0cd6" protocol=ttrpc version=3 Jan 23 01:11:14.277575 systemd[1]: Started cri-containerd-78ea003929ef0ec6a78db2d33e422232dcbf8391609ab2e2ef06d7cb80f99705.scope - libcontainer container 78ea003929ef0ec6a78db2d33e422232dcbf8391609ab2e2ef06d7cb80f99705. 
Jan 23 01:11:14.450612 containerd[1969]: time="2026-01-23T01:11:14.446918946Z" level=info msg="StartContainer for \"78ea003929ef0ec6a78db2d33e422232dcbf8391609ab2e2ef06d7cb80f99705\" returns successfully" Jan 23 01:11:14.806988 kubelet[2490]: E0123 01:11:14.806854 2490 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:11:15.807756 kubelet[2490]: E0123 01:11:15.807704 2490 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:11:16.808226 kubelet[2490]: E0123 01:11:16.808151 2490 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:11:17.809313 kubelet[2490]: E0123 01:11:17.809240 2490 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:11:18.810366 kubelet[2490]: E0123 01:11:18.810300 2490 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:11:19.810956 kubelet[2490]: E0123 01:11:19.810866 2490 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:11:20.547821 kubelet[2490]: I0123 01:11:20.547757 2490 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-bb8f74bfb-dskvs" podStartSLOduration=19.621341539 podStartE2EDuration="23.547731194s" podCreationTimestamp="2026-01-23 01:10:57 +0000 UTC" firstStartedPulling="2026-01-23 01:11:10.248389421 +0000 UTC m=+29.181740980" lastFinishedPulling="2026-01-23 01:11:14.174779072 +0000 UTC m=+33.108130635" observedRunningTime="2026-01-23 01:11:15.140843081 +0000 UTC m=+34.074194652" watchObservedRunningTime="2026-01-23 01:11:20.547731194 +0000 UTC m=+39.481082765" Jan 23 01:11:20.566843 systemd[1]: Created slice kubepods-besteffort-pod966d6943_afd1_4256_8989_173755d34f1e.slice - libcontainer container kubepods-besteffort-pod966d6943_afd1_4256_8989_173755d34f1e.slice. 
Jan 23 01:11:20.707552 kubelet[2490]: I0123 01:11:20.707472 2490 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnmwm\" (UniqueName: \"kubernetes.io/projected/966d6943-afd1-4256-8989-173755d34f1e-kube-api-access-nnmwm\") pod \"nfs-server-provisioner-0\" (UID: \"966d6943-afd1-4256-8989-173755d34f1e\") " pod="default/nfs-server-provisioner-0" Jan 23 01:11:20.707552 kubelet[2490]: I0123 01:11:20.707558 2490 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/966d6943-afd1-4256-8989-173755d34f1e-data\") pod \"nfs-server-provisioner-0\" (UID: \"966d6943-afd1-4256-8989-173755d34f1e\") " pod="default/nfs-server-provisioner-0" Jan 23 01:11:20.811689 kubelet[2490]: E0123 01:11:20.811578 2490 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:11:20.874079 containerd[1969]: time="2026-01-23T01:11:20.874032242Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:966d6943-afd1-4256-8989-173755d34f1e,Namespace:default,Attempt:0,}" Jan 23 01:11:21.010671 systemd-networkd[1801]: cali60e51b789ff: Link UP Jan 23 01:11:21.012935 systemd-networkd[1801]: cali60e51b789ff: Gained carrier Jan 23 01:11:21.016137 (udev-worker)[3783]: Network interface NamePolicy= disabled on kernel command line. Jan 23 01:11:21.027423 containerd[1969]: 2026-01-23 01:11:20.925 [INFO][3767] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.20.229-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default 966d6943-afd1-4256-8989-173755d34f1e 1351 0 2026-01-23 01:11:20 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-7c9b4c458c heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 172.31.20.229 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] [] }} ContainerID="cdd1e31d9bf61ea252af79e96f207c38a2f104c3396b112a7add53e94415cec3" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.20.229-k8s-nfs--server--provisioner--0-" Jan 23 01:11:21.027423 containerd[1969]: 2026-01-23 01:11:20.925 [INFO][3767] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cdd1e31d9bf61ea252af79e96f207c38a2f104c3396b112a7add53e94415cec3" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.20.229-k8s-nfs--server--provisioner--0-eth0" Jan 23 01:11:21.027423 containerd[1969]: 2026-01-23 01:11:20.956 [INFO][3775] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cdd1e31d9bf61ea252af79e96f207c38a2f104c3396b112a7add53e94415cec3" HandleID="k8s-pod-network.cdd1e31d9bf61ea252af79e96f207c38a2f104c3396b112a7add53e94415cec3" Workload="172.31.20.229-k8s-nfs--server--provisioner--0-eth0" Jan 23 
01:11:21.027708 containerd[1969]: 2026-01-23 01:11:20.956 [INFO][3775] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="cdd1e31d9bf61ea252af79e96f207c38a2f104c3396b112a7add53e94415cec3" HandleID="k8s-pod-network.cdd1e31d9bf61ea252af79e96f207c38a2f104c3396b112a7add53e94415cec3" Workload="172.31.20.229-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f260), Attrs:map[string]string{"namespace":"default", "node":"172.31.20.229", "pod":"nfs-server-provisioner-0", "timestamp":"2026-01-23 01:11:20.956436278 +0000 UTC"}, Hostname:"172.31.20.229", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 01:11:21.027708 containerd[1969]: 2026-01-23 01:11:20.956 [INFO][3775] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 01:11:21.027708 containerd[1969]: 2026-01-23 01:11:20.956 [INFO][3775] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 01:11:21.027708 containerd[1969]: 2026-01-23 01:11:20.956 [INFO][3775] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.20.229' Jan 23 01:11:21.027708 containerd[1969]: 2026-01-23 01:11:20.966 [INFO][3775] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cdd1e31d9bf61ea252af79e96f207c38a2f104c3396b112a7add53e94415cec3" host="172.31.20.229" Jan 23 01:11:21.027708 containerd[1969]: 2026-01-23 01:11:20.974 [INFO][3775] ipam/ipam.go 394: Looking up existing affinities for host host="172.31.20.229" Jan 23 01:11:21.027708 containerd[1969]: 2026-01-23 01:11:20.982 [INFO][3775] ipam/ipam.go 511: Trying affinity for 192.168.46.0/26 host="172.31.20.229" Jan 23 01:11:21.027708 containerd[1969]: 2026-01-23 01:11:20.985 [INFO][3775] ipam/ipam.go 158: Attempting to load block cidr=192.168.46.0/26 host="172.31.20.229" Jan 23 01:11:21.027708 containerd[1969]: 2026-01-23 01:11:20.988 [INFO][3775] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.46.0/26 host="172.31.20.229" Jan 23 01:11:21.027708 containerd[1969]: 2026-01-23 01:11:20.988 [INFO][3775] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.46.0/26 handle="k8s-pod-network.cdd1e31d9bf61ea252af79e96f207c38a2f104c3396b112a7add53e94415cec3" host="172.31.20.229" Jan 23 01:11:21.028113 containerd[1969]: 2026-01-23 01:11:20.990 [INFO][3775] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.cdd1e31d9bf61ea252af79e96f207c38a2f104c3396b112a7add53e94415cec3 Jan 23 01:11:21.028113 containerd[1969]: 2026-01-23 01:11:20.995 [INFO][3775] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.46.0/26 handle="k8s-pod-network.cdd1e31d9bf61ea252af79e96f207c38a2f104c3396b112a7add53e94415cec3" host="172.31.20.229" Jan 23 01:11:21.028113 containerd[1969]: 2026-01-23 01:11:21.004 [INFO][3775] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.46.3/26] block=192.168.46.0/26 handle="k8s-pod-network.cdd1e31d9bf61ea252af79e96f207c38a2f104c3396b112a7add53e94415cec3" host="172.31.20.229" Jan 23 01:11:21.028113 containerd[1969]: 2026-01-23 01:11:21.005 [INFO][3775] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.46.3/26] handle="k8s-pod-network.cdd1e31d9bf61ea252af79e96f207c38a2f104c3396b112a7add53e94415cec3" host="172.31.20.229" Jan 23 01:11:21.028113 containerd[1969]: 2026-01-23 01:11:21.005 [INFO][3775] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 23 01:11:21.028113 containerd[1969]: 2026-01-23 01:11:21.005 [INFO][3775] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.46.3/26] IPv6=[] ContainerID="cdd1e31d9bf61ea252af79e96f207c38a2f104c3396b112a7add53e94415cec3" HandleID="k8s-pod-network.cdd1e31d9bf61ea252af79e96f207c38a2f104c3396b112a7add53e94415cec3" Workload="172.31.20.229-k8s-nfs--server--provisioner--0-eth0" Jan 23 01:11:21.032432 containerd[1969]: 2026-01-23 01:11:21.007 [INFO][3767] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cdd1e31d9bf61ea252af79e96f207c38a2f104c3396b112a7add53e94415cec3" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.20.229-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.20.229-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"966d6943-afd1-4256-8989-173755d34f1e", ResourceVersion:"1351", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 11, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-7c9b4c458c", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.20.229", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.46.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, 
HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:11:21.032432 containerd[1969]: 2026-01-23 01:11:21.007 [INFO][3767] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.46.3/32] ContainerID="cdd1e31d9bf61ea252af79e96f207c38a2f104c3396b112a7add53e94415cec3" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.20.229-k8s-nfs--server--provisioner--0-eth0" Jan 23 01:11:21.032432 containerd[1969]: 2026-01-23 01:11:21.007 [INFO][3767] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="cdd1e31d9bf61ea252af79e96f207c38a2f104c3396b112a7add53e94415cec3" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.20.229-k8s-nfs--server--provisioner--0-eth0" Jan 23 01:11:21.032432 containerd[1969]: 2026-01-23 01:11:21.011 [INFO][3767] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cdd1e31d9bf61ea252af79e96f207c38a2f104c3396b112a7add53e94415cec3" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.20.229-k8s-nfs--server--provisioner--0-eth0" Jan 23 01:11:21.032677 containerd[1969]: 2026-01-23 01:11:21.012 [INFO][3767] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="cdd1e31d9bf61ea252af79e96f207c38a2f104c3396b112a7add53e94415cec3" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.20.229-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.20.229-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"966d6943-afd1-4256-8989-173755d34f1e", ResourceVersion:"1351", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 11, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-7c9b4c458c", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.20.229", ContainerID:"cdd1e31d9bf61ea252af79e96f207c38a2f104c3396b112a7add53e94415cec3", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.46.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"1e:f3:18:be:ae:d5", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:11:21.032677 containerd[1969]: 2026-01-23 01:11:21.022 [INFO][3767] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cdd1e31d9bf61ea252af79e96f207c38a2f104c3396b112a7add53e94415cec3" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.20.229-k8s-nfs--server--provisioner--0-eth0" Jan 23 01:11:21.081954 containerd[1969]: time="2026-01-23T01:11:21.081816307Z" level=info msg="connecting to shim cdd1e31d9bf61ea252af79e96f207c38a2f104c3396b112a7add53e94415cec3" address="unix:///run/containerd/s/d1eaff9a3390b9d895cc497ba37637b099e138c128a539975c33424417818dfc" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:11:21.113502 systemd[1]: Started cri-containerd-cdd1e31d9bf61ea252af79e96f207c38a2f104c3396b112a7add53e94415cec3.scope - libcontainer container cdd1e31d9bf61ea252af79e96f207c38a2f104c3396b112a7add53e94415cec3. 
Jan 23 01:11:21.172439 containerd[1969]: time="2026-01-23T01:11:21.172385176Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:966d6943-afd1-4256-8989-173755d34f1e,Namespace:default,Attempt:0,} returns sandbox id \"cdd1e31d9bf61ea252af79e96f207c38a2f104c3396b112a7add53e94415cec3\"" Jan 23 01:11:21.176215 containerd[1969]: time="2026-01-23T01:11:21.176165205Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Jan 23 01:11:21.781163 kubelet[2490]: E0123 01:11:21.781054 2490 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:11:21.811909 kubelet[2490]: E0123 01:11:21.811851 2490 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:11:22.625561 systemd-networkd[1801]: cali60e51b789ff: Gained IPv6LL Jan 23 01:11:22.812405 kubelet[2490]: E0123 01:11:22.812344 2490 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:11:23.746940 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2453295403.mount: Deactivated successfully. Jan 23 01:11:23.813019 kubelet[2490]: E0123 01:11:23.812946 2490 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:11:24.813391 kubelet[2490]: E0123 01:11:24.813343 2490 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:11:25.193875 ntpd[2223]: Listen normally on 10 cali60e51b789ff [fe80::ecee:eeff:feee:eeee%8]:123 Jan 23 01:11:25.194341 ntpd[2223]: 23 Jan 01:11:25 ntpd[2223]: Listen normally on 10 cali60e51b789ff [fe80::ecee:eeff:feee:eeee%8]:123 Jan 23 01:11:25.814423 kubelet[2490]: E0123 01:11:25.814267 2490 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:11:25.855673 containerd[1969]: time="2026-01-23T01:11:25.855541641Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:11:25.857811 containerd[1969]: time="2026-01-23T01:11:25.857502586Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039406" Jan 23 01:11:25.860267 containerd[1969]: time="2026-01-23T01:11:25.860223295Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:11:25.865480 containerd[1969]: time="2026-01-23T01:11:25.865424231Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 4.689217583s" Jan 23 01:11:25.865480 containerd[1969]: time="2026-01-23T01:11:25.865462936Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Jan 23 01:11:25.867644 containerd[1969]: time="2026-01-23T01:11:25.867455939Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 01:11:25.876628 containerd[1969]: time="2026-01-23T01:11:25.876467154Z" level=info msg="CreateContainer within sandbox \"cdd1e31d9bf61ea252af79e96f207c38a2f104c3396b112a7add53e94415cec3\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Jan 23 01:11:25.879962 containerd[1969]: time="2026-01-23T01:11:25.879914029Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:11:25.900993 containerd[1969]: time="2026-01-23T01:11:25.900026663Z" level=info msg="Container 2e147f18e9a8c5e22b7d23e9e4a18c24a020081b9fd383d709739482bfb0bfb1: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:11:25.915631 containerd[1969]: time="2026-01-23T01:11:25.915583091Z" level=info msg="CreateContainer within sandbox \"cdd1e31d9bf61ea252af79e96f207c38a2f104c3396b112a7add53e94415cec3\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"2e147f18e9a8c5e22b7d23e9e4a18c24a020081b9fd383d709739482bfb0bfb1\"" Jan 23 01:11:25.916236 containerd[1969]: time="2026-01-23T01:11:25.916206927Z" level=info msg="StartContainer for \"2e147f18e9a8c5e22b7d23e9e4a18c24a020081b9fd383d709739482bfb0bfb1\"" Jan 23 01:11:25.917378 containerd[1969]: time="2026-01-23T01:11:25.917345538Z" level=info msg="connecting to shim 2e147f18e9a8c5e22b7d23e9e4a18c24a020081b9fd383d709739482bfb0bfb1" address="unix:///run/containerd/s/d1eaff9a3390b9d895cc497ba37637b099e138c128a539975c33424417818dfc" protocol=ttrpc version=3 Jan 23 01:11:25.943515 systemd[1]: Started cri-containerd-2e147f18e9a8c5e22b7d23e9e4a18c24a020081b9fd383d709739482bfb0bfb1.scope - libcontainer container 2e147f18e9a8c5e22b7d23e9e4a18c24a020081b9fd383d709739482bfb0bfb1. 
Jan 23 01:11:25.988177 containerd[1969]: time="2026-01-23T01:11:25.988136913Z" level=info msg="StartContainer for \"2e147f18e9a8c5e22b7d23e9e4a18c24a020081b9fd383d709739482bfb0bfb1\" returns successfully" Jan 23 01:11:26.112777 containerd[1969]: time="2026-01-23T01:11:26.112733841Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:11:26.115260 containerd[1969]: time="2026-01-23T01:11:26.115205700Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 01:11:26.115516 containerd[1969]: time="2026-01-23T01:11:26.115330093Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 23 01:11:26.117745 kubelet[2490]: E0123 01:11:26.117666 2490 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 01:11:26.117745 kubelet[2490]: E0123 01:11:26.117733 2490 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 01:11:26.118272 kubelet[2490]: E0123 01:11:26.117823 2490 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-hgbcs_calico-system(4c3cd778-85af-4d2a-a9f4-071f6d9e5f64): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 01:11:26.119274 containerd[1969]: time="2026-01-23T01:11:26.119197658Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 01:11:26.184131 kubelet[2490]: I0123 01:11:26.184067 2490 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.493260705 podStartE2EDuration="6.184050375s" podCreationTimestamp="2026-01-23 01:11:20 +0000 UTC" firstStartedPulling="2026-01-23 01:11:21.175719846 +0000 UTC m=+40.109071405" lastFinishedPulling="2026-01-23 01:11:25.866509529 +0000 UTC m=+44.799861075" observedRunningTime="2026-01-23 01:11:26.183652833 +0000 UTC m=+45.117004404" watchObservedRunningTime="2026-01-23 01:11:26.184050375 +0000 UTC m=+45.117401943" Jan 23 01:11:26.431377 containerd[1969]: time="2026-01-23T01:11:26.431211472Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:11:26.434271 containerd[1969]: time="2026-01-23T01:11:26.434027759Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" 
Jan 23 01:11:26.434271 containerd[1969]: time="2026-01-23T01:11:26.434049481Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 23 01:11:26.434530 kubelet[2490]: E0123 01:11:26.434415 2490 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 01:11:26.434530 kubelet[2490]: E0123 01:11:26.434453 2490 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 01:11:26.434633 kubelet[2490]: E0123 01:11:26.434545 2490 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-hgbcs_calico-system(4c3cd778-85af-4d2a-a9f4-071f6d9e5f64): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 01:11:26.434633 kubelet[2490]: E0123 01:11:26.434587 2490 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-hgbcs" podUID="4c3cd778-85af-4d2a-a9f4-071f6d9e5f64" Jan 23 01:11:26.815726 kubelet[2490]: E0123 01:11:26.815602 2490 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:11:27.816206 kubelet[2490]: E0123 01:11:27.816135 2490 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:11:28.816370 kubelet[2490]: E0123 01:11:28.816315 2490 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:11:29.817060 kubelet[2490]: E0123 01:11:29.816984 2490 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:11:30.817378 kubelet[2490]: E0123 01:11:30.817321 2490 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:11:31.818531 kubelet[2490]: E0123 01:11:31.818463 2490 file_linux.go:61] "Unable to 
read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:11:32.820505 kubelet[2490]: E0123 01:11:32.820450 2490 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:11:33.820640 kubelet[2490]: E0123 01:11:33.820592 2490 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:11:34.821377 kubelet[2490]: E0123 01:11:34.821300 2490 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:11:35.822354 kubelet[2490]: E0123 01:11:35.822303 2490 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:11:36.823157 kubelet[2490]: E0123 01:11:36.823091 2490 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:11:37.823672 kubelet[2490]: E0123 01:11:37.823614 2490 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:11:38.824781 kubelet[2490]: E0123 01:11:38.824737 2490 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:11:39.825071 kubelet[2490]: E0123 01:11:39.825018 2490 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:11:39.928504 kubelet[2490]: E0123 01:11:39.928451 2490 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-hgbcs" podUID="4c3cd778-85af-4d2a-a9f4-071f6d9e5f64" Jan 23 01:11:40.825198 kubelet[2490]: E0123 01:11:40.825144 2490 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:11:41.781200 kubelet[2490]: E0123 01:11:41.781124 2490 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:11:41.826374 kubelet[2490]: E0123 01:11:41.826321 2490 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:11:42.827611 kubelet[2490]: E0123 01:11:42.827481 2490 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:11:43.828075 kubelet[2490]: E0123 01:11:43.828038 2490 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:11:44.828736 kubelet[2490]: E0123 
01:11:44.828681 2490 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:11:45.828848 kubelet[2490]: E0123 01:11:45.828780 2490 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:11:46.273558 systemd[1]: Created slice kubepods-besteffort-pod1a7b7734_f1fd_49d6_8727_17a1e12fca7a.slice - libcontainer container kubepods-besteffort-pod1a7b7734_f1fd_49d6_8727_17a1e12fca7a.slice. Jan 23 01:11:46.378176 kubelet[2490]: I0123 01:11:46.378127 2490 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b7890f85-af19-4b73-be11-471892e8ae86\" (UniqueName: \"kubernetes.io/nfs/1a7b7734-f1fd-49d6-8727-17a1e12fca7a-pvc-b7890f85-af19-4b73-be11-471892e8ae86\") pod \"test-pod-1\" (UID: \"1a7b7734-f1fd-49d6-8727-17a1e12fca7a\") " pod="default/test-pod-1" Jan 23 01:11:46.378176 kubelet[2490]: I0123 01:11:46.378172 2490 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4l2vz\" (UniqueName: \"kubernetes.io/projected/1a7b7734-f1fd-49d6-8727-17a1e12fca7a-kube-api-access-4l2vz\") pod \"test-pod-1\" (UID: \"1a7b7734-f1fd-49d6-8727-17a1e12fca7a\") " pod="default/test-pod-1" Jan 23 01:11:46.529301 kernel: netfs: FS-Cache loaded Jan 23 01:11:46.599634 kernel: RPC: Registered named UNIX socket transport module. Jan 23 01:11:46.599764 kernel: RPC: Registered udp transport module. Jan 23 01:11:46.599797 kernel: RPC: Registered tcp transport module. Jan 23 01:11:46.599823 kernel: RPC: Registered tcp-with-tls transport module. Jan 23 01:11:46.600506 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Jan 23 01:11:46.829635 kubelet[2490]: E0123 01:11:46.829211 2490 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 01:11:46.848507 kernel: NFS: Registering the id_resolver key type Jan 23 01:11:46.848634 kernel: Key type id_resolver registered Jan 23 01:11:46.848674 kernel: Key type id_legacy registered Jan 23 01:11:46.880833 nfsidmap[4004]: libnfsidmap: Unable to determine the NFSv4 domain; Using 'localdomain' as the NFSv4 domain which means UIDs will be mapped to the 'Nobody-User' user defined in /etc/idmapd.conf Jan 23 01:11:46.882750 nfsidmap[4004]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Jan 23 01:11:46.885605 nfsidmap[4005]: libnfsidmap: Unable to determine the NFSv4 domain; Using 'localdomain' as the NFSv4 domain which means UIDs will be mapped to the 'Nobody-User' user defined in /etc/idmapd.conf Jan 23 01:11:46.885801 nfsidmap[4005]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Jan 23 01:11:46.898149 nfsrahead[4007]: setting /var/lib/kubelet/pods/1a7b7734-f1fd-49d6-8727-17a1e12fca7a/volumes/kubernetes.io~nfs/pvc-b7890f85-af19-4b73-be11-471892e8ae86 readahead to 128 Jan 23 01:11:47.179536 containerd[1969]: time="2026-01-23T01:11:47.179418503Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:1a7b7734-f1fd-49d6-8727-17a1e12fca7a,Namespace:default,Attempt:0,}" Jan 23 01:11:47.325301 systemd-networkd[1801]: cali5ec59c6bf6e: Link UP Jan 23 01:11:47.326814 systemd-networkd[1801]: cali5ec59c6bf6e: Gained carrier Jan 23 01:11:47.326985 (udev-worker)[3999]: Network interface NamePolicy= disabled on kernel command line. 
Jan 23 01:11:47.354267 containerd[1969]: 2026-01-23 01:11:47.233 [INFO][4009] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.20.229-k8s-test--pod--1-eth0 default 1a7b7734-f1fd-49d6-8727-17a1e12fca7a 1499 0 2026-01-23 01:11:21 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 172.31.20.229 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] [] }} ContainerID="9eef9da7f581bc2a64947bb018cda28dea7f23b4adc0c6e7411a495ea364779d" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.20.229-k8s-test--pod--1-"
Jan 23 01:11:47.354267 containerd[1969]: 2026-01-23 01:11:47.233 [INFO][4009] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9eef9da7f581bc2a64947bb018cda28dea7f23b4adc0c6e7411a495ea364779d" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.20.229-k8s-test--pod--1-eth0"
Jan 23 01:11:47.354267 containerd[1969]: 2026-01-23 01:11:47.265 [INFO][4020] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9eef9da7f581bc2a64947bb018cda28dea7f23b4adc0c6e7411a495ea364779d" HandleID="k8s-pod-network.9eef9da7f581bc2a64947bb018cda28dea7f23b4adc0c6e7411a495ea364779d" Workload="172.31.20.229-k8s-test--pod--1-eth0"
Jan 23 01:11:47.354267 containerd[1969]: 2026-01-23 01:11:47.265 [INFO][4020] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="9eef9da7f581bc2a64947bb018cda28dea7f23b4adc0c6e7411a495ea364779d" HandleID="k8s-pod-network.9eef9da7f581bc2a64947bb018cda28dea7f23b4adc0c6e7411a495ea364779d" Workload="172.31.20.229-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f060), Attrs:map[string]string{"namespace":"default", "node":"172.31.20.229", "pod":"test-pod-1", "timestamp":"2026-01-23 01:11:47.26507851 +0000 UTC"}, Hostname:"172.31.20.229", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jan 23 01:11:47.354267 containerd[1969]: 2026-01-23 01:11:47.265 [INFO][4020] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Jan 23 01:11:47.354267 containerd[1969]: 2026-01-23 01:11:47.265 [INFO][4020] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Jan 23 01:11:47.354267 containerd[1969]: 2026-01-23 01:11:47.265 [INFO][4020] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.20.229'
Jan 23 01:11:47.354267 containerd[1969]: 2026-01-23 01:11:47.275 [INFO][4020] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9eef9da7f581bc2a64947bb018cda28dea7f23b4adc0c6e7411a495ea364779d" host="172.31.20.229"
Jan 23 01:11:47.354267 containerd[1969]: 2026-01-23 01:11:47.282 [INFO][4020] ipam/ipam.go 394: Looking up existing affinities for host host="172.31.20.229"
Jan 23 01:11:47.354267 containerd[1969]: 2026-01-23 01:11:47.290 [INFO][4020] ipam/ipam.go 511: Trying affinity for 192.168.46.0/26 host="172.31.20.229"
Jan 23 01:11:47.354267 containerd[1969]: 2026-01-23 01:11:47.294 [INFO][4020] ipam/ipam.go 158: Attempting to load block cidr=192.168.46.0/26 host="172.31.20.229"
Jan 23 01:11:47.354267 containerd[1969]: 2026-01-23 01:11:47.298 [INFO][4020] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.46.0/26 host="172.31.20.229"
Jan 23 01:11:47.354267 containerd[1969]: 2026-01-23 01:11:47.298 [INFO][4020] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.46.0/26 handle="k8s-pod-network.9eef9da7f581bc2a64947bb018cda28dea7f23b4adc0c6e7411a495ea364779d" host="172.31.20.229"
Jan 23 01:11:47.354267 containerd[1969]: 2026-01-23 01:11:47.301 [INFO][4020] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.9eef9da7f581bc2a64947bb018cda28dea7f23b4adc0c6e7411a495ea364779d
Jan 23 01:11:47.354267 containerd[1969]: 2026-01-23 01:11:47.307 [INFO][4020] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.46.0/26 handle="k8s-pod-network.9eef9da7f581bc2a64947bb018cda28dea7f23b4adc0c6e7411a495ea364779d" host="172.31.20.229"
Jan 23 01:11:47.354267 containerd[1969]: 2026-01-23 01:11:47.318 [INFO][4020] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.46.4/26] block=192.168.46.0/26 handle="k8s-pod-network.9eef9da7f581bc2a64947bb018cda28dea7f23b4adc0c6e7411a495ea364779d" host="172.31.20.229"
Jan 23 01:11:47.354267 containerd[1969]: 2026-01-23 01:11:47.318 [INFO][4020] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.46.4/26] handle="k8s-pod-network.9eef9da7f581bc2a64947bb018cda28dea7f23b4adc0c6e7411a495ea364779d" host="172.31.20.229"
Jan 23 01:11:47.354267 containerd[1969]: 2026-01-23 01:11:47.318 [INFO][4020] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Jan 23 01:11:47.354267 containerd[1969]: 2026-01-23 01:11:47.318 [INFO][4020] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.46.4/26] IPv6=[] ContainerID="9eef9da7f581bc2a64947bb018cda28dea7f23b4adc0c6e7411a495ea364779d" HandleID="k8s-pod-network.9eef9da7f581bc2a64947bb018cda28dea7f23b4adc0c6e7411a495ea364779d" Workload="172.31.20.229-k8s-test--pod--1-eth0"
Jan 23 01:11:47.354267 containerd[1969]: 2026-01-23 01:11:47.320 [INFO][4009] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9eef9da7f581bc2a64947bb018cda28dea7f23b4adc0c6e7411a495ea364779d" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.20.229-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.20.229-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"1a7b7734-f1fd-49d6-8727-17a1e12fca7a", ResourceVersion:"1499", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 11, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.20.229", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.46.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jan 23 01:11:47.356776 containerd[1969]: 2026-01-23 01:11:47.320 [INFO][4009] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.46.4/32] ContainerID="9eef9da7f581bc2a64947bb018cda28dea7f23b4adc0c6e7411a495ea364779d" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.20.229-k8s-test--pod--1-eth0"
Jan 23 01:11:47.356776 containerd[1969]: 2026-01-23 01:11:47.320 [INFO][4009] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="9eef9da7f581bc2a64947bb018cda28dea7f23b4adc0c6e7411a495ea364779d" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.20.229-k8s-test--pod--1-eth0"
Jan 23 01:11:47.356776 containerd[1969]: 2026-01-23 01:11:47.327 [INFO][4009] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9eef9da7f581bc2a64947bb018cda28dea7f23b4adc0c6e7411a495ea364779d" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.20.229-k8s-test--pod--1-eth0"
Jan 23 01:11:47.356776 containerd[1969]: 2026-01-23 01:11:47.327 [INFO][4009] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9eef9da7f581bc2a64947bb018cda28dea7f23b4adc0c6e7411a495ea364779d" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.20.229-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.20.229-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"1a7b7734-f1fd-49d6-8727-17a1e12fca7a", ResourceVersion:"1499", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 11, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.20.229", ContainerID:"9eef9da7f581bc2a64947bb018cda28dea7f23b4adc0c6e7411a495ea364779d", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.46.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"ca:bf:45:79:72:fe", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jan 23 01:11:47.356776 containerd[1969]: 2026-01-23 01:11:47.352 [INFO][4009] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9eef9da7f581bc2a64947bb018cda28dea7f23b4adc0c6e7411a495ea364779d" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.20.229-k8s-test--pod--1-eth0"
Jan 23 01:11:47.416880 containerd[1969]: time="2026-01-23T01:11:47.416830328Z" level=info msg="connecting to shim 9eef9da7f581bc2a64947bb018cda28dea7f23b4adc0c6e7411a495ea364779d" address="unix:///run/containerd/s/7d6675ac198ae1ce5298cdb15d77a924ed381360783f56ef01f007fd60395b85" namespace=k8s.io protocol=ttrpc version=3
Jan 23 01:11:47.452513 systemd[1]: Started cri-containerd-9eef9da7f581bc2a64947bb018cda28dea7f23b4adc0c6e7411a495ea364779d.scope - libcontainer container 9eef9da7f581bc2a64947bb018cda28dea7f23b4adc0c6e7411a495ea364779d.
Jan 23 01:11:47.511007 containerd[1969]: time="2026-01-23T01:11:47.510953824Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:1a7b7734-f1fd-49d6-8727-17a1e12fca7a,Namespace:default,Attempt:0,} returns sandbox id \"9eef9da7f581bc2a64947bb018cda28dea7f23b4adc0c6e7411a495ea364779d\""
Jan 23 01:11:47.512432 containerd[1969]: time="2026-01-23T01:11:47.512398423Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Jan 23 01:11:47.829656 kubelet[2490]: E0123 01:11:47.829510 2490 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:11:47.841544 containerd[1969]: time="2026-01-23T01:11:47.841474451Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 01:11:47.843096 containerd[1969]: time="2026-01-23T01:11:47.843034270Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61"
Jan 23 01:11:47.845470 containerd[1969]: time="2026-01-23T01:11:47.845431898Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:94a30ed18e45363486a55379bd1fbb8c479f36524ae816779f3553afe6b787ed\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:32c5137cb8c7cf61e75836f150e983b9be21fecc642ada89fd936c8cd6c0faa0\", size \"63836358\" in 333.001584ms"
Jan 23 01:11:47.845470 containerd[1969]: time="2026-01-23T01:11:47.845464763Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:94a30ed18e45363486a55379bd1fbb8c479f36524ae816779f3553afe6b787ed\""
Jan 23 01:11:47.851527 containerd[1969]: time="2026-01-23T01:11:47.851483177Z" level=info msg="CreateContainer within sandbox \"9eef9da7f581bc2a64947bb018cda28dea7f23b4adc0c6e7411a495ea364779d\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Jan 23 01:11:47.869493 containerd[1969]: time="2026-01-23T01:11:47.869348288Z" level=info msg="Container 7aa510185857b662f6d83a90fe4c4a225417bf5a3e4cecac8f02a4f2ed345da3: CDI devices from CRI Config.CDIDevices: []"
Jan 23 01:11:47.882403 containerd[1969]: time="2026-01-23T01:11:47.882352664Z" level=info msg="CreateContainer within sandbox \"9eef9da7f581bc2a64947bb018cda28dea7f23b4adc0c6e7411a495ea364779d\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"7aa510185857b662f6d83a90fe4c4a225417bf5a3e4cecac8f02a4f2ed345da3\""
Jan 23 01:11:47.883055 containerd[1969]: time="2026-01-23T01:11:47.882989433Z" level=info msg="StartContainer for \"7aa510185857b662f6d83a90fe4c4a225417bf5a3e4cecac8f02a4f2ed345da3\""
Jan 23 01:11:47.884100 containerd[1969]: time="2026-01-23T01:11:47.884060239Z" level=info msg="connecting to shim 7aa510185857b662f6d83a90fe4c4a225417bf5a3e4cecac8f02a4f2ed345da3" address="unix:///run/containerd/s/7d6675ac198ae1ce5298cdb15d77a924ed381360783f56ef01f007fd60395b85" protocol=ttrpc version=3
Jan 23 01:11:47.914584 systemd[1]: Started cri-containerd-7aa510185857b662f6d83a90fe4c4a225417bf5a3e4cecac8f02a4f2ed345da3.scope - libcontainer container 7aa510185857b662f6d83a90fe4c4a225417bf5a3e4cecac8f02a4f2ed345da3.
Jan 23 01:11:47.963106 containerd[1969]: time="2026-01-23T01:11:47.963060521Z" level=info msg="StartContainer for \"7aa510185857b662f6d83a90fe4c4a225417bf5a3e4cecac8f02a4f2ed345da3\" returns successfully"
Jan 23 01:11:48.242189 kubelet[2490]: I0123 01:11:48.242118 2490 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=26.907810621 podStartE2EDuration="27.242102749s" podCreationTimestamp="2026-01-23 01:11:21 +0000 UTC" firstStartedPulling="2026-01-23 01:11:47.512094188 +0000 UTC m=+66.445445734" lastFinishedPulling="2026-01-23 01:11:47.846386315 +0000 UTC m=+66.779737862" observedRunningTime="2026-01-23 01:11:48.241672757 +0000 UTC m=+67.175024325" watchObservedRunningTime="2026-01-23 01:11:48.242102749 +0000 UTC m=+67.175454317"
Jan 23 01:11:48.830183 kubelet[2490]: E0123 01:11:48.830102 2490 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:11:48.992855 systemd-networkd[1801]: cali5ec59c6bf6e: Gained IPv6LL
Jan 23 01:11:49.831045 kubelet[2490]: E0123 01:11:49.830991 2490 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:11:50.831559 kubelet[2490]: E0123 01:11:50.831518 2490 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:11:51.194039 ntpd[2223]: Listen normally on 11 cali5ec59c6bf6e [fe80::ecee:eeff:feee:eeee%9]:123
Jan 23 01:11:51.194474 ntpd[2223]: 23 Jan 01:11:51 ntpd[2223]: Listen normally on 11 cali5ec59c6bf6e [fe80::ecee:eeff:feee:eeee%9]:123
Jan 23 01:11:51.832352 kubelet[2490]: E0123 01:11:51.832309 2490 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:11:51.928550 containerd[1969]: time="2026-01-23T01:11:51.928519319Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Jan 23 01:11:52.218528 containerd[1969]: time="2026-01-23T01:11:52.218422666Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 23 01:11:52.220614 containerd[1969]: time="2026-01-23T01:11:52.220547898Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Jan 23 01:11:52.220729 containerd[1969]: time="2026-01-23T01:11:52.220633018Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69"
Jan 23 01:11:52.220930 kubelet[2490]: E0123 01:11:52.220889 2490 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Jan 23 01:11:52.221072 kubelet[2490]: E0123 01:11:52.220934 2490 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Jan 23 01:11:52.221072 kubelet[2490]: E0123 01:11:52.221039 2490 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-hgbcs_calico-system(4c3cd778-85af-4d2a-a9f4-071f6d9e5f64): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Jan 23 01:11:52.221810 containerd[1969]: time="2026-01-23T01:11:52.221784767Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Jan 23 01:11:52.496910 containerd[1969]: time="2026-01-23T01:11:52.496792265Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 23 01:11:52.499508 containerd[1969]: time="2026-01-23T01:11:52.499436176Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Jan 23 01:11:52.499700 containerd[1969]: time="2026-01-23T01:11:52.499523992Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93"
Jan 23 01:11:52.499751 kubelet[2490]: E0123 01:11:52.499670 2490 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Jan 23 01:11:52.499751 kubelet[2490]: E0123 01:11:52.499706 2490 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Jan 23 01:11:52.499832 kubelet[2490]: E0123 01:11:52.499771 2490 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-hgbcs_calico-system(4c3cd778-85af-4d2a-a9f4-071f6d9e5f64): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Jan 23 01:11:52.499832 kubelet[2490]: E0123 01:11:52.499810 2490 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-hgbcs" podUID="4c3cd778-85af-4d2a-a9f4-071f6d9e5f64"
Jan 23 01:11:52.832881 kubelet[2490]: E0123 01:11:52.832755 2490 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:11:53.833709 kubelet[2490]: E0123 01:11:53.833628 2490 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:11:54.834358 kubelet[2490]: E0123 01:11:54.834186 2490 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:11:55.835482 kubelet[2490]: E0123 01:11:55.835378 2490 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 01:11:56.835901 kubelet[2490]: E0123 01:11:56.835798 2490 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"