Jan 17 00:22:15.984217 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 16 22:25:55 -00 2026
Jan 17 00:22:15.984260 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=5950c0a3c50d11b7bc07a3e3bf06049ed0b5a605b5e0b52a981b78f1c63eeedd
Jan 17 00:22:15.984281 kernel: BIOS-provided physical RAM map:
Jan 17 00:22:15.984293 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 17 00:22:15.984303 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable
Jan 17 00:22:15.984313 kernel: BIOS-e820: [mem 0x00000000786ce000-0x00000000787cdfff] type 20
Jan 17 00:22:15.984326 kernel: BIOS-e820: [mem 0x00000000787ce000-0x000000007894dfff] reserved
Jan 17 00:22:15.984338 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
Jan 17 00:22:15.984351 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
Jan 17 00:22:15.984365 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable
Jan 17 00:22:15.984377 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
Jan 17 00:22:15.984389 kernel: NX (Execute Disable) protection: active
Jan 17 00:22:15.984400 kernel: APIC: Static calls initialized
Jan 17 00:22:15.984412 kernel: efi: EFI v2.7 by EDK II
Jan 17 00:22:15.984426 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x77015518
Jan 17 00:22:15.984442 kernel: SMBIOS 2.7 present.
Jan 17 00:22:15.984456 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Jan 17 00:22:15.984469 kernel: Hypervisor detected: KVM
Jan 17 00:22:15.984482 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 17 00:22:15.984495 kernel: kvm-clock: using sched offset of 4415835600 cycles
Jan 17 00:22:15.984509 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 17 00:22:15.984522 kernel: tsc: Detected 2499.996 MHz processor
Jan 17 00:22:15.984537 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 17 00:22:15.984551 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 17 00:22:15.984566 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000
Jan 17 00:22:15.984585 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jan 17 00:22:15.984600 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 17 00:22:15.984615 kernel: Using GB pages for direct mapping
Jan 17 00:22:15.984630 kernel: Secure boot disabled
Jan 17 00:22:15.984645 kernel: ACPI: Early table checksum verification disabled
Jan 17 00:22:15.984659 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON)
Jan 17 00:22:15.984674 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013)
Jan 17 00:22:15.984809 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Jan 17 00:22:15.984825 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Jan 17 00:22:15.984845 kernel: ACPI: FACS 0x00000000789D0000 000040
Jan 17 00:22:15.984859 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Jan 17 00:22:15.984874 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Jan 17 00:22:15.984889 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Jan 17 00:22:15.984904 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Jan 17 00:22:15.984920 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Jan 17 00:22:15.984941 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Jan 17 00:22:15.984957 kernel: ACPI: SSDT 0x0000000078952000 00007F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Jan 17 00:22:15.984972 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013)
Jan 17 00:22:15.984987 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113]
Jan 17 00:22:15.985002 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159]
Jan 17 00:22:15.985016 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f]
Jan 17 00:22:15.985029 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027]
Jan 17 00:22:15.985043 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b]
Jan 17 00:22:15.985060 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075]
Jan 17 00:22:15.985073 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f]
Jan 17 00:22:15.985087 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037]
Jan 17 00:22:15.985102 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758]
Jan 17 00:22:15.985116 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x7895207e]
Jan 17 00:22:15.985131 kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037]
Jan 17 00:22:15.985147 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jan 17 00:22:15.985161 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jan 17 00:22:15.985174 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Jan 17 00:22:15.985205 kernel: NUMA: Initialized distance table, cnt=1
Jan 17 00:22:15.985217 kernel: NODE_DATA(0) allocated [mem 0x7a8f0000-0x7a8f5fff]
Jan 17 00:22:15.985231 kernel: Zone ranges:
Jan 17 00:22:15.985243 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 17 00:22:15.985256 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff]
Jan 17 00:22:15.985272 kernel: Normal empty
Jan 17 00:22:15.985287 kernel: Movable zone start for each node
Jan 17 00:22:15.985302 kernel: Early memory node ranges
Jan 17 00:22:15.985318 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jan 17 00:22:15.985337 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff]
Jan 17 00:22:15.985353 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff]
Jan 17 00:22:15.985369 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff]
Jan 17 00:22:15.985383 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 17 00:22:15.985396 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jan 17 00:22:15.985410 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Jan 17 00:22:15.985423 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges
Jan 17 00:22:15.985437 kernel: ACPI: PM-Timer IO Port: 0xb008
Jan 17 00:22:15.985450 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 17 00:22:15.985464 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Jan 17 00:22:15.985480 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 17 00:22:15.985494 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 17 00:22:15.985508 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 17 00:22:15.985521 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 17 00:22:15.985535 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 17 00:22:15.985548 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 17 00:22:15.985562 kernel: TSC deadline timer available
Jan 17 00:22:15.985575 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jan 17 00:22:15.985588 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 17 00:22:15.985605 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices
Jan 17 00:22:15.985619 kernel: Booting paravirtualized kernel on KVM
Jan 17 00:22:15.985633 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 17 00:22:15.985648 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 17 00:22:15.985662 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576
Jan 17 00:22:15.985676 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152
Jan 17 00:22:15.985689 kernel: pcpu-alloc: [0] 0 1
Jan 17 00:22:15.985703 kernel: kvm-guest: PV spinlocks enabled
Jan 17 00:22:15.985716 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 17 00:22:15.985736 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=5950c0a3c50d11b7bc07a3e3bf06049ed0b5a605b5e0b52a981b78f1c63eeedd
Jan 17 00:22:15.985751 kernel: random: crng init done
Jan 17 00:22:15.985764 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 17 00:22:15.985778 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 17 00:22:15.985792 kernel: Fallback order for Node 0: 0
Jan 17 00:22:15.985806 kernel: Built 1 zonelists, mobility grouping on. Total pages: 501318
Jan 17 00:22:15.985821 kernel: Policy zone: DMA32
Jan 17 00:22:15.985835 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 17 00:22:15.985853 kernel: Memory: 1874628K/2037804K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42884K init, 2312K bss, 162916K reserved, 0K cma-reserved)
Jan 17 00:22:15.985867 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 17 00:22:15.985881 kernel: Kernel/User page tables isolation: enabled
Jan 17 00:22:15.985895 kernel: ftrace: allocating 37989 entries in 149 pages
Jan 17 00:22:15.985909 kernel: ftrace: allocated 149 pages with 4 groups
Jan 17 00:22:15.985924 kernel: Dynamic Preempt: voluntary
Jan 17 00:22:15.985939 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 17 00:22:15.985954 kernel: rcu: RCU event tracing is enabled.
Jan 17 00:22:15.985969 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 17 00:22:15.985986 kernel: Trampoline variant of Tasks RCU enabled.
Jan 17 00:22:15.986001 kernel: Rude variant of Tasks RCU enabled.
Jan 17 00:22:15.986015 kernel: Tracing variant of Tasks RCU enabled.
Jan 17 00:22:15.986029 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 17 00:22:15.986044 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 17 00:22:15.986058 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jan 17 00:22:15.986073 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 17 00:22:15.986101 kernel: Console: colour dummy device 80x25
Jan 17 00:22:15.986116 kernel: printk: console [tty0] enabled
Jan 17 00:22:15.986132 kernel: printk: console [ttyS0] enabled
Jan 17 00:22:15.986147 kernel: ACPI: Core revision 20230628
Jan 17 00:22:15.986163 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Jan 17 00:22:15.988213 kernel: APIC: Switch to symmetric I/O mode setup
Jan 17 00:22:15.988237 kernel: x2apic enabled
Jan 17 00:22:15.988252 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 17 00:22:15.988268 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Jan 17 00:22:15.988283 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499996)
Jan 17 00:22:15.988304 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Jan 17 00:22:15.988318 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Jan 17 00:22:15.988333 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 17 00:22:15.988347 kernel: Spectre V2 : Mitigation: Retpolines
Jan 17 00:22:15.988362 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 17 00:22:15.988376 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Jan 17 00:22:15.988405 kernel: RETBleed: Vulnerable
Jan 17 00:22:15.988419 kernel: Speculative Store Bypass: Vulnerable
Jan 17 00:22:15.988432 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 17 00:22:15.988447 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 17 00:22:15.988464 kernel: GDS: Unknown: Dependent on hypervisor status
Jan 17 00:22:15.988479 kernel: active return thunk: its_return_thunk
Jan 17 00:22:15.988493 kernel: ITS: Mitigation: Aligned branch/return thunks
Jan 17 00:22:15.988507 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 17 00:22:15.988521 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 17 00:22:15.988535 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 17 00:22:15.988549 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Jan 17 00:22:15.988564 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Jan 17 00:22:15.988578 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Jan 17 00:22:15.988592 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Jan 17 00:22:15.988607 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Jan 17 00:22:15.988624 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Jan 17 00:22:15.988639 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 17 00:22:15.988654 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Jan 17 00:22:15.988669 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Jan 17 00:22:15.988694 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Jan 17 00:22:15.988708 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Jan 17 00:22:15.988722 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Jan 17 00:22:15.988736 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Jan 17 00:22:15.988749 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Jan 17 00:22:15.988763 kernel: Freeing SMP alternatives memory: 32K
Jan 17 00:22:15.988776 kernel: pid_max: default: 32768 minimum: 301
Jan 17 00:22:15.988795 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 17 00:22:15.988809 kernel: landlock: Up and running.
Jan 17 00:22:15.988823 kernel: SELinux: Initializing.
Jan 17 00:22:15.988838 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 17 00:22:15.988853 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 17 00:22:15.988867 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Jan 17 00:22:15.988882 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 17 00:22:15.988896 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 17 00:22:15.988911 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 17 00:22:15.988925 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Jan 17 00:22:15.988942 kernel: signal: max sigframe size: 3632
Jan 17 00:22:15.988958 kernel: rcu: Hierarchical SRCU implementation.
Jan 17 00:22:15.988975 kernel: rcu: Max phase no-delay instances is 400.
Jan 17 00:22:15.988989 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 17 00:22:15.989004 kernel: smp: Bringing up secondary CPUs ...
Jan 17 00:22:15.989020 kernel: smpboot: x86: Booting SMP configuration:
Jan 17 00:22:15.989035 kernel: .... node #0, CPUs: #1
Jan 17 00:22:15.989050 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Jan 17 00:22:15.989068 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Jan 17 00:22:15.989087 kernel: smp: Brought up 1 node, 2 CPUs
Jan 17 00:22:15.989102 kernel: smpboot: Max logical packages: 1
Jan 17 00:22:15.989118 kernel: smpboot: Total of 2 processors activated (9999.98 BogoMIPS)
Jan 17 00:22:15.989133 kernel: devtmpfs: initialized
Jan 17 00:22:15.989148 kernel: x86/mm: Memory block size: 128MB
Jan 17 00:22:15.989163 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes)
Jan 17 00:22:15.989190 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 17 00:22:15.990480 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 17 00:22:15.990498 kernel: pinctrl core: initialized pinctrl subsystem
Jan 17 00:22:15.990520 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 17 00:22:15.990535 kernel: audit: initializing netlink subsys (disabled)
Jan 17 00:22:15.990551 kernel: audit: type=2000 audit(1768609335.451:1): state=initialized audit_enabled=0 res=1
Jan 17 00:22:15.990567 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 17 00:22:15.990583 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 17 00:22:15.990599 kernel: cpuidle: using governor menu
Jan 17 00:22:15.990615 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 17 00:22:15.990630 kernel: dca service started, version 1.12.1
Jan 17 00:22:15.990645 kernel: PCI: Using configuration type 1 for base access
Jan 17 00:22:15.990665 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 17 00:22:15.990681 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 17 00:22:15.990697 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 17 00:22:15.990713 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 17 00:22:15.990729 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 17 00:22:15.990744 kernel: ACPI: Added _OSI(Module Device)
Jan 17 00:22:15.990759 kernel: ACPI: Added _OSI(Processor Device)
Jan 17 00:22:15.990774 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 17 00:22:15.990786 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Jan 17 00:22:15.990801 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 17 00:22:15.990816 kernel: ACPI: Interpreter enabled
Jan 17 00:22:15.990829 kernel: ACPI: PM: (supports S0 S5)
Jan 17 00:22:15.990842 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 17 00:22:15.990856 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 17 00:22:15.990870 kernel: PCI: Using E820 reservations for host bridge windows
Jan 17 00:22:15.990883 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jan 17 00:22:15.990896 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 17 00:22:15.991133 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jan 17 00:22:15.991903 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Jan 17 00:22:15.992065 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Jan 17 00:22:15.992088 kernel: acpiphp: Slot [3] registered
Jan 17 00:22:15.992105 kernel: acpiphp: Slot [4] registered
Jan 17 00:22:15.992121 kernel: acpiphp: Slot [5] registered
Jan 17 00:22:15.992137 kernel: acpiphp: Slot [6] registered
Jan 17 00:22:15.992151 kernel: acpiphp: Slot [7] registered
Jan 17 00:22:15.992173 kernel: acpiphp: Slot [8] registered
Jan 17 00:22:15.992200 kernel: acpiphp: Slot [9] registered
Jan 17 00:22:15.992214 kernel: acpiphp: Slot [10] registered
Jan 17 00:22:15.993239 kernel: acpiphp: Slot [11] registered
Jan 17 00:22:15.993259 kernel: acpiphp: Slot [12] registered
Jan 17 00:22:15.993275 kernel: acpiphp: Slot [13] registered
Jan 17 00:22:15.993291 kernel: acpiphp: Slot [14] registered
Jan 17 00:22:15.993307 kernel: acpiphp: Slot [15] registered
Jan 17 00:22:15.993322 kernel: acpiphp: Slot [16] registered
Jan 17 00:22:15.993338 kernel: acpiphp: Slot [17] registered
Jan 17 00:22:15.993359 kernel: acpiphp: Slot [18] registered
Jan 17 00:22:15.993375 kernel: acpiphp: Slot [19] registered
Jan 17 00:22:15.993391 kernel: acpiphp: Slot [20] registered
Jan 17 00:22:15.993407 kernel: acpiphp: Slot [21] registered
Jan 17 00:22:15.993422 kernel: acpiphp: Slot [22] registered
Jan 17 00:22:15.993439 kernel: acpiphp: Slot [23] registered
Jan 17 00:22:15.993455 kernel: acpiphp: Slot [24] registered
Jan 17 00:22:15.993472 kernel: acpiphp: Slot [25] registered
Jan 17 00:22:15.993488 kernel: acpiphp: Slot [26] registered
Jan 17 00:22:15.993509 kernel: acpiphp: Slot [27] registered
Jan 17 00:22:15.993525 kernel: acpiphp: Slot [28] registered
Jan 17 00:22:15.993541 kernel: acpiphp: Slot [29] registered
Jan 17 00:22:15.993558 kernel: acpiphp: Slot [30] registered
Jan 17 00:22:15.993574 kernel: acpiphp: Slot [31] registered
Jan 17 00:22:15.993591 kernel: PCI host bridge to bus 0000:00
Jan 17 00:22:15.993793 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 17 00:22:15.993936 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 17 00:22:15.994077 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 17 00:22:15.995264 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Jan 17 00:22:15.995414 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window]
Jan 17 00:22:15.995539 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 17 00:22:15.995701 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Jan 17 00:22:15.995851 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Jan 17 00:22:15.995996 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
Jan 17 00:22:15.996139 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Jan 17 00:22:16.000465 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Jan 17 00:22:16.000677 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Jan 17 00:22:16.000856 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Jan 17 00:22:16.001021 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Jan 17 00:22:16.001233 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Jan 17 00:22:16.001364 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Jan 17 00:22:16.001544 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
Jan 17 00:22:16.001698 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x80000000-0x803fffff pref]
Jan 17 00:22:16.001830 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Jan 17 00:22:16.001960 kernel: pci 0000:00:03.0: BAR 0: assigned to efifb
Jan 17 00:22:16.002093 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 17 00:22:16.002264 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Jan 17 00:22:16.002436 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80404000-0x80407fff]
Jan 17 00:22:16.002612 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Jan 17 00:22:16.002753 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80400000-0x80403fff]
Jan 17 00:22:16.002773 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 17 00:22:16.002789 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 17 00:22:16.002803 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 17 00:22:16.002816 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 17 00:22:16.002830 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 17 00:22:16.002849 kernel: iommu: Default domain type: Translated
Jan 17 00:22:16.002861 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 17 00:22:16.002875 kernel: efivars: Registered efivars operations
Jan 17 00:22:16.002888 kernel: PCI: Using ACPI for IRQ routing
Jan 17 00:22:16.002901 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 17 00:22:16.002918 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff]
Jan 17 00:22:16.002932 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff]
Jan 17 00:22:16.003078 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Jan 17 00:22:16.003297 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Jan 17 00:22:16.003445 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 17 00:22:16.003465 kernel: vgaarb: loaded
Jan 17 00:22:16.003481 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Jan 17 00:22:16.003497 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Jan 17 00:22:16.003513 kernel: clocksource: Switched to clocksource kvm-clock
Jan 17 00:22:16.003529 kernel: VFS: Disk quotas dquot_6.6.0
Jan 17 00:22:16.003544 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 17 00:22:16.003560 kernel: pnp: PnP ACPI init
Jan 17 00:22:16.003575 kernel: pnp: PnP ACPI: found 5 devices
Jan 17 00:22:16.003595 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 17 00:22:16.003610 kernel: NET: Registered PF_INET protocol family
Jan 17 00:22:16.003626 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 17 00:22:16.003642 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jan 17 00:22:16.003658 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 17 00:22:16.003673 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 17 00:22:16.003689 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jan 17 00:22:16.003705 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jan 17 00:22:16.003724 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 17 00:22:16.003740 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 17 00:22:16.003756 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 17 00:22:16.003772 kernel: NET: Registered PF_XDP protocol family
Jan 17 00:22:16.003920 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 17 00:22:16.004046 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 17 00:22:16.004167 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 17 00:22:16.004300 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Jan 17 00:22:16.004418 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window]
Jan 17 00:22:16.004562 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 17 00:22:16.004581 kernel: PCI: CLS 0 bytes, default 64
Jan 17 00:22:16.004597 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jan 17 00:22:16.004612 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Jan 17 00:22:16.004627 kernel: clocksource: Switched to clocksource tsc
Jan 17 00:22:16.004642 kernel: Initialise system trusted keyrings
Jan 17 00:22:16.004656 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Jan 17 00:22:16.004671 kernel: Key type asymmetric registered
Jan 17 00:22:16.004697 kernel: Asymmetric key parser 'x509' registered
Jan 17 00:22:16.004712 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 17 00:22:16.004727 kernel: io scheduler mq-deadline registered
Jan 17 00:22:16.004741 kernel: io scheduler kyber registered
Jan 17 00:22:16.004756 kernel: io scheduler bfq registered
Jan 17 00:22:16.004771 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 17 00:22:16.004785 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 17 00:22:16.004799 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 17 00:22:16.004814 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 17 00:22:16.004832 kernel: i8042: Warning: Keylock active
Jan 17 00:22:16.004846 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 17 00:22:16.004860 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 17 00:22:16.005009 kernel: rtc_cmos 00:00: RTC can wake from S4
Jan 17 00:22:16.005135 kernel: rtc_cmos 00:00: registered as rtc0
Jan 17 00:22:16.006768 kernel: rtc_cmos 00:00: setting system clock to 2026-01-17T00:22:15 UTC (1768609335)
Jan 17 00:22:16.006965 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Jan 17 00:22:16.006987 kernel: intel_pstate: CPU model not supported
Jan 17 00:22:16.007013 kernel: efifb: probing for efifb
Jan 17 00:22:16.007030 kernel: efifb: framebuffer at 0x80000000, using 1920k, total 1920k
Jan 17 00:22:16.007047 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1
Jan 17 00:22:16.007064 kernel: efifb: scrolling: redraw
Jan 17 00:22:16.007081 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jan 17 00:22:16.007100 kernel: Console: switching to colour frame buffer device 100x37
Jan 17 00:22:16.007116 kernel: fb0: EFI VGA frame buffer device
Jan 17 00:22:16.007132 kernel: pstore: Using crash dump compression: deflate
Jan 17 00:22:16.007150 kernel: pstore: Registered efi_pstore as persistent store backend
Jan 17 00:22:16.007172 kernel: NET: Registered PF_INET6 protocol family
Jan 17 00:22:16.008704 kernel: Segment Routing with IPv6
Jan 17 00:22:16.008726 kernel: In-situ OAM (IOAM) with IPv6
Jan 17 00:22:16.008743 kernel: NET: Registered PF_PACKET protocol family
Jan 17 00:22:16.008759 kernel: Key type dns_resolver registered
Jan 17 00:22:16.008775 kernel: IPI shorthand broadcast: enabled
Jan 17 00:22:16.008820 kernel: sched_clock: Marking stable (494087262, 126810850)->(692258977, -71360865)
Jan 17 00:22:16.008838 kernel: registered taskstats version 1
Jan 17 00:22:16.008855 kernel: Loading compiled-in X.509 certificates
Jan 17 00:22:16.008921 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: b6a847a3a522371f15b0d5425f12279a240740e4'
Jan 17 00:22:16.008938 kernel: Key type .fscrypt registered
Jan 17 00:22:16.009043 kernel: Key type fscrypt-provisioning registered
Jan 17 00:22:16.009061 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 17 00:22:16.009076 kernel: ima: Allocated hash algorithm: sha1
Jan 17 00:22:16.009093 kernel: ima: No architecture policies found
Jan 17 00:22:16.009108 kernel: clk: Disabling unused clocks
Jan 17 00:22:16.012232 kernel: Freeing unused kernel image (initmem) memory: 42884K
Jan 17 00:22:16.012258 kernel: Write protecting the kernel read-only data: 36864k
Jan 17 00:22:16.012279 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Jan 17 00:22:16.012296 kernel: Run /init as init process
Jan 17 00:22:16.012313 kernel: with arguments:
Jan 17 00:22:16.012330 kernel: /init
Jan 17 00:22:16.012346 kernel: with environment:
Jan 17 00:22:16.012362 kernel: HOME=/
Jan 17 00:22:16.012379 kernel: TERM=linux
Jan 17 00:22:16.012399 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 17 00:22:16.012423 systemd[1]: Detected virtualization amazon.
Jan 17 00:22:16.012441 systemd[1]: Detected architecture x86-64.
Jan 17 00:22:16.012457 systemd[1]: Running in initrd.
Jan 17 00:22:16.012474 systemd[1]: No hostname configured, using default hostname.
Jan 17 00:22:16.012490 systemd[1]: Hostname set to .
Jan 17 00:22:16.012508 systemd[1]: Initializing machine ID from VM UUID.
Jan 17 00:22:16.012525 systemd[1]: Queued start job for default target initrd.target.
Jan 17 00:22:16.012541 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 17 00:22:16.012561 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 17 00:22:16.012580 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 17 00:22:16.012600 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 17 00:22:16.012617 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 17 00:22:16.012638 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 17 00:22:16.012661 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 17 00:22:16.012679 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 17 00:22:16.012706 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 17 00:22:16.012723 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 17 00:22:16.012740 systemd[1]: Reached target paths.target - Path Units.
Jan 17 00:22:16.012757 systemd[1]: Reached target slices.target - Slice Units.
Jan 17 00:22:16.012775 systemd[1]: Reached target swap.target - Swaps.
Jan 17 00:22:16.012795 systemd[1]: Reached target timers.target - Timer Units.
Jan 17 00:22:16.012813 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 17 00:22:16.012830 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 17 00:22:16.012848 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 17 00:22:16.012865 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 17 00:22:16.012881 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 17 00:22:16.012896 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 17 00:22:16.012912 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 17 00:22:16.012927 systemd[1]: Reached target sockets.target - Socket Units.
Jan 17 00:22:16.012946 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 17 00:22:16.012964 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 17 00:22:16.012980 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 17 00:22:16.012996 systemd[1]: Starting systemd-fsck-usr.service...
Jan 17 00:22:16.013013 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 17 00:22:16.013030 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 17 00:22:16.013049 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 00:22:16.013068 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 17 00:22:16.013087 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 17 00:22:16.013160 systemd-journald[179]: Collecting audit messages is disabled.
Jan 17 00:22:16.014398 systemd[1]: Finished systemd-fsck-usr.service.
Jan 17 00:22:16.014426 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 17 00:22:16.014443 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 17 00:22:16.014465 systemd-journald[179]: Journal started
Jan 17 00:22:16.014501 systemd-journald[179]: Runtime Journal (/run/log/journal/ec21455657f5fd1702a6098aea7e765c) is 4.7M, max 38.2M, 33.4M free.
Jan 17 00:22:16.014585 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 00:22:16.017006 systemd-modules-load[180]: Inserted module 'overlay'
Jan 17 00:22:16.022204 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 17 00:22:16.024013 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 17 00:22:16.035549 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 00:22:16.040391 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 17 00:22:16.044366 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 17 00:22:16.071212 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 17 00:22:16.074663 kernel: Bridge firewalling registered
Jan 17 00:22:16.073495 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 17 00:22:16.073708 systemd-modules-load[180]: Inserted module 'br_netfilter'
Jan 17 00:22:16.077807 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 17 00:22:16.080119 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 00:22:16.089427 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 17 00:22:16.093392 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 17 00:22:16.094538 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 17 00:22:16.106657 dracut-cmdline[207]: dracut-dracut-053
Jan 17 00:22:16.111620 dracut-cmdline[207]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=5950c0a3c50d11b7bc07a3e3bf06049ed0b5a605b5e0b52a981b78f1c63eeedd
Jan 17 00:22:16.114963 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 17 00:22:16.127021 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 17 00:22:16.169188 systemd-resolved[230]: Positive Trust Anchors:
Jan 17 00:22:16.170252 systemd-resolved[230]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 17 00:22:16.170320 systemd-resolved[230]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 17 00:22:16.176633 systemd-resolved[230]: Defaulting to hostname 'linux'.
Jan 17 00:22:16.180276 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 17 00:22:16.181802 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 17 00:22:16.209218 kernel: SCSI subsystem initialized
Jan 17 00:22:16.219216 kernel: Loading iSCSI transport class v2.0-870.
Jan 17 00:22:16.230211 kernel: iscsi: registered transport (tcp)
Jan 17 00:22:16.252248 kernel: iscsi: registered transport (qla4xxx)
Jan 17 00:22:16.252333 kernel: QLogic iSCSI HBA Driver
Jan 17 00:22:16.292212 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 17 00:22:16.298382 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 17 00:22:16.325479 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 17 00:22:16.325572 kernel: device-mapper: uevent: version 1.0.3
Jan 17 00:22:16.325596 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 17 00:22:16.369214 kernel: raid6: avx512x4 gen() 17986 MB/s
Jan 17 00:22:16.387225 kernel: raid6: avx512x2 gen() 17969 MB/s
Jan 17 00:22:16.405214 kernel: raid6: avx512x1 gen() 18019 MB/s
Jan 17 00:22:16.423212 kernel: raid6: avx2x4 gen() 18105 MB/s
Jan 17 00:22:16.441217 kernel: raid6: avx2x2 gen() 17925 MB/s
Jan 17 00:22:16.459475 kernel: raid6: avx2x1 gen() 13526 MB/s
Jan 17 00:22:16.459547 kernel: raid6: using algorithm avx2x4 gen() 18105 MB/s
Jan 17 00:22:16.478467 kernel: raid6: .... xor() 7063 MB/s, rmw enabled
Jan 17 00:22:16.478534 kernel: raid6: using avx512x2 recovery algorithm
Jan 17 00:22:16.501218 kernel: xor: automatically using best checksumming function avx
Jan 17 00:22:16.664227 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 17 00:22:16.674719 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 17 00:22:16.680450 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 17 00:22:16.706299 systemd-udevd[397]: Using default interface naming scheme 'v255'.
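The raid6 lines above are the kernel benchmarking each available SIMD parity generator at boot and keeping the fastest one (recovery is benchmarked separately, which is why avx512x2 wins there while avx2x4 wins generation). The selection amounts to taking the maximum over the measured rates; a sketch using the exact figures from this log:

```python
# Throughput figures reported by the kernel's raid6 benchmark above (MB/s).
gen_results = {
    "avx512x4": 17986,
    "avx512x2": 17969,
    "avx512x1": 18019,
    "avx2x4": 18105,
    "avx2x2": 17925,
    "avx2x1": 13526,
}

# The kernel keeps whichever generator measured fastest on this CPU.
best = max(gen_results, key=gen_results.get)
print(f"using algorithm {best} gen() {gen_results[best]} MB/s")
```

Note the measurements are hardware- and boot-dependent: on another run of the same instance type, avx512x1 (18019 MB/s here) could edge out avx2x4.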
Jan 17 00:22:16.711852 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 17 00:22:16.720050 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 17 00:22:16.743496 dracut-pre-trigger[402]: rd.md=0: removing MD RAID activation
Jan 17 00:22:16.775734 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 17 00:22:16.782438 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 17 00:22:16.837308 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 17 00:22:16.846509 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 17 00:22:16.880710 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 17 00:22:16.885165 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 17 00:22:16.886775 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 17 00:22:16.887472 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 17 00:22:16.895484 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 17 00:22:16.927782 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 17 00:22:16.938208 kernel: ena 0000:00:05.0: ENA device version: 0.10
Jan 17 00:22:16.938500 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Jan 17 00:22:16.942209 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Jan 17 00:22:16.946605 kernel: cryptd: max_cpu_qlen set to 1000
Jan 17 00:22:16.954281 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:ce:ac:a3:f6:31
Jan 17 00:22:16.969241 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 17 00:22:16.970268 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 00:22:16.973043 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 00:22:16.975094 (udev-worker)[443]: Network interface NamePolicy= disabled on kernel command line.
Jan 17 00:22:16.975259 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 17 00:22:16.975490 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 00:22:16.976134 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 00:22:16.985526 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 00:22:16.993118 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 17 00:22:16.994665 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 00:22:17.006203 kernel: nvme nvme0: pci function 0000:00:04.0
Jan 17 00:22:17.006492 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jan 17 00:22:17.009535 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 00:22:17.020439 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 17 00:22:17.020473 kernel: AES CTR mode by8 optimization enabled
Jan 17 00:22:17.023226 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Jan 17 00:22:17.032333 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 17 00:22:17.032390 kernel: GPT:9289727 != 33554431
Jan 17 00:22:17.032403 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 17 00:22:17.034043 kernel: GPT:9289727 != 33554431
Jan 17 00:22:17.034090 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 17 00:22:17.034111 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 17 00:22:17.046799 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 00:22:17.056424 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
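The GPT warnings above are the usual signature of a disk image that was grown after it was built: the primary GPT header still records the backup header at the original image's last LBA (9289727), while on this EBS volume the last LBA is 33554431. The two numbers translate directly into the image and volume sizes; a quick sanity check, assuming the usual 512-byte logical sectors:

```python
# LBAs taken from the kernel's GPT warning above.
recorded_alt_lba = 9289727   # where the primary header says the backup header lives
actual_last_lba = 33554431   # the real last LBA of this volume

def lba_to_bytes(lba):
    # LBAs are zero-based, so a device whose last LBA is n holds n + 1 sectors.
    return (lba + 1) * 512

print(f"image size at build time: {lba_to_bytes(recorded_alt_lba) / 2**30:.2f} GiB")
print(f"volume size now:          {lba_to_bytes(actual_last_lba) / 2**30:.2f} GiB")
# The second figure comes out to exactly 16 GiB.
```

This is benign here: the disk-uuid.service entries later in the log rewrite the GPT headers, after which the partition table rescans cleanly.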
Jan 17 00:22:17.085407 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 00:22:17.136230 kernel: BTRFS: device fsid a67b5ac0-cdfd-426d-9386-e029282f433a devid 1 transid 33 /dev/nvme0n1p3 scanned by (udev-worker) (449)
Jan 17 00:22:17.150202 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/nvme0n1p6 scanned by (udev-worker) (454)
Jan 17 00:22:17.214132 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Jan 17 00:22:17.234799 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Jan 17 00:22:17.241013 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Jan 17 00:22:17.241604 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Jan 17 00:22:17.249102 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Jan 17 00:22:17.265498 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 17 00:22:17.272661 disk-uuid[629]: Primary Header is updated.
Jan 17 00:22:17.272661 disk-uuid[629]: Secondary Entries is updated.
Jan 17 00:22:17.272661 disk-uuid[629]: Secondary Header is updated.
Jan 17 00:22:17.281230 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 17 00:22:17.288262 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 17 00:22:17.301212 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 17 00:22:18.301447 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 17 00:22:18.301534 disk-uuid[630]: The operation has completed successfully.
Jan 17 00:22:18.447968 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 17 00:22:18.448120 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 17 00:22:18.470430 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 17 00:22:18.475372 sh[973]: Success
Jan 17 00:22:18.498629 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Jan 17 00:22:18.612113 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 17 00:22:18.625052 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 17 00:22:18.629840 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 17 00:22:18.662245 kernel: BTRFS info (device dm-0): first mount of filesystem a67b5ac0-cdfd-426d-9386-e029282f433a
Jan 17 00:22:18.662311 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 17 00:22:18.662326 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 17 00:22:18.664206 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 17 00:22:18.665500 kernel: BTRFS info (device dm-0): using free space tree
Jan 17 00:22:18.806217 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jan 17 00:22:18.819960 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 17 00:22:18.821396 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 17 00:22:18.829429 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 17 00:22:18.832398 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 17 00:22:18.860576 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0
Jan 17 00:22:18.860652 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Jan 17 00:22:18.864299 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 17 00:22:18.880234 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 17 00:22:18.893816 systemd[1]: mnt-oem.mount: Deactivated successfully.
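verity-setup.service above maps the /usr partition through dm-verity, so every block read from /dev/mapper/usr is checked against the verity.usrhash root hash on the kernel command line (the kernel reports picking its sha256-avx2 implementation for this). A toy, single-level sketch of the hash-tree principle follows; the real dm-verity format is a multi-level salted tree with a specific on-disk layout, so all names here are illustrative only:

```python
import hashlib

BLOCK = 4096  # dm-verity hashes fixed-size blocks (4 KiB by default)

def toy_verity_root(data: bytes) -> str:
    """Toy sketch: hash every data block, then hash the concatenated
    block digests into a single root. A verifier holding only the root
    can detect any modification of the underlying data."""
    blocks = [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]
    leaves = b"".join(hashlib.sha256(b).digest() for b in blocks)
    return hashlib.sha256(leaves).hexdigest()

data = b"\x00" * (3 * BLOCK)
root = toy_verity_root(data)

# Flipping a single bit anywhere changes the root, which is how a
# tampered /usr would be caught at read time.
tampered = b"\x01" + data[1:]
assert toy_verity_root(tampered) != root
```

The practical upshot is that /usr is cryptographically read-only: the root hash is baked into the (signed) kernel command line, so modifying the partition offline invalidates reads at boot.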
Jan 17 00:22:18.895916 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0
Jan 17 00:22:18.902623 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 17 00:22:18.909365 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 17 00:22:18.951832 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 17 00:22:18.961428 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 17 00:22:18.982124 systemd-networkd[1165]: lo: Link UP
Jan 17 00:22:18.982138 systemd-networkd[1165]: lo: Gained carrier
Jan 17 00:22:18.983850 systemd-networkd[1165]: Enumeration completed
Jan 17 00:22:18.984322 systemd-networkd[1165]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 00:22:18.984327 systemd-networkd[1165]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 17 00:22:18.985538 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 17 00:22:18.986871 systemd[1]: Reached target network.target - Network.
Jan 17 00:22:18.987643 systemd-networkd[1165]: eth0: Link UP
Jan 17 00:22:18.987649 systemd-networkd[1165]: eth0: Gained carrier
Jan 17 00:22:18.987662 systemd-networkd[1165]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 00:22:18.998292 systemd-networkd[1165]: eth0: DHCPv4 address 172.31.25.162/20, gateway 172.31.16.1 acquired from 172.31.16.1
Jan 17 00:22:19.253103 ignition[1104]: Ignition 2.19.0
Jan 17 00:22:19.253117 ignition[1104]: Stage: fetch-offline
Jan 17 00:22:19.253330 ignition[1104]: no configs at "/usr/lib/ignition/base.d"
Jan 17 00:22:19.253340 ignition[1104]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 17 00:22:19.253690 ignition[1104]: Ignition finished successfully
Jan 17 00:22:19.255519 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 17 00:22:19.260441 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 17 00:22:19.276271 ignition[1175]: Ignition 2.19.0
Jan 17 00:22:19.276286 ignition[1175]: Stage: fetch
Jan 17 00:22:19.276954 ignition[1175]: no configs at "/usr/lib/ignition/base.d"
Jan 17 00:22:19.276969 ignition[1175]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 17 00:22:19.277094 ignition[1175]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 17 00:22:19.286075 ignition[1175]: PUT result: OK
Jan 17 00:22:19.288846 ignition[1175]: parsed url from cmdline: ""
Jan 17 00:22:19.288859 ignition[1175]: no config URL provided
Jan 17 00:22:19.288868 ignition[1175]: reading system config file "/usr/lib/ignition/user.ign"
Jan 17 00:22:19.288888 ignition[1175]: no config at "/usr/lib/ignition/user.ign"
Jan 17 00:22:19.288907 ignition[1175]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 17 00:22:19.290404 ignition[1175]: PUT result: OK
Jan 17 00:22:19.290453 ignition[1175]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Jan 17 00:22:19.291350 ignition[1175]: GET result: OK
Jan 17 00:22:19.291408 ignition[1175]: parsing config with SHA512: c772447c17d46e82b69bdff843b6a6da0e9e9cc034543eeb125953f6c09d5be2604ff8f74f97208cf71697eb6b6374bb0b1dde95c070acb0f6a09f75b7801e0e
Jan 17 00:22:19.294085 unknown[1175]: fetched base config from "system"
Jan 17 00:22:19.294099 unknown[1175]: fetched base config from "system"
Jan 17 00:22:19.294351 ignition[1175]: fetch: fetch complete
Jan 17 00:22:19.294105 unknown[1175]: fetched user config from "aws"
Jan 17 00:22:19.294356 ignition[1175]: fetch: fetch passed
Jan 17 00:22:19.296617 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 17 00:22:19.294394 ignition[1175]: Ignition finished successfully
Jan 17 00:22:19.303417 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 17 00:22:19.320056 ignition[1181]: Ignition 2.19.0
Jan 17 00:22:19.320069 ignition[1181]: Stage: kargs
Jan 17 00:22:19.320551 ignition[1181]: no configs at "/usr/lib/ignition/base.d"
Jan 17 00:22:19.320565 ignition[1181]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 17 00:22:19.320869 ignition[1181]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 17 00:22:19.321917 ignition[1181]: PUT result: OK
Jan 17 00:22:19.324526 ignition[1181]: kargs: kargs passed
Jan 17 00:22:19.324587 ignition[1181]: Ignition finished successfully
Jan 17 00:22:19.326290 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 17 00:22:19.331443 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 17 00:22:19.347579 ignition[1187]: Ignition 2.19.0
Jan 17 00:22:19.347593 ignition[1187]: Stage: disks
Jan 17 00:22:19.348051 ignition[1187]: no configs at "/usr/lib/ignition/base.d"
Jan 17 00:22:19.348065 ignition[1187]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 17 00:22:19.348206 ignition[1187]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 17 00:22:19.349635 ignition[1187]: PUT result: OK
Jan 17 00:22:19.352850 ignition[1187]: disks: disks passed
Jan 17 00:22:19.352913 ignition[1187]: Ignition finished successfully
Jan 17 00:22:19.354259 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 17 00:22:19.355411 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
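The fetch stage above shows Ignition using IMDSv2 (the PUT to /latest/api/token acquires a session token before the user-data GET) and logging a SHA-512 digest of the config it is about to parse. Reproducing that kind of digest is a single hashlib call; the payload below is a stand-in, not this instance's actual user data, so the digest will not match the one in the log:

```python
import hashlib

# Stand-in payload; the digest logged above is over the instance's real user data.
config = b'{"ignition": {"version": "3.4.0"}}'

digest = hashlib.sha512(config).hexdigest()
print(digest)  # SHA-512 always renders as 128 hex characters
```

Logging the digest rather than the config itself lets you confirm which config was applied without leaking its contents into the journal.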
Jan 17 00:22:19.356093 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 17 00:22:19.356485 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 17 00:22:19.357264 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 17 00:22:19.357869 systemd[1]: Reached target basic.target - Basic System.
Jan 17 00:22:19.363418 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 17 00:22:19.405147 systemd-fsck[1195]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 17 00:22:19.408563 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 17 00:22:19.413320 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 17 00:22:19.523237 kernel: EXT4-fs (nvme0n1p9): mounted filesystem ab055cfb-d92d-4784-aa05-26ea844796bc r/w with ordered data mode. Quota mode: none.
Jan 17 00:22:19.523675 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 17 00:22:19.524633 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 17 00:22:19.544379 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 17 00:22:19.547677 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 17 00:22:19.549594 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 17 00:22:19.549671 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 17 00:22:19.549707 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 17 00:22:19.566229 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/nvme0n1p6 scanned by mount (1214)
Jan 17 00:22:19.572681 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0
Jan 17 00:22:19.572744 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Jan 17 00:22:19.572768 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 17 00:22:19.573441 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 17 00:22:19.576282 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 17 00:22:19.594241 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 17 00:22:19.596061 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 17 00:22:19.974780 initrd-setup-root[1238]: cut: /sysroot/etc/passwd: No such file or directory
Jan 17 00:22:20.007216 initrd-setup-root[1245]: cut: /sysroot/etc/group: No such file or directory
Jan 17 00:22:20.013033 initrd-setup-root[1252]: cut: /sysroot/etc/shadow: No such file or directory
Jan 17 00:22:20.017785 initrd-setup-root[1259]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 17 00:22:20.344953 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 17 00:22:20.349340 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 17 00:22:20.357594 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 17 00:22:20.365731 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 17 00:22:20.368200 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0
Jan 17 00:22:20.404043 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 17 00:22:20.410040 ignition[1327]: INFO : Ignition 2.19.0
Jan 17 00:22:20.410040 ignition[1327]: INFO : Stage: mount
Jan 17 00:22:20.411586 ignition[1327]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 17 00:22:20.411586 ignition[1327]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 17 00:22:20.411586 ignition[1327]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 17 00:22:20.413226 ignition[1327]: INFO : PUT result: OK
Jan 17 00:22:20.414678 ignition[1327]: INFO : mount: mount passed
Jan 17 00:22:20.415903 ignition[1327]: INFO : Ignition finished successfully
Jan 17 00:22:20.417081 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 17 00:22:20.423322 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 17 00:22:20.441542 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 17 00:22:20.475325 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by mount (1340)
Jan 17 00:22:20.481382 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0
Jan 17 00:22:20.481465 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Jan 17 00:22:20.484033 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 17 00:22:20.492243 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 17 00:22:20.494273 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 17 00:22:20.523733 ignition[1357]: INFO : Ignition 2.19.0
Jan 17 00:22:20.524473 ignition[1357]: INFO : Stage: files
Jan 17 00:22:20.525447 ignition[1357]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 17 00:22:20.526132 ignition[1357]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 17 00:22:20.526132 ignition[1357]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 17 00:22:20.527102 ignition[1357]: INFO : PUT result: OK
Jan 17 00:22:20.529525 ignition[1357]: DEBUG : files: compiled without relabeling support, skipping
Jan 17 00:22:20.543212 ignition[1357]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 17 00:22:20.543212 ignition[1357]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 17 00:22:20.570104 ignition[1357]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 17 00:22:20.571020 ignition[1357]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 17 00:22:20.571020 ignition[1357]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 17 00:22:20.570634 unknown[1357]: wrote ssh authorized keys file for user: core
Jan 17 00:22:20.583398 ignition[1357]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh"
Jan 17 00:22:20.584246 ignition[1357]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh"
Jan 17 00:22:20.584246 ignition[1357]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 17 00:22:20.584246 ignition[1357]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 17 00:22:20.584246 ignition[1357]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Jan 17 00:22:20.584246 ignition[1357]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Jan 17 00:22:20.584246 ignition[1357]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Jan 17 00:22:20.584246 ignition[1357]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1
Jan 17 00:22:20.852388 systemd-networkd[1165]: eth0: Gained IPv6LL
Jan 17 00:22:21.039414 ignition[1357]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Jan 17 00:22:21.789166 ignition[1357]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Jan 17 00:22:21.790497 ignition[1357]: INFO : files: createResultFile: createFiles: op(7): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 17 00:22:21.790497 ignition[1357]: INFO : files: createResultFile: createFiles: op(7): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 17 00:22:21.790497 ignition[1357]: INFO : files: files passed
Jan 17 00:22:21.790497 ignition[1357]: INFO : Ignition finished successfully
Jan 17 00:22:21.793144 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 17 00:22:21.799691 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 17 00:22:21.801997 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 17 00:22:21.804721 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 17 00:22:21.804829 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 17 00:22:21.818259 initrd-setup-root-after-ignition[1390]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 17 00:22:21.820050 initrd-setup-root-after-ignition[1386]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 17 00:22:21.820050 initrd-setup-root-after-ignition[1386]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 17 00:22:21.820396 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 17 00:22:21.822471 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 17 00:22:21.829380 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 17 00:22:21.863458 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 17 00:22:21.863597 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 17 00:22:21.865352 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 17 00:22:21.866312 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 17 00:22:21.867161 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 17 00:22:21.872411 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 17 00:22:21.893974 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 17 00:22:21.899444 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 17 00:22:21.927894 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 17 00:22:21.928631 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 17 00:22:21.929768 systemd[1]: Stopped target timers.target - Timer Units.
Jan 17 00:22:21.930683 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 17 00:22:21.930867 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 17 00:22:21.932098 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 17 00:22:21.933125 systemd[1]: Stopped target basic.target - Basic System.
Jan 17 00:22:21.933959 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 17 00:22:21.934786 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 17 00:22:21.935605 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 17 00:22:21.936413 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 17 00:22:21.937397 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 17 00:22:21.938219 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 17 00:22:21.939429 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 17 00:22:21.940210 systemd[1]: Stopped target swap.target - Swaps.
Jan 17 00:22:21.941053 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 17 00:22:21.941262 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 17 00:22:21.942360 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 17 00:22:21.943193 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 17 00:22:21.943892 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 17 00:22:21.944036 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 17 00:22:21.944912 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 17 00:22:21.945092 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 17 00:22:21.946447 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 17 00:22:21.946687 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 17 00:22:21.947418 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 17 00:22:21.947576 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 17 00:22:21.956549 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 17 00:22:21.957413 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 17 00:22:21.957625 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 17 00:22:21.961839 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 17 00:22:21.962595 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 17 00:22:21.962793 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 17 00:22:21.965605 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 17 00:22:21.966319 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 17 00:22:21.976854 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 17 00:22:21.976995 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 17 00:22:21.986230 ignition[1410]: INFO : Ignition 2.19.0
Jan 17 00:22:21.986230 ignition[1410]: INFO : Stage: umount
Jan 17 00:22:21.988052 ignition[1410]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 17 00:22:21.988052 ignition[1410]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 17 00:22:21.988052 ignition[1410]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 17 00:22:21.989748 ignition[1410]: INFO : PUT result: OK
Jan 17 00:22:21.994227 ignition[1410]: INFO : umount: umount passed
Jan 17 00:22:21.994227 ignition[1410]: INFO : Ignition finished successfully
Jan 17 00:22:21.994507 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 17 00:22:21.994673 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 17 00:22:21.995621 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 17 00:22:21.995695 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 17 00:22:21.996463 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 17 00:22:21.996527 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 17 00:22:21.997397 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 17 00:22:21.997457 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 17 00:22:21.998752 systemd[1]: Stopped target network.target - Network.
Jan 17 00:22:22.000268 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 17 00:22:22.000341 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 17 00:22:22.001174 systemd[1]: Stopped target paths.target - Path Units.
Jan 17 00:22:22.001860 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 17 00:22:22.005264 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 17 00:22:22.006002 systemd[1]: Stopped target slices.target - Slice Units.
Jan 17 00:22:22.006636 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 17 00:22:22.008093 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 17 00:22:22.008157 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 17 00:22:22.008954 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 17 00:22:22.009010 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 17 00:22:22.009967 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 17 00:22:22.010045 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 17 00:22:22.010675 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 17 00:22:22.010771 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 17 00:22:22.011603 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 17 00:22:22.014344 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 17 00:22:22.016923 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 17 00:22:22.017843 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 17 00:22:22.018016 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 17 00:22:22.018238 systemd-networkd[1165]: eth0: DHCPv6 lease lost
Jan 17 00:22:22.021609 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 17 00:22:22.022045 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 17 00:22:22.024111 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 17 00:22:22.024162 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 17 00:22:22.029672 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 17 00:22:22.031562 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 17 00:22:22.031668 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 17 00:22:22.032825 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 17 00:22:22.032903 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 17 00:22:22.035788 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 17 00:22:22.035867 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 17 00:22:22.036501 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 17 00:22:22.036566 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 17 00:22:22.041060 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 17 00:22:22.064340 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 17 00:22:22.064566 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 17 00:22:22.072208 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 17 00:22:22.072307 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 17 00:22:22.073065 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 17 00:22:22.073114 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 17 00:22:22.073747 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 17 00:22:22.073819 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 17 00:22:22.075412 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 17 00:22:22.075479 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 17 00:22:22.076610 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 17 00:22:22.076962 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 00:22:22.085483 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 17 00:22:22.086925 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 17 00:22:22.087015 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 17 00:22:22.087832 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 17 00:22:22.087903 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 00:22:22.089257 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 17 00:22:22.089379 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 17 00:22:22.099131 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 17 00:22:22.099480 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 17 00:22:22.148115 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 17 00:22:22.148294 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 17 00:22:22.150499 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 17 00:22:22.151943 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 17 00:22:22.152051 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 17 00:22:22.160454 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 17 00:22:22.182434 systemd[1]: Switching root.
Jan 17 00:22:22.221316 systemd-journald[179]: Journal stopped
Jan 17 00:22:24.018276 systemd-journald[179]: Received SIGTERM from PID 1 (systemd).
Jan 17 00:22:24.018381 kernel: SELinux: policy capability network_peer_controls=1
Jan 17 00:22:24.018402 kernel: SELinux: policy capability open_perms=1
Jan 17 00:22:24.018421 kernel: SELinux: policy capability extended_socket_class=1
Jan 17 00:22:24.018441 kernel: SELinux: policy capability always_check_network=0
Jan 17 00:22:24.018466 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 17 00:22:24.018490 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 17 00:22:24.018508 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 17 00:22:24.018527 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 17 00:22:24.018546 kernel: audit: type=1403 audit(1768609342.685:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 17 00:22:24.018571 systemd[1]: Successfully loaded SELinux policy in 66.463ms.
Jan 17 00:22:24.018597 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.722ms.
Jan 17 00:22:24.018618 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 17 00:22:24.018638 systemd[1]: Detected virtualization amazon.
Jan 17 00:22:24.018659 systemd[1]: Detected architecture x86-64.
Jan 17 00:22:24.018684 systemd[1]: Detected first boot.
Jan 17 00:22:24.018709 systemd[1]: Initializing machine ID from VM UUID.
Jan 17 00:22:24.018730 zram_generator::config[1453]: No configuration found.
Jan 17 00:22:24.018750 systemd[1]: Populated /etc with preset unit settings.
Jan 17 00:22:24.018770 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 17 00:22:24.018793 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 17 00:22:24.018813 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 17 00:22:24.018834 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 17 00:22:24.018854 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 17 00:22:24.018874 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 17 00:22:24.018893 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 17 00:22:24.018913 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 17 00:22:24.018932 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 17 00:22:24.018955 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 17 00:22:24.018975 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 17 00:22:24.018996 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 17 00:22:24.019015 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 17 00:22:24.019036 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 17 00:22:24.019057 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 17 00:22:24.019077 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 17 00:22:24.019096 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 17 00:22:24.019116 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 17 00:22:24.019138 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 17 00:22:24.019157 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 17 00:22:24.019175 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 17 00:22:24.019224 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 17 00:22:24.019247 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 17 00:22:24.019268 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 17 00:22:24.019297 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 17 00:22:24.019318 systemd[1]: Reached target slices.target - Slice Units.
Jan 17 00:22:24.019343 systemd[1]: Reached target swap.target - Swaps.
Jan 17 00:22:24.019364 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 17 00:22:24.019385 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 17 00:22:24.019406 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 17 00:22:24.019428 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 17 00:22:24.019450 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 17 00:22:24.019471 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 17 00:22:24.019493 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 17 00:22:24.019514 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 17 00:22:24.019540 systemd[1]: Mounting media.mount - External Media Directory...
Jan 17 00:22:24.019561 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 17 00:22:24.019583 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 17 00:22:24.019604 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 17 00:22:24.019625 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 17 00:22:24.019648 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 17 00:22:24.019670 systemd[1]: Reached target machines.target - Containers.
Jan 17 00:22:24.019692 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 17 00:22:24.019717 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 17 00:22:24.019735 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 17 00:22:24.019754 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 17 00:22:24.019770 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 17 00:22:24.019789 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 17 00:22:24.019811 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 17 00:22:24.019829 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 17 00:22:24.019849 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 17 00:22:24.019868 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 17 00:22:24.019890 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 17 00:22:24.019911 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 17 00:22:24.019931 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 17 00:22:24.019953 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 17 00:22:24.019972 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 17 00:22:24.019994 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 17 00:22:24.020014 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 17 00:22:24.020034 kernel: loop: module loaded
Jan 17 00:22:24.020060 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 17 00:22:24.020085 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 17 00:22:24.020108 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 17 00:22:24.020128 systemd[1]: Stopped verity-setup.service.
Jan 17 00:22:24.020148 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 17 00:22:24.020168 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 17 00:22:24.022232 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 17 00:22:24.022264 systemd[1]: Mounted media.mount - External Media Directory.
Jan 17 00:22:24.022286 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 17 00:22:24.022312 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 17 00:22:24.022333 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 17 00:22:24.022352 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 17 00:22:24.022373 kernel: ACPI: bus type drm_connector registered
Jan 17 00:22:24.022394 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 17 00:22:24.022418 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 17 00:22:24.022437 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 17 00:22:24.022457 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 17 00:22:24.022477 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 17 00:22:24.022496 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 17 00:22:24.022516 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 17 00:22:24.022543 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 17 00:22:24.022563 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 17 00:22:24.022583 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 17 00:22:24.022603 kernel: fuse: init (API version 7.39)
Jan 17 00:22:24.022622 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 17 00:22:24.022676 systemd-journald[1537]: Collecting audit messages is disabled.
Jan 17 00:22:24.022721 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 17 00:22:24.022745 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 17 00:22:24.022763 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 17 00:22:24.022781 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 17 00:22:24.022800 systemd-journald[1537]: Journal started
Jan 17 00:22:24.022839 systemd-journald[1537]: Runtime Journal (/run/log/journal/ec21455657f5fd1702a6098aea7e765c) is 4.7M, max 38.2M, 33.4M free.
Jan 17 00:22:23.595997 systemd[1]: Queued start job for default target multi-user.target.
Jan 17 00:22:24.024351 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 17 00:22:23.640335 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Jan 17 00:22:23.640943 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 17 00:22:24.026723 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 17 00:22:24.045045 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 17 00:22:24.055230 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 17 00:22:24.069286 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 17 00:22:24.070509 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 17 00:22:24.070564 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 17 00:22:24.075087 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 17 00:22:24.088117 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 17 00:22:24.093678 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 17 00:22:24.095427 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 17 00:22:24.103470 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 17 00:22:24.106200 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 17 00:22:24.107334 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 17 00:22:24.111399 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 17 00:22:24.112139 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 17 00:22:24.114370 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 17 00:22:24.118215 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 17 00:22:24.123422 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 17 00:22:24.129301 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 17 00:22:24.130543 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 17 00:22:24.132207 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 17 00:22:24.133177 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 17 00:22:24.144999 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 17 00:22:24.151354 systemd-journald[1537]: Time spent on flushing to /var/log/journal/ec21455657f5fd1702a6098aea7e765c is 75.069ms for 968 entries.
Jan 17 00:22:24.151354 systemd-journald[1537]: System Journal (/var/log/journal/ec21455657f5fd1702a6098aea7e765c) is 8.0M, max 195.6M, 187.6M free.
Jan 17 00:22:24.268842 systemd-journald[1537]: Received client request to flush runtime journal.
Jan 17 00:22:24.268930 kernel: loop0: detected capacity change from 0 to 140768
Jan 17 00:22:24.165175 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 17 00:22:24.177346 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 17 00:22:24.192588 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 17 00:22:24.193775 udevadm[1588]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jan 17 00:22:24.263551 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 17 00:22:24.270537 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 17 00:22:24.272438 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 17 00:22:24.279800 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 17 00:22:24.282125 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 17 00:22:24.291536 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 17 00:22:24.340713 systemd-tmpfiles[1595]: ACLs are not supported, ignoring.
Jan 17 00:22:24.341162 systemd-tmpfiles[1595]: ACLs are not supported, ignoring.
Jan 17 00:22:24.354247 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 17 00:22:24.372411 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 17 00:22:24.418101 kernel: loop1: detected capacity change from 0 to 142488
Jan 17 00:22:24.542256 kernel: loop2: detected capacity change from 0 to 61336
Jan 17 00:22:24.659236 kernel: loop3: detected capacity change from 0 to 219144
Jan 17 00:22:24.719211 kernel: loop4: detected capacity change from 0 to 140768
Jan 17 00:22:24.776950 kernel: loop5: detected capacity change from 0 to 142488
Jan 17 00:22:24.821255 kernel: loop6: detected capacity change from 0 to 61336
Jan 17 00:22:24.855199 kernel: loop7: detected capacity change from 0 to 219144
Jan 17 00:22:24.889937 (sd-merge)[1610]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Jan 17 00:22:24.892425 (sd-merge)[1610]: Merged extensions into '/usr'.
Jan 17 00:22:24.898346 systemd[1]: Reloading requested from client PID 1582 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 17 00:22:24.907403 systemd[1]: Reloading...
Jan 17 00:22:24.982318 zram_generator::config[1632]: No configuration found.
Jan 17 00:22:25.174808 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 17 00:22:25.239049 systemd[1]: Reloading finished in 330 ms.
Jan 17 00:22:25.267710 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 17 00:22:25.269530 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 17 00:22:25.280422 systemd[1]: Starting ensure-sysext.service...
Jan 17 00:22:25.284375 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 17 00:22:25.289463 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 17 00:22:25.293353 systemd[1]: Reloading requested from client PID 1688 ('systemctl') (unit ensure-sysext.service)...
Jan 17 00:22:25.293372 systemd[1]: Reloading...
Jan 17 00:22:25.348443 systemd-tmpfiles[1689]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 17 00:22:25.352498 systemd-tmpfiles[1689]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 17 00:22:25.354912 systemd-tmpfiles[1689]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 17 00:22:25.359837 systemd-udevd[1690]: Using default interface naming scheme 'v255'.
Jan 17 00:22:25.366020 systemd-tmpfiles[1689]: ACLs are not supported, ignoring.
Jan 17 00:22:25.367875 systemd-tmpfiles[1689]: ACLs are not supported, ignoring.
Jan 17 00:22:25.383254 systemd-tmpfiles[1689]: Detected autofs mount point /boot during canonicalization of boot.
Jan 17 00:22:25.383270 systemd-tmpfiles[1689]: Skipping /boot
Jan 17 00:22:25.420427 zram_generator::config[1717]: No configuration found.
Jan 17 00:22:25.418671 systemd-tmpfiles[1689]: Detected autofs mount point /boot during canonicalization of boot.
Jan 17 00:22:25.418681 systemd-tmpfiles[1689]: Skipping /boot
Jan 17 00:22:25.629492 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 17 00:22:25.668544 (udev-worker)[1774]: Network interface NamePolicy= disabled on kernel command line.
Jan 17 00:22:25.782149 systemd[1]: Reloading finished in 488 ms.
Jan 17 00:22:25.806796 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 17 00:22:25.809209 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr
Jan 17 00:22:25.820898 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 17 00:22:25.836200 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 33 scanned by (udev-worker) (1767)
Jan 17 00:22:25.840151 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jan 17 00:22:25.855202 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Jan 17 00:22:25.855437 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 17 00:22:25.866441 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 17 00:22:25.870224 kernel: ACPI: button: Power Button [PWRF]
Jan 17 00:22:25.875202 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input3
Jan 17 00:22:25.877426 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 17 00:22:25.884403 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input4
Jan 17 00:22:25.890242 kernel: ACPI: button: Sleep Button [SLPF]
Jan 17 00:22:25.890464 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 17 00:22:25.907471 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 17 00:22:25.925423 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 17 00:22:25.938779 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 17 00:22:25.939573 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 17 00:22:25.950981 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 17 00:22:25.985541 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 17 00:22:26.047032 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 17 00:22:26.048619 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 17 00:22:26.049260 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 17 00:22:26.050613 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 17 00:22:26.053218 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 17 00:22:26.069849 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 17 00:22:26.070040 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 17 00:22:26.073069 ldconfig[1577]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 17 00:22:26.080984 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 17 00:22:26.090116 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 17 00:22:26.096701 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 17 00:22:26.096936 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 17 00:22:26.122970 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 17 00:22:26.127051 kernel: mousedev: PS/2 mouse device common for all mice
Jan 17 00:22:26.156190 systemd[1]: Finished ensure-sysext.service.
Jan 17 00:22:26.159156 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 17 00:22:26.175697 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 17 00:22:26.176138 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 17 00:22:26.181696 augenrules[1898]: No rules
Jan 17 00:22:26.184611 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 17 00:22:26.196413 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 17 00:22:26.200809 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 17 00:22:26.205649 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 17 00:22:26.206454 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 17 00:22:26.206916 systemd[1]: Reached target time-set.target - System Time Set.
Jan 17 00:22:26.216883 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 17 00:22:26.225684 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 17 00:22:26.244791 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 00:22:26.245424 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 17 00:22:26.245702 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 17 00:22:26.251279 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 17 00:22:26.260298 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 17 00:22:26.260589 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 17 00:22:26.262318 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 17 00:22:26.265452 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 17 00:22:26.268397 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 17 00:22:26.268834 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 17 00:22:26.279843 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 17 00:22:26.282289 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 17 00:22:26.299074 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 17 00:22:26.312812 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 17 00:22:26.312912 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 17 00:22:26.347974 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Jan 17 00:22:26.357563 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 17 00:22:26.358377 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 17 00:22:26.360105 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 17 00:22:26.368222 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 17 00:22:26.397441 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 17 00:22:26.451309 lvm[1937]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 17 00:22:26.475377 systemd-networkd[1808]: lo: Link UP
Jan 17 00:22:26.476219 systemd-networkd[1808]: lo: Gained carrier
Jan 17 00:22:26.477458 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 17 00:22:26.478741 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 17 00:22:26.484275 systemd-networkd[1808]: Enumeration completed
Jan 17 00:22:26.484874 systemd-networkd[1808]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 00:22:26.484880 systemd-networkd[1808]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 17 00:22:26.490490 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 17 00:22:26.491118 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 17 00:22:26.493061 systemd-networkd[1808]: eth0: Link UP
Jan 17 00:22:26.493355 systemd-networkd[1808]: eth0: Gained carrier
Jan 17 00:22:26.493385 systemd-networkd[1808]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 00:22:26.503723 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 17 00:22:26.511017 systemd-networkd[1808]: eth0: DHCPv4 address 172.31.25.162/20, gateway 172.31.16.1 acquired from 172.31.16.1
Jan 17 00:22:26.516575 lvm[1944]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 17 00:22:26.519840 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 00:22:26.523363 systemd-resolved[1812]: Positive Trust Anchors:
Jan 17 00:22:26.523388 systemd-resolved[1812]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 17 00:22:26.523439 systemd-resolved[1812]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 17 00:22:26.546722 systemd-resolved[1812]: Defaulting to hostname 'linux'.
Jan 17 00:22:26.559124 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 17 00:22:26.560009 systemd[1]: Reached target network.target - Network.
Jan 17 00:22:26.560606 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 17 00:22:26.569040 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 17 00:22:26.569704 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 17 00:22:26.570242 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 17 00:22:26.570872 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 17 00:22:26.571427 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 17 00:22:26.571838 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 17 00:22:26.572273 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 17 00:22:26.572311 systemd[1]: Reached target paths.target - Path Units.
Jan 17 00:22:26.572727 systemd[1]: Reached target timers.target - Timer Units.
Jan 17 00:22:26.573641 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 17 00:22:26.582357 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 17 00:22:26.588441 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 17 00:22:26.591398 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 17 00:22:26.592144 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 17 00:22:26.593344 systemd[1]: Reached target sockets.target - Socket Units.
Jan 17 00:22:26.593962 systemd[1]: Reached target basic.target - Basic System.
Jan 17 00:22:26.594477 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 17 00:22:26.594525 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 17 00:22:26.599353 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 17 00:22:26.604434 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jan 17 00:22:26.610517 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 17 00:22:26.614330 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 17 00:22:26.620520 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 17 00:22:26.622353 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 17 00:22:26.625839 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 17 00:22:26.635494 systemd[1]: Started ntpd.service - Network Time Service.
Jan 17 00:22:26.646311 jq[1954]: false
Jan 17 00:22:26.646408 systemd[1]: Starting setup-oem.service - Setup OEM...
Jan 17 00:22:26.649634 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 17 00:22:26.668641 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 17 00:22:26.685311 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 17 00:22:26.688766 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 17 00:22:26.689506 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 17 00:22:26.697401 systemd[1]: Starting update-engine.service - Update Engine...
Jan 17 00:22:26.703512 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 17 00:22:26.712082 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 17 00:22:26.714259 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 17 00:22:26.770231 jq[1968]: true
Jan 17 00:22:26.775418 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 17 00:22:26.775671 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 17 00:22:26.798577 systemd[1]: motdgen.service: Deactivated successfully.
Jan 17 00:22:26.800524 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 17 00:22:26.802337 dbus-daemon[1953]: [system] SELinux support is enabled
Jan 17 00:22:26.802514 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 17 00:22:26.810741 (ntainerd)[1988]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 17 00:22:26.813740 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 17 00:22:26.815176 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 17 00:22:26.817437 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 17 00:22:26.817468 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 17 00:22:26.829268 systemd[1]: Finished setup-oem.service - Setup OEM.
Jan 17 00:22:26.834054 dbus-daemon[1953]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1808 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Jan 17 00:22:26.863092 update_engine[1964]: I20260117 00:22:26.862982 1964 main.cc:92] Flatcar Update Engine starting
Jan 17 00:22:26.864095 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Jan 17 00:22:26.866892 extend-filesystems[1955]: Found loop4
Jan 17 00:22:26.866892 extend-filesystems[1955]: Found loop5
Jan 17 00:22:26.866892 extend-filesystems[1955]: Found loop6
Jan 17 00:22:26.866892 extend-filesystems[1955]: Found loop7
Jan 17 00:22:26.866892 extend-filesystems[1955]: Found nvme0n1
Jan 17 00:22:26.866892 extend-filesystems[1955]: Found nvme0n1p1
Jan 17 00:22:26.866892 extend-filesystems[1955]: Found nvme0n1p2
Jan 17 00:22:26.866892 extend-filesystems[1955]: Found nvme0n1p3
Jan 17 00:22:26.866892 extend-filesystems[1955]: Found usr
Jan 17 00:22:26.866892 extend-filesystems[1955]: Found nvme0n1p4
Jan 17 00:22:26.866892 extend-filesystems[1955]: Found nvme0n1p6
Jan 17 00:22:26.866892 extend-filesystems[1955]: Found nvme0n1p7
Jan 17 00:22:26.866892 extend-filesystems[1955]: Found nvme0n1p9
Jan 17 00:22:26.866892 extend-filesystems[1955]: Checking size of /dev/nvme0n1p9
Jan 17 00:22:26.945687 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 3587067 blocks
Jan 17 00:22:26.945733 coreos-metadata[1952]: Jan 17 00:22:26.874 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Jan 17 00:22:26.945733 coreos-metadata[1952]: Jan 17 00:22:26.881 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
Jan 17 00:22:26.945733 coreos-metadata[1952]: Jan 17 00:22:26.887 INFO Fetch successful
Jan 17 00:22:26.945733 coreos-metadata[1952]: Jan 17 00:22:26.887 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
Jan 17 00:22:26.945733 coreos-metadata[1952]: Jan 17 00:22:26.896 INFO Fetch successful
Jan 17 00:22:26.945733 coreos-metadata[1952]: Jan 17 00:22:26.896 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
Jan 17 00:22:26.945733 coreos-metadata[1952]: Jan 17 00:22:26.905 INFO Fetch successful
Jan 17 00:22:26.945733 coreos-metadata[1952]: Jan 17 00:22:26.905 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
Jan 17 00:22:26.945733 coreos-metadata[1952]: Jan 17 00:22:26.910 INFO Fetch successful
Jan 17 00:22:26.945733 coreos-metadata[1952]: Jan 17 00:22:26.910 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
Jan 17 00:22:26.945733 coreos-metadata[1952]: Jan 17 00:22:26.911 INFO Fetch failed with 404: resource not found
Jan 17 00:22:26.945733 coreos-metadata[1952]: Jan 17 00:22:26.911 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
Jan 17 00:22:26.945733 coreos-metadata[1952]: Jan 17 00:22:26.917 INFO Fetch successful
Jan 17 00:22:26.945733 coreos-metadata[1952]: Jan 17 00:22:26.919 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
Jan 17 00:22:26.945733 coreos-metadata[1952]: Jan 17 00:22:26.921 INFO Fetch successful
Jan 17 00:22:26.945733 coreos-metadata[1952]: Jan 17 00:22:26.921 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
Jan 17 00:22:26.945733 coreos-metadata[1952]: Jan 17 00:22:26.922 INFO Fetch successful
Jan 17 00:22:26.945733 coreos-metadata[1952]: Jan 17 00:22:26.922 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
Jan 17 00:22:26.945733 coreos-metadata[1952]: Jan 17 00:22:26.923 INFO Fetch successful
Jan 17 00:22:26.945733 coreos-metadata[1952]: Jan 17 00:22:26.923 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
Jan 17 00:22:26.945733 coreos-metadata[1952]: Jan 17 00:22:26.925 INFO Fetch successful
Jan 17 00:22:26.871356 systemd[1]: Started update-engine.service - Update Engine.
Jan 17 00:22:26.960587 jq[1985]: true
Jan 17 00:22:26.960770 extend-filesystems[1955]: Resized partition /dev/nvme0n1p9
Jan 17 00:22:26.969753 update_engine[1964]: I20260117 00:22:26.872160 1964 update_check_scheduler.cc:74] Next update check in 6m13s
Jan 17 00:22:26.876604 ntpd[1957]: ntpd 4.2.8p17@1.4004-o Fri Jan 16 21:54:12 UTC 2026 (1): Starting
Jan 17 00:22:26.970130 ntpd[1957]: 17 Jan 00:22:26 ntpd[1957]: ntpd 4.2.8p17@1.4004-o Fri Jan 16 21:54:12 UTC 2026 (1): Starting
Jan 17 00:22:26.970130 ntpd[1957]: 17 Jan 00:22:26 ntpd[1957]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Jan 17 00:22:26.970130 ntpd[1957]: 17 Jan 00:22:26 ntpd[1957]: ----------------------------------------------------
Jan 17 00:22:26.970130 ntpd[1957]: 17 Jan 00:22:26 ntpd[1957]: ntp-4 is maintained by Network Time Foundation,
Jan 17 00:22:26.970130 ntpd[1957]: 17 Jan 00:22:26 ntpd[1957]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Jan 17 00:22:26.970130 ntpd[1957]: 17 Jan 00:22:26 ntpd[1957]: corporation. Support and training for ntp-4 are
Jan 17 00:22:26.970130 ntpd[1957]: 17 Jan 00:22:26 ntpd[1957]: available at https://www.nwtime.org/support
Jan 17 00:22:26.970130 ntpd[1957]: 17 Jan 00:22:26 ntpd[1957]: ----------------------------------------------------
Jan 17 00:22:26.970130 ntpd[1957]: 17 Jan 00:22:26 ntpd[1957]: proto: precision = 0.077 usec (-24)
Jan 17 00:22:26.970130 ntpd[1957]: 17 Jan 00:22:26 ntpd[1957]: basedate set to 2026-01-04
Jan 17 00:22:26.970130 ntpd[1957]: 17 Jan 00:22:26 ntpd[1957]: gps base set to 2026-01-04 (week 2400)
Jan 17 00:22:26.970130 ntpd[1957]: 17 Jan 00:22:26 ntpd[1957]: Listen and drop on 0 v6wildcard [::]:123
Jan 17 00:22:26.970130 ntpd[1957]: 17 Jan 00:22:26 ntpd[1957]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Jan 17 00:22:26.970130 ntpd[1957]: 17 Jan 00:22:26 ntpd[1957]: Listen normally on 2 lo 127.0.0.1:123
Jan 17 00:22:26.970130 ntpd[1957]: 17 Jan 00:22:26 ntpd[1957]: Listen normally on 3 eth0 172.31.25.162:123
Jan 17 00:22:26.970130 ntpd[1957]: 17 Jan 00:22:26 ntpd[1957]: Listen normally on 4 lo [::1]:123
Jan 17 00:22:26.970130 ntpd[1957]: 17 Jan 00:22:26 ntpd[1957]: bind(21) AF_INET6 fe80::4ce:acff:fea3:f631%2#123 flags 0x11 failed: Cannot assign requested address
Jan 17 00:22:26.970130 ntpd[1957]: 17 Jan 00:22:26 ntpd[1957]: unable to create socket on eth0 (5) for fe80::4ce:acff:fea3:f631%2#123
Jan 17 00:22:26.970130 ntpd[1957]: 17 Jan 00:22:26 ntpd[1957]: failed to init interface for address fe80::4ce:acff:fea3:f631%2
Jan 17 00:22:26.970130 ntpd[1957]: 17 Jan 00:22:26 ntpd[1957]: Listening on routing socket on fd #21 for interface updates
Jan 17 00:22:26.970130 ntpd[1957]: 17 Jan 00:22:26 ntpd[1957]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Jan 17 00:22:26.970130 ntpd[1957]: 17 Jan 00:22:26 ntpd[1957]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Jan 17 00:22:26.889565 systemd-logind[1963]: Watching system buttons on /dev/input/event1 (Power Button)
Jan 17 00:22:26.978655 extend-filesystems[2002]: resize2fs 1.47.1 (20-May-2024)
Jan 17 00:22:26.876644 ntpd[1957]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Jan 17 00:22:26.889592 systemd-logind[1963]: Watching system buttons on /dev/input/event3 (Sleep Button)
Jan 17 00:22:26.876655 ntpd[1957]: ----------------------------------------------------
Jan 17 00:22:26.890917 systemd-logind[1963]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jan 17 00:22:26.876665 ntpd[1957]: ntp-4 is maintained by Network Time Foundation,
Jan 17 00:22:26.895696 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 17 00:22:26.876675 ntpd[1957]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Jan 17 00:22:26.903444 systemd-logind[1963]: New seat seat0.
Jan 17 00:22:26.876685 ntpd[1957]: corporation. Support and training for ntp-4 are
Jan 17 00:22:26.915328 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 17 00:22:26.876695 ntpd[1957]: available at https://www.nwtime.org/support
Jan 17 00:22:26.876706 ntpd[1957]: ----------------------------------------------------
Jan 17 00:22:26.893581 ntpd[1957]: proto: precision = 0.077 usec (-24)
Jan 17 00:22:26.901136 ntpd[1957]: basedate set to 2026-01-04
Jan 17 00:22:26.901159 ntpd[1957]: gps base set to 2026-01-04 (week 2400)
Jan 17 00:22:26.910746 ntpd[1957]: Listen and drop on 0 v6wildcard [::]:123
Jan 17 00:22:26.910796 ntpd[1957]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Jan 17 00:22:26.910986 ntpd[1957]: Listen normally on 2 lo 127.0.0.1:123
Jan 17 00:22:26.911025 ntpd[1957]: Listen normally on 3 eth0 172.31.25.162:123
Jan 17 00:22:26.911074 ntpd[1957]: Listen normally on 4 lo [::1]:123
Jan 17 00:22:26.911127 ntpd[1957]: bind(21) AF_INET6 fe80::4ce:acff:fea3:f631%2#123 flags 0x11 failed: Cannot assign requested address
Jan 17 00:22:26.911151 ntpd[1957]: unable to create socket on eth0 (5) for fe80::4ce:acff:fea3:f631%2#123
Jan 17 00:22:26.911168 ntpd[1957]: failed to init interface for address fe80::4ce:acff:fea3:f631%2
Jan 17 00:22:26.916008 ntpd[1957]: Listening on routing socket on fd #21 for interface updates
Jan 17 00:22:26.933572 ntpd[1957]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Jan 17 00:22:26.933611 ntpd[1957]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Jan 17 00:22:27.074252 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 33 scanned by (udev-worker) (1779)
Jan 17 00:22:27.074671 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jan 17 00:22:27.075736 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jan 17 00:22:27.138128 bash[2029]: Updated "/home/core/.ssh/authorized_keys"
Jan 17 00:22:27.142499 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 17 00:22:27.160279 systemd[1]: Starting sshkeys.service...
Jan 17 00:22:27.179899 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 3587067
Jan 17 00:22:27.209852 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Jan 17 00:22:27.210569 extend-filesystems[2002]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Jan 17 00:22:27.210569 extend-filesystems[2002]: old_desc_blocks = 1, new_desc_blocks = 2
Jan 17 00:22:27.210569 extend-filesystems[2002]: The filesystem on /dev/nvme0n1p9 is now 3587067 (4k) blocks long.
Jan 17 00:22:27.222339 extend-filesystems[1955]: Resized filesystem in /dev/nvme0n1p9
Jan 17 00:22:27.224344 sshd_keygen[1995]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 17 00:22:27.222721 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Jan 17 00:22:27.227046 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 17 00:22:27.227314 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 17 00:22:27.304117 dbus-daemon[1953]: [system] Successfully activated service 'org.freedesktop.hostname1'
Jan 17 00:22:27.304526 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Jan 17 00:22:27.311347 dbus-daemon[1953]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1993 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Jan 17 00:22:27.333687 systemd[1]: Starting polkit.service - Authorization Manager...
Jan 17 00:22:27.416252 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 17 00:22:27.428646 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 17 00:22:27.451543 polkitd[2105]: Started polkitd version 121
Jan 17 00:22:27.469827 systemd[1]: issuegen.service: Deactivated successfully.
Jan 17 00:22:27.470875 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 17 00:22:27.509697 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 17 00:22:27.525192 coreos-metadata[2053]: Jan 17 00:22:27.523 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Jan 17 00:22:27.527291 coreos-metadata[2053]: Jan 17 00:22:27.527 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1
Jan 17 00:22:27.532162 coreos-metadata[2053]: Jan 17 00:22:27.531 INFO Fetch successful
Jan 17 00:22:27.532162 coreos-metadata[2053]: Jan 17 00:22:27.532 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1
Jan 17 00:22:27.534948 coreos-metadata[2053]: Jan 17 00:22:27.534 INFO Fetch successful
Jan 17 00:22:27.534401 locksmithd[1999]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 17 00:22:27.538243 unknown[2053]: wrote ssh authorized keys file for user: core
Jan 17 00:22:27.542087 polkitd[2105]: Loading rules from directory /etc/polkit-1/rules.d
Jan 17 00:22:27.542196 polkitd[2105]: Loading rules from directory /usr/share/polkit-1/rules.d
Jan 17 00:22:27.560016 polkitd[2105]: Finished loading, compiling and executing 2 rules
Jan 17 00:22:27.560903 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 17 00:22:27.568597 dbus-daemon[1953]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Jan 17 00:22:27.573784 polkitd[2105]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Jan 17 00:22:27.574901 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jan 17 00:22:27.580946 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jan 17 00:22:27.583583 systemd[1]: Reached target getty.target - Login Prompts.
Jan 17 00:22:27.586052 systemd[1]: Started polkit.service - Authorization Manager.
Jan 17 00:22:27.607288 update-ssh-keys[2156]: Updated "/home/core/.ssh/authorized_keys"
Jan 17 00:22:27.608748 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Jan 17 00:22:27.611746 systemd[1]: Finished sshkeys.service.
Jan 17 00:22:27.622773 systemd-hostnamed[1993]: Hostname set to (transient)
Jan 17 00:22:27.622922 systemd-resolved[1812]: System hostname changed to 'ip-172-31-25-162'.
Jan 17 00:22:27.631025 containerd[1988]: time="2026-01-17T00:22:27.630940153Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Jan 17 00:22:27.654981 containerd[1988]: time="2026-01-17T00:22:27.654878337Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jan 17 00:22:27.656597 containerd[1988]: time="2026-01-17T00:22:27.656552771Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.119-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jan 17 00:22:27.656597 containerd[1988]: time="2026-01-17T00:22:27.656590834Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jan 17 00:22:27.656733 containerd[1988]: time="2026-01-17T00:22:27.656627364Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jan 17 00:22:27.656840 containerd[1988]: time="2026-01-17T00:22:27.656815694Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jan 17 00:22:27.656887 containerd[1988]: time="2026-01-17T00:22:27.656842149Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jan 17 00:22:27.656940 containerd[1988]: time="2026-01-17T00:22:27.656918186Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jan 17 00:22:27.656980 containerd[1988]: time="2026-01-17T00:22:27.656936752Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jan 17 00:22:27.657163 containerd[1988]: time="2026-01-17T00:22:27.657134128Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 17 00:22:27.657163 containerd[1988]: time="2026-01-17T00:22:27.657156899Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jan 17 00:22:27.657294 containerd[1988]: time="2026-01-17T00:22:27.657177624Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jan 17 00:22:27.657294 containerd[1988]: time="2026-01-17T00:22:27.657207633Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jan 17 00:22:27.657367 containerd[1988]: time="2026-01-17T00:22:27.657313864Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jan 17 00:22:27.657578 containerd[1988]: time="2026-01-17T00:22:27.657549544Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jan 17 00:22:27.657734 containerd[1988]: time="2026-01-17T00:22:27.657704681Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 17 00:22:27.657734 containerd[1988]: time="2026-01-17T00:22:27.657729339Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jan 17 00:22:27.657862 containerd[1988]: time="2026-01-17T00:22:27.657840771Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jan 17 00:22:27.657925 containerd[1988]: time="2026-01-17T00:22:27.657903970Z" level=info msg="metadata content store policy set" policy=shared
Jan 17 00:22:27.664044 containerd[1988]: time="2026-01-17T00:22:27.663985489Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jan 17 00:22:27.664187 containerd[1988]: time="2026-01-17T00:22:27.664056817Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jan 17 00:22:27.664187 containerd[1988]: time="2026-01-17T00:22:27.664081428Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jan 17 00:22:27.664187 containerd[1988]: time="2026-01-17T00:22:27.664097204Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jan 17 00:22:27.664187 containerd[1988]: time="2026-01-17T00:22:27.664115389Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jan 17 00:22:27.664299 containerd[1988]: time="2026-01-17T00:22:27.664279008Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jan 17 00:22:27.665471 containerd[1988]: time="2026-01-17T00:22:27.664579425Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jan 17 00:22:27.665471 containerd[1988]: time="2026-01-17T00:22:27.664910476Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jan 17 00:22:27.665471 containerd[1988]: time="2026-01-17T00:22:27.664930270Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jan 17 00:22:27.665471 containerd[1988]: time="2026-01-17T00:22:27.664942676Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jan 17 00:22:27.665471 containerd[1988]: time="2026-01-17T00:22:27.664956361Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jan 17 00:22:27.665471 containerd[1988]: time="2026-01-17T00:22:27.664969370Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jan 17 00:22:27.665471 containerd[1988]: time="2026-01-17T00:22:27.664985214Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jan 17 00:22:27.665471 containerd[1988]: time="2026-01-17T00:22:27.665000536Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jan 17 00:22:27.665471 containerd[1988]: time="2026-01-17T00:22:27.665014270Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jan 17 00:22:27.665471 containerd[1988]: time="2026-01-17T00:22:27.665027753Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jan 17 00:22:27.665471 containerd[1988]: time="2026-01-17T00:22:27.665039269Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jan 17 00:22:27.665471 containerd[1988]: time="2026-01-17T00:22:27.665050172Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jan 17 00:22:27.665471 containerd[1988]: time="2026-01-17T00:22:27.665074569Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jan 17 00:22:27.665471 containerd[1988]: time="2026-01-17T00:22:27.665101371Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jan 17 00:22:27.665841 containerd[1988]: time="2026-01-17T00:22:27.665113449Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jan 17 00:22:27.665841 containerd[1988]: time="2026-01-17T00:22:27.665126003Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jan 17 00:22:27.665841 containerd[1988]: time="2026-01-17T00:22:27.665140427Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jan 17 00:22:27.665841 containerd[1988]: time="2026-01-17T00:22:27.665153141Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jan 17 00:22:27.665841 containerd[1988]: time="2026-01-17T00:22:27.665167663Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jan 17 00:22:27.665841 containerd[1988]: time="2026-01-17T00:22:27.665202447Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jan 17 00:22:27.665841 containerd[1988]: time="2026-01-17T00:22:27.665215320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jan 17 00:22:27.665841 containerd[1988]: time="2026-01-17T00:22:27.665229025Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"...
type=io.containerd.grpc.v1 Jan 17 00:22:27.665841 containerd[1988]: time="2026-01-17T00:22:27.665240468Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 17 00:22:27.665841 containerd[1988]: time="2026-01-17T00:22:27.665256610Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 17 00:22:27.665841 containerd[1988]: time="2026-01-17T00:22:27.665268692Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 17 00:22:27.665841 containerd[1988]: time="2026-01-17T00:22:27.665284746Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 17 00:22:27.665841 containerd[1988]: time="2026-01-17T00:22:27.665310590Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 17 00:22:27.665841 containerd[1988]: time="2026-01-17T00:22:27.665321733Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 17 00:22:27.665841 containerd[1988]: time="2026-01-17T00:22:27.665334235Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 17 00:22:27.666160 containerd[1988]: time="2026-01-17T00:22:27.665388081Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 17 00:22:27.666160 containerd[1988]: time="2026-01-17T00:22:27.665412463Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 17 00:22:27.666160 containerd[1988]: time="2026-01-17T00:22:27.665433823Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." 
type=io.containerd.internal.v1 Jan 17 00:22:27.666160 containerd[1988]: time="2026-01-17T00:22:27.665454152Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 17 00:22:27.666160 containerd[1988]: time="2026-01-17T00:22:27.665463918Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 17 00:22:27.666160 containerd[1988]: time="2026-01-17T00:22:27.665475231Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 17 00:22:27.666160 containerd[1988]: time="2026-01-17T00:22:27.665487744Z" level=info msg="NRI interface is disabled by configuration." Jan 17 00:22:27.666160 containerd[1988]: time="2026-01-17T00:22:27.665496851Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 17 00:22:27.666347 containerd[1988]: time="2026-01-17T00:22:27.665777668Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 
Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 17 00:22:27.666347 containerd[1988]: time="2026-01-17T00:22:27.665828884Z" level=info msg="Connect containerd service" Jan 17 00:22:27.666347 containerd[1988]: time="2026-01-17T00:22:27.665868692Z" level=info msg="using legacy CRI server" Jan 17 00:22:27.666347 containerd[1988]: time="2026-01-17T00:22:27.665875267Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 17 00:22:27.666347 containerd[1988]: 
time="2026-01-17T00:22:27.665973321Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 17 00:22:27.666563 containerd[1988]: time="2026-01-17T00:22:27.666524801Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 17 00:22:27.668204 containerd[1988]: time="2026-01-17T00:22:27.666762274Z" level=info msg="Start subscribing containerd event" Jan 17 00:22:27.668204 containerd[1988]: time="2026-01-17T00:22:27.666815984Z" level=info msg="Start recovering state" Jan 17 00:22:27.668204 containerd[1988]: time="2026-01-17T00:22:27.666850191Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 17 00:22:27.668204 containerd[1988]: time="2026-01-17T00:22:27.666875993Z" level=info msg="Start event monitor" Jan 17 00:22:27.668204 containerd[1988]: time="2026-01-17T00:22:27.666892260Z" level=info msg="Start snapshots syncer" Jan 17 00:22:27.668204 containerd[1988]: time="2026-01-17T00:22:27.666893961Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 17 00:22:27.668204 containerd[1988]: time="2026-01-17T00:22:27.666904008Z" level=info msg="Start cni network conf syncer for default" Jan 17 00:22:27.668204 containerd[1988]: time="2026-01-17T00:22:27.666959987Z" level=info msg="Start streaming server" Jan 17 00:22:27.668204 containerd[1988]: time="2026-01-17T00:22:27.667477864Z" level=info msg="containerd successfully booted in 0.037335s" Jan 17 00:22:27.668296 systemd[1]: Started containerd.service - containerd container runtime. 
Jan 17 00:22:27.877157 ntpd[1957]: bind(24) AF_INET6 fe80::4ce:acff:fea3:f631%2#123 flags 0x11 failed: Cannot assign requested address Jan 17 00:22:27.877234 ntpd[1957]: unable to create socket on eth0 (6) for fe80::4ce:acff:fea3:f631%2#123 Jan 17 00:22:27.877586 ntpd[1957]: 17 Jan 00:22:27 ntpd[1957]: bind(24) AF_INET6 fe80::4ce:acff:fea3:f631%2#123 flags 0x11 failed: Cannot assign requested address Jan 17 00:22:27.877586 ntpd[1957]: 17 Jan 00:22:27 ntpd[1957]: unable to create socket on eth0 (6) for fe80::4ce:acff:fea3:f631%2#123 Jan 17 00:22:27.877586 ntpd[1957]: 17 Jan 00:22:27 ntpd[1957]: failed to init interface for address fe80::4ce:acff:fea3:f631%2 Jan 17 00:22:27.877248 ntpd[1957]: failed to init interface for address fe80::4ce:acff:fea3:f631%2 Jan 17 00:22:28.404393 systemd-networkd[1808]: eth0: Gained IPv6LL Jan 17 00:22:28.407130 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 17 00:22:28.408064 systemd[1]: Reached target network-online.target - Network is Online. Jan 17 00:22:28.414623 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Jan 17 00:22:28.418130 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:22:28.425314 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 17 00:22:28.461451 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 17 00:22:28.491887 amazon-ssm-agent[2171]: Initializing new seelog logger Jan 17 00:22:28.492300 amazon-ssm-agent[2171]: New Seelog Logger Creation Complete Jan 17 00:22:28.492300 amazon-ssm-agent[2171]: 2026/01/17 00:22:28 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 17 00:22:28.492300 amazon-ssm-agent[2171]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Jan 17 00:22:28.492581 amazon-ssm-agent[2171]: 2026/01/17 00:22:28 processing appconfig overrides Jan 17 00:22:28.492991 amazon-ssm-agent[2171]: 2026/01/17 00:22:28 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 17 00:22:28.492991 amazon-ssm-agent[2171]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 17 00:22:28.493150 amazon-ssm-agent[2171]: 2026/01/17 00:22:28 processing appconfig overrides Jan 17 00:22:28.493404 amazon-ssm-agent[2171]: 2026/01/17 00:22:28 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 17 00:22:28.493404 amazon-ssm-agent[2171]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 17 00:22:28.493498 amazon-ssm-agent[2171]: 2026/01/17 00:22:28 processing appconfig overrides Jan 17 00:22:28.493930 amazon-ssm-agent[2171]: 2026-01-17 00:22:28 INFO Proxy environment variables: Jan 17 00:22:28.495692 amazon-ssm-agent[2171]: 2026/01/17 00:22:28 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 17 00:22:28.495692 amazon-ssm-agent[2171]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Jan 17 00:22:28.495795 amazon-ssm-agent[2171]: 2026/01/17 00:22:28 processing appconfig overrides Jan 17 00:22:28.593657 amazon-ssm-agent[2171]: 2026-01-17 00:22:28 INFO http_proxy: Jan 17 00:22:28.692556 amazon-ssm-agent[2171]: 2026-01-17 00:22:28 INFO no_proxy: Jan 17 00:22:28.791067 amazon-ssm-agent[2171]: 2026-01-17 00:22:28 INFO https_proxy: Jan 17 00:22:28.889384 amazon-ssm-agent[2171]: 2026-01-17 00:22:28 INFO Checking if agent identity type OnPrem can be assumed Jan 17 00:22:28.988038 amazon-ssm-agent[2171]: 2026-01-17 00:22:28 INFO Checking if agent identity type EC2 can be assumed Jan 17 00:22:29.054125 amazon-ssm-agent[2171]: 2026-01-17 00:22:28 INFO Agent will take identity from EC2 Jan 17 00:22:29.054125 amazon-ssm-agent[2171]: 2026-01-17 00:22:28 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 17 00:22:29.054125 amazon-ssm-agent[2171]: 2026-01-17 00:22:28 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 17 00:22:29.054125 amazon-ssm-agent[2171]: 2026-01-17 00:22:28 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 17 00:22:29.054125 amazon-ssm-agent[2171]: 2026-01-17 00:22:28 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Jan 17 00:22:29.054125 amazon-ssm-agent[2171]: 2026-01-17 00:22:28 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Jan 17 00:22:29.054125 amazon-ssm-agent[2171]: 2026-01-17 00:22:28 INFO [amazon-ssm-agent] Starting Core Agent Jan 17 00:22:29.054125 amazon-ssm-agent[2171]: 2026-01-17 00:22:28 INFO [amazon-ssm-agent] registrar detected. Attempting registration Jan 17 00:22:29.054125 amazon-ssm-agent[2171]: 2026-01-17 00:22:28 INFO [Registrar] Starting registrar module Jan 17 00:22:29.054125 amazon-ssm-agent[2171]: 2026-01-17 00:22:28 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Jan 17 00:22:29.054125 amazon-ssm-agent[2171]: 2026-01-17 00:22:29 INFO [EC2Identity] EC2 registration was successful. 
Jan 17 00:22:29.054125 amazon-ssm-agent[2171]: 2026-01-17 00:22:29 INFO [CredentialRefresher] credentialRefresher has started Jan 17 00:22:29.054125 amazon-ssm-agent[2171]: 2026-01-17 00:22:29 INFO [CredentialRefresher] Starting credentials refresher loop Jan 17 00:22:29.054125 amazon-ssm-agent[2171]: 2026-01-17 00:22:29 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jan 17 00:22:29.087365 amazon-ssm-agent[2171]: 2026-01-17 00:22:29 INFO [CredentialRefresher] Next credential rotation will be in 30.374994710266666 minutes Jan 17 00:22:29.861730 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:22:29.862652 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 17 00:22:29.863234 systemd[1]: Startup finished in 628ms (kernel) + 6.960s (initrd) + 7.241s (userspace) = 14.830s. Jan 17 00:22:29.872140 (kubelet)[2194]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:22:30.014667 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 17 00:22:30.017284 systemd[1]: Started sshd@0-172.31.25.162:22-4.153.228.146:33822.service - OpenSSH per-connection server daemon (4.153.228.146:33822). 
Jan 17 00:22:30.071530 amazon-ssm-agent[2171]: 2026-01-17 00:22:30 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jan 17 00:22:30.172152 amazon-ssm-agent[2171]: 2026-01-17 00:22:30 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2207) started Jan 17 00:22:30.272807 amazon-ssm-agent[2171]: 2026-01-17 00:22:30 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jan 17 00:22:30.580908 sshd[2200]: Accepted publickey for core from 4.153.228.146 port 33822 ssh2: RSA SHA256:sbILTD9G5iELn3Zwr53HzB3sU6rscuYx+TXC00D8O3s Jan 17 00:22:30.583495 sshd[2200]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:22:30.598169 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 17 00:22:30.606202 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 17 00:22:30.611448 systemd-logind[1963]: New session 1 of user core. Jan 17 00:22:30.626381 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 17 00:22:30.636977 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 17 00:22:30.644258 (systemd)[2221]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 17 00:22:30.662734 kubelet[2194]: E0117 00:22:30.662699 2194 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:22:30.665478 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:22:30.665673 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 17 00:22:30.666014 systemd[1]: kubelet.service: Consumed 1.051s CPU time. Jan 17 00:22:30.760373 systemd[2221]: Queued start job for default target default.target. Jan 17 00:22:30.770319 systemd[2221]: Created slice app.slice - User Application Slice. Jan 17 00:22:30.770360 systemd[2221]: Reached target paths.target - Paths. Jan 17 00:22:30.770376 systemd[2221]: Reached target timers.target - Timers. Jan 17 00:22:30.771701 systemd[2221]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 17 00:22:30.792896 systemd[2221]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 17 00:22:30.792987 systemd[2221]: Reached target sockets.target - Sockets. Jan 17 00:22:30.793008 systemd[2221]: Reached target basic.target - Basic System. Jan 17 00:22:30.793071 systemd[2221]: Reached target default.target - Main User Target. Jan 17 00:22:30.793114 systemd[2221]: Startup finished in 140ms. Jan 17 00:22:30.793502 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 17 00:22:30.801422 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 17 00:22:30.877158 ntpd[1957]: Listen normally on 7 eth0 [fe80::4ce:acff:fea3:f631%2]:123 Jan 17 00:22:30.877511 ntpd[1957]: 17 Jan 00:22:30 ntpd[1957]: Listen normally on 7 eth0 [fe80::4ce:acff:fea3:f631%2]:123 Jan 17 00:22:31.188975 systemd[1]: Started sshd@1-172.31.25.162:22-4.153.228.146:33830.service - OpenSSH per-connection server daemon (4.153.228.146:33830). Jan 17 00:22:31.718614 sshd[2234]: Accepted publickey for core from 4.153.228.146 port 33830 ssh2: RSA SHA256:sbILTD9G5iELn3Zwr53HzB3sU6rscuYx+TXC00D8O3s Jan 17 00:22:31.720069 sshd[2234]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:22:31.725149 systemd-logind[1963]: New session 2 of user core. Jan 17 00:22:31.734438 systemd[1]: Started session-2.scope - Session 2 of User core. 
Jan 17 00:22:32.103847 sshd[2234]: pam_unix(sshd:session): session closed for user core Jan 17 00:22:32.106834 systemd[1]: sshd@1-172.31.25.162:22-4.153.228.146:33830.service: Deactivated successfully. Jan 17 00:22:32.108892 systemd[1]: session-2.scope: Deactivated successfully. Jan 17 00:22:32.110297 systemd-logind[1963]: Session 2 logged out. Waiting for processes to exit. Jan 17 00:22:32.111795 systemd-logind[1963]: Removed session 2. Jan 17 00:22:32.189575 systemd[1]: Started sshd@2-172.31.25.162:22-4.153.228.146:33842.service - OpenSSH per-connection server daemon (4.153.228.146:33842). Jan 17 00:22:32.687262 sshd[2241]: Accepted publickey for core from 4.153.228.146 port 33842 ssh2: RSA SHA256:sbILTD9G5iELn3Zwr53HzB3sU6rscuYx+TXC00D8O3s Jan 17 00:22:32.689760 sshd[2241]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:22:32.694570 systemd-logind[1963]: New session 3 of user core. Jan 17 00:22:32.696400 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 17 00:22:33.039627 sshd[2241]: pam_unix(sshd:session): session closed for user core Jan 17 00:22:33.046000 systemd[1]: sshd@2-172.31.25.162:22-4.153.228.146:33842.service: Deactivated successfully. Jan 17 00:22:33.047795 systemd[1]: session-3.scope: Deactivated successfully. Jan 17 00:22:33.048682 systemd-logind[1963]: Session 3 logged out. Waiting for processes to exit. Jan 17 00:22:33.049787 systemd-logind[1963]: Removed session 3. Jan 17 00:22:33.139210 systemd[1]: Started sshd@3-172.31.25.162:22-4.153.228.146:33850.service - OpenSSH per-connection server daemon (4.153.228.146:33850). Jan 17 00:22:33.667570 sshd[2248]: Accepted publickey for core from 4.153.228.146 port 33850 ssh2: RSA SHA256:sbILTD9G5iELn3Zwr53HzB3sU6rscuYx+TXC00D8O3s Jan 17 00:22:33.669096 sshd[2248]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:22:33.674269 systemd-logind[1963]: New session 4 of user core. 
Jan 17 00:22:33.678377 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 17 00:22:34.440542 systemd-resolved[1812]: Clock change detected. Flushing caches. Jan 17 00:22:34.610067 sshd[2248]: pam_unix(sshd:session): session closed for user core Jan 17 00:22:34.613748 systemd[1]: sshd@3-172.31.25.162:22-4.153.228.146:33850.service: Deactivated successfully. Jan 17 00:22:34.615803 systemd[1]: session-4.scope: Deactivated successfully. Jan 17 00:22:34.617650 systemd-logind[1963]: Session 4 logged out. Waiting for processes to exit. Jan 17 00:22:34.618802 systemd-logind[1963]: Removed session 4. Jan 17 00:22:34.708052 systemd[1]: Started sshd@4-172.31.25.162:22-4.153.228.146:50616.service - OpenSSH per-connection server daemon (4.153.228.146:50616). Jan 17 00:22:35.235815 sshd[2255]: Accepted publickey for core from 4.153.228.146 port 50616 ssh2: RSA SHA256:sbILTD9G5iELn3Zwr53HzB3sU6rscuYx+TXC00D8O3s Jan 17 00:22:35.237402 sshd[2255]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:22:35.242612 systemd-logind[1963]: New session 5 of user core. Jan 17 00:22:35.248401 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 17 00:22:35.569110 sudo[2258]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 17 00:22:35.569424 sudo[2258]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 00:22:35.585182 sudo[2258]: pam_unix(sudo:session): session closed for user root Jan 17 00:22:35.669850 sshd[2255]: pam_unix(sshd:session): session closed for user core Jan 17 00:22:35.673066 systemd[1]: sshd@4-172.31.25.162:22-4.153.228.146:50616.service: Deactivated successfully. Jan 17 00:22:35.674820 systemd[1]: session-5.scope: Deactivated successfully. Jan 17 00:22:35.676223 systemd-logind[1963]: Session 5 logged out. Waiting for processes to exit. Jan 17 00:22:35.678098 systemd-logind[1963]: Removed session 5. 
Jan 17 00:22:35.770551 systemd[1]: Started sshd@5-172.31.25.162:22-4.153.228.146:50622.service - OpenSSH per-connection server daemon (4.153.228.146:50622). Jan 17 00:22:36.297799 sshd[2263]: Accepted publickey for core from 4.153.228.146 port 50622 ssh2: RSA SHA256:sbILTD9G5iELn3Zwr53HzB3sU6rscuYx+TXC00D8O3s Jan 17 00:22:36.299726 sshd[2263]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:22:36.304180 systemd-logind[1963]: New session 6 of user core. Jan 17 00:22:36.310388 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 17 00:22:36.594759 sudo[2267]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 17 00:22:36.595280 sudo[2267]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 00:22:36.599422 sudo[2267]: pam_unix(sudo:session): session closed for user root Jan 17 00:22:36.605225 sudo[2266]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 17 00:22:36.605629 sudo[2266]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 00:22:36.620532 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 17 00:22:36.622993 auditctl[2270]: No rules Jan 17 00:22:36.624267 systemd[1]: audit-rules.service: Deactivated successfully. Jan 17 00:22:36.624549 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 17 00:22:36.626630 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 17 00:22:36.658686 augenrules[2288]: No rules Jan 17 00:22:36.660328 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. 
Jan 17 00:22:36.662104 sudo[2266]: pam_unix(sudo:session): session closed for user root Jan 17 00:22:36.746589 sshd[2263]: pam_unix(sshd:session): session closed for user core Jan 17 00:22:36.749863 systemd[1]: sshd@5-172.31.25.162:22-4.153.228.146:50622.service: Deactivated successfully. Jan 17 00:22:36.752025 systemd[1]: session-6.scope: Deactivated successfully. Jan 17 00:22:36.754093 systemd-logind[1963]: Session 6 logged out. Waiting for processes to exit. Jan 17 00:22:36.755392 systemd-logind[1963]: Removed session 6. Jan 17 00:22:36.842675 systemd[1]: Started sshd@6-172.31.25.162:22-4.153.228.146:50630.service - OpenSSH per-connection server daemon (4.153.228.146:50630). Jan 17 00:22:37.368654 sshd[2296]: Accepted publickey for core from 4.153.228.146 port 50630 ssh2: RSA SHA256:sbILTD9G5iELn3Zwr53HzB3sU6rscuYx+TXC00D8O3s Jan 17 00:22:37.370277 sshd[2296]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:22:37.374789 systemd-logind[1963]: New session 7 of user core. Jan 17 00:22:37.382666 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 17 00:22:37.664947 sudo[2299]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 17 00:22:37.665261 sudo[2299]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 00:22:38.519087 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:22:38.519340 systemd[1]: kubelet.service: Consumed 1.051s CPU time. Jan 17 00:22:38.531615 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:22:38.579376 systemd[1]: Reloading requested from client PID 2332 ('systemctl') (unit session-7.scope)... Jan 17 00:22:38.579657 systemd[1]: Reloading... Jan 17 00:22:38.681318 zram_generator::config[2368]: No configuration found. 
Jan 17 00:22:38.864209 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 00:22:38.951764 systemd[1]: Reloading finished in 371 ms. Jan 17 00:22:38.998489 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 17 00:22:38.998562 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 17 00:22:38.998777 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:22:39.014647 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:22:39.261705 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:22:39.273616 (kubelet)[2435]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 00:22:39.319789 kubelet[2435]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 17 00:22:39.319789 kubelet[2435]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 17 00:22:39.320236 kubelet[2435]: I0117 00:22:39.319911 2435 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 17 00:22:39.870846 kubelet[2435]: I0117 00:22:39.870802 2435 server.go:529] "Kubelet version" kubeletVersion="v1.34.1"
Jan 17 00:22:39.870846 kubelet[2435]: I0117 00:22:39.870834 2435 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 17 00:22:39.873534 kubelet[2435]: I0117 00:22:39.873490 2435 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Jan 17 00:22:39.873534 kubelet[2435]: I0117 00:22:39.873527 2435 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jan 17 00:22:39.873822 kubelet[2435]: I0117 00:22:39.873776 2435 server.go:956] "Client rotation is on, will bootstrap in background"
Jan 17 00:22:39.880518 kubelet[2435]: I0117 00:22:39.879881 2435 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 17 00:22:39.884751 kubelet[2435]: E0117 00:22:39.884708 2435 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jan 17 00:22:39.884868 kubelet[2435]: I0117 00:22:39.884786 2435 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config."
Jan 17 00:22:39.887547 kubelet[2435]: I0117 00:22:39.887460 2435 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Jan 17 00:22:39.887885 kubelet[2435]: I0117 00:22:39.887853 2435 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 17 00:22:39.888088 kubelet[2435]: I0117 00:22:39.887883 2435 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172.31.25.162","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 17 00:22:39.888239 kubelet[2435]: I0117 00:22:39.888093 2435 topology_manager.go:138] "Creating topology manager with none policy"
Jan 17 00:22:39.888239 kubelet[2435]: I0117 00:22:39.888107 2435 container_manager_linux.go:306] "Creating device plugin manager"
Jan 17 00:22:39.888323 kubelet[2435]: I0117 00:22:39.888242 2435 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Jan 17 00:22:39.893798 kubelet[2435]: I0117 00:22:39.893770 2435 state_mem.go:36] "Initialized new in-memory state store"
Jan 17 00:22:39.897157 kubelet[2435]: I0117 00:22:39.896839 2435 kubelet.go:475] "Attempting to sync node with API server"
Jan 17 00:22:39.897157 kubelet[2435]: I0117 00:22:39.896868 2435 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 17 00:22:39.897157 kubelet[2435]: I0117 00:22:39.896904 2435 kubelet.go:387] "Adding apiserver pod source"
Jan 17 00:22:39.897157 kubelet[2435]: I0117 00:22:39.896926 2435 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 17 00:22:39.897689 kubelet[2435]: E0117 00:22:39.897657 2435 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:22:39.897774 kubelet[2435]: E0117 00:22:39.897742 2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:22:39.899790 kubelet[2435]: I0117 00:22:39.899722 2435 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Jan 17 00:22:39.900305 kubelet[2435]: I0117 00:22:39.900283 2435 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Jan 17 00:22:39.900386 kubelet[2435]: I0117 00:22:39.900323 2435 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Jan 17 00:22:39.900386 kubelet[2435]: W0117 00:22:39.900379 2435 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 17 00:22:39.903327 kubelet[2435]: I0117 00:22:39.902977 2435 server.go:1262] "Started kubelet"
Jan 17 00:22:39.903981 kubelet[2435]: I0117 00:22:39.903950 2435 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 17 00:22:39.911241 kubelet[2435]: I0117 00:22:39.911107 2435 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Jan 17 00:22:39.913157 kubelet[2435]: I0117 00:22:39.912870 2435 server.go:310] "Adding debug handlers to kubelet server"
Jan 17 00:22:39.917087 kubelet[2435]: I0117 00:22:39.917038 2435 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 17 00:22:39.917325 kubelet[2435]: I0117 00:22:39.917308 2435 server_v1.go:49] "podresources" method="list" useActivePods=true
Jan 17 00:22:39.917618 kubelet[2435]: I0117 00:22:39.917590 2435 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 17 00:22:39.918058 kubelet[2435]: I0117 00:22:39.918037 2435 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jan 17 00:22:39.923155 kubelet[2435]: I0117 00:22:39.921304 2435 volume_manager.go:313] "Starting Kubelet Volume Manager"
Jan 17 00:22:39.923155 kubelet[2435]: E0117 00:22:39.921584 2435 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172.31.25.162\" not found"
Jan 17 00:22:39.923155 kubelet[2435]: I0117 00:22:39.921817 2435 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Jan 17 00:22:39.923155 kubelet[2435]: I0117 00:22:39.921878 2435 reconciler.go:29] "Reconciler: start to sync state"
Jan 17 00:22:39.924294 kubelet[2435]: I0117 00:22:39.924274 2435 factory.go:223] Registration of the systemd container factory successfully
Jan 17 00:22:39.924548 kubelet[2435]: I0117 00:22:39.924521 2435 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 17 00:22:39.929944 kubelet[2435]: I0117 00:22:39.929922 2435 factory.go:223] Registration of the containerd container factory successfully
Jan 17 00:22:39.930350 kubelet[2435]: E0117 00:22:39.927353 2435 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.31.25.162.188b5cd7a37221d1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.31.25.162,UID:172.31.25.162,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172.31.25.162,},FirstTimestamp:2026-01-17 00:22:39.902941649 +0000 UTC m=+0.621924266,LastTimestamp:2026-01-17 00:22:39.902941649 +0000 UTC m=+0.621924266,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.25.162,}"
Jan 17 00:22:39.931936 kubelet[2435]: E0117 00:22:39.930826 2435 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Jan 17 00:22:39.932052 kubelet[2435]: E0117 00:22:39.931021 2435 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 17 00:22:39.932519 kubelet[2435]: E0117 00:22:39.932469 2435 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"172.31.25.162\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms"
Jan 17 00:22:39.932789 kubelet[2435]: E0117 00:22:39.932752 2435 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Jan 17 00:22:39.951723 kubelet[2435]: E0117 00:22:39.951607 2435 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.31.25.162.188b5cd7a51da3a9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.31.25.162,UID:172.31.25.162,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:172.31.25.162,},FirstTimestamp:2026-01-17 00:22:39.930958761 +0000 UTC m=+0.649941442,LastTimestamp:2026-01-17 00:22:39.930958761 +0000 UTC m=+0.649941442,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.25.162,}"
Jan 17 00:22:39.964077 kubelet[2435]: I0117 00:22:39.964043 2435 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jan 17 00:22:39.964077 kubelet[2435]: I0117 00:22:39.964063 2435 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jan 17 00:22:39.964077 kubelet[2435]: I0117 00:22:39.964083 2435 state_mem.go:36] "Initialized new in-memory state store"
Jan 17 00:22:39.966962 kubelet[2435]: I0117 00:22:39.966935 2435 policy_none.go:49] "None policy: Start"
Jan 17 00:22:39.966962 kubelet[2435]: I0117 00:22:39.966962 2435 memory_manager.go:187] "Starting memorymanager" policy="None"
Jan 17 00:22:39.967093 kubelet[2435]: I0117 00:22:39.966974 2435 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Jan 17 00:22:39.970844 kubelet[2435]: I0117 00:22:39.970242 2435 policy_none.go:47] "Start"
Jan 17 00:22:39.975823 kubelet[2435]: E0117 00:22:39.975712 2435 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.31.25.162.188b5cd7a6e6c9b2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.31.25.162,UID:172.31.25.162,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node 172.31.25.162 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:172.31.25.162,},FirstTimestamp:2026-01-17 00:22:39.96091845 +0000 UTC m=+0.679901051,LastTimestamp:2026-01-17 00:22:39.96091845 +0000 UTC m=+0.679901051,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.25.162,}"
Jan 17 00:22:39.979193 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jan 17 00:22:39.990172 kubelet[2435]: E0117 00:22:39.989921 2435 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.31.25.162.188b5cd7a6e6ede1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.31.25.162,UID:172.31.25.162,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node 172.31.25.162 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:172.31.25.162,},FirstTimestamp:2026-01-17 00:22:39.960927713 +0000 UTC m=+0.679910310,LastTimestamp:2026-01-17 00:22:39.960927713 +0000 UTC m=+0.679910310,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.25.162,}"
Jan 17 00:22:39.992389 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jan 17 00:22:39.995868 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jan 17 00:22:40.001337 kubelet[2435]: E0117 00:22:40.000683 2435 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.31.25.162.188b5cd7a6e6ff99 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.31.25.162,UID:172.31.25.162,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node 172.31.25.162 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:172.31.25.162,},FirstTimestamp:2026-01-17 00:22:39.960932249 +0000 UTC m=+0.679914846,LastTimestamp:2026-01-17 00:22:39.960932249 +0000 UTC m=+0.679914846,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.25.162,}"
Jan 17 00:22:40.002867 kubelet[2435]: E0117 00:22:40.002836 2435 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Jan 17 00:22:40.004288 kubelet[2435]: I0117 00:22:40.004224 2435 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 17 00:22:40.004396 kubelet[2435]: I0117 00:22:40.004249 2435 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 17 00:22:40.005369 kubelet[2435]: I0117 00:22:40.005244 2435 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 17 00:22:40.007829 kubelet[2435]: E0117 00:22:40.007747 2435 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jan 17 00:22:40.007829 kubelet[2435]: E0117 00:22:40.007798 2435 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.31.25.162\" not found"
Jan 17 00:22:40.025818 kubelet[2435]: E0117 00:22:40.025628 2435 event.go:359] "Server rejected event (will not retry!)" err="events \"172.31.25.162.188b5cd7a6e6c9b2\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.31.25.162.188b5cd7a6e6c9b2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.31.25.162,UID:172.31.25.162,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node 172.31.25.162 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:172.31.25.162,},FirstTimestamp:2026-01-17 00:22:39.96091845 +0000 UTC m=+0.679901051,LastTimestamp:2026-01-17 00:22:39.972566049 +0000 UTC m=+0.691548663,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.25.162,}"
Jan 17 00:22:40.041166 kubelet[2435]: E0117 00:22:40.039451 2435 event.go:359] "Server rejected event (will not retry!)" err="events \"172.31.25.162.188b5cd7a6e6ede1\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.31.25.162.188b5cd7a6e6ede1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.31.25.162,UID:172.31.25.162,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node 172.31.25.162 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:172.31.25.162,},FirstTimestamp:2026-01-17 00:22:39.960927713 +0000 UTC m=+0.679910310,LastTimestamp:2026-01-17 00:22:39.972578209 +0000 UTC m=+0.691560808,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.25.162,}"
Jan 17 00:22:40.049200 kubelet[2435]: E0117 00:22:40.049060 2435 event.go:359] "Server rejected event (will not retry!)" err="events \"172.31.25.162.188b5cd7a6e6ff99\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.31.25.162.188b5cd7a6e6ff99 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.31.25.162,UID:172.31.25.162,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node 172.31.25.162 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:172.31.25.162,},FirstTimestamp:2026-01-17 00:22:39.960932249 +0000 UTC m=+0.679914846,LastTimestamp:2026-01-17 00:22:39.97258413 +0000 UTC m=+0.691566731,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.25.162,}"
Jan 17 00:22:40.059256 kubelet[2435]: I0117 00:22:40.059199 2435 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Jan 17 00:22:40.062310 kubelet[2435]: I0117 00:22:40.062274 2435 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Jan 17 00:22:40.062310 kubelet[2435]: I0117 00:22:40.062301 2435 status_manager.go:244] "Starting to sync pod status with apiserver"
Jan 17 00:22:40.062310 kubelet[2435]: I0117 00:22:40.062331 2435 kubelet.go:2427] "Starting kubelet main sync loop"
Jan 17 00:22:40.062576 kubelet[2435]: E0117 00:22:40.062459 2435 kubelet.go:2451] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Jan 17 00:22:40.065732 kubelet[2435]: E0117 00:22:40.065416 2435 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.31.25.162.188b5cd7a9bef7e3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.31.25.162,UID:172.31.25.162,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:172.31.25.162,},FirstTimestamp:2026-01-17 00:22:40.008640483 +0000 UTC m=+0.727623083,LastTimestamp:2026-01-17 00:22:40.008640483 +0000 UTC m=+0.727623083,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.25.162,}"
Jan 17 00:22:40.105479 kubelet[2435]: I0117 00:22:40.105404 2435 kubelet_node_status.go:75] "Attempting to register node" node="172.31.25.162"
Jan 17 00:22:40.124830 kubelet[2435]: I0117 00:22:40.124692 2435 kubelet_node_status.go:78] "Successfully registered node" node="172.31.25.162"
Jan 17 00:22:40.124830 kubelet[2435]: E0117 00:22:40.124745 2435 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"172.31.25.162\": node \"172.31.25.162\" not found"
Jan 17 00:22:40.202894 kubelet[2435]: E0117 00:22:40.202854 2435 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172.31.25.162\" not found"
Jan 17 00:22:40.285907 sudo[2299]: pam_unix(sudo:session): session closed for user root
Jan 17 00:22:40.303869 kubelet[2435]: E0117 00:22:40.303817 2435 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172.31.25.162\" not found"
Jan 17 00:22:40.370084 sshd[2296]: pam_unix(sshd:session): session closed for user core
Jan 17 00:22:40.373332 systemd[1]: sshd@6-172.31.25.162:22-4.153.228.146:50630.service: Deactivated successfully.
Jan 17 00:22:40.374953 systemd[1]: session-7.scope: Deactivated successfully.
Jan 17 00:22:40.376303 systemd-logind[1963]: Session 7 logged out. Waiting for processes to exit.
Jan 17 00:22:40.377599 systemd-logind[1963]: Removed session 7.
Jan 17 00:22:40.404381 kubelet[2435]: E0117 00:22:40.404214 2435 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172.31.25.162\" not found"
Jan 17 00:22:40.505204 kubelet[2435]: E0117 00:22:40.505163 2435 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172.31.25.162\" not found"
Jan 17 00:22:40.605868 kubelet[2435]: E0117 00:22:40.605811 2435 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172.31.25.162\" not found"
Jan 17 00:22:40.706661 kubelet[2435]: E0117 00:22:40.706597 2435 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172.31.25.162\" not found"
Jan 17 00:22:40.807420 kubelet[2435]: E0117 00:22:40.807371 2435 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172.31.25.162\" not found"
Jan 17 00:22:40.875146 kubelet[2435]: I0117 00:22:40.875074 2435 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Jan 17 00:22:40.875324 kubelet[2435]: I0117 00:22:40.875289 2435 reflector.go:568] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" err="very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received"
Jan 17 00:22:40.898520 kubelet[2435]: E0117 00:22:40.898449 2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:22:40.908351 kubelet[2435]: E0117 00:22:40.908167 2435 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172.31.25.162\" not found"
Jan 17 00:22:41.010694 kubelet[2435]: I0117 00:22:41.010179 2435 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24"
Jan 17 00:22:41.011138 containerd[1988]: time="2026-01-17T00:22:41.011093667Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jan 17 00:22:41.012200 kubelet[2435]: I0117 00:22:41.011363 2435 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24"
Jan 17 00:22:41.899121 kubelet[2435]: E0117 00:22:41.899066 2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:22:41.899121 kubelet[2435]: I0117 00:22:41.899154 2435 apiserver.go:52] "Watching apiserver"
Jan 17 00:22:41.914667 kubelet[2435]: E0117 00:22:41.914481 2435 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xt5mp" podUID="c4e6245a-3565-410a-9759-aa4637ef8b01"
Jan 17 00:22:41.916326 systemd[1]: Created slice kubepods-besteffort-pod064887da_e822_4cfb_b60c_0540661c927a.slice - libcontainer container kubepods-besteffort-pod064887da_e822_4cfb_b60c_0540661c927a.slice.
Jan 17 00:22:41.922298 kubelet[2435]: I0117 00:22:41.922272 2435 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Jan 17 00:22:41.928978 systemd[1]: Created slice kubepods-besteffort-podd93decb1_47c7_4ea0_8301_9e37be9a64a5.slice - libcontainer container kubepods-besteffort-podd93decb1_47c7_4ea0_8301_9e37be9a64a5.slice.
Jan 17 00:22:41.935805 kubelet[2435]: I0117 00:22:41.935769 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/d93decb1-47c7-4ea0-8301-9e37be9a64a5-node-certs\") pod \"calico-node-xqtf8\" (UID: \"d93decb1-47c7-4ea0-8301-9e37be9a64a5\") " pod="calico-system/calico-node-xqtf8"
Jan 17 00:22:41.935805 kubelet[2435]: I0117 00:22:41.935804 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/d93decb1-47c7-4ea0-8301-9e37be9a64a5-policysync\") pod \"calico-node-xqtf8\" (UID: \"d93decb1-47c7-4ea0-8301-9e37be9a64a5\") " pod="calico-system/calico-node-xqtf8"
Jan 17 00:22:41.935983 kubelet[2435]: I0117 00:22:41.935842 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d93decb1-47c7-4ea0-8301-9e37be9a64a5-tigera-ca-bundle\") pod \"calico-node-xqtf8\" (UID: \"d93decb1-47c7-4ea0-8301-9e37be9a64a5\") " pod="calico-system/calico-node-xqtf8"
Jan 17 00:22:41.935983 kubelet[2435]: I0117 00:22:41.935868 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c4e6245a-3565-410a-9759-aa4637ef8b01-registration-dir\") pod \"csi-node-driver-xt5mp\" (UID: \"c4e6245a-3565-410a-9759-aa4637ef8b01\") " pod="calico-system/csi-node-driver-xt5mp"
Jan 17 00:22:41.935983 kubelet[2435]: I0117 00:22:41.935893 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/c4e6245a-3565-410a-9759-aa4637ef8b01-varrun\") pod \"csi-node-driver-xt5mp\" (UID: \"c4e6245a-3565-410a-9759-aa4637ef8b01\") " pod="calico-system/csi-node-driver-xt5mp"
Jan 17 00:22:41.935983 kubelet[2435]: I0117 00:22:41.935914 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/064887da-e822-4cfb-b60c-0540661c927a-kube-proxy\") pod \"kube-proxy-crbkh\" (UID: \"064887da-e822-4cfb-b60c-0540661c927a\") " pod="kube-system/kube-proxy-crbkh"
Jan 17 00:22:41.935983 kubelet[2435]: I0117 00:22:41.935932 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/064887da-e822-4cfb-b60c-0540661c927a-lib-modules\") pod \"kube-proxy-crbkh\" (UID: \"064887da-e822-4cfb-b60c-0540661c927a\") " pod="kube-system/kube-proxy-crbkh"
Jan 17 00:22:41.936118 kubelet[2435]: I0117 00:22:41.935952 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c4e6245a-3565-410a-9759-aa4637ef8b01-socket-dir\") pod \"csi-node-driver-xt5mp\" (UID: \"c4e6245a-3565-410a-9759-aa4637ef8b01\") " pod="calico-system/csi-node-driver-xt5mp"
Jan 17 00:22:41.936118 kubelet[2435]: I0117 00:22:41.935974 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pl6h9\" (UniqueName: \"kubernetes.io/projected/c4e6245a-3565-410a-9759-aa4637ef8b01-kube-api-access-pl6h9\") pod \"csi-node-driver-xt5mp\" (UID: \"c4e6245a-3565-410a-9759-aa4637ef8b01\") " pod="calico-system/csi-node-driver-xt5mp"
Jan 17 00:22:41.936118 kubelet[2435]: I0117 00:22:41.935991 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/064887da-e822-4cfb-b60c-0540661c927a-xtables-lock\") pod \"kube-proxy-crbkh\" (UID: \"064887da-e822-4cfb-b60c-0540661c927a\") " pod="kube-system/kube-proxy-crbkh"
Jan 17 00:22:41.936118 kubelet[2435]: I0117 00:22:41.936009 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cszvq\" (UniqueName: \"kubernetes.io/projected/064887da-e822-4cfb-b60c-0540661c927a-kube-api-access-cszvq\") pod \"kube-proxy-crbkh\" (UID: \"064887da-e822-4cfb-b60c-0540661c927a\") " pod="kube-system/kube-proxy-crbkh"
Jan 17 00:22:41.936118 kubelet[2435]: I0117 00:22:41.936025 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d93decb1-47c7-4ea0-8301-9e37be9a64a5-lib-modules\") pod \"calico-node-xqtf8\" (UID: \"d93decb1-47c7-4ea0-8301-9e37be9a64a5\") " pod="calico-system/calico-node-xqtf8"
Jan 17 00:22:41.936256 kubelet[2435]: I0117 00:22:41.936051 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d93decb1-47c7-4ea0-8301-9e37be9a64a5-var-lib-calico\") pod \"calico-node-xqtf8\" (UID: \"d93decb1-47c7-4ea0-8301-9e37be9a64a5\") " pod="calico-system/calico-node-xqtf8"
Jan 17 00:22:41.936256 kubelet[2435]: I0117 00:22:41.936072 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/d93decb1-47c7-4ea0-8301-9e37be9a64a5-var-run-calico\") pod \"calico-node-xqtf8\" (UID: \"d93decb1-47c7-4ea0-8301-9e37be9a64a5\") " pod="calico-system/calico-node-xqtf8"
Jan 17 00:22:41.936256 kubelet[2435]: I0117 00:22:41.936090 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vzzd\" (UniqueName: \"kubernetes.io/projected/d93decb1-47c7-4ea0-8301-9e37be9a64a5-kube-api-access-7vzzd\") pod \"calico-node-xqtf8\" (UID: \"d93decb1-47c7-4ea0-8301-9e37be9a64a5\") " pod="calico-system/calico-node-xqtf8"
Jan 17 00:22:41.936256 kubelet[2435]: I0117 00:22:41.936105 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c4e6245a-3565-410a-9759-aa4637ef8b01-kubelet-dir\") pod \"csi-node-driver-xt5mp\" (UID: \"c4e6245a-3565-410a-9759-aa4637ef8b01\") " pod="calico-system/csi-node-driver-xt5mp"
Jan 17 00:22:41.936256 kubelet[2435]: I0117 00:22:41.936121 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/d93decb1-47c7-4ea0-8301-9e37be9a64a5-cni-bin-dir\") pod \"calico-node-xqtf8\" (UID: \"d93decb1-47c7-4ea0-8301-9e37be9a64a5\") " pod="calico-system/calico-node-xqtf8"
Jan 17 00:22:41.936381 kubelet[2435]: I0117 00:22:41.936154 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/d93decb1-47c7-4ea0-8301-9e37be9a64a5-cni-log-dir\") pod \"calico-node-xqtf8\" (UID: \"d93decb1-47c7-4ea0-8301-9e37be9a64a5\") " pod="calico-system/calico-node-xqtf8"
Jan 17 00:22:41.936381 kubelet[2435]: I0117 00:22:41.936173 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/d93decb1-47c7-4ea0-8301-9e37be9a64a5-cni-net-dir\") pod \"calico-node-xqtf8\" (UID: \"d93decb1-47c7-4ea0-8301-9e37be9a64a5\") " pod="calico-system/calico-node-xqtf8"
Jan 17 00:22:41.936381 kubelet[2435]: I0117 00:22:41.936191 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/d93decb1-47c7-4ea0-8301-9e37be9a64a5-flexvol-driver-host\") pod \"calico-node-xqtf8\" (UID: \"d93decb1-47c7-4ea0-8301-9e37be9a64a5\") " pod="calico-system/calico-node-xqtf8"
Jan 17 00:22:41.936381 kubelet[2435]: I0117 00:22:41.936209 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d93decb1-47c7-4ea0-8301-9e37be9a64a5-xtables-lock\") pod \"calico-node-xqtf8\" (UID: \"d93decb1-47c7-4ea0-8301-9e37be9a64a5\") " pod="calico-system/calico-node-xqtf8"
Jan 17 00:22:42.040844 kubelet[2435]: E0117 00:22:42.040368 2435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:22:42.040844 kubelet[2435]: W0117 00:22:42.040396 2435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:22:42.040844 kubelet[2435]: E0117 00:22:42.040423 2435 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 00:22:42.040844 kubelet[2435]: E0117 00:22:42.040812 2435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:22:42.041891 kubelet[2435]: W0117 00:22:42.041695 2435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:22:42.041891 kubelet[2435]: E0117 00:22:42.041728 2435 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 00:22:42.042484 kubelet[2435]: E0117 00:22:42.042349 2435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:22:42.042484 kubelet[2435]: W0117 00:22:42.042379 2435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:22:42.042484 kubelet[2435]: E0117 00:22:42.042394 2435 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 00:22:42.047152 kubelet[2435]: E0117 00:22:42.043312 2435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:22:42.047152 kubelet[2435]: W0117 00:22:42.043329 2435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:22:42.047152 kubelet[2435]: E0117 00:22:42.043344 2435 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 00:22:42.047152 kubelet[2435]: E0117 00:22:42.043579 2435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:22:42.047152 kubelet[2435]: W0117 00:22:42.043590 2435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:22:42.047152 kubelet[2435]: E0117 00:22:42.043603 2435 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 00:22:42.047152 kubelet[2435]: E0117 00:22:42.043788 2435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:22:42.047152 kubelet[2435]: W0117 00:22:42.043798 2435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:22:42.047152 kubelet[2435]: E0117 00:22:42.043810 2435 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:22:42.047152 kubelet[2435]: E0117 00:22:42.044037 2435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:22:42.047675 kubelet[2435]: W0117 00:22:42.044046 2435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:22:42.047675 kubelet[2435]: E0117 00:22:42.044057 2435 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:22:42.047675 kubelet[2435]: E0117 00:22:42.044988 2435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:22:42.047675 kubelet[2435]: W0117 00:22:42.045000 2435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:22:42.047675 kubelet[2435]: E0117 00:22:42.045015 2435 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:22:42.047675 kubelet[2435]: E0117 00:22:42.045306 2435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:22:42.047675 kubelet[2435]: W0117 00:22:42.045316 2435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:22:42.047675 kubelet[2435]: E0117 00:22:42.045328 2435 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:22:42.047675 kubelet[2435]: E0117 00:22:42.045942 2435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:22:42.047675 kubelet[2435]: W0117 00:22:42.045953 2435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:22:42.048111 kubelet[2435]: E0117 00:22:42.045966 2435 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:22:42.052494 kubelet[2435]: E0117 00:22:42.049939 2435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:22:42.052494 kubelet[2435]: W0117 00:22:42.049955 2435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:22:42.052494 kubelet[2435]: E0117 00:22:42.049974 2435 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:22:42.059154 kubelet[2435]: E0117 00:22:42.056971 2435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:22:42.059154 kubelet[2435]: W0117 00:22:42.056993 2435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:22:42.059154 kubelet[2435]: E0117 00:22:42.057016 2435 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:22:42.059154 kubelet[2435]: E0117 00:22:42.057515 2435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:22:42.059154 kubelet[2435]: W0117 00:22:42.057534 2435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:22:42.059154 kubelet[2435]: E0117 00:22:42.057551 2435 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:22:42.228925 containerd[1988]: time="2026-01-17T00:22:42.228882734Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-crbkh,Uid:064887da-e822-4cfb-b60c-0540661c927a,Namespace:kube-system,Attempt:0,}" Jan 17 00:22:42.235714 containerd[1988]: time="2026-01-17T00:22:42.235153549Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-xqtf8,Uid:d93decb1-47c7-4ea0-8301-9e37be9a64a5,Namespace:calico-system,Attempt:0,}" Jan 17 00:22:42.786558 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1842254655.mount: Deactivated successfully. 
Jan 17 00:22:42.803994 containerd[1988]: time="2026-01-17T00:22:42.803851061Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 17 00:22:42.806320 containerd[1988]: time="2026-01-17T00:22:42.806175169Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 17 00:22:42.808115 containerd[1988]: time="2026-01-17T00:22:42.808033383Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Jan 17 00:22:42.810621 containerd[1988]: time="2026-01-17T00:22:42.810493715Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 17 00:22:42.812860 containerd[1988]: time="2026-01-17T00:22:42.812770078Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 17 00:22:42.815267 containerd[1988]: time="2026-01-17T00:22:42.815208669Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 17 00:22:42.817695 containerd[1988]: time="2026-01-17T00:22:42.816258494Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 587.294689ms"
Jan 17 00:22:42.817695 containerd[1988]: time="2026-01-17T00:22:42.817257204Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 582.013446ms"
Jan 17 00:22:42.899298 kubelet[2435]: E0117 00:22:42.899241 2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:22:43.050339 containerd[1988]: time="2026-01-17T00:22:43.048904869Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 17 00:22:43.050339 containerd[1988]: time="2026-01-17T00:22:43.048979302Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 17 00:22:43.050339 containerd[1988]: time="2026-01-17T00:22:43.049004654Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 00:22:43.050339 containerd[1988]: time="2026-01-17T00:22:43.049104946Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 00:22:43.064189 kubelet[2435]: E0117 00:22:43.063644 2435 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xt5mp" podUID="c4e6245a-3565-410a-9759-aa4637ef8b01"
Jan 17 00:22:43.068786 containerd[1988]: time="2026-01-17T00:22:43.068533011Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 17 00:22:43.068786 containerd[1988]: time="2026-01-17T00:22:43.068627683Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 17 00:22:43.068786 containerd[1988]: time="2026-01-17T00:22:43.068650510Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 00:22:43.069833 containerd[1988]: time="2026-01-17T00:22:43.069748566Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 00:22:43.206396 systemd[1]: Started cri-containerd-ffdf5b3eb8e599c0082f70cecb472b3525fc86b286a35fc1a86936a2c632e6e9.scope - libcontainer container ffdf5b3eb8e599c0082f70cecb472b3525fc86b286a35fc1a86936a2c632e6e9.
Jan 17 00:22:43.212256 systemd[1]: Started cri-containerd-761a8cc9e309266bb2e79e5cd0ac50925162a9885000c807b9461ab5d2b556e8.scope - libcontainer container 761a8cc9e309266bb2e79e5cd0ac50925162a9885000c807b9461ab5d2b556e8.
Jan 17 00:22:43.252907 containerd[1988]: time="2026-01-17T00:22:43.252476252Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-xqtf8,Uid:d93decb1-47c7-4ea0-8301-9e37be9a64a5,Namespace:calico-system,Attempt:0,} returns sandbox id \"ffdf5b3eb8e599c0082f70cecb472b3525fc86b286a35fc1a86936a2c632e6e9\""
Jan 17 00:22:43.255726 containerd[1988]: time="2026-01-17T00:22:43.255684163Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-crbkh,Uid:064887da-e822-4cfb-b60c-0540661c927a,Namespace:kube-system,Attempt:0,} returns sandbox id \"761a8cc9e309266bb2e79e5cd0ac50925162a9885000c807b9461ab5d2b556e8\""
Jan 17 00:22:43.259185 containerd[1988]: time="2026-01-17T00:22:43.258912476Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\""
Jan 17 00:22:43.899418 kubelet[2435]: E0117 00:22:43.899365 2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:22:44.045144 systemd[1]: run-containerd-runc-k8s.io-ffdf5b3eb8e599c0082f70cecb472b3525fc86b286a35fc1a86936a2c632e6e9-runc.TAq3sv.mount: Deactivated successfully.
Jan 17 00:22:44.377387 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount295189667.mount: Deactivated successfully.
Jan 17 00:22:44.476766 containerd[1988]: time="2026-01-17T00:22:44.476702034Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:22:44.477976 containerd[1988]: time="2026-01-17T00:22:44.477830615Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=5941492"
Jan 17 00:22:44.481407 containerd[1988]: time="2026-01-17T00:22:44.479297797Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:22:44.482801 containerd[1988]: time="2026-01-17T00:22:44.482757267Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:22:44.483300 containerd[1988]: time="2026-01-17T00:22:44.483257846Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.224305123s"
Jan 17 00:22:44.483376 containerd[1988]: time="2026-01-17T00:22:44.483304780Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\""
Jan 17 00:22:44.488945 containerd[1988]: time="2026-01-17T00:22:44.488895824Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\""
Jan 17 00:22:44.492771 containerd[1988]: time="2026-01-17T00:22:44.492716960Z" level=info msg="CreateContainer within sandbox \"ffdf5b3eb8e599c0082f70cecb472b3525fc86b286a35fc1a86936a2c632e6e9\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Jan 17 00:22:44.515380 containerd[1988]: time="2026-01-17T00:22:44.515333407Z" level=info msg="CreateContainer within sandbox \"ffdf5b3eb8e599c0082f70cecb472b3525fc86b286a35fc1a86936a2c632e6e9\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"d0f78d9866c62a4ff0a24fc551746d92d60c7cedfb5024bb9ae06669cd900afe\""
Jan 17 00:22:44.516406 containerd[1988]: time="2026-01-17T00:22:44.516368561Z" level=info msg="StartContainer for \"d0f78d9866c62a4ff0a24fc551746d92d60c7cedfb5024bb9ae06669cd900afe\""
Jan 17 00:22:44.549359 systemd[1]: Started cri-containerd-d0f78d9866c62a4ff0a24fc551746d92d60c7cedfb5024bb9ae06669cd900afe.scope - libcontainer container d0f78d9866c62a4ff0a24fc551746d92d60c7cedfb5024bb9ae06669cd900afe.
Jan 17 00:22:44.582510 containerd[1988]: time="2026-01-17T00:22:44.582352392Z" level=info msg="StartContainer for \"d0f78d9866c62a4ff0a24fc551746d92d60c7cedfb5024bb9ae06669cd900afe\" returns successfully"
Jan 17 00:22:44.594722 systemd[1]: cri-containerd-d0f78d9866c62a4ff0a24fc551746d92d60c7cedfb5024bb9ae06669cd900afe.scope: Deactivated successfully.
Jan 17 00:22:44.654105 containerd[1988]: time="2026-01-17T00:22:44.653916389Z" level=info msg="shim disconnected" id=d0f78d9866c62a4ff0a24fc551746d92d60c7cedfb5024bb9ae06669cd900afe namespace=k8s.io
Jan 17 00:22:44.654105 containerd[1988]: time="2026-01-17T00:22:44.653974565Z" level=warning msg="cleaning up after shim disconnected" id=d0f78d9866c62a4ff0a24fc551746d92d60c7cedfb5024bb9ae06669cd900afe namespace=k8s.io
Jan 17 00:22:44.654105 containerd[1988]: time="2026-01-17T00:22:44.653983608Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 00:22:44.899968 kubelet[2435]: E0117 00:22:44.899927 2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:22:45.049044 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d0f78d9866c62a4ff0a24fc551746d92d60c7cedfb5024bb9ae06669cd900afe-rootfs.mount: Deactivated successfully.
Jan 17 00:22:45.063918 kubelet[2435]: E0117 00:22:45.063498 2435 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xt5mp" podUID="c4e6245a-3565-410a-9759-aa4637ef8b01"
Jan 17 00:22:45.645164 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2692870160.mount: Deactivated successfully.
Jan 17 00:22:45.900567 kubelet[2435]: E0117 00:22:45.900351 2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:22:46.111030 containerd[1988]: time="2026-01-17T00:22:46.110973083Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:22:46.116240 containerd[1988]: time="2026-01-17T00:22:46.116178889Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.3: active requests=0, bytes read=25965293"
Jan 17 00:22:46.139578 containerd[1988]: time="2026-01-17T00:22:46.139531656Z" level=info msg="ImageCreate event name:\"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:22:46.143101 containerd[1988]: time="2026-01-17T00:22:46.143035494Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:22:46.144002 containerd[1988]: time="2026-01-17T00:22:46.143836623Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.3\" with image id \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\", repo tag \"registry.k8s.io/kube-proxy:v1.34.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\", size \"25964312\" in 1.654726116s"
Jan 17 00:22:46.144002 containerd[1988]: time="2026-01-17T00:22:46.143887863Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\" returns image reference \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\""
Jan 17 00:22:46.146077 containerd[1988]: time="2026-01-17T00:22:46.145569039Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\""
Jan 17 00:22:46.149878 containerd[1988]: time="2026-01-17T00:22:46.149828441Z" level=info msg="CreateContainer within sandbox \"761a8cc9e309266bb2e79e5cd0ac50925162a9885000c807b9461ab5d2b556e8\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jan 17 00:22:46.180034 containerd[1988]: time="2026-01-17T00:22:46.179214224Z" level=info msg="CreateContainer within sandbox \"761a8cc9e309266bb2e79e5cd0ac50925162a9885000c807b9461ab5d2b556e8\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"eefd657890f9278b13e7c5d3d38a6257c7c1f6fe0e3b8b91f8167568f50b891c\""
Jan 17 00:22:46.180034 containerd[1988]: time="2026-01-17T00:22:46.179981889Z" level=info msg="StartContainer for \"eefd657890f9278b13e7c5d3d38a6257c7c1f6fe0e3b8b91f8167568f50b891c\""
Jan 17 00:22:46.212440 systemd[1]: run-containerd-runc-k8s.io-eefd657890f9278b13e7c5d3d38a6257c7c1f6fe0e3b8b91f8167568f50b891c-runc.jkkkfj.mount: Deactivated successfully.
Jan 17 00:22:46.222395 systemd[1]: Started cri-containerd-eefd657890f9278b13e7c5d3d38a6257c7c1f6fe0e3b8b91f8167568f50b891c.scope - libcontainer container eefd657890f9278b13e7c5d3d38a6257c7c1f6fe0e3b8b91f8167568f50b891c.
Jan 17 00:22:46.256410 containerd[1988]: time="2026-01-17T00:22:46.256356537Z" level=info msg="StartContainer for \"eefd657890f9278b13e7c5d3d38a6257c7c1f6fe0e3b8b91f8167568f50b891c\" returns successfully"
Jan 17 00:22:46.900966 kubelet[2435]: E0117 00:22:46.900875 2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:22:47.063141 kubelet[2435]: E0117 00:22:47.063097 2435 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xt5mp" podUID="c4e6245a-3565-410a-9759-aa4637ef8b01"
Jan 17 00:22:47.901907 kubelet[2435]: E0117 00:22:47.901835 2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:22:48.902441 kubelet[2435]: E0117 00:22:48.902336 2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:22:49.063040 kubelet[2435]: E0117 00:22:49.062656 2435 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xt5mp" podUID="c4e6245a-3565-410a-9759-aa4637ef8b01"
Jan 17 00:22:49.091803 containerd[1988]: time="2026-01-17T00:22:49.091740508Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:22:49.093047 containerd[1988]: time="2026-01-17T00:22:49.092995798Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859"
Jan 17 00:22:49.097152 containerd[1988]: time="2026-01-17T00:22:49.094826518Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:22:49.109783 containerd[1988]: time="2026-01-17T00:22:49.109722624Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:22:49.110494 containerd[1988]: time="2026-01-17T00:22:49.110452377Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 2.964852133s"
Jan 17 00:22:49.110608 containerd[1988]: time="2026-01-17T00:22:49.110498932Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\""
Jan 17 00:22:49.116209 containerd[1988]: time="2026-01-17T00:22:49.116165706Z" level=info msg="CreateContainer within sandbox \"ffdf5b3eb8e599c0082f70cecb472b3525fc86b286a35fc1a86936a2c632e6e9\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Jan 17 00:22:49.134642 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2133802350.mount: Deactivated successfully.
Jan 17 00:22:49.138397 containerd[1988]: time="2026-01-17T00:22:49.138340088Z" level=info msg="CreateContainer within sandbox \"ffdf5b3eb8e599c0082f70cecb472b3525fc86b286a35fc1a86936a2c632e6e9\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"bed9e2155ba0d9ac8239275f562367a5aa1dd7d74ff1842910ae9d5f5d73e30b\""
Jan 17 00:22:49.139157 containerd[1988]: time="2026-01-17T00:22:49.139096412Z" level=info msg="StartContainer for \"bed9e2155ba0d9ac8239275f562367a5aa1dd7d74ff1842910ae9d5f5d73e30b\""
Jan 17 00:22:49.178340 systemd[1]: Started cri-containerd-bed9e2155ba0d9ac8239275f562367a5aa1dd7d74ff1842910ae9d5f5d73e30b.scope - libcontainer container bed9e2155ba0d9ac8239275f562367a5aa1dd7d74ff1842910ae9d5f5d73e30b.
Jan 17 00:22:49.211497 containerd[1988]: time="2026-01-17T00:22:49.211242026Z" level=info msg="StartContainer for \"bed9e2155ba0d9ac8239275f562367a5aa1dd7d74ff1842910ae9d5f5d73e30b\" returns successfully"
Jan 17 00:22:49.903115 kubelet[2435]: E0117 00:22:49.903006 2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:22:49.959301 systemd[1]: cri-containerd-bed9e2155ba0d9ac8239275f562367a5aa1dd7d74ff1842910ae9d5f5d73e30b.scope: Deactivated successfully.
Jan 17 00:22:49.973273 kubelet[2435]: I0117 00:22:49.972497 2435 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
Jan 17 00:22:49.984847 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bed9e2155ba0d9ac8239275f562367a5aa1dd7d74ff1842910ae9d5f5d73e30b-rootfs.mount: Deactivated successfully.
Jan 17 00:22:50.155203 kubelet[2435]: I0117 00:22:50.154821 2435 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-crbkh" podStartSLOduration=7.269943723 podStartE2EDuration="10.154793117s" podCreationTimestamp="2026-01-17 00:22:40 +0000 UTC" firstStartedPulling="2026-01-17 00:22:43.260035583 +0000 UTC m=+3.979018180" lastFinishedPulling="2026-01-17 00:22:46.144884963 +0000 UTC m=+6.863867574" observedRunningTime="2026-01-17 00:22:47.130860037 +0000 UTC m=+7.849842663" watchObservedRunningTime="2026-01-17 00:22:50.154793117 +0000 UTC m=+10.873775716"
Jan 17 00:22:50.792535 containerd[1988]: time="2026-01-17T00:22:50.792475063Z" level=info msg="shim disconnected" id=bed9e2155ba0d9ac8239275f562367a5aa1dd7d74ff1842910ae9d5f5d73e30b namespace=k8s.io
Jan 17 00:22:50.792535 containerd[1988]: time="2026-01-17T00:22:50.792522284Z" level=warning msg="cleaning up after shim disconnected" id=bed9e2155ba0d9ac8239275f562367a5aa1dd7d74ff1842910ae9d5f5d73e30b namespace=k8s.io
Jan 17 00:22:50.792535 containerd[1988]: time="2026-01-17T00:22:50.792531160Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 00:22:50.904101 kubelet[2435]: E0117 00:22:50.904057 2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:22:51.068212 systemd[1]: Created slice kubepods-besteffort-podc4e6245a_3565_410a_9759_aa4637ef8b01.slice - libcontainer container kubepods-besteffort-podc4e6245a_3565_410a_9759_aa4637ef8b01.slice.
Jan 17 00:22:51.073740 containerd[1988]: time="2026-01-17T00:22:51.073701782Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xt5mp,Uid:c4e6245a-3565-410a-9759-aa4637ef8b01,Namespace:calico-system,Attempt:0,}"
Jan 17 00:22:51.133249 containerd[1988]: time="2026-01-17T00:22:51.132935803Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\""
Jan 17 00:22:51.150768 containerd[1988]: time="2026-01-17T00:22:51.150710983Z" level=error msg="Failed to destroy network for sandbox \"b9762ca5c0588eae73bd427834be295f47ae995a920f52c6d9ad3a37f5ac27a1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:22:51.151727 containerd[1988]: time="2026-01-17T00:22:51.151534351Z" level=error msg="encountered an error cleaning up failed sandbox \"b9762ca5c0588eae73bd427834be295f47ae995a920f52c6d9ad3a37f5ac27a1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:22:51.151727 containerd[1988]: time="2026-01-17T00:22:51.151616725Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xt5mp,Uid:c4e6245a-3565-410a-9759-aa4637ef8b01,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b9762ca5c0588eae73bd427834be295f47ae995a920f52c6d9ad3a37f5ac27a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:22:51.152510 kubelet[2435]: E0117 00:22:51.152051 2435 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9762ca5c0588eae73bd427834be295f47ae995a920f52c6d9ad3a37f5ac27a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:22:51.152510 kubelet[2435]: E0117 00:22:51.152146 2435 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9762ca5c0588eae73bd427834be295f47ae995a920f52c6d9ad3a37f5ac27a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xt5mp"
Jan 17 00:22:51.152510 kubelet[2435]: E0117 00:22:51.152176 2435 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9762ca5c0588eae73bd427834be295f47ae995a920f52c6d9ad3a37f5ac27a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xt5mp"
Jan 17 00:22:51.152734 kubelet[2435]: E0117 00:22:51.152243 2435 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-xt5mp_calico-system(c4e6245a-3565-410a-9759-aa4637ef8b01)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-xt5mp_calico-system(c4e6245a-3565-410a-9759-aa4637ef8b01)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b9762ca5c0588eae73bd427834be295f47ae995a920f52c6d9ad3a37f5ac27a1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-xt5mp" podUID="c4e6245a-3565-410a-9759-aa4637ef8b01"
Jan 17 00:22:51.153005 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b9762ca5c0588eae73bd427834be295f47ae995a920f52c6d9ad3a37f5ac27a1-shm.mount: Deactivated successfully.
Jan 17 00:22:51.904535 kubelet[2435]: E0117 00:22:51.904475 2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:22:52.135557 kubelet[2435]: I0117 00:22:52.134481 2435 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b9762ca5c0588eae73bd427834be295f47ae995a920f52c6d9ad3a37f5ac27a1"
Jan 17 00:22:52.140838 containerd[1988]: time="2026-01-17T00:22:52.140794954Z" level=info msg="StopPodSandbox for \"b9762ca5c0588eae73bd427834be295f47ae995a920f52c6d9ad3a37f5ac27a1\""
Jan 17 00:22:52.141390 containerd[1988]: time="2026-01-17T00:22:52.141012539Z" level=info msg="Ensure that sandbox b9762ca5c0588eae73bd427834be295f47ae995a920f52c6d9ad3a37f5ac27a1 in task-service has been cleanup successfully"
Jan 17 00:22:52.198807 containerd[1988]: time="2026-01-17T00:22:52.198747973Z" level=error msg="StopPodSandbox for \"b9762ca5c0588eae73bd427834be295f47ae995a920f52c6d9ad3a37f5ac27a1\" failed" error="failed to destroy network for sandbox \"b9762ca5c0588eae73bd427834be295f47ae995a920f52c6d9ad3a37f5ac27a1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:22:52.199074 kubelet[2435]: E0117 00:22:52.199035 2435 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b9762ca5c0588eae73bd427834be295f47ae995a920f52c6d9ad3a37f5ac27a1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b9762ca5c0588eae73bd427834be295f47ae995a920f52c6d9ad3a37f5ac27a1"
Jan 17 00:22:52.199295 kubelet[2435]: E0117 00:22:52.199194 2435 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b9762ca5c0588eae73bd427834be295f47ae995a920f52c6d9ad3a37f5ac27a1"}
Jan 17 00:22:52.199384 kubelet[2435]: E0117 00:22:52.199325 2435 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c4e6245a-3565-410a-9759-aa4637ef8b01\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b9762ca5c0588eae73bd427834be295f47ae995a920f52c6d9ad3a37f5ac27a1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jan 17 00:22:52.199493 kubelet[2435]: E0117 00:22:52.199364 2435 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c4e6245a-3565-410a-9759-aa4637ef8b01\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b9762ca5c0588eae73bd427834be295f47ae995a920f52c6d9ad3a37f5ac27a1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-xt5mp" podUID="c4e6245a-3565-410a-9759-aa4637ef8b01"
Jan 17 00:22:52.905447 kubelet[2435]: E0117 00:22:52.905354 2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:22:53.907088 kubelet[2435]: E0117 00:22:53.907050 2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:22:54.907758 kubelet[2435]: E0117 00:22:54.907715 2435 file_linux.go:61] "Unable to read config path" err="path does 
not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:22:55.321294 systemd[1]: Created slice kubepods-besteffort-pod2b3611d7_019c_41b2_aded_aea4ea617491.slice - libcontainer container kubepods-besteffort-pod2b3611d7_019c_41b2_aded_aea4ea617491.slice. Jan 17 00:22:55.332212 kubelet[2435]: I0117 00:22:55.331924 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqw6t\" (UniqueName: \"kubernetes.io/projected/2b3611d7-019c-41b2-aded-aea4ea617491-kube-api-access-bqw6t\") pod \"nginx-deployment-bb8f74bfb-69hsg\" (UID: \"2b3611d7-019c-41b2-aded-aea4ea617491\") " pod="default/nginx-deployment-bb8f74bfb-69hsg" Jan 17 00:22:55.631968 containerd[1988]: time="2026-01-17T00:22:55.631816612Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-bb8f74bfb-69hsg,Uid:2b3611d7-019c-41b2-aded-aea4ea617491,Namespace:default,Attempt:0,}" Jan 17 00:22:55.909186 kubelet[2435]: E0117 00:22:55.908624 2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:22:55.927154 containerd[1988]: time="2026-01-17T00:22:55.926628535Z" level=error msg="Failed to destroy network for sandbox \"27bd03bddf132ab12245e99e2a639dd15c0193a6aeb2aa30330c98feae600f97\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:22:55.929150 containerd[1988]: time="2026-01-17T00:22:55.928351722Z" level=error msg="encountered an error cleaning up failed sandbox \"27bd03bddf132ab12245e99e2a639dd15c0193a6aeb2aa30330c98feae600f97\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:22:55.931706 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-27bd03bddf132ab12245e99e2a639dd15c0193a6aeb2aa30330c98feae600f97-shm.mount: Deactivated successfully. Jan 17 00:22:55.932745 containerd[1988]: time="2026-01-17T00:22:55.932696968Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-bb8f74bfb-69hsg,Uid:2b3611d7-019c-41b2-aded-aea4ea617491,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"27bd03bddf132ab12245e99e2a639dd15c0193a6aeb2aa30330c98feae600f97\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:22:55.933211 kubelet[2435]: E0117 00:22:55.932966 2435 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"27bd03bddf132ab12245e99e2a639dd15c0193a6aeb2aa30330c98feae600f97\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:22:55.934537 kubelet[2435]: E0117 00:22:55.933032 2435 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"27bd03bddf132ab12245e99e2a639dd15c0193a6aeb2aa30330c98feae600f97\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-bb8f74bfb-69hsg" Jan 17 00:22:55.934537 kubelet[2435]: E0117 00:22:55.934200 2435 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"27bd03bddf132ab12245e99e2a639dd15c0193a6aeb2aa30330c98feae600f97\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-bb8f74bfb-69hsg" Jan 17 00:22:55.934537 kubelet[2435]: E0117 00:22:55.934406 2435 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-bb8f74bfb-69hsg_default(2b3611d7-019c-41b2-aded-aea4ea617491)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-bb8f74bfb-69hsg_default(2b3611d7-019c-41b2-aded-aea4ea617491)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"27bd03bddf132ab12245e99e2a639dd15c0193a6aeb2aa30330c98feae600f97\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-bb8f74bfb-69hsg" podUID="2b3611d7-019c-41b2-aded-aea4ea617491" Jan 17 00:22:56.144466 kubelet[2435]: I0117 00:22:56.144435 2435 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="27bd03bddf132ab12245e99e2a639dd15c0193a6aeb2aa30330c98feae600f97" Jan 17 00:22:56.145691 containerd[1988]: time="2026-01-17T00:22:56.145167352Z" level=info msg="StopPodSandbox for \"27bd03bddf132ab12245e99e2a639dd15c0193a6aeb2aa30330c98feae600f97\"" Jan 17 00:22:56.145691 containerd[1988]: time="2026-01-17T00:22:56.145402255Z" level=info msg="Ensure that sandbox 27bd03bddf132ab12245e99e2a639dd15c0193a6aeb2aa30330c98feae600f97 in task-service has been cleanup successfully" Jan 17 00:22:56.243859 containerd[1988]: time="2026-01-17T00:22:56.243810170Z" level=error msg="StopPodSandbox for \"27bd03bddf132ab12245e99e2a639dd15c0193a6aeb2aa30330c98feae600f97\" failed" error="failed to destroy network for sandbox \"27bd03bddf132ab12245e99e2a639dd15c0193a6aeb2aa30330c98feae600f97\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:22:56.244441 kubelet[2435]: E0117 00:22:56.244393 2435 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"27bd03bddf132ab12245e99e2a639dd15c0193a6aeb2aa30330c98feae600f97\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="27bd03bddf132ab12245e99e2a639dd15c0193a6aeb2aa30330c98feae600f97" Jan 17 00:22:56.244575 kubelet[2435]: E0117 00:22:56.244452 2435 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"27bd03bddf132ab12245e99e2a639dd15c0193a6aeb2aa30330c98feae600f97"} Jan 17 00:22:56.244575 kubelet[2435]: E0117 00:22:56.244496 2435 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2b3611d7-019c-41b2-aded-aea4ea617491\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"27bd03bddf132ab12245e99e2a639dd15c0193a6aeb2aa30330c98feae600f97\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:22:56.244575 kubelet[2435]: E0117 00:22:56.244533 2435 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2b3611d7-019c-41b2-aded-aea4ea617491\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"27bd03bddf132ab12245e99e2a639dd15c0193a6aeb2aa30330c98feae600f97\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-bb8f74bfb-69hsg" 
podUID="2b3611d7-019c-41b2-aded-aea4ea617491" Jan 17 00:22:56.909258 kubelet[2435]: E0117 00:22:56.909208 2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:22:57.009675 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount538381197.mount: Deactivated successfully. Jan 17 00:22:57.056096 containerd[1988]: time="2026-01-17T00:22:57.056036066Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:22:57.057290 containerd[1988]: time="2026-01-17T00:22:57.057110168Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Jan 17 00:22:57.060028 containerd[1988]: time="2026-01-17T00:22:57.058846819Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:22:57.061563 containerd[1988]: time="2026-01-17T00:22:57.061413943Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:22:57.062026 containerd[1988]: time="2026-01-17T00:22:57.061994672Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 5.929016535s" Jan 17 00:22:57.062094 containerd[1988]: time="2026-01-17T00:22:57.062028523Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 17 
00:22:57.076315 containerd[1988]: time="2026-01-17T00:22:57.076276852Z" level=info msg="CreateContainer within sandbox \"ffdf5b3eb8e599c0082f70cecb472b3525fc86b286a35fc1a86936a2c632e6e9\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 17 00:22:57.098449 containerd[1988]: time="2026-01-17T00:22:57.098384068Z" level=info msg="CreateContainer within sandbox \"ffdf5b3eb8e599c0082f70cecb472b3525fc86b286a35fc1a86936a2c632e6e9\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"b68ea6817ae3abdcdf12c071ca8e01b6f884aeb69c359ce37fd297bb5037ffe8\"" Jan 17 00:22:57.099142 containerd[1988]: time="2026-01-17T00:22:57.099105089Z" level=info msg="StartContainer for \"b68ea6817ae3abdcdf12c071ca8e01b6f884aeb69c359ce37fd297bb5037ffe8\"" Jan 17 00:22:57.190926 systemd[1]: Started cri-containerd-b68ea6817ae3abdcdf12c071ca8e01b6f884aeb69c359ce37fd297bb5037ffe8.scope - libcontainer container b68ea6817ae3abdcdf12c071ca8e01b6f884aeb69c359ce37fd297bb5037ffe8. Jan 17 00:22:57.226576 containerd[1988]: time="2026-01-17T00:22:57.226527188Z" level=info msg="StartContainer for \"b68ea6817ae3abdcdf12c071ca8e01b6f884aeb69c359ce37fd297bb5037ffe8\" returns successfully" Jan 17 00:22:57.360888 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 17 00:22:57.361037 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jan 17 00:22:57.910288 kubelet[2435]: E0117 00:22:57.910211 2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:22:58.169934 kubelet[2435]: I0117 00:22:58.169520 2435 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-xqtf8" podStartSLOduration=4.365109769 podStartE2EDuration="18.169501208s" podCreationTimestamp="2026-01-17 00:22:40 +0000 UTC" firstStartedPulling="2026-01-17 00:22:43.25852994 +0000 UTC m=+3.977512537" lastFinishedPulling="2026-01-17 00:22:57.062921379 +0000 UTC m=+17.781903976" observedRunningTime="2026-01-17 00:22:58.169300011 +0000 UTC m=+18.888282630" watchObservedRunningTime="2026-01-17 00:22:58.169501208 +0000 UTC m=+18.888483823" Jan 17 00:22:58.216286 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jan 17 00:22:58.911625 kubelet[2435]: E0117 00:22:58.911560 2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:22:59.088254 kernel: bpftool[3176]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 17 00:22:59.156661 kubelet[2435]: I0117 00:22:59.156614 2435 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 00:22:59.322675 (udev-worker)[3030]: Network interface NamePolicy= disabled on kernel command line. Jan 17 00:22:59.328815 systemd-networkd[1808]: vxlan.calico: Link UP Jan 17 00:22:59.328829 systemd-networkd[1808]: vxlan.calico: Gained carrier Jan 17 00:22:59.368918 (udev-worker)[3205]: Network interface NamePolicy= disabled on kernel command line. 
Jan 17 00:22:59.897989 kubelet[2435]: E0117 00:22:59.897948 2435 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:22:59.912688 kubelet[2435]: E0117 00:22:59.912613 2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:23:00.648008 systemd-networkd[1808]: vxlan.calico: Gained IPv6LL Jan 17 00:23:00.913817 kubelet[2435]: E0117 00:23:00.913556 2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:23:01.914462 kubelet[2435]: E0117 00:23:01.914411 2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:23:02.915061 kubelet[2435]: E0117 00:23:02.914991 2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:23:03.440327 ntpd[1957]: Listen normally on 8 vxlan.calico 192.168.33.64:123 Jan 17 00:23:03.440424 ntpd[1957]: Listen normally on 9 vxlan.calico [fe80::6463:a2ff:fe94:332d%3]:123 Jan 17 00:23:03.440869 ntpd[1957]: 17 Jan 00:23:03 ntpd[1957]: Listen normally on 8 vxlan.calico 192.168.33.64:123 Jan 17 00:23:03.440869 ntpd[1957]: 17 Jan 00:23:03 ntpd[1957]: Listen normally on 9 vxlan.calico [fe80::6463:a2ff:fe94:332d%3]:123 Jan 17 00:23:03.916354 kubelet[2435]: E0117 00:23:03.916116 2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:23:04.916807 kubelet[2435]: E0117 00:23:04.916752 2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:23:05.917260 kubelet[2435]: E0117 00:23:05.917210 2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:23:06.065237 containerd[1988]: 
time="2026-01-17T00:23:06.064847758Z" level=info msg="StopPodSandbox for \"b9762ca5c0588eae73bd427834be295f47ae995a920f52c6d9ad3a37f5ac27a1\"" Jan 17 00:23:06.650773 containerd[1988]: 2026-01-17 00:23:06.417 [INFO][3264] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b9762ca5c0588eae73bd427834be295f47ae995a920f52c6d9ad3a37f5ac27a1" Jan 17 00:23:06.650773 containerd[1988]: 2026-01-17 00:23:06.418 [INFO][3264] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b9762ca5c0588eae73bd427834be295f47ae995a920f52c6d9ad3a37f5ac27a1" iface="eth0" netns="/var/run/netns/cni-707da964-1694-021b-bfb9-288fd69f0d64" Jan 17 00:23:06.650773 containerd[1988]: 2026-01-17 00:23:06.427 [INFO][3264] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b9762ca5c0588eae73bd427834be295f47ae995a920f52c6d9ad3a37f5ac27a1" iface="eth0" netns="/var/run/netns/cni-707da964-1694-021b-bfb9-288fd69f0d64" Jan 17 00:23:06.650773 containerd[1988]: 2026-01-17 00:23:06.437 [INFO][3264] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="b9762ca5c0588eae73bd427834be295f47ae995a920f52c6d9ad3a37f5ac27a1" iface="eth0" netns="/var/run/netns/cni-707da964-1694-021b-bfb9-288fd69f0d64" Jan 17 00:23:06.650773 containerd[1988]: 2026-01-17 00:23:06.437 [INFO][3264] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b9762ca5c0588eae73bd427834be295f47ae995a920f52c6d9ad3a37f5ac27a1" Jan 17 00:23:06.650773 containerd[1988]: 2026-01-17 00:23:06.437 [INFO][3264] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b9762ca5c0588eae73bd427834be295f47ae995a920f52c6d9ad3a37f5ac27a1" Jan 17 00:23:06.650773 containerd[1988]: 2026-01-17 00:23:06.633 [INFO][3272] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b9762ca5c0588eae73bd427834be295f47ae995a920f52c6d9ad3a37f5ac27a1" HandleID="k8s-pod-network.b9762ca5c0588eae73bd427834be295f47ae995a920f52c6d9ad3a37f5ac27a1" Workload="172.31.25.162-k8s-csi--node--driver--xt5mp-eth0" Jan 17 00:23:06.650773 containerd[1988]: 2026-01-17 00:23:06.634 [INFO][3272] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:23:06.650773 containerd[1988]: 2026-01-17 00:23:06.634 [INFO][3272] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:23:06.650773 containerd[1988]: 2026-01-17 00:23:06.644 [WARNING][3272] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b9762ca5c0588eae73bd427834be295f47ae995a920f52c6d9ad3a37f5ac27a1" HandleID="k8s-pod-network.b9762ca5c0588eae73bd427834be295f47ae995a920f52c6d9ad3a37f5ac27a1" Workload="172.31.25.162-k8s-csi--node--driver--xt5mp-eth0" Jan 17 00:23:06.650773 containerd[1988]: 2026-01-17 00:23:06.644 [INFO][3272] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b9762ca5c0588eae73bd427834be295f47ae995a920f52c6d9ad3a37f5ac27a1" HandleID="k8s-pod-network.b9762ca5c0588eae73bd427834be295f47ae995a920f52c6d9ad3a37f5ac27a1" Workload="172.31.25.162-k8s-csi--node--driver--xt5mp-eth0" Jan 17 00:23:06.650773 containerd[1988]: 2026-01-17 00:23:06.646 [INFO][3272] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:23:06.650773 containerd[1988]: 2026-01-17 00:23:06.649 [INFO][3264] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b9762ca5c0588eae73bd427834be295f47ae995a920f52c6d9ad3a37f5ac27a1" Jan 17 00:23:06.653243 containerd[1988]: time="2026-01-17T00:23:06.653194170Z" level=info msg="TearDown network for sandbox \"b9762ca5c0588eae73bd427834be295f47ae995a920f52c6d9ad3a37f5ac27a1\" successfully" Jan 17 00:23:06.653243 containerd[1988]: time="2026-01-17T00:23:06.653236524Z" level=info msg="StopPodSandbox for \"b9762ca5c0588eae73bd427834be295f47ae995a920f52c6d9ad3a37f5ac27a1\" returns successfully" Jan 17 00:23:06.654614 systemd[1]: run-netns-cni\x2d707da964\x2d1694\x2d021b\x2dbfb9\x2d288fd69f0d64.mount: Deactivated successfully. Jan 17 00:23:06.657521 containerd[1988]: time="2026-01-17T00:23:06.657487725Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xt5mp,Uid:c4e6245a-3565-410a-9759-aa4637ef8b01,Namespace:calico-system,Attempt:1,}" Jan 17 00:23:06.846150 systemd-networkd[1808]: calie82798c9c86: Link UP Jan 17 00:23:06.847017 (udev-worker)[3297]: Network interface NamePolicy= disabled on kernel command line. 
Jan 17 00:23:06.848346 systemd-networkd[1808]: calie82798c9c86: Gained carrier Jan 17 00:23:06.877621 containerd[1988]: 2026-01-17 00:23:06.731 [INFO][3278] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.25.162-k8s-csi--node--driver--xt5mp-eth0 csi-node-driver- calico-system c4e6245a-3565-410a-9759-aa4637ef8b01 1227 0 2026-01-17 00:22:40 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 172.31.25.162 csi-node-driver-xt5mp eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calie82798c9c86 [] [] }} ContainerID="48138043d79b562b5bba6482edbcb22cd001a4e7b57e1c65887f371ae0333698" Namespace="calico-system" Pod="csi-node-driver-xt5mp" WorkloadEndpoint="172.31.25.162-k8s-csi--node--driver--xt5mp-" Jan 17 00:23:06.877621 containerd[1988]: 2026-01-17 00:23:06.731 [INFO][3278] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="48138043d79b562b5bba6482edbcb22cd001a4e7b57e1c65887f371ae0333698" Namespace="calico-system" Pod="csi-node-driver-xt5mp" WorkloadEndpoint="172.31.25.162-k8s-csi--node--driver--xt5mp-eth0" Jan 17 00:23:06.877621 containerd[1988]: 2026-01-17 00:23:06.778 [INFO][3290] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="48138043d79b562b5bba6482edbcb22cd001a4e7b57e1c65887f371ae0333698" HandleID="k8s-pod-network.48138043d79b562b5bba6482edbcb22cd001a4e7b57e1c65887f371ae0333698" Workload="172.31.25.162-k8s-csi--node--driver--xt5mp-eth0" Jan 17 00:23:06.877621 containerd[1988]: 2026-01-17 00:23:06.778 [INFO][3290] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="48138043d79b562b5bba6482edbcb22cd001a4e7b57e1c65887f371ae0333698" 
HandleID="k8s-pod-network.48138043d79b562b5bba6482edbcb22cd001a4e7b57e1c65887f371ae0333698" Workload="172.31.25.162-k8s-csi--node--driver--xt5mp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024efe0), Attrs:map[string]string{"namespace":"calico-system", "node":"172.31.25.162", "pod":"csi-node-driver-xt5mp", "timestamp":"2026-01-17 00:23:06.778408062 +0000 UTC"}, Hostname:"172.31.25.162", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:23:06.877621 containerd[1988]: 2026-01-17 00:23:06.778 [INFO][3290] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:23:06.877621 containerd[1988]: 2026-01-17 00:23:06.778 [INFO][3290] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:23:06.877621 containerd[1988]: 2026-01-17 00:23:06.778 [INFO][3290] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.25.162' Jan 17 00:23:06.877621 containerd[1988]: 2026-01-17 00:23:06.787 [INFO][3290] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.48138043d79b562b5bba6482edbcb22cd001a4e7b57e1c65887f371ae0333698" host="172.31.25.162" Jan 17 00:23:06.877621 containerd[1988]: 2026-01-17 00:23:06.811 [INFO][3290] ipam/ipam.go 394: Looking up existing affinities for host host="172.31.25.162" Jan 17 00:23:06.877621 containerd[1988]: 2026-01-17 00:23:06.817 [INFO][3290] ipam/ipam.go 511: Trying affinity for 192.168.33.64/26 host="172.31.25.162" Jan 17 00:23:06.877621 containerd[1988]: 2026-01-17 00:23:06.820 [INFO][3290] ipam/ipam.go 158: Attempting to load block cidr=192.168.33.64/26 host="172.31.25.162" Jan 17 00:23:06.877621 containerd[1988]: 2026-01-17 00:23:06.823 [INFO][3290] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.33.64/26 host="172.31.25.162" Jan 17 00:23:06.877621 containerd[1988]: 
2026-01-17 00:23:06.823 [INFO][3290] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.33.64/26 handle="k8s-pod-network.48138043d79b562b5bba6482edbcb22cd001a4e7b57e1c65887f371ae0333698" host="172.31.25.162" Jan 17 00:23:06.877621 containerd[1988]: 2026-01-17 00:23:06.825 [INFO][3290] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.48138043d79b562b5bba6482edbcb22cd001a4e7b57e1c65887f371ae0333698 Jan 17 00:23:06.877621 containerd[1988]: 2026-01-17 00:23:06.830 [INFO][3290] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.33.64/26 handle="k8s-pod-network.48138043d79b562b5bba6482edbcb22cd001a4e7b57e1c65887f371ae0333698" host="172.31.25.162" Jan 17 00:23:06.877621 containerd[1988]: 2026-01-17 00:23:06.835 [INFO][3290] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.33.65/26] block=192.168.33.64/26 handle="k8s-pod-network.48138043d79b562b5bba6482edbcb22cd001a4e7b57e1c65887f371ae0333698" host="172.31.25.162" Jan 17 00:23:06.877621 containerd[1988]: 2026-01-17 00:23:06.835 [INFO][3290] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.33.65/26] handle="k8s-pod-network.48138043d79b562b5bba6482edbcb22cd001a4e7b57e1c65887f371ae0333698" host="172.31.25.162" Jan 17 00:23:06.877621 containerd[1988]: 2026-01-17 00:23:06.835 [INFO][3290] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 17 00:23:06.877621 containerd[1988]: 2026-01-17 00:23:06.835 [INFO][3290] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.33.65/26] IPv6=[] ContainerID="48138043d79b562b5bba6482edbcb22cd001a4e7b57e1c65887f371ae0333698" HandleID="k8s-pod-network.48138043d79b562b5bba6482edbcb22cd001a4e7b57e1c65887f371ae0333698" Workload="172.31.25.162-k8s-csi--node--driver--xt5mp-eth0" Jan 17 00:23:06.878628 containerd[1988]: 2026-01-17 00:23:06.839 [INFO][3278] cni-plugin/k8s.go 418: Populated endpoint ContainerID="48138043d79b562b5bba6482edbcb22cd001a4e7b57e1c65887f371ae0333698" Namespace="calico-system" Pod="csi-node-driver-xt5mp" WorkloadEndpoint="172.31.25.162-k8s-csi--node--driver--xt5mp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.25.162-k8s-csi--node--driver--xt5mp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c4e6245a-3565-410a-9759-aa4637ef8b01", ResourceVersion:"1227", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 22, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.25.162", ContainerID:"", Pod:"csi-node-driver-xt5mp", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.33.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie82798c9c86", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:23:06.878628 containerd[1988]: 2026-01-17 00:23:06.839 [INFO][3278] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.33.65/32] ContainerID="48138043d79b562b5bba6482edbcb22cd001a4e7b57e1c65887f371ae0333698" Namespace="calico-system" Pod="csi-node-driver-xt5mp" WorkloadEndpoint="172.31.25.162-k8s-csi--node--driver--xt5mp-eth0" Jan 17 00:23:06.878628 containerd[1988]: 2026-01-17 00:23:06.839 [INFO][3278] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie82798c9c86 ContainerID="48138043d79b562b5bba6482edbcb22cd001a4e7b57e1c65887f371ae0333698" Namespace="calico-system" Pod="csi-node-driver-xt5mp" WorkloadEndpoint="172.31.25.162-k8s-csi--node--driver--xt5mp-eth0" Jan 17 00:23:06.878628 containerd[1988]: 2026-01-17 00:23:06.850 [INFO][3278] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="48138043d79b562b5bba6482edbcb22cd001a4e7b57e1c65887f371ae0333698" Namespace="calico-system" Pod="csi-node-driver-xt5mp" WorkloadEndpoint="172.31.25.162-k8s-csi--node--driver--xt5mp-eth0" Jan 17 00:23:06.878628 containerd[1988]: 2026-01-17 00:23:06.850 [INFO][3278] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="48138043d79b562b5bba6482edbcb22cd001a4e7b57e1c65887f371ae0333698" Namespace="calico-system" Pod="csi-node-driver-xt5mp" WorkloadEndpoint="172.31.25.162-k8s-csi--node--driver--xt5mp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.25.162-k8s-csi--node--driver--xt5mp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c4e6245a-3565-410a-9759-aa4637ef8b01", ResourceVersion:"1227", Generation:0, 
CreationTimestamp:time.Date(2026, time.January, 17, 0, 22, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.25.162", ContainerID:"48138043d79b562b5bba6482edbcb22cd001a4e7b57e1c65887f371ae0333698", Pod:"csi-node-driver-xt5mp", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.33.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie82798c9c86", MAC:"3e:12:92:da:f1:99", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:23:06.878628 containerd[1988]: 2026-01-17 00:23:06.871 [INFO][3278] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="48138043d79b562b5bba6482edbcb22cd001a4e7b57e1c65887f371ae0333698" Namespace="calico-system" Pod="csi-node-driver-xt5mp" WorkloadEndpoint="172.31.25.162-k8s-csi--node--driver--xt5mp-eth0" Jan 17 00:23:06.907059 containerd[1988]: time="2026-01-17T00:23:06.906482887Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:23:06.907059 containerd[1988]: time="2026-01-17T00:23:06.906553688Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:23:06.907059 containerd[1988]: time="2026-01-17T00:23:06.906568964Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:23:06.907059 containerd[1988]: time="2026-01-17T00:23:06.906660250Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:23:06.917956 kubelet[2435]: E0117 00:23:06.917880 2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:23:06.942419 systemd[1]: Started cri-containerd-48138043d79b562b5bba6482edbcb22cd001a4e7b57e1c65887f371ae0333698.scope - libcontainer container 48138043d79b562b5bba6482edbcb22cd001a4e7b57e1c65887f371ae0333698. Jan 17 00:23:06.973296 containerd[1988]: time="2026-01-17T00:23:06.973255210Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xt5mp,Uid:c4e6245a-3565-410a-9759-aa4637ef8b01,Namespace:calico-system,Attempt:1,} returns sandbox id \"48138043d79b562b5bba6482edbcb22cd001a4e7b57e1c65887f371ae0333698\"" Jan 17 00:23:06.976160 containerd[1988]: time="2026-01-17T00:23:06.976113195Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 17 00:23:07.064472 containerd[1988]: time="2026-01-17T00:23:07.064429799Z" level=info msg="StopPodSandbox for \"27bd03bddf132ab12245e99e2a639dd15c0193a6aeb2aa30330c98feae600f97\"" Jan 17 00:23:07.193819 containerd[1988]: 2026-01-17 00:23:07.130 [INFO][3362] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="27bd03bddf132ab12245e99e2a639dd15c0193a6aeb2aa30330c98feae600f97" Jan 17 00:23:07.193819 containerd[1988]: 2026-01-17 00:23:07.130 [INFO][3362] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="27bd03bddf132ab12245e99e2a639dd15c0193a6aeb2aa30330c98feae600f97" iface="eth0" netns="/var/run/netns/cni-9ba853a6-e851-f9e1-98a6-55f077e6998e" Jan 17 00:23:07.193819 containerd[1988]: 2026-01-17 00:23:07.131 [INFO][3362] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="27bd03bddf132ab12245e99e2a639dd15c0193a6aeb2aa30330c98feae600f97" iface="eth0" netns="/var/run/netns/cni-9ba853a6-e851-f9e1-98a6-55f077e6998e" Jan 17 00:23:07.193819 containerd[1988]: 2026-01-17 00:23:07.131 [INFO][3362] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="27bd03bddf132ab12245e99e2a639dd15c0193a6aeb2aa30330c98feae600f97" iface="eth0" netns="/var/run/netns/cni-9ba853a6-e851-f9e1-98a6-55f077e6998e" Jan 17 00:23:07.193819 containerd[1988]: 2026-01-17 00:23:07.131 [INFO][3362] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="27bd03bddf132ab12245e99e2a639dd15c0193a6aeb2aa30330c98feae600f97" Jan 17 00:23:07.193819 containerd[1988]: 2026-01-17 00:23:07.131 [INFO][3362] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="27bd03bddf132ab12245e99e2a639dd15c0193a6aeb2aa30330c98feae600f97" Jan 17 00:23:07.193819 containerd[1988]: 2026-01-17 00:23:07.176 [INFO][3369] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="27bd03bddf132ab12245e99e2a639dd15c0193a6aeb2aa30330c98feae600f97" HandleID="k8s-pod-network.27bd03bddf132ab12245e99e2a639dd15c0193a6aeb2aa30330c98feae600f97" Workload="172.31.25.162-k8s-nginx--deployment--bb8f74bfb--69hsg-eth0" Jan 17 00:23:07.193819 containerd[1988]: 2026-01-17 00:23:07.176 [INFO][3369] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:23:07.193819 containerd[1988]: 2026-01-17 00:23:07.176 [INFO][3369] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:23:07.193819 containerd[1988]: 2026-01-17 00:23:07.186 [WARNING][3369] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="27bd03bddf132ab12245e99e2a639dd15c0193a6aeb2aa30330c98feae600f97" HandleID="k8s-pod-network.27bd03bddf132ab12245e99e2a639dd15c0193a6aeb2aa30330c98feae600f97" Workload="172.31.25.162-k8s-nginx--deployment--bb8f74bfb--69hsg-eth0" Jan 17 00:23:07.193819 containerd[1988]: 2026-01-17 00:23:07.186 [INFO][3369] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="27bd03bddf132ab12245e99e2a639dd15c0193a6aeb2aa30330c98feae600f97" HandleID="k8s-pod-network.27bd03bddf132ab12245e99e2a639dd15c0193a6aeb2aa30330c98feae600f97" Workload="172.31.25.162-k8s-nginx--deployment--bb8f74bfb--69hsg-eth0" Jan 17 00:23:07.193819 containerd[1988]: 2026-01-17 00:23:07.189 [INFO][3369] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:23:07.193819 containerd[1988]: 2026-01-17 00:23:07.192 [INFO][3362] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="27bd03bddf132ab12245e99e2a639dd15c0193a6aeb2aa30330c98feae600f97" Jan 17 00:23:07.194783 containerd[1988]: time="2026-01-17T00:23:07.194198350Z" level=info msg="TearDown network for sandbox \"27bd03bddf132ab12245e99e2a639dd15c0193a6aeb2aa30330c98feae600f97\" successfully" Jan 17 00:23:07.194783 containerd[1988]: time="2026-01-17T00:23:07.194235625Z" level=info msg="StopPodSandbox for \"27bd03bddf132ab12245e99e2a639dd15c0193a6aeb2aa30330c98feae600f97\" returns successfully" Jan 17 00:23:07.198481 containerd[1988]: time="2026-01-17T00:23:07.198414256Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-bb8f74bfb-69hsg,Uid:2b3611d7-019c-41b2-aded-aea4ea617491,Namespace:default,Attempt:1,}" Jan 17 00:23:07.232068 containerd[1988]: time="2026-01-17T00:23:07.230317318Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:23:07.232068 containerd[1988]: time="2026-01-17T00:23:07.231869756Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull 
and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 17 00:23:07.232068 containerd[1988]: time="2026-01-17T00:23:07.232005391Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 17 00:23:07.232570 kubelet[2435]: E0117 00:23:07.232523 2435 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:23:07.232686 kubelet[2435]: E0117 00:23:07.232587 2435 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:23:07.232734 kubelet[2435]: E0117 00:23:07.232697 2435 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-xt5mp_calico-system(c4e6245a-3565-410a-9759-aa4637ef8b01): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 17 00:23:07.235944 containerd[1988]: time="2026-01-17T00:23:07.235901840Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 17 00:23:07.362810 (udev-worker)[3299]: Network interface NamePolicy= disabled on kernel command line. 
Jan 17 00:23:07.363704 systemd-networkd[1808]: calic36d82aafa3: Link UP Jan 17 00:23:07.365355 systemd-networkd[1808]: calic36d82aafa3: Gained carrier Jan 17 00:23:07.383806 containerd[1988]: 2026-01-17 00:23:07.263 [INFO][3375] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.25.162-k8s-nginx--deployment--bb8f74bfb--69hsg-eth0 nginx-deployment-bb8f74bfb- default 2b3611d7-019c-41b2-aded-aea4ea617491 1239 0 2026-01-17 00:22:55 +0000 UTC map[app:nginx pod-template-hash:bb8f74bfb projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 172.31.25.162 nginx-deployment-bb8f74bfb-69hsg eth0 default [] [] [kns.default ksa.default.default] calic36d82aafa3 [] [] }} ContainerID="bb2ade6fe9dd2c98d4c5b7a15ae6f9a2e774110d867d85d5a65ebf9fa7dc71c0" Namespace="default" Pod="nginx-deployment-bb8f74bfb-69hsg" WorkloadEndpoint="172.31.25.162-k8s-nginx--deployment--bb8f74bfb--69hsg-" Jan 17 00:23:07.383806 containerd[1988]: 2026-01-17 00:23:07.264 [INFO][3375] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bb2ade6fe9dd2c98d4c5b7a15ae6f9a2e774110d867d85d5a65ebf9fa7dc71c0" Namespace="default" Pod="nginx-deployment-bb8f74bfb-69hsg" WorkloadEndpoint="172.31.25.162-k8s-nginx--deployment--bb8f74bfb--69hsg-eth0" Jan 17 00:23:07.383806 containerd[1988]: 2026-01-17 00:23:07.302 [INFO][3389] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bb2ade6fe9dd2c98d4c5b7a15ae6f9a2e774110d867d85d5a65ebf9fa7dc71c0" HandleID="k8s-pod-network.bb2ade6fe9dd2c98d4c5b7a15ae6f9a2e774110d867d85d5a65ebf9fa7dc71c0" Workload="172.31.25.162-k8s-nginx--deployment--bb8f74bfb--69hsg-eth0" Jan 17 00:23:07.383806 containerd[1988]: 2026-01-17 00:23:07.302 [INFO][3389] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="bb2ade6fe9dd2c98d4c5b7a15ae6f9a2e774110d867d85d5a65ebf9fa7dc71c0" 
HandleID="k8s-pod-network.bb2ade6fe9dd2c98d4c5b7a15ae6f9a2e774110d867d85d5a65ebf9fa7dc71c0" Workload="172.31.25.162-k8s-nginx--deployment--bb8f74bfb--69hsg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4fe0), Attrs:map[string]string{"namespace":"default", "node":"172.31.25.162", "pod":"nginx-deployment-bb8f74bfb-69hsg", "timestamp":"2026-01-17 00:23:07.302239296 +0000 UTC"}, Hostname:"172.31.25.162", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:23:07.383806 containerd[1988]: 2026-01-17 00:23:07.302 [INFO][3389] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:23:07.383806 containerd[1988]: 2026-01-17 00:23:07.302 [INFO][3389] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:23:07.383806 containerd[1988]: 2026-01-17 00:23:07.302 [INFO][3389] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.25.162' Jan 17 00:23:07.383806 containerd[1988]: 2026-01-17 00:23:07.318 [INFO][3389] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.bb2ade6fe9dd2c98d4c5b7a15ae6f9a2e774110d867d85d5a65ebf9fa7dc71c0" host="172.31.25.162" Jan 17 00:23:07.383806 containerd[1988]: 2026-01-17 00:23:07.330 [INFO][3389] ipam/ipam.go 394: Looking up existing affinities for host host="172.31.25.162" Jan 17 00:23:07.383806 containerd[1988]: 2026-01-17 00:23:07.336 [INFO][3389] ipam/ipam.go 511: Trying affinity for 192.168.33.64/26 host="172.31.25.162" Jan 17 00:23:07.383806 containerd[1988]: 2026-01-17 00:23:07.338 [INFO][3389] ipam/ipam.go 158: Attempting to load block cidr=192.168.33.64/26 host="172.31.25.162" Jan 17 00:23:07.383806 containerd[1988]: 2026-01-17 00:23:07.341 [INFO][3389] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.33.64/26 host="172.31.25.162" Jan 17 00:23:07.383806 
containerd[1988]: 2026-01-17 00:23:07.341 [INFO][3389] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.33.64/26 handle="k8s-pod-network.bb2ade6fe9dd2c98d4c5b7a15ae6f9a2e774110d867d85d5a65ebf9fa7dc71c0" host="172.31.25.162" Jan 17 00:23:07.383806 containerd[1988]: 2026-01-17 00:23:07.344 [INFO][3389] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.bb2ade6fe9dd2c98d4c5b7a15ae6f9a2e774110d867d85d5a65ebf9fa7dc71c0 Jan 17 00:23:07.383806 containerd[1988]: 2026-01-17 00:23:07.350 [INFO][3389] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.33.64/26 handle="k8s-pod-network.bb2ade6fe9dd2c98d4c5b7a15ae6f9a2e774110d867d85d5a65ebf9fa7dc71c0" host="172.31.25.162" Jan 17 00:23:07.383806 containerd[1988]: 2026-01-17 00:23:07.357 [INFO][3389] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.33.66/26] block=192.168.33.64/26 handle="k8s-pod-network.bb2ade6fe9dd2c98d4c5b7a15ae6f9a2e774110d867d85d5a65ebf9fa7dc71c0" host="172.31.25.162" Jan 17 00:23:07.383806 containerd[1988]: 2026-01-17 00:23:07.357 [INFO][3389] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.33.66/26] handle="k8s-pod-network.bb2ade6fe9dd2c98d4c5b7a15ae6f9a2e774110d867d85d5a65ebf9fa7dc71c0" host="172.31.25.162" Jan 17 00:23:07.383806 containerd[1988]: 2026-01-17 00:23:07.357 [INFO][3389] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 17 00:23:07.383806 containerd[1988]: 2026-01-17 00:23:07.357 [INFO][3389] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.33.66/26] IPv6=[] ContainerID="bb2ade6fe9dd2c98d4c5b7a15ae6f9a2e774110d867d85d5a65ebf9fa7dc71c0" HandleID="k8s-pod-network.bb2ade6fe9dd2c98d4c5b7a15ae6f9a2e774110d867d85d5a65ebf9fa7dc71c0" Workload="172.31.25.162-k8s-nginx--deployment--bb8f74bfb--69hsg-eth0" Jan 17 00:23:07.384823 containerd[1988]: 2026-01-17 00:23:07.360 [INFO][3375] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bb2ade6fe9dd2c98d4c5b7a15ae6f9a2e774110d867d85d5a65ebf9fa7dc71c0" Namespace="default" Pod="nginx-deployment-bb8f74bfb-69hsg" WorkloadEndpoint="172.31.25.162-k8s-nginx--deployment--bb8f74bfb--69hsg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.25.162-k8s-nginx--deployment--bb8f74bfb--69hsg-eth0", GenerateName:"nginx-deployment-bb8f74bfb-", Namespace:"default", SelfLink:"", UID:"2b3611d7-019c-41b2-aded-aea4ea617491", ResourceVersion:"1239", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 22, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"bb8f74bfb", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.25.162", ContainerID:"", Pod:"nginx-deployment-bb8f74bfb-69hsg", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.33.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"calic36d82aafa3", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:23:07.384823 containerd[1988]: 2026-01-17 00:23:07.360 [INFO][3375] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.33.66/32] ContainerID="bb2ade6fe9dd2c98d4c5b7a15ae6f9a2e774110d867d85d5a65ebf9fa7dc71c0" Namespace="default" Pod="nginx-deployment-bb8f74bfb-69hsg" WorkloadEndpoint="172.31.25.162-k8s-nginx--deployment--bb8f74bfb--69hsg-eth0" Jan 17 00:23:07.384823 containerd[1988]: 2026-01-17 00:23:07.360 [INFO][3375] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic36d82aafa3 ContainerID="bb2ade6fe9dd2c98d4c5b7a15ae6f9a2e774110d867d85d5a65ebf9fa7dc71c0" Namespace="default" Pod="nginx-deployment-bb8f74bfb-69hsg" WorkloadEndpoint="172.31.25.162-k8s-nginx--deployment--bb8f74bfb--69hsg-eth0" Jan 17 00:23:07.384823 containerd[1988]: 2026-01-17 00:23:07.367 [INFO][3375] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bb2ade6fe9dd2c98d4c5b7a15ae6f9a2e774110d867d85d5a65ebf9fa7dc71c0" Namespace="default" Pod="nginx-deployment-bb8f74bfb-69hsg" WorkloadEndpoint="172.31.25.162-k8s-nginx--deployment--bb8f74bfb--69hsg-eth0" Jan 17 00:23:07.384823 containerd[1988]: 2026-01-17 00:23:07.370 [INFO][3375] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="bb2ade6fe9dd2c98d4c5b7a15ae6f9a2e774110d867d85d5a65ebf9fa7dc71c0" Namespace="default" Pod="nginx-deployment-bb8f74bfb-69hsg" WorkloadEndpoint="172.31.25.162-k8s-nginx--deployment--bb8f74bfb--69hsg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.25.162-k8s-nginx--deployment--bb8f74bfb--69hsg-eth0", GenerateName:"nginx-deployment-bb8f74bfb-", Namespace:"default", SelfLink:"", UID:"2b3611d7-019c-41b2-aded-aea4ea617491", ResourceVersion:"1239", Generation:0, CreationTimestamp:time.Date(2026, 
time.January, 17, 0, 22, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"bb8f74bfb", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.25.162", ContainerID:"bb2ade6fe9dd2c98d4c5b7a15ae6f9a2e774110d867d85d5a65ebf9fa7dc71c0", Pod:"nginx-deployment-bb8f74bfb-69hsg", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.33.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"calic36d82aafa3", MAC:"ee:0d:ce:73:91:87", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:23:07.384823 containerd[1988]: 2026-01-17 00:23:07.380 [INFO][3375] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bb2ade6fe9dd2c98d4c5b7a15ae6f9a2e774110d867d85d5a65ebf9fa7dc71c0" Namespace="default" Pod="nginx-deployment-bb8f74bfb-69hsg" WorkloadEndpoint="172.31.25.162-k8s-nginx--deployment--bb8f74bfb--69hsg-eth0" Jan 17 00:23:07.413686 containerd[1988]: time="2026-01-17T00:23:07.413337543Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:23:07.413686 containerd[1988]: time="2026-01-17T00:23:07.413411971Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:23:07.413686 containerd[1988]: time="2026-01-17T00:23:07.413436731Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:23:07.413686 containerd[1988]: time="2026-01-17T00:23:07.413543440Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:23:07.434766 systemd[1]: Started cri-containerd-bb2ade6fe9dd2c98d4c5b7a15ae6f9a2e774110d867d85d5a65ebf9fa7dc71c0.scope - libcontainer container bb2ade6fe9dd2c98d4c5b7a15ae6f9a2e774110d867d85d5a65ebf9fa7dc71c0. Jan 17 00:23:07.485040 containerd[1988]: time="2026-01-17T00:23:07.485001059Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-bb8f74bfb-69hsg,Uid:2b3611d7-019c-41b2-aded-aea4ea617491,Namespace:default,Attempt:1,} returns sandbox id \"bb2ade6fe9dd2c98d4c5b7a15ae6f9a2e774110d867d85d5a65ebf9fa7dc71c0\"" Jan 17 00:23:07.495745 containerd[1988]: time="2026-01-17T00:23:07.495694747Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:23:07.497224 containerd[1988]: time="2026-01-17T00:23:07.497166530Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 17 00:23:07.497403 containerd[1988]: time="2026-01-17T00:23:07.497265625Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 17 00:23:07.497517 kubelet[2435]: E0117 00:23:07.497462 2435 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:23:07.497617 kubelet[2435]: E0117 00:23:07.497521 2435 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:23:07.497760 kubelet[2435]: E0117 00:23:07.497720 2435 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-xt5mp_calico-system(c4e6245a-3565-410a-9759-aa4637ef8b01): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 17 00:23:07.497886 kubelet[2435]: E0117 00:23:07.497778 2435 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-xt5mp" 
podUID="c4e6245a-3565-410a-9759-aa4637ef8b01" Jan 17 00:23:07.498306 containerd[1988]: time="2026-01-17T00:23:07.498276801Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 17 00:23:07.657319 systemd[1]: run-containerd-runc-k8s.io-48138043d79b562b5bba6482edbcb22cd001a4e7b57e1c65887f371ae0333698-runc.DyanXv.mount: Deactivated successfully. Jan 17 00:23:07.657460 systemd[1]: run-netns-cni\x2d9ba853a6\x2de851\x2df9e1\x2d98a6\x2d55f077e6998e.mount: Deactivated successfully. Jan 17 00:23:07.918731 kubelet[2435]: E0117 00:23:07.918600 2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:23:08.137539 systemd-networkd[1808]: calie82798c9c86: Gained IPv6LL Jan 17 00:23:08.193058 kubelet[2435]: E0117 00:23:08.193005 2435 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-xt5mp" podUID="c4e6245a-3565-410a-9759-aa4637ef8b01" Jan 17 00:23:08.918787 kubelet[2435]: E0117 00:23:08.918708 2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" 
Jan 17 00:23:09.224516 systemd-networkd[1808]: calic36d82aafa3: Gained IPv6LL Jan 17 00:23:09.919261 kubelet[2435]: E0117 00:23:09.919209 2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:23:09.987911 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3656246241.mount: Deactivated successfully. Jan 17 00:23:10.663042 kubelet[2435]: I0117 00:23:10.663009 2435 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 00:23:10.920110 kubelet[2435]: E0117 00:23:10.919677 2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:23:11.270089 containerd[1988]: time="2026-01-17T00:23:11.270022684Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:23:11.272177 containerd[1988]: time="2026-01-17T00:23:11.272082747Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=63840319" Jan 17 00:23:11.277150 containerd[1988]: time="2026-01-17T00:23:11.275905240Z" level=info msg="ImageCreate event name:\"sha256:66e7a6d326b670d959427bd421ca82232b0b81b18d85abaecb4ab9823d35056e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:23:11.281442 containerd[1988]: time="2026-01-17T00:23:11.281365602Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:892d1d54ab079b8cffa2317ccb45829886a0c3c3edbdf92bb286904b09797767\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:23:11.283661 containerd[1988]: time="2026-01-17T00:23:11.283616496Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:66e7a6d326b670d959427bd421ca82232b0b81b18d85abaecb4ab9823d35056e\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest 
\"ghcr.io/flatcar/nginx@sha256:892d1d54ab079b8cffa2317ccb45829886a0c3c3edbdf92bb286904b09797767\", size \"63840197\" in 3.785307775s" Jan 17 00:23:11.283661 containerd[1988]: time="2026-01-17T00:23:11.283659344Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:66e7a6d326b670d959427bd421ca82232b0b81b18d85abaecb4ab9823d35056e\"" Jan 17 00:23:11.292682 containerd[1988]: time="2026-01-17T00:23:11.292438293Z" level=info msg="CreateContainer within sandbox \"bb2ade6fe9dd2c98d4c5b7a15ae6f9a2e774110d867d85d5a65ebf9fa7dc71c0\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Jan 17 00:23:11.319478 containerd[1988]: time="2026-01-17T00:23:11.319428631Z" level=info msg="CreateContainer within sandbox \"bb2ade6fe9dd2c98d4c5b7a15ae6f9a2e774110d867d85d5a65ebf9fa7dc71c0\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"6bf91f4fe86645c14d9ed94f583625fe1d17154b2374d2530c14b8ad52ee0d62\"" Jan 17 00:23:11.320976 containerd[1988]: time="2026-01-17T00:23:11.320107409Z" level=info msg="StartContainer for \"6bf91f4fe86645c14d9ed94f583625fe1d17154b2374d2530c14b8ad52ee0d62\"" Jan 17 00:23:11.361402 systemd[1]: Started cri-containerd-6bf91f4fe86645c14d9ed94f583625fe1d17154b2374d2530c14b8ad52ee0d62.scope - libcontainer container 6bf91f4fe86645c14d9ed94f583625fe1d17154b2374d2530c14b8ad52ee0d62. 
Jan 17 00:23:11.392698 containerd[1988]: time="2026-01-17T00:23:11.392656944Z" level=info msg="StartContainer for \"6bf91f4fe86645c14d9ed94f583625fe1d17154b2374d2530c14b8ad52ee0d62\" returns successfully" Jan 17 00:23:11.440065 ntpd[1957]: Listen normally on 10 calie82798c9c86 [fe80::ecee:eeff:feee:eeee%6]:123 Jan 17 00:23:11.440653 ntpd[1957]: 17 Jan 00:23:11 ntpd[1957]: Listen normally on 10 calie82798c9c86 [fe80::ecee:eeff:feee:eeee%6]:123 Jan 17 00:23:11.440653 ntpd[1957]: 17 Jan 00:23:11 ntpd[1957]: Listen normally on 11 calic36d82aafa3 [fe80::ecee:eeff:feee:eeee%7]:123 Jan 17 00:23:11.440170 ntpd[1957]: Listen normally on 11 calic36d82aafa3 [fe80::ecee:eeff:feee:eeee%7]:123 Jan 17 00:23:11.920758 kubelet[2435]: E0117 00:23:11.920663 2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:23:12.598038 update_engine[1964]: I20260117 00:23:12.597939 1964 update_attempter.cc:509] Updating boot flags... Jan 17 00:23:12.657308 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 33 scanned by (udev-worker) (3590) Jan 17 00:23:12.852158 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 33 scanned by (udev-worker) (3593) Jan 17 00:23:12.921651 kubelet[2435]: E0117 00:23:12.921594 2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:23:13.922672 kubelet[2435]: E0117 00:23:13.922605 2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:23:14.923839 kubelet[2435]: E0117 00:23:14.923781 2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:23:15.924318 kubelet[2435]: E0117 00:23:15.924259 2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 
00:23:16.924444 kubelet[2435]: E0117 00:23:16.924402 2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:23:17.925493 kubelet[2435]: E0117 00:23:17.925421 2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:23:18.926368 kubelet[2435]: E0117 00:23:18.926319 2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:23:19.065087 containerd[1988]: time="2026-01-17T00:23:19.064848893Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 17 00:23:19.078398 kubelet[2435]: I0117 00:23:19.078327 2435 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-bb8f74bfb-69hsg" podStartSLOduration=20.278071401 podStartE2EDuration="24.078311516s" podCreationTimestamp="2026-01-17 00:22:55 +0000 UTC" firstStartedPulling="2026-01-17 00:23:07.486094993 +0000 UTC m=+28.205077590" lastFinishedPulling="2026-01-17 00:23:11.286335107 +0000 UTC m=+32.005317705" observedRunningTime="2026-01-17 00:23:12.231030377 +0000 UTC m=+32.950013000" watchObservedRunningTime="2026-01-17 00:23:19.078311516 +0000 UTC m=+39.797294134" Jan 17 00:23:19.346408 containerd[1988]: time="2026-01-17T00:23:19.346282701Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:23:19.348559 containerd[1988]: time="2026-01-17T00:23:19.348448439Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 17 00:23:19.348559 containerd[1988]: time="2026-01-17T00:23:19.348514299Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active 
requests=0, bytes read=69" Jan 17 00:23:19.348730 kubelet[2435]: E0117 00:23:19.348695 2435 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:23:19.348787 kubelet[2435]: E0117 00:23:19.348742 2435 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:23:19.348849 kubelet[2435]: E0117 00:23:19.348811 2435 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-xt5mp_calico-system(c4e6245a-3565-410a-9759-aa4637ef8b01): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 17 00:23:19.350196 containerd[1988]: time="2026-01-17T00:23:19.349950836Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 17 00:23:19.607763 containerd[1988]: time="2026-01-17T00:23:19.607640022Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:23:19.609930 containerd[1988]: time="2026-01-17T00:23:19.609808793Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 17 00:23:19.609930 containerd[1988]: time="2026-01-17T00:23:19.609817708Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 17 00:23:19.610117 kubelet[2435]: E0117 00:23:19.610042 2435 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:23:19.610117 kubelet[2435]: E0117 00:23:19.610095 2435 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:23:19.610250 kubelet[2435]: E0117 00:23:19.610207 2435 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-xt5mp_calico-system(c4e6245a-3565-410a-9759-aa4637ef8b01): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 17 00:23:19.610336 kubelet[2435]: E0117 00:23:19.610265 2435 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-xt5mp" podUID="c4e6245a-3565-410a-9759-aa4637ef8b01" Jan 17 00:23:19.897400 kubelet[2435]: E0117 00:23:19.897275 2435 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:23:19.927819 kubelet[2435]: E0117 00:23:19.927117 2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:23:20.928690 kubelet[2435]: E0117 00:23:20.928560 2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:23:21.582412 systemd[1]: Created slice kubepods-besteffort-pod2b9eb21d_20cc_431a_b282_fb76d61888f8.slice - libcontainer container kubepods-besteffort-pod2b9eb21d_20cc_431a_b282_fb76d61888f8.slice. 
Jan 17 00:23:21.623933 kubelet[2435]: I0117 00:23:21.623851 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/2b9eb21d-20cc-431a-b282-fb76d61888f8-data\") pod \"nfs-server-provisioner-0\" (UID: \"2b9eb21d-20cc-431a-b282-fb76d61888f8\") " pod="default/nfs-server-provisioner-0" Jan 17 00:23:21.623933 kubelet[2435]: I0117 00:23:21.623892 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swg52\" (UniqueName: \"kubernetes.io/projected/2b9eb21d-20cc-431a-b282-fb76d61888f8-kube-api-access-swg52\") pod \"nfs-server-provisioner-0\" (UID: \"2b9eb21d-20cc-431a-b282-fb76d61888f8\") " pod="default/nfs-server-provisioner-0" Jan 17 00:23:21.890330 containerd[1988]: time="2026-01-17T00:23:21.890196888Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:2b9eb21d-20cc-431a-b282-fb76d61888f8,Namespace:default,Attempt:0,}" Jan 17 00:23:21.930186 kubelet[2435]: E0117 00:23:21.929233 2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:23:22.092398 systemd-networkd[1808]: cali60e51b789ff: Link UP Jan 17 00:23:22.096354 systemd-networkd[1808]: cali60e51b789ff: Gained carrier Jan 17 00:23:22.097926 (udev-worker)[3791]: Network interface NamePolicy= disabled on kernel command line. 
Jan 17 00:23:22.114943 containerd[1988]: 2026-01-17 00:23:21.958 [INFO][3773] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.25.162-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default 2b9eb21d-20cc-431a-b282-fb76d61888f8 1349 0 2026-01-17 00:23:21 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-7c9b4c458c heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 172.31.25.162 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] [] }} ContainerID="8343f6d6c1c999bcfe9228a36afc349d6dea4b3b7cd0c63bd94ae0fd5ab3f21d" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.25.162-k8s-nfs--server--provisioner--0-" Jan 17 00:23:22.114943 containerd[1988]: 2026-01-17 00:23:21.958 [INFO][3773] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8343f6d6c1c999bcfe9228a36afc349d6dea4b3b7cd0c63bd94ae0fd5ab3f21d" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.25.162-k8s-nfs--server--provisioner--0-eth0" Jan 17 00:23:22.114943 containerd[1988]: 2026-01-17 00:23:22.024 [INFO][3784] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8343f6d6c1c999bcfe9228a36afc349d6dea4b3b7cd0c63bd94ae0fd5ab3f21d" 
HandleID="k8s-pod-network.8343f6d6c1c999bcfe9228a36afc349d6dea4b3b7cd0c63bd94ae0fd5ab3f21d" Workload="172.31.25.162-k8s-nfs--server--provisioner--0-eth0" Jan 17 00:23:22.114943 containerd[1988]: 2026-01-17 00:23:22.024 [INFO][3784] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="8343f6d6c1c999bcfe9228a36afc349d6dea4b3b7cd0c63bd94ae0fd5ab3f21d" HandleID="k8s-pod-network.8343f6d6c1c999bcfe9228a36afc349d6dea4b3b7cd0c63bd94ae0fd5ab3f21d" Workload="172.31.25.162-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f260), Attrs:map[string]string{"namespace":"default", "node":"172.31.25.162", "pod":"nfs-server-provisioner-0", "timestamp":"2026-01-17 00:23:22.024098312 +0000 UTC"}, Hostname:"172.31.25.162", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:23:22.114943 containerd[1988]: 2026-01-17 00:23:22.024 [INFO][3784] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:23:22.114943 containerd[1988]: 2026-01-17 00:23:22.024 [INFO][3784] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 00:23:22.114943 containerd[1988]: 2026-01-17 00:23:22.024 [INFO][3784] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.25.162' Jan 17 00:23:22.114943 containerd[1988]: 2026-01-17 00:23:22.036 [INFO][3784] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8343f6d6c1c999bcfe9228a36afc349d6dea4b3b7cd0c63bd94ae0fd5ab3f21d" host="172.31.25.162" Jan 17 00:23:22.114943 containerd[1988]: 2026-01-17 00:23:22.042 [INFO][3784] ipam/ipam.go 394: Looking up existing affinities for host host="172.31.25.162" Jan 17 00:23:22.114943 containerd[1988]: 2026-01-17 00:23:22.051 [INFO][3784] ipam/ipam.go 511: Trying affinity for 192.168.33.64/26 host="172.31.25.162" Jan 17 00:23:22.114943 containerd[1988]: 2026-01-17 00:23:22.054 [INFO][3784] ipam/ipam.go 158: Attempting to load block cidr=192.168.33.64/26 host="172.31.25.162" Jan 17 00:23:22.114943 containerd[1988]: 2026-01-17 00:23:22.058 [INFO][3784] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.33.64/26 host="172.31.25.162" Jan 17 00:23:22.114943 containerd[1988]: 2026-01-17 00:23:22.058 [INFO][3784] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.33.64/26 handle="k8s-pod-network.8343f6d6c1c999bcfe9228a36afc349d6dea4b3b7cd0c63bd94ae0fd5ab3f21d" host="172.31.25.162" Jan 17 00:23:22.114943 containerd[1988]: 2026-01-17 00:23:22.061 [INFO][3784] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8343f6d6c1c999bcfe9228a36afc349d6dea4b3b7cd0c63bd94ae0fd5ab3f21d Jan 17 00:23:22.114943 containerd[1988]: 2026-01-17 00:23:22.067 [INFO][3784] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.33.64/26 handle="k8s-pod-network.8343f6d6c1c999bcfe9228a36afc349d6dea4b3b7cd0c63bd94ae0fd5ab3f21d" host="172.31.25.162" Jan 17 00:23:22.114943 containerd[1988]: 2026-01-17 00:23:22.085 [INFO][3784] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.33.67/26] block=192.168.33.64/26 
handle="k8s-pod-network.8343f6d6c1c999bcfe9228a36afc349d6dea4b3b7cd0c63bd94ae0fd5ab3f21d" host="172.31.25.162" Jan 17 00:23:22.114943 containerd[1988]: 2026-01-17 00:23:22.085 [INFO][3784] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.33.67/26] handle="k8s-pod-network.8343f6d6c1c999bcfe9228a36afc349d6dea4b3b7cd0c63bd94ae0fd5ab3f21d" host="172.31.25.162" Jan 17 00:23:22.114943 containerd[1988]: 2026-01-17 00:23:22.085 [INFO][3784] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:23:22.114943 containerd[1988]: 2026-01-17 00:23:22.085 [INFO][3784] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.33.67/26] IPv6=[] ContainerID="8343f6d6c1c999bcfe9228a36afc349d6dea4b3b7cd0c63bd94ae0fd5ab3f21d" HandleID="k8s-pod-network.8343f6d6c1c999bcfe9228a36afc349d6dea4b3b7cd0c63bd94ae0fd5ab3f21d" Workload="172.31.25.162-k8s-nfs--server--provisioner--0-eth0" Jan 17 00:23:22.118889 containerd[1988]: 2026-01-17 00:23:22.088 [INFO][3773] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8343f6d6c1c999bcfe9228a36afc349d6dea4b3b7cd0c63bd94ae0fd5ab3f21d" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.25.162-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.25.162-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"2b9eb21d-20cc-431a-b282-fb76d61888f8", ResourceVersion:"1349", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 23, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-7c9b4c458c", "heritage":"Helm", "projectcalico.org/namespace":"default", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.25.162", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.33.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:23:22.118889 containerd[1988]: 2026-01-17 00:23:22.088 [INFO][3773] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.33.67/32] ContainerID="8343f6d6c1c999bcfe9228a36afc349d6dea4b3b7cd0c63bd94ae0fd5ab3f21d" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.25.162-k8s-nfs--server--provisioner--0-eth0" Jan 17 00:23:22.118889 containerd[1988]: 2026-01-17 00:23:22.088 [INFO][3773] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="8343f6d6c1c999bcfe9228a36afc349d6dea4b3b7cd0c63bd94ae0fd5ab3f21d" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.25.162-k8s-nfs--server--provisioner--0-eth0" Jan 17 00:23:22.118889 containerd[1988]: 2026-01-17 00:23:22.094 [INFO][3773] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8343f6d6c1c999bcfe9228a36afc349d6dea4b3b7cd0c63bd94ae0fd5ab3f21d" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.25.162-k8s-nfs--server--provisioner--0-eth0" Jan 17 00:23:22.120719 containerd[1988]: 2026-01-17 00:23:22.094 [INFO][3773] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8343f6d6c1c999bcfe9228a36afc349d6dea4b3b7cd0c63bd94ae0fd5ab3f21d" Namespace="default" Pod="nfs-server-provisioner-0" 
WorkloadEndpoint="172.31.25.162-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.25.162-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"2b9eb21d-20cc-431a-b282-fb76d61888f8", ResourceVersion:"1349", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 23, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-7c9b4c458c", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.25.162", ContainerID:"8343f6d6c1c999bcfe9228a36afc349d6dea4b3b7cd0c63bd94ae0fd5ab3f21d", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.33.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"0a:bb:5d:71:38:c0", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:23:22.120719 containerd[1988]: 2026-01-17 00:23:22.108 [INFO][3773] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8343f6d6c1c999bcfe9228a36afc349d6dea4b3b7cd0c63bd94ae0fd5ab3f21d" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.25.162-k8s-nfs--server--provisioner--0-eth0" Jan 17 00:23:22.151562 containerd[1988]: time="2026-01-17T00:23:22.150944110Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:23:22.151562 containerd[1988]: time="2026-01-17T00:23:22.151021483Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:23:22.151562 containerd[1988]: time="2026-01-17T00:23:22.151047466Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:23:22.151562 containerd[1988]: time="2026-01-17T00:23:22.151224715Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:23:22.188430 systemd[1]: Started cri-containerd-8343f6d6c1c999bcfe9228a36afc349d6dea4b3b7cd0c63bd94ae0fd5ab3f21d.scope - libcontainer container 8343f6d6c1c999bcfe9228a36afc349d6dea4b3b7cd0c63bd94ae0fd5ab3f21d. Jan 17 00:23:22.235446 containerd[1988]: time="2026-01-17T00:23:22.235350921Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:2b9eb21d-20cc-431a-b282-fb76d61888f8,Namespace:default,Attempt:0,} returns sandbox id \"8343f6d6c1c999bcfe9228a36afc349d6dea4b3b7cd0c63bd94ae0fd5ab3f21d\"" Jan 17 00:23:22.237995 containerd[1988]: time="2026-01-17T00:23:22.237786860Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Jan 17 00:23:22.930511 kubelet[2435]: E0117 00:23:22.930293 2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:23:23.497550 systemd-networkd[1808]: cali60e51b789ff: Gained IPv6LL Jan 17 00:23:23.931781 kubelet[2435]: E0117 00:23:23.931428 2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:23:24.806181 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4205610317.mount: Deactivated successfully. 
Jan 17 00:23:24.931593 kubelet[2435]: E0117 00:23:24.931550 2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:23:25.932893 kubelet[2435]: E0117 00:23:25.932812 2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:23:26.440117 ntpd[1957]: Listen normally on 12 cali60e51b789ff [fe80::ecee:eeff:feee:eeee%8]:123 Jan 17 00:23:26.441476 ntpd[1957]: 17 Jan 00:23:26 ntpd[1957]: Listen normally on 12 cali60e51b789ff [fe80::ecee:eeff:feee:eeee%8]:123 Jan 17 00:23:26.933199 kubelet[2435]: E0117 00:23:26.933118 2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:23:26.964944 containerd[1988]: time="2026-01-17T00:23:26.964887299Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:23:26.968219 containerd[1988]: time="2026-01-17T00:23:26.968158401Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039406" Jan 17 00:23:26.970815 containerd[1988]: time="2026-01-17T00:23:26.970759771Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:23:26.976345 containerd[1988]: time="2026-01-17T00:23:26.975307510Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:23:26.976345 containerd[1988]: time="2026-01-17T00:23:26.976221437Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id 
\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 4.73839965s" Jan 17 00:23:26.976345 containerd[1988]: time="2026-01-17T00:23:26.976256284Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Jan 17 00:23:27.082713 containerd[1988]: time="2026-01-17T00:23:27.082649053Z" level=info msg="CreateContainer within sandbox \"8343f6d6c1c999bcfe9228a36afc349d6dea4b3b7cd0c63bd94ae0fd5ab3f21d\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Jan 17 00:23:27.120145 containerd[1988]: time="2026-01-17T00:23:27.120082686Z" level=info msg="CreateContainer within sandbox \"8343f6d6c1c999bcfe9228a36afc349d6dea4b3b7cd0c63bd94ae0fd5ab3f21d\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"55640519f101782651f0ac874d71b7f4df70453d6ea6ab886623b37254f6d7c0\"" Jan 17 00:23:27.120759 containerd[1988]: time="2026-01-17T00:23:27.120725259Z" level=info msg="StartContainer for \"55640519f101782651f0ac874d71b7f4df70453d6ea6ab886623b37254f6d7c0\"" Jan 17 00:23:27.174952 systemd[1]: run-containerd-runc-k8s.io-55640519f101782651f0ac874d71b7f4df70453d6ea6ab886623b37254f6d7c0-runc.4EjrEu.mount: Deactivated successfully. Jan 17 00:23:27.185330 systemd[1]: Started cri-containerd-55640519f101782651f0ac874d71b7f4df70453d6ea6ab886623b37254f6d7c0.scope - libcontainer container 55640519f101782651f0ac874d71b7f4df70453d6ea6ab886623b37254f6d7c0. 
Jan 17 00:23:27.232378 containerd[1988]: time="2026-01-17T00:23:27.232237043Z" level=info msg="StartContainer for \"55640519f101782651f0ac874d71b7f4df70453d6ea6ab886623b37254f6d7c0\" returns successfully" Jan 17 00:23:27.934343 kubelet[2435]: E0117 00:23:27.934294 2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:23:28.934832 kubelet[2435]: E0117 00:23:28.934772 2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:23:29.935338 kubelet[2435]: E0117 00:23:29.935275 2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:23:30.936265 kubelet[2435]: E0117 00:23:30.936224 2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:23:31.936892 kubelet[2435]: E0117 00:23:31.936823 2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:23:32.070983 kubelet[2435]: E0117 00:23:32.070911 2435 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-xt5mp" podUID="c4e6245a-3565-410a-9759-aa4637ef8b01" Jan 17 00:23:32.088768 kubelet[2435]: I0117 00:23:32.088709 2435 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=6.321064023 podStartE2EDuration="11.088692853s" podCreationTimestamp="2026-01-17 00:23:21 +0000 UTC" firstStartedPulling="2026-01-17 00:23:22.237116461 +0000 UTC m=+42.956099074" lastFinishedPulling="2026-01-17 00:23:27.004745307 +0000 UTC m=+47.723727904" observedRunningTime="2026-01-17 00:23:27.336228166 +0000 UTC m=+48.055210787" watchObservedRunningTime="2026-01-17 00:23:32.088692853 +0000 UTC m=+52.807675472" Jan 17 00:23:32.937974 kubelet[2435]: E0117 00:23:32.937916 2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:23:33.938876 kubelet[2435]: E0117 00:23:33.938825 2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:23:34.939671 kubelet[2435]: E0117 00:23:34.939604 2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:23:35.940592 kubelet[2435]: E0117 00:23:35.940550 2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:23:36.940719 kubelet[2435]: E0117 00:23:36.940659 2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:23:37.941831 kubelet[2435]: E0117 00:23:37.941788 2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:23:38.942382 kubelet[2435]: E0117 00:23:38.942310 2435 file_linux.go:61] 
"Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:23:39.897800 kubelet[2435]: E0117 00:23:39.897750 2435 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:23:39.942998 kubelet[2435]: E0117 00:23:39.942946 2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:23:39.955861 containerd[1988]: time="2026-01-17T00:23:39.954672237Z" level=info msg="StopPodSandbox for \"b9762ca5c0588eae73bd427834be295f47ae995a920f52c6d9ad3a37f5ac27a1\"" Jan 17 00:23:40.177559 containerd[1988]: 2026-01-17 00:23:40.005 [WARNING][3961] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b9762ca5c0588eae73bd427834be295f47ae995a920f52c6d9ad3a37f5ac27a1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.25.162-k8s-csi--node--driver--xt5mp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c4e6245a-3565-410a-9759-aa4637ef8b01", ResourceVersion:"1410", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 22, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.25.162", 
ContainerID:"48138043d79b562b5bba6482edbcb22cd001a4e7b57e1c65887f371ae0333698", Pod:"csi-node-driver-xt5mp", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.33.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie82798c9c86", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:23:40.177559 containerd[1988]: 2026-01-17 00:23:40.006 [INFO][3961] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b9762ca5c0588eae73bd427834be295f47ae995a920f52c6d9ad3a37f5ac27a1" Jan 17 00:23:40.177559 containerd[1988]: 2026-01-17 00:23:40.006 [INFO][3961] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b9762ca5c0588eae73bd427834be295f47ae995a920f52c6d9ad3a37f5ac27a1" iface="eth0" netns="" Jan 17 00:23:40.177559 containerd[1988]: 2026-01-17 00:23:40.006 [INFO][3961] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b9762ca5c0588eae73bd427834be295f47ae995a920f52c6d9ad3a37f5ac27a1" Jan 17 00:23:40.177559 containerd[1988]: 2026-01-17 00:23:40.006 [INFO][3961] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b9762ca5c0588eae73bd427834be295f47ae995a920f52c6d9ad3a37f5ac27a1" Jan 17 00:23:40.177559 containerd[1988]: 2026-01-17 00:23:40.163 [INFO][3969] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b9762ca5c0588eae73bd427834be295f47ae995a920f52c6d9ad3a37f5ac27a1" HandleID="k8s-pod-network.b9762ca5c0588eae73bd427834be295f47ae995a920f52c6d9ad3a37f5ac27a1" Workload="172.31.25.162-k8s-csi--node--driver--xt5mp-eth0" Jan 17 00:23:40.177559 containerd[1988]: 2026-01-17 00:23:40.163 [INFO][3969] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Jan 17 00:23:40.177559 containerd[1988]: 2026-01-17 00:23:40.163 [INFO][3969] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:23:40.177559 containerd[1988]: 2026-01-17 00:23:40.170 [WARNING][3969] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="b9762ca5c0588eae73bd427834be295f47ae995a920f52c6d9ad3a37f5ac27a1" HandleID="k8s-pod-network.b9762ca5c0588eae73bd427834be295f47ae995a920f52c6d9ad3a37f5ac27a1" Workload="172.31.25.162-k8s-csi--node--driver--xt5mp-eth0" Jan 17 00:23:40.177559 containerd[1988]: 2026-01-17 00:23:40.170 [INFO][3969] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b9762ca5c0588eae73bd427834be295f47ae995a920f52c6d9ad3a37f5ac27a1" HandleID="k8s-pod-network.b9762ca5c0588eae73bd427834be295f47ae995a920f52c6d9ad3a37f5ac27a1" Workload="172.31.25.162-k8s-csi--node--driver--xt5mp-eth0" Jan 17 00:23:40.177559 containerd[1988]: 2026-01-17 00:23:40.174 [INFO][3969] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:23:40.177559 containerd[1988]: 2026-01-17 00:23:40.175 [INFO][3961] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="b9762ca5c0588eae73bd427834be295f47ae995a920f52c6d9ad3a37f5ac27a1" Jan 17 00:23:40.178090 containerd[1988]: time="2026-01-17T00:23:40.177584264Z" level=info msg="TearDown network for sandbox \"b9762ca5c0588eae73bd427834be295f47ae995a920f52c6d9ad3a37f5ac27a1\" successfully" Jan 17 00:23:40.178090 containerd[1988]: time="2026-01-17T00:23:40.177608812Z" level=info msg="StopPodSandbox for \"b9762ca5c0588eae73bd427834be295f47ae995a920f52c6d9ad3a37f5ac27a1\" returns successfully" Jan 17 00:23:40.184236 containerd[1988]: time="2026-01-17T00:23:40.184186438Z" level=info msg="RemovePodSandbox for \"b9762ca5c0588eae73bd427834be295f47ae995a920f52c6d9ad3a37f5ac27a1\"" Jan 17 00:23:40.184236 containerd[1988]: time="2026-01-17T00:23:40.184234941Z" level=info msg="Forcibly stopping sandbox \"b9762ca5c0588eae73bd427834be295f47ae995a920f52c6d9ad3a37f5ac27a1\"" Jan 17 00:23:40.358495 containerd[1988]: 2026-01-17 00:23:40.321 [WARNING][3986] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b9762ca5c0588eae73bd427834be295f47ae995a920f52c6d9ad3a37f5ac27a1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.25.162-k8s-csi--node--driver--xt5mp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c4e6245a-3565-410a-9759-aa4637ef8b01", ResourceVersion:"1410", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 22, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.25.162", ContainerID:"48138043d79b562b5bba6482edbcb22cd001a4e7b57e1c65887f371ae0333698", Pod:"csi-node-driver-xt5mp", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.33.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie82798c9c86", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:23:40.358495 containerd[1988]: 2026-01-17 00:23:40.321 [INFO][3986] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b9762ca5c0588eae73bd427834be295f47ae995a920f52c6d9ad3a37f5ac27a1" Jan 17 00:23:40.358495 containerd[1988]: 2026-01-17 00:23:40.321 [INFO][3986] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="b9762ca5c0588eae73bd427834be295f47ae995a920f52c6d9ad3a37f5ac27a1" iface="eth0" netns="" Jan 17 00:23:40.358495 containerd[1988]: 2026-01-17 00:23:40.321 [INFO][3986] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b9762ca5c0588eae73bd427834be295f47ae995a920f52c6d9ad3a37f5ac27a1" Jan 17 00:23:40.358495 containerd[1988]: 2026-01-17 00:23:40.321 [INFO][3986] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b9762ca5c0588eae73bd427834be295f47ae995a920f52c6d9ad3a37f5ac27a1" Jan 17 00:23:40.358495 containerd[1988]: 2026-01-17 00:23:40.345 [INFO][3994] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b9762ca5c0588eae73bd427834be295f47ae995a920f52c6d9ad3a37f5ac27a1" HandleID="k8s-pod-network.b9762ca5c0588eae73bd427834be295f47ae995a920f52c6d9ad3a37f5ac27a1" Workload="172.31.25.162-k8s-csi--node--driver--xt5mp-eth0" Jan 17 00:23:40.358495 containerd[1988]: 2026-01-17 00:23:40.345 [INFO][3994] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:23:40.358495 containerd[1988]: 2026-01-17 00:23:40.345 [INFO][3994] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:23:40.358495 containerd[1988]: 2026-01-17 00:23:40.353 [WARNING][3994] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b9762ca5c0588eae73bd427834be295f47ae995a920f52c6d9ad3a37f5ac27a1" HandleID="k8s-pod-network.b9762ca5c0588eae73bd427834be295f47ae995a920f52c6d9ad3a37f5ac27a1" Workload="172.31.25.162-k8s-csi--node--driver--xt5mp-eth0" Jan 17 00:23:40.358495 containerd[1988]: 2026-01-17 00:23:40.353 [INFO][3994] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b9762ca5c0588eae73bd427834be295f47ae995a920f52c6d9ad3a37f5ac27a1" HandleID="k8s-pod-network.b9762ca5c0588eae73bd427834be295f47ae995a920f52c6d9ad3a37f5ac27a1" Workload="172.31.25.162-k8s-csi--node--driver--xt5mp-eth0" Jan 17 00:23:40.358495 containerd[1988]: 2026-01-17 00:23:40.355 [INFO][3994] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:23:40.358495 containerd[1988]: 2026-01-17 00:23:40.357 [INFO][3986] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b9762ca5c0588eae73bd427834be295f47ae995a920f52c6d9ad3a37f5ac27a1" Jan 17 00:23:40.359695 containerd[1988]: time="2026-01-17T00:23:40.358541834Z" level=info msg="TearDown network for sandbox \"b9762ca5c0588eae73bd427834be295f47ae995a920f52c6d9ad3a37f5ac27a1\" successfully" Jan 17 00:23:40.375568 containerd[1988]: time="2026-01-17T00:23:40.375510412Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b9762ca5c0588eae73bd427834be295f47ae995a920f52c6d9ad3a37f5ac27a1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 17 00:23:40.375751 containerd[1988]: time="2026-01-17T00:23:40.375595364Z" level=info msg="RemovePodSandbox \"b9762ca5c0588eae73bd427834be295f47ae995a920f52c6d9ad3a37f5ac27a1\" returns successfully" Jan 17 00:23:40.390701 containerd[1988]: time="2026-01-17T00:23:40.390658812Z" level=info msg="StopPodSandbox for \"27bd03bddf132ab12245e99e2a639dd15c0193a6aeb2aa30330c98feae600f97\"" Jan 17 00:23:40.496905 containerd[1988]: 2026-01-17 00:23:40.449 [WARNING][4008] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="27bd03bddf132ab12245e99e2a639dd15c0193a6aeb2aa30330c98feae600f97" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.25.162-k8s-nginx--deployment--bb8f74bfb--69hsg-eth0", GenerateName:"nginx-deployment-bb8f74bfb-", Namespace:"default", SelfLink:"", UID:"2b3611d7-019c-41b2-aded-aea4ea617491", ResourceVersion:"1278", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 22, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"bb8f74bfb", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.25.162", ContainerID:"bb2ade6fe9dd2c98d4c5b7a15ae6f9a2e774110d867d85d5a65ebf9fa7dc71c0", Pod:"nginx-deployment-bb8f74bfb-69hsg", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.33.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"calic36d82aafa3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), 
AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:23:40.496905 containerd[1988]: 2026-01-17 00:23:40.450 [INFO][4008] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="27bd03bddf132ab12245e99e2a639dd15c0193a6aeb2aa30330c98feae600f97" Jan 17 00:23:40.496905 containerd[1988]: 2026-01-17 00:23:40.450 [INFO][4008] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="27bd03bddf132ab12245e99e2a639dd15c0193a6aeb2aa30330c98feae600f97" iface="eth0" netns="" Jan 17 00:23:40.496905 containerd[1988]: 2026-01-17 00:23:40.450 [INFO][4008] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="27bd03bddf132ab12245e99e2a639dd15c0193a6aeb2aa30330c98feae600f97" Jan 17 00:23:40.496905 containerd[1988]: 2026-01-17 00:23:40.450 [INFO][4008] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="27bd03bddf132ab12245e99e2a639dd15c0193a6aeb2aa30330c98feae600f97" Jan 17 00:23:40.496905 containerd[1988]: 2026-01-17 00:23:40.475 [INFO][4015] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="27bd03bddf132ab12245e99e2a639dd15c0193a6aeb2aa30330c98feae600f97" HandleID="k8s-pod-network.27bd03bddf132ab12245e99e2a639dd15c0193a6aeb2aa30330c98feae600f97" Workload="172.31.25.162-k8s-nginx--deployment--bb8f74bfb--69hsg-eth0" Jan 17 00:23:40.496905 containerd[1988]: 2026-01-17 00:23:40.475 [INFO][4015] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:23:40.496905 containerd[1988]: 2026-01-17 00:23:40.475 [INFO][4015] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:23:40.496905 containerd[1988]: 2026-01-17 00:23:40.490 [WARNING][4015] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="27bd03bddf132ab12245e99e2a639dd15c0193a6aeb2aa30330c98feae600f97" HandleID="k8s-pod-network.27bd03bddf132ab12245e99e2a639dd15c0193a6aeb2aa30330c98feae600f97" Workload="172.31.25.162-k8s-nginx--deployment--bb8f74bfb--69hsg-eth0" Jan 17 00:23:40.496905 containerd[1988]: 2026-01-17 00:23:40.490 [INFO][4015] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="27bd03bddf132ab12245e99e2a639dd15c0193a6aeb2aa30330c98feae600f97" HandleID="k8s-pod-network.27bd03bddf132ab12245e99e2a639dd15c0193a6aeb2aa30330c98feae600f97" Workload="172.31.25.162-k8s-nginx--deployment--bb8f74bfb--69hsg-eth0" Jan 17 00:23:40.496905 containerd[1988]: 2026-01-17 00:23:40.493 [INFO][4015] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:23:40.496905 containerd[1988]: 2026-01-17 00:23:40.495 [INFO][4008] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="27bd03bddf132ab12245e99e2a639dd15c0193a6aeb2aa30330c98feae600f97" Jan 17 00:23:40.496905 containerd[1988]: time="2026-01-17T00:23:40.496723302Z" level=info msg="TearDown network for sandbox \"27bd03bddf132ab12245e99e2a639dd15c0193a6aeb2aa30330c98feae600f97\" successfully" Jan 17 00:23:40.496905 containerd[1988]: time="2026-01-17T00:23:40.496748216Z" level=info msg="StopPodSandbox for \"27bd03bddf132ab12245e99e2a639dd15c0193a6aeb2aa30330c98feae600f97\" returns successfully" Jan 17 00:23:40.498312 containerd[1988]: time="2026-01-17T00:23:40.497289083Z" level=info msg="RemovePodSandbox for \"27bd03bddf132ab12245e99e2a639dd15c0193a6aeb2aa30330c98feae600f97\"" Jan 17 00:23:40.498312 containerd[1988]: time="2026-01-17T00:23:40.497314553Z" level=info msg="Forcibly stopping sandbox \"27bd03bddf132ab12245e99e2a639dd15c0193a6aeb2aa30330c98feae600f97\"" Jan 17 00:23:40.637158 containerd[1988]: 2026-01-17 00:23:40.557 [WARNING][4029] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="27bd03bddf132ab12245e99e2a639dd15c0193a6aeb2aa30330c98feae600f97" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.25.162-k8s-nginx--deployment--bb8f74bfb--69hsg-eth0", GenerateName:"nginx-deployment-bb8f74bfb-", Namespace:"default", SelfLink:"", UID:"2b3611d7-019c-41b2-aded-aea4ea617491", ResourceVersion:"1278", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 22, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"bb8f74bfb", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.25.162", ContainerID:"bb2ade6fe9dd2c98d4c5b7a15ae6f9a2e774110d867d85d5a65ebf9fa7dc71c0", Pod:"nginx-deployment-bb8f74bfb-69hsg", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.33.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"calic36d82aafa3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:23:40.637158 containerd[1988]: 2026-01-17 00:23:40.557 [INFO][4029] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="27bd03bddf132ab12245e99e2a639dd15c0193a6aeb2aa30330c98feae600f97" Jan 17 00:23:40.637158 containerd[1988]: 2026-01-17 00:23:40.557 [INFO][4029] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="27bd03bddf132ab12245e99e2a639dd15c0193a6aeb2aa30330c98feae600f97" iface="eth0" netns="" Jan 17 00:23:40.637158 containerd[1988]: 2026-01-17 00:23:40.557 [INFO][4029] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="27bd03bddf132ab12245e99e2a639dd15c0193a6aeb2aa30330c98feae600f97" Jan 17 00:23:40.637158 containerd[1988]: 2026-01-17 00:23:40.557 [INFO][4029] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="27bd03bddf132ab12245e99e2a639dd15c0193a6aeb2aa30330c98feae600f97" Jan 17 00:23:40.637158 containerd[1988]: 2026-01-17 00:23:40.581 [INFO][4036] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="27bd03bddf132ab12245e99e2a639dd15c0193a6aeb2aa30330c98feae600f97" HandleID="k8s-pod-network.27bd03bddf132ab12245e99e2a639dd15c0193a6aeb2aa30330c98feae600f97" Workload="172.31.25.162-k8s-nginx--deployment--bb8f74bfb--69hsg-eth0" Jan 17 00:23:40.637158 containerd[1988]: 2026-01-17 00:23:40.582 [INFO][4036] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:23:40.637158 containerd[1988]: 2026-01-17 00:23:40.582 [INFO][4036] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:23:40.637158 containerd[1988]: 2026-01-17 00:23:40.622 [WARNING][4036] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="27bd03bddf132ab12245e99e2a639dd15c0193a6aeb2aa30330c98feae600f97" HandleID="k8s-pod-network.27bd03bddf132ab12245e99e2a639dd15c0193a6aeb2aa30330c98feae600f97" Workload="172.31.25.162-k8s-nginx--deployment--bb8f74bfb--69hsg-eth0" Jan 17 00:23:40.637158 containerd[1988]: 2026-01-17 00:23:40.622 [INFO][4036] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="27bd03bddf132ab12245e99e2a639dd15c0193a6aeb2aa30330c98feae600f97" HandleID="k8s-pod-network.27bd03bddf132ab12245e99e2a639dd15c0193a6aeb2aa30330c98feae600f97" Workload="172.31.25.162-k8s-nginx--deployment--bb8f74bfb--69hsg-eth0" Jan 17 00:23:40.637158 containerd[1988]: 2026-01-17 00:23:40.634 [INFO][4036] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:23:40.637158 containerd[1988]: 2026-01-17 00:23:40.635 [INFO][4029] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="27bd03bddf132ab12245e99e2a639dd15c0193a6aeb2aa30330c98feae600f97" Jan 17 00:23:40.637852 containerd[1988]: time="2026-01-17T00:23:40.637182375Z" level=info msg="TearDown network for sandbox \"27bd03bddf132ab12245e99e2a639dd15c0193a6aeb2aa30330c98feae600f97\" successfully" Jan 17 00:23:40.640819 containerd[1988]: time="2026-01-17T00:23:40.640701695Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"27bd03bddf132ab12245e99e2a639dd15c0193a6aeb2aa30330c98feae600f97\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 17 00:23:40.640819 containerd[1988]: time="2026-01-17T00:23:40.640786051Z" level=info msg="RemovePodSandbox \"27bd03bddf132ab12245e99e2a639dd15c0193a6aeb2aa30330c98feae600f97\" returns successfully" Jan 17 00:23:40.943858 kubelet[2435]: E0117 00:23:40.943732 2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:23:41.944431 kubelet[2435]: E0117 00:23:41.944379 2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:23:42.945145 kubelet[2435]: E0117 00:23:42.945071 2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:23:43.945808 kubelet[2435]: E0117 00:23:43.945756 2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:23:44.946296 kubelet[2435]: E0117 00:23:44.946238 2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:23:45.947411 kubelet[2435]: E0117 00:23:45.947338 2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:23:46.948148 kubelet[2435]: E0117 00:23:46.948068 2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:23:47.072314 containerd[1988]: time="2026-01-17T00:23:47.072255112Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 17 00:23:47.369457 containerd[1988]: time="2026-01-17T00:23:47.369325192Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:23:47.370821 containerd[1988]: time="2026-01-17T00:23:47.370764646Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack 
image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 17 00:23:47.370959 containerd[1988]: time="2026-01-17T00:23:47.370856869Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 17 00:23:47.371383 kubelet[2435]: E0117 00:23:47.371037 2435 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:23:47.371484 kubelet[2435]: E0117 00:23:47.371393 2435 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:23:47.373225 kubelet[2435]: E0117 00:23:47.372863 2435 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-xt5mp_calico-system(c4e6245a-3565-410a-9759-aa4637ef8b01): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 17 00:23:47.373750 containerd[1988]: time="2026-01-17T00:23:47.373724236Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 17 00:23:47.573762 systemd[1]: Created slice kubepods-besteffort-pod3e08fa10_d909_4229_9da1_186041cf5bb8.slice - libcontainer container kubepods-besteffort-pod3e08fa10_d909_4229_9da1_186041cf5bb8.slice. 
Jan 17 00:23:47.672079 containerd[1988]: time="2026-01-17T00:23:47.671962937Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:23:47.673566 containerd[1988]: time="2026-01-17T00:23:47.673476415Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 17 00:23:47.673731 containerd[1988]: time="2026-01-17T00:23:47.673615254Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 17 00:23:47.673872 kubelet[2435]: E0117 00:23:47.673815 2435 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:23:47.673972 kubelet[2435]: E0117 00:23:47.673873 2435 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:23:47.674023 kubelet[2435]: E0117 00:23:47.673980 2435 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-xt5mp_calico-system(c4e6245a-3565-410a-9759-aa4637ef8b01): ErrImagePull: rpc error: 
code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 17 00:23:47.674105 kubelet[2435]: E0117 00:23:47.674041 2435 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-xt5mp" podUID="c4e6245a-3565-410a-9759-aa4637ef8b01" Jan 17 00:23:47.693441 kubelet[2435]: I0117 00:23:47.693313 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-84e47cc6-1958-409e-a6cf-86cd8954e038\" (UniqueName: \"kubernetes.io/nfs/3e08fa10-d909-4229-9da1-186041cf5bb8-pvc-84e47cc6-1958-409e-a6cf-86cd8954e038\") pod \"test-pod-1\" (UID: \"3e08fa10-d909-4229-9da1-186041cf5bb8\") " pod="default/test-pod-1" Jan 17 00:23:47.693441 kubelet[2435]: I0117 00:23:47.693374 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d72z4\" (UniqueName: \"kubernetes.io/projected/3e08fa10-d909-4229-9da1-186041cf5bb8-kube-api-access-d72z4\") pod \"test-pod-1\" (UID: \"3e08fa10-d909-4229-9da1-186041cf5bb8\") " pod="default/test-pod-1" Jan 17 00:23:47.888157 kernel: 
FS-Cache: Loaded Jan 17 00:23:47.948423 kubelet[2435]: E0117 00:23:47.948358 2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:23:47.998690 kernel: RPC: Registered named UNIX socket transport module. Jan 17 00:23:47.998817 kernel: RPC: Registered udp transport module. Jan 17 00:23:47.998852 kernel: RPC: Registered tcp transport module. Jan 17 00:23:47.999488 kernel: RPC: Registered tcp-with-tls transport module. Jan 17 00:23:48.000768 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Jan 17 00:23:48.346471 kernel: NFS: Registering the id_resolver key type Jan 17 00:23:48.346604 kernel: Key type id_resolver registered Jan 17 00:23:48.347289 kernel: Key type id_legacy registered Jan 17 00:23:48.423042 nfsidmap[4086]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal' Jan 17 00:23:48.427670 nfsidmap[4087]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal' Jan 17 00:23:48.482238 containerd[1988]: time="2026-01-17T00:23:48.482195360Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:3e08fa10-d909-4229-9da1-186041cf5bb8,Namespace:default,Attempt:0,}" Jan 17 00:23:48.612937 (udev-worker)[4080]: Network interface NamePolicy= disabled on kernel command line. 
Jan 17 00:23:48.614555 systemd-networkd[1808]: cali5ec59c6bf6e: Link UP
Jan 17 00:23:48.616607 systemd-networkd[1808]: cali5ec59c6bf6e: Gained carrier
Jan 17 00:23:48.626903 containerd[1988]: 2026-01-17 00:23:48.533 [INFO][4092] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.25.162-k8s-test--pod--1-eth0 default 3e08fa10-d909-4229-9da1-186041cf5bb8 1493 0 2026-01-17 00:23:23 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 172.31.25.162 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] [] }} ContainerID="0e4ac6f3b0e410fdb62bfbe37dc9f9b5df7a16dda80b94f1900771a0a6b7c89f" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.25.162-k8s-test--pod--1-"
Jan 17 00:23:48.626903 containerd[1988]: 2026-01-17 00:23:48.534 [INFO][4092] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0e4ac6f3b0e410fdb62bfbe37dc9f9b5df7a16dda80b94f1900771a0a6b7c89f" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.25.162-k8s-test--pod--1-eth0"
Jan 17 00:23:48.626903 containerd[1988]: 2026-01-17 00:23:48.569 [INFO][4100] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0e4ac6f3b0e410fdb62bfbe37dc9f9b5df7a16dda80b94f1900771a0a6b7c89f" HandleID="k8s-pod-network.0e4ac6f3b0e410fdb62bfbe37dc9f9b5df7a16dda80b94f1900771a0a6b7c89f" Workload="172.31.25.162-k8s-test--pod--1-eth0"
Jan 17 00:23:48.626903 containerd[1988]: 2026-01-17 00:23:48.569 [INFO][4100] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="0e4ac6f3b0e410fdb62bfbe37dc9f9b5df7a16dda80b94f1900771a0a6b7c89f" HandleID="k8s-pod-network.0e4ac6f3b0e410fdb62bfbe37dc9f9b5df7a16dda80b94f1900771a0a6b7c89f" Workload="172.31.25.162-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f6c0), Attrs:map[string]string{"namespace":"default", "node":"172.31.25.162", "pod":"test-pod-1", "timestamp":"2026-01-17 00:23:48.568997873 +0000 UTC"}, Hostname:"172.31.25.162", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jan 17 00:23:48.626903 containerd[1988]: 2026-01-17 00:23:48.569 [INFO][4100] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Jan 17 00:23:48.626903 containerd[1988]: 2026-01-17 00:23:48.569 [INFO][4100] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Jan 17 00:23:48.626903 containerd[1988]: 2026-01-17 00:23:48.569 [INFO][4100] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.25.162'
Jan 17 00:23:48.626903 containerd[1988]: 2026-01-17 00:23:48.577 [INFO][4100] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0e4ac6f3b0e410fdb62bfbe37dc9f9b5df7a16dda80b94f1900771a0a6b7c89f" host="172.31.25.162"
Jan 17 00:23:48.626903 containerd[1988]: 2026-01-17 00:23:48.584 [INFO][4100] ipam/ipam.go 394: Looking up existing affinities for host host="172.31.25.162"
Jan 17 00:23:48.626903 containerd[1988]: 2026-01-17 00:23:48.590 [INFO][4100] ipam/ipam.go 511: Trying affinity for 192.168.33.64/26 host="172.31.25.162"
Jan 17 00:23:48.626903 containerd[1988]: 2026-01-17 00:23:48.592 [INFO][4100] ipam/ipam.go 158: Attempting to load block cidr=192.168.33.64/26 host="172.31.25.162"
Jan 17 00:23:48.626903 containerd[1988]: 2026-01-17 00:23:48.595 [INFO][4100] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.33.64/26 host="172.31.25.162"
Jan 17 00:23:48.626903 containerd[1988]: 2026-01-17 00:23:48.595 [INFO][4100] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.33.64/26 handle="k8s-pod-network.0e4ac6f3b0e410fdb62bfbe37dc9f9b5df7a16dda80b94f1900771a0a6b7c89f" host="172.31.25.162"
Jan 17 00:23:48.626903 containerd[1988]: 2026-01-17 00:23:48.597 [INFO][4100] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.0e4ac6f3b0e410fdb62bfbe37dc9f9b5df7a16dda80b94f1900771a0a6b7c89f
Jan 17 00:23:48.626903 containerd[1988]: 2026-01-17 00:23:48.601 [INFO][4100] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.33.64/26 handle="k8s-pod-network.0e4ac6f3b0e410fdb62bfbe37dc9f9b5df7a16dda80b94f1900771a0a6b7c89f" host="172.31.25.162"
Jan 17 00:23:48.626903 containerd[1988]: 2026-01-17 00:23:48.608 [INFO][4100] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.33.68/26] block=192.168.33.64/26 handle="k8s-pod-network.0e4ac6f3b0e410fdb62bfbe37dc9f9b5df7a16dda80b94f1900771a0a6b7c89f" host="172.31.25.162"
Jan 17 00:23:48.626903 containerd[1988]: 2026-01-17 00:23:48.609 [INFO][4100] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.33.68/26] handle="k8s-pod-network.0e4ac6f3b0e410fdb62bfbe37dc9f9b5df7a16dda80b94f1900771a0a6b7c89f" host="172.31.25.162"
Jan 17 00:23:48.626903 containerd[1988]: 2026-01-17 00:23:48.609 [INFO][4100] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Jan 17 00:23:48.626903 containerd[1988]: 2026-01-17 00:23:48.609 [INFO][4100] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.33.68/26] IPv6=[] ContainerID="0e4ac6f3b0e410fdb62bfbe37dc9f9b5df7a16dda80b94f1900771a0a6b7c89f" HandleID="k8s-pod-network.0e4ac6f3b0e410fdb62bfbe37dc9f9b5df7a16dda80b94f1900771a0a6b7c89f" Workload="172.31.25.162-k8s-test--pod--1-eth0"
Jan 17 00:23:48.626903 containerd[1988]: 2026-01-17 00:23:48.610 [INFO][4092] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0e4ac6f3b0e410fdb62bfbe37dc9f9b5df7a16dda80b94f1900771a0a6b7c89f" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.25.162-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.25.162-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"3e08fa10-d909-4229-9da1-186041cf5bb8", ResourceVersion:"1493", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 23, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.25.162", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.33.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jan 17 00:23:48.627689 containerd[1988]: 2026-01-17 00:23:48.611 [INFO][4092] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.33.68/32] ContainerID="0e4ac6f3b0e410fdb62bfbe37dc9f9b5df7a16dda80b94f1900771a0a6b7c89f" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.25.162-k8s-test--pod--1-eth0"
Jan 17 00:23:48.627689 containerd[1988]: 2026-01-17 00:23:48.611 [INFO][4092] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="0e4ac6f3b0e410fdb62bfbe37dc9f9b5df7a16dda80b94f1900771a0a6b7c89f" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.25.162-k8s-test--pod--1-eth0"
Jan 17 00:23:48.627689 containerd[1988]: 2026-01-17 00:23:48.615 [INFO][4092] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0e4ac6f3b0e410fdb62bfbe37dc9f9b5df7a16dda80b94f1900771a0a6b7c89f" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.25.162-k8s-test--pod--1-eth0"
Jan 17 00:23:48.627689 containerd[1988]: 2026-01-17 00:23:48.615 [INFO][4092] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0e4ac6f3b0e410fdb62bfbe37dc9f9b5df7a16dda80b94f1900771a0a6b7c89f" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.25.162-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.25.162-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"3e08fa10-d909-4229-9da1-186041cf5bb8", ResourceVersion:"1493", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 23, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.25.162", ContainerID:"0e4ac6f3b0e410fdb62bfbe37dc9f9b5df7a16dda80b94f1900771a0a6b7c89f", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.33.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"f6:ba:73:c8:b2:51", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jan 17 00:23:48.627689 containerd[1988]: 2026-01-17 00:23:48.624 [INFO][4092] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0e4ac6f3b0e410fdb62bfbe37dc9f9b5df7a16dda80b94f1900771a0a6b7c89f" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.25.162-k8s-test--pod--1-eth0"
Jan 17 00:23:48.649038 containerd[1988]: time="2026-01-17T00:23:48.648570959Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 17 00:23:48.649038 containerd[1988]: time="2026-01-17T00:23:48.648639330Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 17 00:23:48.649038 containerd[1988]: time="2026-01-17T00:23:48.648661463Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 00:23:48.649038 containerd[1988]: time="2026-01-17T00:23:48.648769995Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 00:23:48.677640 systemd[1]: Started cri-containerd-0e4ac6f3b0e410fdb62bfbe37dc9f9b5df7a16dda80b94f1900771a0a6b7c89f.scope - libcontainer container 0e4ac6f3b0e410fdb62bfbe37dc9f9b5df7a16dda80b94f1900771a0a6b7c89f.
Jan 17 00:23:48.725047 containerd[1988]: time="2026-01-17T00:23:48.724968230Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:3e08fa10-d909-4229-9da1-186041cf5bb8,Namespace:default,Attempt:0,} returns sandbox id \"0e4ac6f3b0e410fdb62bfbe37dc9f9b5df7a16dda80b94f1900771a0a6b7c89f\""
Jan 17 00:23:48.726565 containerd[1988]: time="2026-01-17T00:23:48.726487230Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Jan 17 00:23:48.948984 kubelet[2435]: E0117 00:23:48.948911 2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:23:49.044281 containerd[1988]: time="2026-01-17T00:23:49.044228438Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:23:49.048174 containerd[1988]: time="2026-01-17T00:23:49.045786922Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61"
Jan 17 00:23:49.054723 containerd[1988]: time="2026-01-17T00:23:49.054519658Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:66e7a6d326b670d959427bd421ca82232b0b81b18d85abaecb4ab9823d35056e\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:892d1d54ab079b8cffa2317ccb45829886a0c3c3edbdf92bb286904b09797767\", size \"63840197\" in 327.95336ms"
Jan 17 00:23:49.054885 containerd[1988]: time="2026-01-17T00:23:49.054727005Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:66e7a6d326b670d959427bd421ca82232b0b81b18d85abaecb4ab9823d35056e\""
Jan 17 00:23:49.060968 containerd[1988]: time="2026-01-17T00:23:49.060923208Z" level=info msg="CreateContainer within sandbox \"0e4ac6f3b0e410fdb62bfbe37dc9f9b5df7a16dda80b94f1900771a0a6b7c89f\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Jan 17 00:23:49.088526 containerd[1988]: time="2026-01-17T00:23:49.088461648Z" level=info msg="CreateContainer within sandbox \"0e4ac6f3b0e410fdb62bfbe37dc9f9b5df7a16dda80b94f1900771a0a6b7c89f\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"11aa5261c792fb86e2cf4000f642a60029b843f8ca03293b5028710ab3b542bd\""
Jan 17 00:23:49.089427 containerd[1988]: time="2026-01-17T00:23:49.089396289Z" level=info msg="StartContainer for \"11aa5261c792fb86e2cf4000f642a60029b843f8ca03293b5028710ab3b542bd\""
Jan 17 00:23:49.135490 systemd[1]: Started cri-containerd-11aa5261c792fb86e2cf4000f642a60029b843f8ca03293b5028710ab3b542bd.scope - libcontainer container 11aa5261c792fb86e2cf4000f642a60029b843f8ca03293b5028710ab3b542bd.
Jan 17 00:23:49.168605 containerd[1988]: time="2026-01-17T00:23:49.168468150Z" level=info msg="StartContainer for \"11aa5261c792fb86e2cf4000f642a60029b843f8ca03293b5028710ab3b542bd\" returns successfully"
Jan 17 00:23:49.341377 kubelet[2435]: I0117 00:23:49.341223 2435 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=26.011143123 podStartE2EDuration="26.341202081s" podCreationTimestamp="2026-01-17 00:23:23 +0000 UTC" firstStartedPulling="2026-01-17 00:23:48.72600877 +0000 UTC m=+69.444991371" lastFinishedPulling="2026-01-17 00:23:49.056067729 +0000 UTC m=+69.775050329" observedRunningTime="2026-01-17 00:23:49.340182526 +0000 UTC m=+70.059165219" watchObservedRunningTime="2026-01-17 00:23:49.341202081 +0000 UTC m=+70.060184701"
Jan 17 00:23:49.806593 systemd[1]: run-containerd-runc-k8s.io-11aa5261c792fb86e2cf4000f642a60029b843f8ca03293b5028710ab3b542bd-runc.9hKQ4b.mount: Deactivated successfully.
Jan 17 00:23:49.927807 systemd-networkd[1808]: cali5ec59c6bf6e: Gained IPv6LL
Jan 17 00:23:49.949812 kubelet[2435]: E0117 00:23:49.949758 2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:23:50.950971 kubelet[2435]: E0117 00:23:50.950897 2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:23:51.951663 kubelet[2435]: E0117 00:23:51.951609 2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:23:52.440070 ntpd[1957]: Listen normally on 13 cali5ec59c6bf6e [fe80::ecee:eeff:feee:eeee%9]:123
Jan 17 00:23:52.440486 ntpd[1957]: 17 Jan 00:23:52 ntpd[1957]: Listen normally on 13 cali5ec59c6bf6e [fe80::ecee:eeff:feee:eeee%9]:123
Jan 17 00:23:52.952839 kubelet[2435]: E0117 00:23:52.952763 2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:23:53.952969 kubelet[2435]: E0117 00:23:53.952892 2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:23:54.954071 kubelet[2435]: E0117 00:23:54.953922 2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:23:55.954867 kubelet[2435]: E0117 00:23:55.954817 2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:23:56.955921 kubelet[2435]: E0117 00:23:56.955840 2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:23:57.956646 kubelet[2435]: E0117 00:23:57.956606 2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"