Jan 24 00:55:24.929371 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 23 22:35:12 -00 2026 Jan 24 00:55:24.929399 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=f12f29e7047038507ed004e06b4d83ecb1c13bb716c56fe3c96b88d906e8eff2 Jan 24 00:55:24.929412 kernel: BIOS-provided physical RAM map: Jan 24 00:55:24.929419 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Jan 24 00:55:24.929425 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable Jan 24 00:55:24.929431 kernel: BIOS-e820: [mem 0x00000000786ce000-0x00000000787cdfff] type 20 Jan 24 00:55:24.929439 kernel: BIOS-e820: [mem 0x00000000787ce000-0x000000007894dfff] reserved Jan 24 00:55:24.929447 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data Jan 24 00:55:24.929453 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS Jan 24 00:55:24.929463 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable Jan 24 00:55:24.929470 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved Jan 24 00:55:24.929477 kernel: NX (Execute Disable) protection: active Jan 24 00:55:24.929484 kernel: APIC: Static calls initialized Jan 24 00:55:24.929491 kernel: efi: EFI v2.7 by EDK II Jan 24 00:55:24.929500 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x77015518 Jan 24 00:55:24.929510 kernel: SMBIOS 2.7 present. 
Jan 24 00:55:24.929518 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017 Jan 24 00:55:24.929526 kernel: Hypervisor detected: KVM Jan 24 00:55:24.929533 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 24 00:55:24.929541 kernel: kvm-clock: using sched offset of 5200337832 cycles Jan 24 00:55:24.929549 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 24 00:55:24.929557 kernel: tsc: Detected 2499.996 MHz processor Jan 24 00:55:24.929565 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 24 00:55:24.929573 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 24 00:55:24.929581 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000 Jan 24 00:55:24.929592 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Jan 24 00:55:24.929600 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 24 00:55:24.929608 kernel: Using GB pages for direct mapping Jan 24 00:55:24.929616 kernel: Secure boot disabled Jan 24 00:55:24.929624 kernel: ACPI: Early table checksum verification disabled Jan 24 00:55:24.929632 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON) Jan 24 00:55:24.929640 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013) Jan 24 00:55:24.929648 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001) Jan 24 00:55:24.929656 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001) Jan 24 00:55:24.929666 kernel: ACPI: FACS 0x00000000789D0000 000040 Jan 24 00:55:24.929674 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001) Jan 24 00:55:24.929682 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Jan 24 00:55:24.929690 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001) Jan 24 00:55:24.929697 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001) Jan 24 00:55:24.929705 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001) Jan 24 00:55:24.929717 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Jan 24 00:55:24.929728 kernel: ACPI: SSDT 0x0000000078952000 00007F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Jan 24 00:55:24.929736 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013) Jan 24 00:55:24.929744 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113] Jan 24 00:55:24.929752 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159] Jan 24 00:55:24.929761 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f] Jan 24 00:55:24.929769 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027] Jan 24 00:55:24.929777 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b] Jan 24 00:55:24.930211 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075] Jan 24 00:55:24.930226 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f] Jan 24 00:55:24.930234 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037] Jan 24 00:55:24.930243 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758] Jan 24 00:55:24.930251 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x7895207e] Jan 24 00:55:24.930259 kernel: ACPI: Reserving BGRT table memory at [mem 
0x78951000-0x78951037] Jan 24 00:55:24.930268 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Jan 24 00:55:24.930276 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Jan 24 00:55:24.930285 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff] Jan 24 00:55:24.930297 kernel: NUMA: Initialized distance table, cnt=1 Jan 24 00:55:24.930306 kernel: NODE_DATA(0) allocated [mem 0x7a8f0000-0x7a8f5fff] Jan 24 00:55:24.930314 kernel: Zone ranges: Jan 24 00:55:24.930323 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 24 00:55:24.930331 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff] Jan 24 00:55:24.930339 kernel: Normal empty Jan 24 00:55:24.930348 kernel: Movable zone start for each node Jan 24 00:55:24.930356 kernel: Early memory node ranges Jan 24 00:55:24.930364 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Jan 24 00:55:24.930375 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff] Jan 24 00:55:24.930383 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff] Jan 24 00:55:24.930392 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff] Jan 24 00:55:24.930400 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 24 00:55:24.930408 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Jan 24 00:55:24.930417 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Jan 24 00:55:24.930425 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges Jan 24 00:55:24.930434 kernel: ACPI: PM-Timer IO Port: 0xb008 Jan 24 00:55:24.930442 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 24 00:55:24.930451 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23 Jan 24 00:55:24.930462 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 24 00:55:24.930471 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 24 00:55:24.930479 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 24 00:55:24.930487 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 24 00:55:24.930496 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 24 00:55:24.930504 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jan 24 00:55:24.930512 kernel: TSC deadline timer available Jan 24 00:55:24.930520 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jan 24 00:55:24.930529 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 24 00:55:24.930539 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices Jan 24 00:55:24.930548 kernel: Booting paravirtualized kernel on KVM Jan 24 00:55:24.930556 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 24 00:55:24.930565 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jan 24 00:55:24.930573 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576 Jan 24 00:55:24.930581 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152 Jan 24 00:55:24.930589 kernel: pcpu-alloc: [0] 0 1 Jan 24 00:55:24.930598 kernel: kvm-guest: PV spinlocks enabled Jan 24 00:55:24.930606 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 24 00:55:24.930618 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected 
flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=f12f29e7047038507ed004e06b4d83ecb1c13bb716c56fe3c96b88d906e8eff2 Jan 24 00:55:24.930627 kernel: random: crng init done Jan 24 00:55:24.930636 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 24 00:55:24.930644 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jan 24 00:55:24.930652 kernel: Fallback order for Node 0: 0 Jan 24 00:55:24.930661 kernel: Built 1 zonelists, mobility grouping on. Total pages: 501318 Jan 24 00:55:24.930669 kernel: Policy zone: DMA32 Jan 24 00:55:24.930677 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 24 00:55:24.930689 kernel: Memory: 1874628K/2037804K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42884K init, 2312K bss, 162916K reserved, 0K cma-reserved) Jan 24 00:55:24.930698 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 24 00:55:24.930706 kernel: Kernel/User page tables isolation: enabled Jan 24 00:55:24.930714 kernel: ftrace: allocating 37989 entries in 149 pages Jan 24 00:55:24.930723 kernel: ftrace: allocated 149 pages with 4 groups Jan 24 00:55:24.930731 kernel: Dynamic Preempt: voluntary Jan 24 00:55:24.930739 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 24 00:55:24.930749 kernel: rcu: RCU event tracing is enabled. Jan 24 00:55:24.930758 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 24 00:55:24.930769 kernel: Trampoline variant of Tasks RCU enabled. Jan 24 00:55:24.930777 kernel: Rude variant of Tasks RCU enabled. Jan 24 00:55:24.930786 kernel: Tracing variant of Tasks RCU enabled. Jan 24 00:55:24.930808 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 24 00:55:24.930816 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 24 00:55:24.930825 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jan 24 00:55:24.930833 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 24 00:55:24.930854 kernel: Console: colour dummy device 80x25 Jan 24 00:55:24.930867 kernel: printk: console [tty0] enabled Jan 24 00:55:24.930879 kernel: printk: console [ttyS0] enabled Jan 24 00:55:24.930888 kernel: ACPI: Core revision 20230628 Jan 24 00:55:24.930898 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns Jan 24 00:55:24.930910 kernel: APIC: Switch to symmetric I/O mode setup Jan 24 00:55:24.930919 kernel: x2apic enabled Jan 24 00:55:24.930928 kernel: APIC: Switched APIC routing to: physical x2apic Jan 24 00:55:24.930937 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns Jan 24 00:55:24.930946 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499996) Jan 24 00:55:24.930958 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Jan 24 00:55:24.930967 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 Jan 24 00:55:24.930976 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 24 00:55:24.930985 kernel: Spectre V2 : Mitigation: Retpolines Jan 24 00:55:24.930993 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jan 24 00:55:24.931002 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
Jan 24 00:55:24.931011 kernel: RETBleed: Vulnerable Jan 24 00:55:24.931020 kernel: Speculative Store Bypass: Vulnerable Jan 24 00:55:24.931029 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode Jan 24 00:55:24.931037 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jan 24 00:55:24.931049 kernel: GDS: Unknown: Dependent on hypervisor status Jan 24 00:55:24.931057 kernel: active return thunk: its_return_thunk Jan 24 00:55:24.931066 kernel: ITS: Mitigation: Aligned branch/return thunks Jan 24 00:55:24.931075 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 24 00:55:24.931083 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 24 00:55:24.931092 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 24 00:55:24.931101 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Jan 24 00:55:24.931110 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' Jan 24 00:55:24.931118 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Jan 24 00:55:24.931127 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Jan 24 00:55:24.931136 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Jan 24 00:55:24.931147 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' Jan 24 00:55:24.931156 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 24 00:55:24.931165 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Jan 24 00:55:24.931174 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Jan 24 00:55:24.931183 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64 Jan 24 00:55:24.931191 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512 Jan 24 00:55:24.931200 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024 Jan 24 00:55:24.931209 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8 Jan 24 00:55:24.931218 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format. Jan 24 00:55:24.931226 kernel: Freeing SMP alternatives memory: 32K Jan 24 00:55:24.931235 kernel: pid_max: default: 32768 minimum: 301 Jan 24 00:55:24.931246 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 24 00:55:24.931255 kernel: landlock: Up and running. Jan 24 00:55:24.931264 kernel: SELinux: Initializing. Jan 24 00:55:24.931273 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jan 24 00:55:24.931282 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jan 24 00:55:24.931291 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7) Jan 24 00:55:24.931299 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 24 00:55:24.931309 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 24 00:55:24.931318 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 24 00:55:24.931327 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Jan 24 00:55:24.931339 kernel: signal: max sigframe size: 3632 Jan 24 00:55:24.931348 kernel: rcu: Hierarchical SRCU implementation. Jan 24 00:55:24.931357 kernel: rcu: Max phase no-delay instances is 400. 
Jan 24 00:55:24.931366 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 24 00:55:24.931375 kernel: smp: Bringing up secondary CPUs ... Jan 24 00:55:24.931384 kernel: smpboot: x86: Booting SMP configuration: Jan 24 00:55:24.931392 kernel: .... node #0, CPUs: #1 Jan 24 00:55:24.931402 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Jan 24 00:55:24.931411 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Jan 24 00:55:24.931422 kernel: smp: Brought up 1 node, 2 CPUs Jan 24 00:55:24.931431 kernel: smpboot: Max logical packages: 1 Jan 24 00:55:24.931441 kernel: smpboot: Total of 2 processors activated (9999.98 BogoMIPS) Jan 24 00:55:24.931450 kernel: devtmpfs: initialized Jan 24 00:55:24.931458 kernel: x86/mm: Memory block size: 128MB Jan 24 00:55:24.931467 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes) Jan 24 00:55:24.931476 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 24 00:55:24.931485 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 24 00:55:24.931494 kernel: pinctrl core: initialized pinctrl subsystem Jan 24 00:55:24.931506 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 24 00:55:24.931515 kernel: audit: initializing netlink subsys (disabled) Jan 24 00:55:24.931524 kernel: audit: type=2000 audit(1769216124.883:1): state=initialized audit_enabled=0 res=1 Jan 24 00:55:24.931533 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 24 00:55:24.931542 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 24 00:55:24.931550 kernel: cpuidle: using governor menu Jan 24 00:55:24.931559 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 24 00:55:24.931568 kernel: dca service started, version 1.12.1 Jan 24 00:55:24.931577 kernel: PCI: Using configuration type 1 for base access Jan 24 00:55:24.931589 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 24 00:55:24.931598 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 24 00:55:24.931607 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 24 00:55:24.931615 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 24 00:55:24.931624 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 24 00:55:24.931633 kernel: ACPI: Added _OSI(Module Device) Jan 24 00:55:24.931642 kernel: ACPI: Added _OSI(Processor Device) Jan 24 00:55:24.931651 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 24 00:55:24.931660 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Jan 24 00:55:24.931672 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 24 00:55:24.931681 kernel: ACPI: Interpreter enabled Jan 24 00:55:24.931690 kernel: ACPI: PM: (supports S0 S5) Jan 24 00:55:24.931699 kernel: ACPI: Using IOAPIC for interrupt routing Jan 24 00:55:24.931708 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 24 00:55:24.931717 kernel: PCI: Using E820 reservations for host bridge windows Jan 24 00:55:24.931725 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Jan 24 00:55:24.931734 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 24 00:55:24.934196 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Jan 24 00:55:24.934320 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Jan 24 00:55:24.934414 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Jan 24 00:55:24.934426 kernel: acpiphp: Slot [3] registered Jan 24 00:55:24.934435 kernel: acpiphp: Slot [4] registered Jan 24 00:55:24.934444 kernel: acpiphp: Slot [5] registered Jan 24 00:55:24.934453 kernel: acpiphp: Slot [6] registered Jan 24 00:55:24.934462 kernel: acpiphp: Slot [7] registered Jan 24 00:55:24.934475 kernel: acpiphp: Slot [8] registered Jan 24 00:55:24.934483 kernel: acpiphp: Slot [9] registered Jan 24 00:55:24.934493 kernel: acpiphp: Slot [10] registered Jan 24 00:55:24.934502 kernel: acpiphp: Slot [11] registered Jan 24 00:55:24.934511 kernel: acpiphp: Slot [12] registered Jan 24 00:55:24.934520 kernel: acpiphp: Slot [13] registered Jan 24 00:55:24.934529 kernel: acpiphp: Slot [14] registered Jan 24 00:55:24.934538 kernel: acpiphp: Slot [15] registered Jan 24 00:55:24.934547 kernel: acpiphp: Slot [16] registered Jan 24 00:55:24.934556 kernel: acpiphp: Slot [17] registered Jan 24 00:55:24.934568 kernel: acpiphp: Slot [18] registered Jan 24 00:55:24.934576 kernel: acpiphp: Slot [19] registered Jan 24 00:55:24.934585 kernel: acpiphp: Slot [20] registered Jan 24 00:55:24.934594 kernel: acpiphp: Slot [21] registered Jan 24 00:55:24.934603 kernel: acpiphp: Slot [22] registered Jan 24 00:55:24.934612 kernel: acpiphp: Slot [23] registered Jan 24 00:55:24.934621 kernel: acpiphp: Slot [24] registered Jan 24 00:55:24.934629 kernel: acpiphp: Slot [25] registered Jan 24 00:55:24.934638 kernel: acpiphp: Slot [26] registered Jan 24 00:55:24.934649 kernel: acpiphp: Slot [27] registered Jan 24 00:55:24.934658 kernel: acpiphp: Slot [28] registered Jan 24 00:55:24.934667 kernel: acpiphp: Slot [29] registered Jan 24 00:55:24.934676 kernel: acpiphp: Slot [30] registered Jan 24 00:55:24.934685 kernel: acpiphp: Slot [31] registered Jan 24 00:55:24.934694 kernel: PCI host bridge to bus 0000:00 Jan 24 00:55:24.934805 kernel: pci_bus 0000:00: root bus resource [io 
0x0000-0x0cf7 window] Jan 24 00:55:24.934893 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 24 00:55:24.934980 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 24 00:55:24.935062 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Jan 24 00:55:24.935143 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window] Jan 24 00:55:24.935224 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 24 00:55:24.935332 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Jan 24 00:55:24.935433 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Jan 24 00:55:24.935532 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000 Jan 24 00:55:24.935638 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Jan 24 00:55:24.935731 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff Jan 24 00:55:24.937055 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff Jan 24 00:55:24.937198 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff Jan 24 00:55:24.937296 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff Jan 24 00:55:24.937391 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff Jan 24 00:55:24.937483 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff Jan 24 00:55:24.937592 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000 Jan 24 00:55:24.937684 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x80000000-0x803fffff pref] Jan 24 00:55:24.937776 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Jan 24 00:55:24.937960 kernel: pci 0000:00:03.0: BAR 0: assigned to efifb Jan 24 00:55:24.938055 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 24 00:55:24.938153 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 Jan 24 00:55:24.938251 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80404000-0x80407fff] Jan 24 00:55:24.940024 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 Jan 24 00:55:24.940141 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80400000-0x80403fff] Jan 24 00:55:24.940154 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 24 00:55:24.940164 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 24 00:55:24.940173 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 24 00:55:24.940183 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 24 00:55:24.940192 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Jan 24 00:55:24.940207 kernel: iommu: Default domain type: Translated Jan 24 00:55:24.940216 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 24 00:55:24.940225 kernel: efivars: Registered efivars operations Jan 24 00:55:24.940234 kernel: PCI: Using ACPI for IRQ routing Jan 24 00:55:24.940243 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 24 00:55:24.940252 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff] Jan 24 00:55:24.940261 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff] Jan 24 00:55:24.940360 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device Jan 24 00:55:24.940457 kernel: pci 0000:00:03.0: vgaarb: bridge control possible Jan 24 00:55:24.940554 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 24 00:55:24.940566 kernel: vgaarb: loaded Jan 24 00:55:24.940576 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0 Jan 24 00:55:24.940585 kernel: hpet0: 8 
comparators, 32-bit 62.500000 MHz counter Jan 24 00:55:24.940594 kernel: clocksource: Switched to clocksource kvm-clock Jan 24 00:55:24.940603 kernel: VFS: Disk quotas dquot_6.6.0 Jan 24 00:55:24.940612 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 24 00:55:24.940621 kernel: pnp: PnP ACPI init Jan 24 00:55:24.940631 kernel: pnp: PnP ACPI: found 5 devices Jan 24 00:55:24.940643 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 24 00:55:24.940652 kernel: NET: Registered PF_INET protocol family Jan 24 00:55:24.940661 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 24 00:55:24.940670 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Jan 24 00:55:24.940680 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 24 00:55:24.940689 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 24 00:55:24.940698 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Jan 24 00:55:24.940707 kernel: TCP: Hash tables configured (established 16384 bind 16384) Jan 24 00:55:24.940718 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Jan 24 00:55:24.940727 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Jan 24 00:55:24.940737 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 24 00:55:24.940746 kernel: NET: Registered PF_XDP protocol family Jan 24 00:55:24.942904 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 24 00:55:24.943008 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 24 00:55:24.943093 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 24 00:55:24.943177 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Jan 24 00:55:24.943269 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window] Jan 24 00:55:24.943378 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jan 24 00:55:24.943390 kernel: PCI: CLS 0 bytes, default 64 Jan 24 00:55:24.943400 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 24 00:55:24.943410 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns Jan 24 00:55:24.943419 kernel: clocksource: Switched to clocksource tsc Jan 24 00:55:24.943428 kernel: Initialise system trusted keyrings Jan 24 00:55:24.943438 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jan 24 00:55:24.943447 kernel: Key type asymmetric registered Jan 24 00:55:24.943459 kernel: Asymmetric key parser 'x509' registered Jan 24 00:55:24.943468 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 24 00:55:24.943477 kernel: io scheduler mq-deadline registered Jan 24 00:55:24.943486 kernel: io scheduler kyber registered Jan 24 00:55:24.943495 kernel: io scheduler bfq registered Jan 24 00:55:24.943504 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 24 00:55:24.943513 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 24 00:55:24.943523 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 24 00:55:24.943532 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 24 00:55:24.943543 kernel: i8042: Warning: Keylock active Jan 24 00:55:24.943552 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 24 00:55:24.943561 kernel: 
serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 24 00:55:24.943665 kernel: rtc_cmos 00:00: RTC can wake from S4 Jan 24 00:55:24.943755 kernel: rtc_cmos 00:00: registered as rtc0 Jan 24 00:55:24.943860 kernel: rtc_cmos 00:00: setting system clock to 2026-01-24T00:55:24 UTC (1769216124) Jan 24 00:55:24.943948 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Jan 24 00:55:24.943960 kernel: intel_pstate: CPU model not supported Jan 24 00:55:24.943974 kernel: efifb: probing for efifb Jan 24 00:55:24.943983 kernel: efifb: framebuffer at 0x80000000, using 1920k, total 1920k Jan 24 00:55:24.943992 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1 Jan 24 00:55:24.944001 kernel: efifb: scrolling: redraw Jan 24 00:55:24.944010 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jan 24 00:55:24.944019 kernel: Console: switching to colour frame buffer device 100x37 Jan 24 00:55:24.944028 kernel: fb0: EFI VGA frame buffer device Jan 24 00:55:24.944038 kernel: pstore: Using crash dump compression: deflate Jan 24 00:55:24.944047 kernel: pstore: Registered efi_pstore as persistent store backend Jan 24 00:55:24.944059 kernel: NET: Registered PF_INET6 protocol family Jan 24 00:55:24.944068 kernel: Segment Routing with IPv6 Jan 24 00:55:24.944077 kernel: In-situ OAM (IOAM) with IPv6 Jan 24 00:55:24.944086 kernel: NET: Registered PF_PACKET protocol family Jan 24 00:55:24.944095 kernel: Key type dns_resolver registered Jan 24 00:55:24.944104 kernel: IPI shorthand broadcast: enabled Jan 24 00:55:24.944135 kernel: sched_clock: Marking stable (456002287, 130148955)->(679473851, -93322609) Jan 24 00:55:24.944147 kernel: registered taskstats version 1 Jan 24 00:55:24.944156 kernel: Loading compiled-in X.509 certificates Jan 24 00:55:24.944168 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: 6e114855f6cf7a40074d93a4383c22d00e384634' Jan 24 00:55:24.944178 kernel: Key type .fscrypt registered Jan 24 00:55:24.944187 kernel: Key type fscrypt-provisioning registered Jan 24 00:55:24.944196 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 24 00:55:24.944205 kernel: ima: Allocated hash algorithm: sha1 Jan 24 00:55:24.944215 kernel: ima: No architecture policies found Jan 24 00:55:24.944224 kernel: clk: Disabling unused clocks Jan 24 00:55:24.944234 kernel: Freeing unused kernel image (initmem) memory: 42884K Jan 24 00:55:24.944243 kernel: Write protecting the kernel read-only data: 36864k Jan 24 00:55:24.944255 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K Jan 24 00:55:24.944265 kernel: Run /init as init process Jan 24 00:55:24.944274 kernel: with arguments: Jan 24 00:55:24.944284 kernel: /init Jan 24 00:55:24.944293 kernel: with environment: Jan 24 00:55:24.944302 kernel: HOME=/ Jan 24 00:55:24.944311 kernel: TERM=linux Jan 24 00:55:24.944323 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 24 00:55:24.944337 systemd[1]: Detected virtualization amazon. Jan 24 00:55:24.944347 systemd[1]: Detected architecture x86-64. Jan 24 00:55:24.944357 systemd[1]: Running in initrd. Jan 24 00:55:24.944367 systemd[1]: No hostname configured, using default hostname. Jan 24 00:55:24.944379 systemd[1]: Hostname set to . 
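Editor's note: the rtc_cmos entry above pairs the wall-clock time with its Unix epoch value ("2026-01-24T00:55:24 UTC (1769216124)"), and the same epoch shows up in the earlier audit line audit(1769216124.883:1). A quick sketch to confirm the correspondence (the epoch value is taken straight from the log):

    from datetime import datetime, timezone

    # Epoch value as printed by rtc_cmos in the log above.
    print(datetime.fromtimestamp(1769216124, tz=timezone.utc).isoformat())
    # -> 2026-01-24T00:55:24+00:00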
Jan 24 00:55:24.944393 systemd[1]: Initializing machine ID from VM UUID. Jan 24 00:55:24.944402 systemd[1]: Queued start job for default target initrd.target. Jan 24 00:55:24.944415 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 24 00:55:24.944428 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 24 00:55:24.944439 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 24 00:55:24.944448 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 24 00:55:24.944458 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 24 00:55:24.944471 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 24 00:55:24.944485 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 24 00:55:24.944495 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 24 00:55:24.944505 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 24 00:55:24.944514 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 24 00:55:24.944524 systemd[1]: Reached target paths.target - Path Units. Jan 24 00:55:24.944534 systemd[1]: Reached target slices.target - Slice Units. Jan 24 00:55:24.944544 systemd[1]: Reached target swap.target - Swaps. Jan 24 00:55:24.944557 systemd[1]: Reached target timers.target - Timer Units. Jan 24 00:55:24.944567 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 24 00:55:24.944577 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 24 00:55:24.944587 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 24 00:55:24.944597 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 24 00:55:24.944607 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 24 00:55:24.944617 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 24 00:55:24.944627 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 24 00:55:24.944636 systemd[1]: Reached target sockets.target - Socket Units. Jan 24 00:55:24.944649 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 24 00:55:24.944659 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 24 00:55:24.944669 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 24 00:55:24.944679 systemd[1]: Starting systemd-fsck-usr.service... Jan 24 00:55:24.944689 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 24 00:55:24.944699 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 24 00:55:24.944709 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 24 00:55:24.944718 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 24 00:55:24.944751 systemd-journald[179]: Collecting audit messages is disabled. Jan 24 00:55:24.944777 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 24 00:55:24.944800 systemd[1]: Finished systemd-fsck-usr.service. 
Jan 24 00:55:24.944814 systemd-journald[179]: Journal started Jan 24 00:55:24.944836 systemd-journald[179]: Runtime Journal (/run/log/journal/ec2b587038fa75469e958a5cb2a62857) is 4.7M, max 38.2M, 33.4M free. Jan 24 00:55:24.950614 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 24 00:55:24.930903 systemd-modules-load[180]: Inserted module 'overlay' Jan 24 00:55:24.954820 systemd[1]: Started systemd-journald.service - Journal Service. Jan 24 00:55:24.955388 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 00:55:24.971830 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 24 00:55:24.973163 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 24 00:55:24.975086 kernel: Bridge firewalling registered Jan 24 00:55:24.974237 systemd-modules-load[180]: Inserted module 'br_netfilter' Jan 24 00:55:24.977962 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 24 00:55:24.979354 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 24 00:55:24.979917 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 24 00:55:24.991004 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 24 00:55:24.994268 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 24 00:55:25.006012 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 24 00:55:25.006265 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 24 00:55:25.009872 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 24 00:55:25.022913 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 24 00:55:25.024040 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 24 00:55:25.035052 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 24 00:55:25.040138 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 24 00:55:25.050634 dracut-cmdline[210]: dracut-dracut-053 Jan 24 00:55:25.055094 dracut-cmdline[210]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=f12f29e7047038507ed004e06b4d83ecb1c13bb716c56fe3c96b88d906e8eff2 Jan 24 00:55:25.088904 systemd-resolved[213]: Positive Trust Anchors: Jan 24 00:55:25.090001 systemd-resolved[213]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 24 00:55:25.090070 systemd-resolved[213]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 24 00:55:25.095656 systemd-resolved[213]: Defaulting to hostname 'linux'. Jan 24 00:55:25.100195 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 24 00:55:25.101467 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 24 00:55:25.143831 kernel: SCSI subsystem initialized Jan 24 00:55:25.153826 kernel: Loading iSCSI transport class v2.0-870. Jan 24 00:55:25.164817 kernel: iscsi: registered transport (tcp) Jan 24 00:55:25.187050 kernel: iscsi: registered transport (qla4xxx) Jan 24 00:55:25.187135 kernel: QLogic iSCSI HBA Driver Jan 24 00:55:25.226425 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 24 00:55:25.231022 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 24 00:55:25.259116 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 24 00:55:25.259194 kernel: device-mapper: uevent: version 1.0.3 Jan 24 00:55:25.259218 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 24 00:55:25.302823 kernel: raid6: avx512x4 gen() 18123 MB/s Jan 24 00:55:25.320819 kernel: raid6: avx512x2 gen() 17736 MB/s Jan 24 00:55:25.338818 kernel: raid6: avx512x1 gen() 17613 MB/s Jan 24 00:55:25.356815 kernel: raid6: avx2x4 gen() 18108 MB/s Jan 24 00:55:25.374817 kernel: raid6: avx2x2 gen() 17824 MB/s Jan 24 00:55:25.393078 kernel: raid6: avx2x1 gen() 13729 MB/s Jan 24 00:55:25.393145 kernel: raid6: using algorithm avx512x4 gen() 18123 MB/s Jan 24 00:55:25.412115 kernel: raid6: .... xor() 7435 MB/s, rmw enabled Jan 24 00:55:25.412184 kernel: raid6: using avx512x2 recovery algorithm Jan 24 00:55:25.433825 kernel: xor: automatically using best checksumming function avx Jan 24 00:55:25.593827 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 24 00:55:25.604533 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 24 00:55:25.609008 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 24 00:55:25.631886 systemd-udevd[397]: Using default interface naming scheme 'v255'. Jan 24 00:55:25.637249 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 24 00:55:25.648051 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 24 00:55:25.666398 dracut-pre-trigger[403]: rd.md=0: removing MD RAID activation Jan 24 00:55:25.698086 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 24 00:55:25.704028 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 24 00:55:25.757022 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 24 00:55:25.762557 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Jan 24 00:55:25.795001 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 24 00:55:25.797494 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 24 00:55:25.799907 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 24 00:55:25.801186 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 24 00:55:25.809121 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 24 00:55:25.828561 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 24 00:55:25.854541 kernel: ena 0000:00:05.0: ENA device version: 0.10 Jan 24 00:55:25.854809 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Jan 24 00:55:25.859813 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy. Jan 24 00:55:25.874809 kernel: cryptd: max_cpu_qlen set to 1000 Jan 24 00:55:25.890005 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:7e:e5:07:8a:1b Jan 24 00:55:25.898889 kernel: AVX2 version of gcm_enc/dec engaged. Jan 24 00:55:25.898959 kernel: AES CTR mode by8 optimization enabled Jan 24 00:55:25.897448 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 24 00:55:25.897618 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 24 00:55:25.898569 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 24 00:55:25.899199 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 24 00:55:25.899397 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 00:55:25.900036 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 24 00:55:25.900446 (udev-worker)[451]: Network interface NamePolicy= disabled on kernel command line. Jan 24 00:55:25.913387 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 24 00:55:25.930827 kernel: nvme nvme0: pci function 0000:00:04.0 Jan 24 00:55:25.934239 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Jan 24 00:55:25.937150 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 24 00:55:25.938130 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 00:55:25.947077 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 24 00:55:25.954840 kernel: nvme nvme0: 2/0/0 default/read/poll queues Jan 24 00:55:25.963627 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 24 00:55:25.963696 kernel: GPT:9289727 != 33554431 Jan 24 00:55:25.963716 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 24 00:55:25.963746 kernel: GPT:9289727 != 33554431 Jan 24 00:55:25.963764 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 24 00:55:25.963782 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 24 00:55:25.969956 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 00:55:25.981009 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 24 00:55:25.995749 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 24 00:55:26.107912 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/nvme0n1p6 scanned by (udev-worker) (447) Jan 24 00:55:26.135576 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. 
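Editor's note: the GPT warnings above ("GPT:9289727 != 33554431", "Alternate GPT header not at the end of the disk") are the usual sign of a cloud image written to a larger backing volume: the backup GPT header still sits at the last sector of the original image rather than at the end of the grown disk. A rough size check from the two sector counts in the log, assuming 512-byte sectors (an assumption; the sector size is not printed here):

    SECTOR = 512  # bytes; assumed, not shown in the log

    old_last_lba = 9289727    # where the backup GPT header currently sits
    new_last_lba = 33554431   # where the kernel expects it (end of the volume)

    print("image size :", (old_last_lba + 1) * SECTOR / 2**30, "GiB")  # ~4.4 GiB
    print("volume size:", (new_last_lba + 1) * SECTOR / 2**30, "GiB")  # 16.0 GiB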
Jan 24 00:55:26.144888 kernel: BTRFS: device fsid b9d3569e-180c-420c-96ec-490d7c970b80 devid 1 transid 33 /dev/nvme0n1p3 scanned by (udev-worker) (456) Jan 24 00:55:26.152440 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Jan 24 00:55:26.187469 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Jan 24 00:55:26.188073 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Jan 24 00:55:26.195619 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jan 24 00:55:26.201982 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 24 00:55:26.208888 disk-uuid[633]: Primary Header is updated. Jan 24 00:55:26.208888 disk-uuid[633]: Secondary Entries is updated. Jan 24 00:55:26.208888 disk-uuid[633]: Secondary Header is updated. Jan 24 00:55:26.214814 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 24 00:55:26.219815 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 24 00:55:27.230253 disk-uuid[634]: The operation has completed successfully. Jan 24 00:55:27.231107 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 24 00:55:27.343152 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 24 00:55:27.343294 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 24 00:55:27.360059 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 24 00:55:27.363906 sh[977]: Success Jan 24 00:55:27.385864 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jan 24 00:55:27.482965 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 24 00:55:27.490925 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 24 00:55:27.495498 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 24 00:55:27.523954 kernel: BTRFS info (device dm-0): first mount of filesystem b9d3569e-180c-420c-96ec-490d7c970b80 Jan 24 00:55:27.524025 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 24 00:55:27.525884 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 24 00:55:27.528654 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 24 00:55:27.528705 kernel: BTRFS info (device dm-0): using free space tree Jan 24 00:55:27.653867 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 24 00:55:27.679196 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 24 00:55:27.680273 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 24 00:55:27.685029 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 24 00:55:27.688040 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Jan 24 00:55:27.715165 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8 Jan 24 00:55:27.715241 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jan 24 00:55:27.720073 kernel: BTRFS info (device nvme0n1p6): using free space tree Jan 24 00:55:27.737279 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 24 00:55:27.753262 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8 Jan 24 00:55:27.752741 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 24 00:55:27.761988 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 24 00:55:27.771022 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 24 00:55:27.803092 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 24 00:55:27.810082 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 24 00:55:27.833210 systemd-networkd[1169]: lo: Link UP Jan 24 00:55:27.833222 systemd-networkd[1169]: lo: Gained carrier Jan 24 00:55:27.834990 systemd-networkd[1169]: Enumeration completed Jan 24 00:55:27.835121 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 24 00:55:27.835745 systemd-networkd[1169]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 24 00:55:27.835750 systemd-networkd[1169]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 24 00:55:27.836110 systemd[1]: Reached target network.target - Network. Jan 24 00:55:27.840413 systemd-networkd[1169]: eth0: Link UP Jan 24 00:55:27.840419 systemd-networkd[1169]: eth0: Gained carrier Jan 24 00:55:27.840433 systemd-networkd[1169]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 24 00:55:27.854916 systemd-networkd[1169]: eth0: DHCPv4 address 172.31.30.66/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jan 24 00:55:28.272826 ignition[1121]: Ignition 2.19.0 Jan 24 00:55:28.272837 ignition[1121]: Stage: fetch-offline Jan 24 00:55:28.273045 ignition[1121]: no configs at "/usr/lib/ignition/base.d" Jan 24 00:55:28.273054 ignition[1121]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 24 00:55:28.273511 ignition[1121]: Ignition finished successfully Jan 24 00:55:28.275199 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 24 00:55:28.279030 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jan 24 00:55:28.295044 ignition[1177]: Ignition 2.19.0 Jan 24 00:55:28.295057 ignition[1177]: Stage: fetch Jan 24 00:55:28.295395 ignition[1177]: no configs at "/usr/lib/ignition/base.d" Jan 24 00:55:28.295405 ignition[1177]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 24 00:55:28.295493 ignition[1177]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 24 00:55:28.320433 ignition[1177]: PUT result: OK Jan 24 00:55:28.322779 ignition[1177]: parsed url from cmdline: "" Jan 24 00:55:28.322807 ignition[1177]: no config URL provided Jan 24 00:55:28.322818 ignition[1177]: reading system config file "/usr/lib/ignition/user.ign" Jan 24 00:55:28.322842 ignition[1177]: no config at "/usr/lib/ignition/user.ign" Jan 24 00:55:28.322867 ignition[1177]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 24 00:55:28.323637 ignition[1177]: PUT result: OK Jan 24 00:55:28.323694 ignition[1177]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Jan 24 00:55:28.324708 ignition[1177]: GET result: OK Jan 24 00:55:28.324770 ignition[1177]: parsing config with SHA512: db47898a981d9c72b0b23e5342932895b9e24735099ed104f13412daa7913699e611ca200195c7be7f45339a00ee488f7159efb8a5eafd75772d77ba254eb733 Jan 24 00:55:28.328728 unknown[1177]: fetched base config from "system" Jan 24 00:55:28.328742 unknown[1177]: fetched base config from "system" Jan 24 00:55:28.329168 ignition[1177]: fetch: fetch complete Jan 24 00:55:28.328750 unknown[1177]: fetched user config from "aws" Jan 24 00:55:28.329176 ignition[1177]: fetch: fetch passed Jan 24 00:55:28.331360 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 24 00:55:28.329237 ignition[1177]: Ignition finished successfully Jan 24 00:55:28.336093 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 24 00:55:28.353393 ignition[1184]: Ignition 2.19.0 Jan 24 00:55:28.353408 ignition[1184]: Stage: kargs Jan 24 00:55:28.353898 ignition[1184]: no configs at "/usr/lib/ignition/base.d" Jan 24 00:55:28.353913 ignition[1184]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 24 00:55:28.354027 ignition[1184]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 24 00:55:28.354892 ignition[1184]: PUT result: OK Jan 24 00:55:28.357556 ignition[1184]: kargs: kargs passed Jan 24 00:55:28.357627 ignition[1184]: Ignition finished successfully Jan 24 00:55:28.359035 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 24 00:55:28.363044 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 24 00:55:28.390326 ignition[1190]: Ignition 2.19.0 Jan 24 00:55:28.390339 ignition[1190]: Stage: disks Jan 24 00:55:28.390862 ignition[1190]: no configs at "/usr/lib/ignition/base.d" Jan 24 00:55:28.390878 ignition[1190]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 24 00:55:28.391003 ignition[1190]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 24 00:55:28.391842 ignition[1190]: PUT result: OK Jan 24 00:55:28.394443 ignition[1190]: disks: disks passed Jan 24 00:55:28.394518 ignition[1190]: Ignition finished successfully Jan 24 00:55:28.396477 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 24 00:55:28.397292 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 24 00:55:28.397682 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 24 00:55:28.398294 systemd[1]: Reached target local-fs.target - Local File Systems. 
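Editor's note: the Ignition fetch stage above talks to the EC2 instance metadata service in two steps: a PUT to http://169.254.169.254/latest/api/token to obtain a session token, then a GET to http://169.254.169.254/2019-10-01/user-data for the config itself. A minimal sketch of that exchange, assuming the standard IMDSv2 token headers (the header names are not printed in the log, and this is only an illustration of the protocol, not Ignition's actual implementation):

    import urllib.request

    IMDS = "http://169.254.169.254"

    # Step 1: PUT to the token endpoint, as in
    # "PUT http://169.254.169.254/latest/api/token: attempt #1".
    # The TTL header is standard IMDSv2 but assumed here; the log only shows the URL.
    token_req = urllib.request.Request(
        f"{IMDS}/latest/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
    )
    token = urllib.request.urlopen(token_req, timeout=5).read().decode()

    # Step 2: GET the user data, as in
    # "GET http://169.254.169.254/2019-10-01/user-data: attempt #1".
    data_req = urllib.request.Request(
        f"{IMDS}/2019-10-01/user-data",
        headers={"X-aws-ec2-metadata-token": token},
    )
    user_data = urllib.request.urlopen(data_req, timeout=5).read()
    print(len(user_data), "bytes of user data")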
Jan 24 00:55:28.398874 systemd[1]: Reached target sysinit.target - System Initialization. Jan 24 00:55:28.399456 systemd[1]: Reached target basic.target - Basic System. Jan 24 00:55:28.405030 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 24 00:55:28.435751 systemd-fsck[1198]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 24 00:55:28.439682 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 24 00:55:28.449970 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 24 00:55:28.554827 kernel: EXT4-fs (nvme0n1p9): mounted filesystem a752e1f1-ddf3-43b9-88e7-8cc533707c34 r/w with ordered data mode. Quota mode: none. Jan 24 00:55:28.554967 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 24 00:55:28.555951 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 24 00:55:28.575004 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 24 00:55:28.578262 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 24 00:55:28.580298 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 24 00:55:28.581582 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 24 00:55:28.581623 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 24 00:55:28.595818 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/nvme0n1p6 scanned by mount (1217) Jan 24 00:55:28.598902 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8 Jan 24 00:55:28.602963 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jan 24 00:55:28.603018 kernel: BTRFS info (device nvme0n1p6): using free space tree Jan 24 00:55:28.603156 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 24 00:55:28.608017 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 24 00:55:28.638825 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 24 00:55:28.640283 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 24 00:55:29.212851 initrd-setup-root[1241]: cut: /sysroot/etc/passwd: No such file or directory Jan 24 00:55:29.266133 initrd-setup-root[1248]: cut: /sysroot/etc/group: No such file or directory Jan 24 00:55:29.271386 initrd-setup-root[1255]: cut: /sysroot/etc/shadow: No such file or directory Jan 24 00:55:29.276462 initrd-setup-root[1262]: cut: /sysroot/etc/gshadow: No such file or directory Jan 24 00:55:29.504990 systemd-networkd[1169]: eth0: Gained IPv6LL Jan 24 00:55:29.640261 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 24 00:55:29.647956 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 24 00:55:29.650987 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 24 00:55:29.661484 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 24 00:55:29.663832 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8 Jan 24 00:55:29.702543 ignition[1329]: INFO : Ignition 2.19.0 Jan 24 00:55:29.703562 ignition[1329]: INFO : Stage: mount Jan 24 00:55:29.704587 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Jan 24 00:55:29.705829 ignition[1329]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 24 00:55:29.705829 ignition[1329]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 24 00:55:29.705829 ignition[1329]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 24 00:55:29.708261 ignition[1329]: INFO : PUT result: OK Jan 24 00:55:29.712217 ignition[1329]: INFO : mount: mount passed Jan 24 00:55:29.712217 ignition[1329]: INFO : Ignition finished successfully Jan 24 00:55:29.715173 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 24 00:55:29.719930 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 24 00:55:29.740074 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 24 00:55:29.757413 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by mount (1341) Jan 24 00:55:29.757484 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8 Jan 24 00:55:29.760565 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jan 24 00:55:29.760639 kernel: BTRFS info (device nvme0n1p6): using free space tree Jan 24 00:55:29.766840 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 24 00:55:29.769305 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 24 00:55:29.791405 ignition[1358]: INFO : Ignition 2.19.0 Jan 24 00:55:29.791405 ignition[1358]: INFO : Stage: files Jan 24 00:55:29.792953 ignition[1358]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 24 00:55:29.792953 ignition[1358]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 24 00:55:29.792953 ignition[1358]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 24 00:55:29.792953 ignition[1358]: INFO : PUT result: OK Jan 24 00:55:29.795529 ignition[1358]: DEBUG : files: compiled without relabeling support, skipping Jan 24 00:55:29.796372 ignition[1358]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 24 00:55:29.796372 ignition[1358]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 24 00:55:29.833806 ignition[1358]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 24 00:55:29.834600 ignition[1358]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 24 00:55:29.834600 ignition[1358]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 24 00:55:29.834210 unknown[1358]: wrote ssh authorized keys file for user: core Jan 24 00:55:29.836600 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Jan 24 00:55:29.837336 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Jan 24 00:55:29.837336 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 24 00:55:29.837336 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 24 00:55:29.837336 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 24 00:55:29.837336 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 24 00:55:29.837336 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 24 00:55:29.837336 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1 Jan 24 00:55:34.395707 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Jan 24 00:55:34.740480 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 24 00:55:34.742432 ignition[1358]: INFO : files: createResultFile: createFiles: op(7): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 24 00:55:34.742432 ignition[1358]: INFO : files: createResultFile: createFiles: op(7): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 24 00:55:34.742432 ignition[1358]: INFO : files: files passed Jan 24 00:55:34.742432 ignition[1358]: INFO : Ignition finished successfully Jan 24 00:55:34.743167 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 24 00:55:34.748028 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 24 00:55:34.753966 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 24 00:55:34.759492 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 24 00:55:34.759631 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 24 00:55:34.768533 initrd-setup-root-after-ignition[1386]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 24 00:55:34.768533 initrd-setup-root-after-ignition[1386]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 24 00:55:34.772533 initrd-setup-root-after-ignition[1390]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 24 00:55:34.773422 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 24 00:55:34.774826 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 24 00:55:34.783039 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 24 00:55:34.822208 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 24 00:55:34.822326 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 24 00:55:34.823770 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 24 00:55:34.824398 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 24 00:55:34.825356 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 24 00:55:34.837057 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 24 00:55:34.850409 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 24 00:55:34.856024 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 24 00:55:34.866636 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 24 00:55:34.867266 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. 
Jan 24 00:55:34.867827 systemd[1]: Stopped target timers.target - Timer Units. Jan 24 00:55:34.868291 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 24 00:55:34.868420 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 24 00:55:34.869464 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 24 00:55:34.870253 systemd[1]: Stopped target basic.target - Basic System. Jan 24 00:55:34.871014 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 24 00:55:34.871761 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 24 00:55:34.872508 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 24 00:55:34.873382 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 24 00:55:34.874120 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 24 00:55:34.874786 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 24 00:55:34.875461 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 24 00:55:34.876146 systemd[1]: Stopped target swap.target - Swaps. Jan 24 00:55:34.876776 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 24 00:55:34.876914 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 24 00:55:34.878393 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 24 00:55:34.879471 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 24 00:55:34.880144 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 24 00:55:34.880256 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 24 00:55:34.880888 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 24 00:55:34.881323 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 24 00:55:34.882545 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 24 00:55:34.882659 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 24 00:55:34.883337 systemd[1]: ignition-files.service: Deactivated successfully. Jan 24 00:55:34.883433 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 24 00:55:34.892030 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 24 00:55:34.895002 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 24 00:55:34.895870 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 24 00:55:34.896031 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 24 00:55:34.899068 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 24 00:55:34.899579 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 24 00:55:34.905487 ignition[1410]: INFO : Ignition 2.19.0 Jan 24 00:55:34.905487 ignition[1410]: INFO : Stage: umount Jan 24 00:55:34.905478 systemd[1]: initrd-cleanup.service: Deactivated successfully. 
Jan 24 00:55:34.907377 ignition[1410]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 24 00:55:34.907377 ignition[1410]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 24 00:55:34.907377 ignition[1410]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 24 00:55:34.908460 ignition[1410]: INFO : PUT result: OK Jan 24 00:55:34.910970 ignition[1410]: INFO : umount: umount passed Jan 24 00:55:34.911557 ignition[1410]: INFO : Ignition finished successfully Jan 24 00:55:34.912273 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 24 00:55:34.915352 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 24 00:55:34.915486 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 24 00:55:34.919200 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 24 00:55:34.919314 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 24 00:55:34.919974 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 24 00:55:34.920037 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 24 00:55:34.921257 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 24 00:55:34.921326 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 24 00:55:34.922919 systemd[1]: Stopped target network.target - Network. Jan 24 00:55:34.923692 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 24 00:55:34.923771 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 24 00:55:34.925923 systemd[1]: Stopped target paths.target - Path Units. Jan 24 00:55:34.926661 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 24 00:55:34.930869 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 24 00:55:34.931453 systemd[1]: Stopped target slices.target - Slice Units. Jan 24 00:55:34.931953 systemd[1]: Stopped target sockets.target - Socket Units. Jan 24 00:55:34.932465 systemd[1]: iscsid.socket: Deactivated successfully. Jan 24 00:55:34.932525 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 24 00:55:34.933066 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 24 00:55:34.933128 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 24 00:55:34.936492 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 24 00:55:34.936568 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 24 00:55:34.937175 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 24 00:55:34.937243 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 24 00:55:34.938130 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 24 00:55:34.939303 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 24 00:55:34.941844 systemd-networkd[1169]: eth0: DHCPv6 lease lost Jan 24 00:55:34.941878 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 24 00:55:34.945472 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 24 00:55:34.945614 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 24 00:55:34.947490 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 24 00:55:34.947564 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 24 00:55:34.955023 systemd[1]: Stopping network-cleanup.service - Network Cleanup... 
Jan 24 00:55:34.955592 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 24 00:55:34.955669 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 24 00:55:34.956447 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 24 00:55:34.957555 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 24 00:55:34.957682 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 24 00:55:34.960980 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 24 00:55:34.961256 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 24 00:55:34.969335 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 24 00:55:34.969535 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 24 00:55:34.973569 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 24 00:55:34.973644 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 24 00:55:34.975652 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 24 00:55:34.975703 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 24 00:55:34.976509 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 24 00:55:34.976572 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 24 00:55:34.977904 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 24 00:55:34.977967 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 24 00:55:34.979063 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 24 00:55:34.979126 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 24 00:55:34.980197 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 24 00:55:34.980256 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 24 00:55:34.985984 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 24 00:55:34.986621 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 24 00:55:34.986702 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 24 00:55:34.989374 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 24 00:55:34.990050 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 24 00:55:34.991410 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 24 00:55:34.991898 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 24 00:55:34.992617 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 24 00:55:34.992678 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 24 00:55:34.993256 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 24 00:55:34.993313 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 24 00:55:34.995647 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 24 00:55:34.995712 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 24 00:55:34.996921 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 24 00:55:34.996979 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Jan 24 00:55:34.998158 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 24 00:55:34.998286 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 24 00:55:34.999249 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 24 00:55:34.999366 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 24 00:55:35.001338 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 24 00:55:35.009024 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 24 00:55:35.018522 systemd[1]: Switching root. Jan 24 00:55:35.053386 systemd-journald[179]: Journal stopped Jan 24 00:55:37.055488 systemd-journald[179]: Received SIGTERM from PID 1 (systemd). Jan 24 00:55:37.055601 kernel: SELinux: policy capability network_peer_controls=1 Jan 24 00:55:37.055637 kernel: SELinux: policy capability open_perms=1 Jan 24 00:55:37.055656 kernel: SELinux: policy capability extended_socket_class=1 Jan 24 00:55:37.055679 kernel: SELinux: policy capability always_check_network=0 Jan 24 00:55:37.055698 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 24 00:55:37.055718 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 24 00:55:37.055738 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 24 00:55:37.055757 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 24 00:55:37.055780 kernel: audit: type=1403 audit(1769216135.514:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 24 00:55:37.055843 systemd[1]: Successfully loaded SELinux policy in 52.139ms. Jan 24 00:55:37.055874 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.549ms. Jan 24 00:55:37.055895 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 24 00:55:37.055920 systemd[1]: Detected virtualization amazon. Jan 24 00:55:37.055939 systemd[1]: Detected architecture x86-64. Jan 24 00:55:37.055957 systemd[1]: Detected first boot. Jan 24 00:55:37.055983 systemd[1]: Initializing machine ID from VM UUID. Jan 24 00:55:37.056001 zram_generator::config[1452]: No configuration found. Jan 24 00:55:37.056034 systemd[1]: Populated /etc with preset unit settings. Jan 24 00:55:37.056055 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 24 00:55:37.056076 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 24 00:55:37.056097 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 24 00:55:37.056120 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 24 00:55:37.056141 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 24 00:55:37.056162 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 24 00:55:37.056182 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 24 00:55:37.056203 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 24 00:55:37.056227 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 24 00:55:37.056249 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. 
Jan 24 00:55:37.056269 systemd[1]: Created slice user.slice - User and Session Slice. Jan 24 00:55:37.056290 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 24 00:55:37.056312 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 24 00:55:37.056333 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 24 00:55:37.056353 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 24 00:55:37.056374 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 24 00:55:37.056399 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 24 00:55:37.056420 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 24 00:55:37.056441 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 24 00:55:37.056468 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 24 00:55:37.056488 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 24 00:55:37.056506 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 24 00:55:37.056524 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 24 00:55:37.056541 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 24 00:55:37.056565 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 24 00:55:37.056586 systemd[1]: Reached target slices.target - Slice Units. Jan 24 00:55:37.056605 systemd[1]: Reached target swap.target - Swaps. Jan 24 00:55:37.056625 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 24 00:55:37.056644 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 24 00:55:37.056662 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 24 00:55:37.056681 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 24 00:55:37.056700 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 24 00:55:37.056721 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 24 00:55:37.056746 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 24 00:55:37.056768 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 24 00:55:37.058551 systemd[1]: Mounting media.mount - External Media Directory... Jan 24 00:55:37.058603 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:55:37.058624 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 24 00:55:37.058643 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 24 00:55:37.058662 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 24 00:55:37.058683 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 24 00:55:37.058703 systemd[1]: Reached target machines.target - Containers. Jan 24 00:55:37.058730 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... 
Jan 24 00:55:37.058750 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 24 00:55:37.058769 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 24 00:55:37.059319 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 24 00:55:37.059352 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 24 00:55:37.059375 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 24 00:55:37.059396 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 24 00:55:37.059417 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 24 00:55:37.059443 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 24 00:55:37.059464 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 24 00:55:37.059484 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 24 00:55:37.059503 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 24 00:55:37.059523 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 24 00:55:37.059543 systemd[1]: Stopped systemd-fsck-usr.service. Jan 24 00:55:37.059563 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 24 00:55:37.059584 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 24 00:55:37.059606 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 24 00:55:37.059631 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 24 00:55:37.059650 kernel: loop: module loaded Jan 24 00:55:37.059670 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 24 00:55:37.059692 systemd[1]: verity-setup.service: Deactivated successfully. Jan 24 00:55:37.059712 systemd[1]: Stopped verity-setup.service. Jan 24 00:55:37.059731 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:55:37.059753 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 24 00:55:37.059773 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 24 00:55:37.059824 systemd[1]: Mounted media.mount - External Media Directory. Jan 24 00:55:37.059849 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 24 00:55:37.059871 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 24 00:55:37.059891 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 24 00:55:37.059912 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 24 00:55:37.059971 systemd-journald[1534]: Collecting audit messages is disabled. Jan 24 00:55:37.060012 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 24 00:55:37.060033 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 24 00:55:37.060053 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 24 00:55:37.060074 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 24 00:55:37.060096 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Jan 24 00:55:37.060121 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 24 00:55:37.060144 systemd-journald[1534]: Journal started Jan 24 00:55:37.060185 systemd-journald[1534]: Runtime Journal (/run/log/journal/ec2b587038fa75469e958a5cb2a62857) is 4.7M, max 38.2M, 33.4M free. Jan 24 00:55:36.626755 systemd[1]: Queued start job for default target multi-user.target. Jan 24 00:55:36.730424 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jan 24 00:55:37.066942 systemd[1]: Started systemd-journald.service - Journal Service. Jan 24 00:55:36.730931 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 24 00:55:37.069089 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 24 00:55:37.069297 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 24 00:55:37.071430 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 24 00:55:37.074329 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 24 00:55:37.076455 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 24 00:55:37.096820 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 24 00:55:37.099653 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 24 00:55:37.122978 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 24 00:55:37.123670 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 24 00:55:37.123720 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 24 00:55:37.125898 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 24 00:55:37.130056 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 24 00:55:37.140028 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 24 00:55:37.140960 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 24 00:55:37.147057 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 24 00:55:37.150634 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 24 00:55:37.151529 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 24 00:55:37.161484 kernel: ACPI: bus type drm_connector registered Jan 24 00:55:37.163038 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 24 00:55:37.163883 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 24 00:55:37.167387 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 24 00:55:37.178299 kernel: fuse: init (API version 7.39) Jan 24 00:55:37.173012 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 24 00:55:37.177412 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 24 00:55:37.182985 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. 
Jan 24 00:55:37.184063 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 24 00:55:37.185099 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 24 00:55:37.185693 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 24 00:55:37.188890 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 24 00:55:37.189091 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 24 00:55:37.207758 systemd-journald[1534]: Time spent on flushing to /var/log/journal/ec2b587038fa75469e958a5cb2a62857 is 107.809ms for 965 entries. Jan 24 00:55:37.207758 systemd-journald[1534]: System Journal (/var/log/journal/ec2b587038fa75469e958a5cb2a62857) is 8.0M, max 195.6M, 187.6M free. Jan 24 00:55:37.333486 systemd-journald[1534]: Received client request to flush runtime journal. Jan 24 00:55:37.333566 kernel: loop0: detected capacity change from 0 to 142488 Jan 24 00:55:37.208334 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 24 00:55:37.220193 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 24 00:55:37.239092 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 24 00:55:37.240063 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 24 00:55:37.252007 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 24 00:55:37.284649 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 24 00:55:37.292036 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 24 00:55:37.342122 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 24 00:55:37.355982 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 24 00:55:37.359163 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 24 00:55:37.364975 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 24 00:55:37.366454 systemd-tmpfiles[1579]: ACLs are not supported, ignoring. Jan 24 00:55:37.366477 systemd-tmpfiles[1579]: ACLs are not supported, ignoring. Jan 24 00:55:37.381185 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 24 00:55:37.383998 udevadm[1590]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 24 00:55:37.393932 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 24 00:55:37.446038 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 24 00:55:37.468214 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 24 00:55:37.474878 kernel: loop1: detected capacity change from 0 to 219144 Jan 24 00:55:37.481011 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 24 00:55:37.511199 systemd-tmpfiles[1604]: ACLs are not supported, ignoring. Jan 24 00:55:37.511613 systemd-tmpfiles[1604]: ACLs are not supported, ignoring. Jan 24 00:55:37.519104 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Jan 24 00:55:37.829888 kernel: loop2: detected capacity change from 0 to 61336 Jan 24 00:55:38.009978 kernel: loop3: detected capacity change from 0 to 140768 Jan 24 00:55:38.097373 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 24 00:55:38.105450 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 24 00:55:38.133916 systemd-udevd[1611]: Using default interface naming scheme 'v255'. Jan 24 00:55:38.237064 kernel: loop4: detected capacity change from 0 to 142488 Jan 24 00:55:38.250503 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 24 00:55:38.262114 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 24 00:55:38.281356 kernel: loop5: detected capacity change from 0 to 219144 Jan 24 00:55:38.306570 kernel: loop6: detected capacity change from 0 to 61336 Jan 24 00:55:38.317593 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 24 00:55:38.363514 (udev-worker)[1621]: Network interface NamePolicy= disabled on kernel command line. Jan 24 00:55:38.371817 kernel: loop7: detected capacity change from 0 to 140768 Jan 24 00:55:38.409200 (sd-merge)[1613]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Jan 24 00:55:38.411692 (sd-merge)[1613]: Merged extensions into '/usr'. Jan 24 00:55:38.424456 systemd[1]: Reloading requested from client PID 1578 ('systemd-sysext') (unit systemd-sysext.service)... Jan 24 00:55:38.424474 systemd[1]: Reloading... Jan 24 00:55:38.613238 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jan 24 00:55:38.613322 zram_generator::config[1674]: No configuration found. Jan 24 00:55:38.615819 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Jan 24 00:55:38.630242 kernel: ACPI: button: Power Button [PWRF] Jan 24 00:55:38.630339 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input3 Jan 24 00:55:38.630366 kernel: ACPI: button: Sleep Button [SLPF] Jan 24 00:55:38.654257 systemd-networkd[1619]: lo: Link UP Jan 24 00:55:38.654269 systemd-networkd[1619]: lo: Gained carrier Jan 24 00:55:38.663111 systemd-networkd[1619]: Enumeration completed Jan 24 00:55:38.666666 systemd-networkd[1619]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 24 00:55:38.666679 systemd-networkd[1619]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 24 00:55:38.672492 systemd-networkd[1619]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 24 00:55:38.672547 systemd-networkd[1619]: eth0: Link UP Jan 24 00:55:38.672724 systemd-networkd[1619]: eth0: Gained carrier Jan 24 00:55:38.672742 systemd-networkd[1619]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
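In the sd-merge messages above, the 'containerd-flatcar', 'docker-flatcar', 'kubernetes', and 'oem-ami' extensions are merged into /usr; the kubernetes image is the one the files stage linked under /etc/extensions. As a rough sketch only (the search directories below are taken from systemd-sysext(8) and abbreviated; the log itself only shows the /etc/extensions symlink), discovery amounts to scanning a few well-known directories for *.raw images:

```python
from pathlib import Path

# Abbreviated list of directories systemd-sysext scans for extension images
# (per systemd-sysext(8)); this is an assumption for illustration, not taken
# from the log, and the OEM/base extensions may come from other locations.
SEARCH_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

def discoverable_sysexts() -> list[str]:
    """Return the extension image names a merge like the one above could see."""
    names = []
    for d in SEARCH_DIRS:
        path = Path(d)
        if not path.is_dir():
            continue  # missing directories are simply skipped
        for img in sorted(path.glob("*.raw")):
            names.append(img.stem)  # e.g. "kubernetes" for kubernetes.raw
    return names

if __name__ == "__main__":
    print(discoverable_sysexts())
```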
Jan 24 00:55:38.683919 systemd-networkd[1619]: eth0: DHCPv4 address 172.31.30.66/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jan 24 00:55:38.696839 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 33 scanned by (udev-worker) (1626) Jan 24 00:55:38.710942 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input4 Jan 24 00:55:38.897862 kernel: mousedev: PS/2 mouse device common for all mice Jan 24 00:55:38.961805 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 24 00:55:39.052038 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 24 00:55:39.052501 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jan 24 00:55:39.053357 systemd[1]: Reloading finished in 627 ms. Jan 24 00:55:39.086523 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 24 00:55:39.087383 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 24 00:55:39.088211 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 24 00:55:39.105988 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 24 00:55:39.125229 systemd[1]: Starting ensure-sysext.service... Jan 24 00:55:39.127990 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 24 00:55:39.135017 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 24 00:55:39.144827 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 24 00:55:39.148987 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 24 00:55:39.154050 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 24 00:55:39.163958 systemd[1]: Reloading requested from client PID 1807 ('systemctl') (unit ensure-sysext.service)... Jan 24 00:55:39.163981 systemd[1]: Reloading... Jan 24 00:55:39.192880 ldconfig[1573]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 24 00:55:39.208181 lvm[1808]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 24 00:55:39.219836 systemd-tmpfiles[1811]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 24 00:55:39.220440 systemd-tmpfiles[1811]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 24 00:55:39.225389 systemd-tmpfiles[1811]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 24 00:55:39.228947 systemd-tmpfiles[1811]: ACLs are not supported, ignoring. Jan 24 00:55:39.229063 systemd-tmpfiles[1811]: ACLs are not supported, ignoring. Jan 24 00:55:39.245403 systemd-tmpfiles[1811]: Detected autofs mount point /boot during canonicalization of boot. Jan 24 00:55:39.245423 systemd-tmpfiles[1811]: Skipping /boot Jan 24 00:55:39.276260 systemd-tmpfiles[1811]: Detected autofs mount point /boot during canonicalization of boot. Jan 24 00:55:39.276283 systemd-tmpfiles[1811]: Skipping /boot Jan 24 00:55:39.293814 zram_generator::config[1845]: No configuration found. 
Jan 24 00:55:39.447124 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 24 00:55:39.523617 systemd[1]: Reloading finished in 359 ms. Jan 24 00:55:39.543514 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 24 00:55:39.549403 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 24 00:55:39.550179 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 24 00:55:39.551337 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 24 00:55:39.552176 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 00:55:39.560875 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 24 00:55:39.572240 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 24 00:55:39.575894 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 24 00:55:39.579164 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 24 00:55:39.586080 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 24 00:55:39.599502 lvm[1909]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 24 00:55:39.596103 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 24 00:55:39.604115 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 24 00:55:39.608495 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:55:39.608699 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 24 00:55:39.616665 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 24 00:55:39.621154 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 24 00:55:39.625312 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 24 00:55:39.626571 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 24 00:55:39.626711 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:55:39.627785 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 24 00:55:39.629300 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 24 00:55:39.636857 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 24 00:55:39.639535 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:55:39.641659 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 24 00:55:39.644153 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 24 00:55:39.645112 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Jan 24 00:55:39.645253 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:55:39.649620 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 24 00:55:39.649887 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 24 00:55:39.651433 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 24 00:55:39.651741 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 24 00:55:39.662358 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:55:39.662639 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 24 00:55:39.662820 augenrules[1935]: No rules Jan 24 00:55:39.670068 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 24 00:55:39.676388 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 24 00:55:39.679870 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 24 00:55:39.680396 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 24 00:55:39.680572 systemd[1]: Reached target time-set.target - System Time Set. Jan 24 00:55:39.681942 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:55:39.683159 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 24 00:55:39.684380 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 24 00:55:39.690086 systemd[1]: Finished ensure-sysext.service. Jan 24 00:55:39.693464 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 24 00:55:39.694844 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 24 00:55:39.699975 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 24 00:55:39.700676 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 24 00:55:39.715735 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 24 00:55:39.726331 systemd-resolved[1916]: Positive Trust Anchors: Jan 24 00:55:39.726552 systemd-resolved[1916]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 24 00:55:39.726608 systemd-resolved[1916]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 24 00:55:39.728199 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 24 00:55:39.730024 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 24 00:55:39.730407 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 24 00:55:39.731499 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Jan 24 00:55:39.731816 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 24 00:55:39.736590 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 24 00:55:39.736675 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 24 00:55:39.745674 systemd-resolved[1916]: Defaulting to hostname 'linux'. Jan 24 00:55:39.750111 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 24 00:55:39.751156 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 24 00:55:39.751857 systemd[1]: Reached target network.target - Network. Jan 24 00:55:39.752319 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 24 00:55:39.771057 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 24 00:55:39.771709 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 24 00:55:39.771743 systemd[1]: Reached target sysinit.target - System Initialization. Jan 24 00:55:39.772327 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 24 00:55:39.772767 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 24 00:55:39.773420 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 24 00:55:39.773879 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 24 00:55:39.774219 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 24 00:55:39.774543 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 24 00:55:39.774579 systemd[1]: Reached target paths.target - Path Units. Jan 24 00:55:39.774928 systemd[1]: Reached target timers.target - Timer Units. Jan 24 00:55:39.776285 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 24 00:55:39.778082 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 24 00:55:39.783886 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 24 00:55:39.784983 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 24 00:55:39.785637 systemd[1]: Reached target sockets.target - Socket Units. Jan 24 00:55:39.786013 systemd[1]: Reached target basic.target - Basic System. Jan 24 00:55:39.786379 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 24 00:55:39.786413 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 24 00:55:39.787524 systemd[1]: Starting containerd.service - containerd container runtime... Jan 24 00:55:39.791009 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 24 00:55:39.795976 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 24 00:55:39.798939 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 24 00:55:39.802405 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
Jan 24 00:55:39.803907 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 24 00:55:39.805952 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 24 00:55:39.809992 systemd[1]: Started ntpd.service - Network Time Service. Jan 24 00:55:39.814920 systemd[1]: Starting setup-oem.service - Setup OEM... Jan 24 00:55:39.817030 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 24 00:55:39.820055 jq[1959]: false Jan 24 00:55:39.820974 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 24 00:55:39.826957 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 24 00:55:39.827693 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 24 00:55:39.828301 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 24 00:55:39.829229 systemd[1]: Starting update-engine.service - Update Engine... Jan 24 00:55:39.838968 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 24 00:55:39.848175 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 24 00:55:39.848362 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 24 00:55:39.899239 jq[1969]: true Jan 24 00:55:39.900508 extend-filesystems[1960]: Found loop4 Jan 24 00:55:39.900508 extend-filesystems[1960]: Found loop5 Jan 24 00:55:39.900508 extend-filesystems[1960]: Found loop6 Jan 24 00:55:39.900508 extend-filesystems[1960]: Found loop7 Jan 24 00:55:39.900508 extend-filesystems[1960]: Found nvme0n1 Jan 24 00:55:39.900508 extend-filesystems[1960]: Found nvme0n1p1 Jan 24 00:55:39.900508 extend-filesystems[1960]: Found nvme0n1p2 Jan 24 00:55:39.900508 extend-filesystems[1960]: Found nvme0n1p3 Jan 24 00:55:39.900508 extend-filesystems[1960]: Found usr Jan 24 00:55:39.900508 extend-filesystems[1960]: Found nvme0n1p4 Jan 24 00:55:39.900508 extend-filesystems[1960]: Found nvme0n1p6 Jan 24 00:55:39.900508 extend-filesystems[1960]: Found nvme0n1p7 Jan 24 00:55:39.900508 extend-filesystems[1960]: Found nvme0n1p9 Jan 24 00:55:39.938898 extend-filesystems[1960]: Checking size of /dev/nvme0n1p9 Jan 24 00:55:39.921626 dbus-daemon[1958]: [system] SELinux support is enabled Jan 24 00:55:39.907713 (ntainerd)[1985]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 24 00:55:39.948824 update_engine[1968]: I20260124 00:55:39.907029 1968 main.cc:92] Flatcar Update Engine starting Jan 24 00:55:39.948824 update_engine[1968]: I20260124 00:55:39.935369 1968 update_check_scheduler.cc:74] Next update check in 2m23s Jan 24 00:55:39.925409 dbus-daemon[1958]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1619 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 24 00:55:39.916104 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
Jan 24 00:55:39.945300 dbus-daemon[1958]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 24 00:55:39.916278 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 24 00:55:39.927140 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 24 00:55:39.954654 jq[1988]: true Jan 24 00:55:39.932265 systemd[1]: motdgen.service: Deactivated successfully. Jan 24 00:55:39.932786 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 24 00:55:39.945628 systemd[1]: Started update-engine.service - Update Engine. Jan 24 00:55:39.950857 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 24 00:55:39.950882 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 24 00:55:39.963016 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jan 24 00:55:39.963888 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 24 00:55:39.963919 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 24 00:55:39.970115 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 24 00:55:39.979037 systemd-logind[1967]: Watching system buttons on /dev/input/event1 (Power Button) Jan 24 00:55:39.979061 systemd-logind[1967]: Watching system buttons on /dev/input/event2 (Sleep Button) Jan 24 00:55:39.979078 systemd-logind[1967]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 24 00:55:39.981055 systemd-logind[1967]: New seat seat0. Jan 24 00:55:39.984058 systemd[1]: Started systemd-logind.service - User Login Management. Jan 24 00:55:40.003327 ntpd[1962]: ntpd 4.2.8p17@1.4004-o Fri Jan 23 22:00:38 UTC 2026 (1): Starting Jan 24 00:55:40.004184 ntpd[1962]: 24 Jan 00:55:40 ntpd[1962]: ntpd 4.2.8p17@1.4004-o Fri Jan 23 22:00:38 UTC 2026 (1): Starting Jan 24 00:55:40.004184 ntpd[1962]: 24 Jan 00:55:40 ntpd[1962]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 24 00:55:40.004184 ntpd[1962]: 24 Jan 00:55:40 ntpd[1962]: ---------------------------------------------------- Jan 24 00:55:40.004184 ntpd[1962]: 24 Jan 00:55:40 ntpd[1962]: ntp-4 is maintained by Network Time Foundation, Jan 24 00:55:40.004184 ntpd[1962]: 24 Jan 00:55:40 ntpd[1962]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 24 00:55:40.004184 ntpd[1962]: 24 Jan 00:55:40 ntpd[1962]: corporation. Support and training for ntp-4 are Jan 24 00:55:40.004184 ntpd[1962]: 24 Jan 00:55:40 ntpd[1962]: available at https://www.nwtime.org/support Jan 24 00:55:40.004184 ntpd[1962]: 24 Jan 00:55:40 ntpd[1962]: ---------------------------------------------------- Jan 24 00:55:40.003352 ntpd[1962]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 24 00:55:40.004299 systemd[1]: Finished setup-oem.service - Setup OEM. Jan 24 00:55:40.003359 ntpd[1962]: ---------------------------------------------------- Jan 24 00:55:40.003367 ntpd[1962]: ntp-4 is maintained by Network Time Foundation, Jan 24 00:55:40.003376 ntpd[1962]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 24 00:55:40.003383 ntpd[1962]: corporation. 
Support and training for ntp-4 are Jan 24 00:55:40.003390 ntpd[1962]: available at https://www.nwtime.org/support Jan 24 00:55:40.003396 ntpd[1962]: ---------------------------------------------------- Jan 24 00:55:40.008218 ntpd[1962]: proto: precision = 0.054 usec (-24) Jan 24 00:55:40.008437 ntpd[1962]: 24 Jan 00:55:40 ntpd[1962]: proto: precision = 0.054 usec (-24) Jan 24 00:55:40.008955 ntpd[1962]: basedate set to 2026-01-11 Jan 24 00:55:40.013905 ntpd[1962]: 24 Jan 00:55:40 ntpd[1962]: basedate set to 2026-01-11 Jan 24 00:55:40.013905 ntpd[1962]: 24 Jan 00:55:40 ntpd[1962]: gps base set to 2026-01-11 (week 2401) Jan 24 00:55:40.013905 ntpd[1962]: 24 Jan 00:55:40 ntpd[1962]: Listen and drop on 0 v6wildcard [::]:123 Jan 24 00:55:40.013905 ntpd[1962]: 24 Jan 00:55:40 ntpd[1962]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 24 00:55:40.013905 ntpd[1962]: 24 Jan 00:55:40 ntpd[1962]: Listen normally on 2 lo 127.0.0.1:123 Jan 24 00:55:40.013905 ntpd[1962]: 24 Jan 00:55:40 ntpd[1962]: Listen normally on 3 eth0 172.31.30.66:123 Jan 24 00:55:40.008973 ntpd[1962]: gps base set to 2026-01-11 (week 2401) Jan 24 00:55:40.013532 ntpd[1962]: Listen and drop on 0 v6wildcard [::]:123 Jan 24 00:55:40.013582 ntpd[1962]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 24 00:55:40.013735 ntpd[1962]: Listen normally on 2 lo 127.0.0.1:123 Jan 24 00:55:40.013762 ntpd[1962]: Listen normally on 3 eth0 172.31.30.66:123 Jan 24 00:55:40.014216 ntpd[1962]: Listen normally on 4 lo [::1]:123 Jan 24 00:55:40.014264 ntpd[1962]: 24 Jan 00:55:40 ntpd[1962]: Listen normally on 4 lo [::1]:123 Jan 24 00:55:40.014331 ntpd[1962]: bind(21) AF_INET6 fe80::47e:e5ff:fe07:8a1b%2#123 flags 0x11 failed: Cannot assign requested address Jan 24 00:55:40.014392 ntpd[1962]: 24 Jan 00:55:40 ntpd[1962]: bind(21) AF_INET6 fe80::47e:e5ff:fe07:8a1b%2#123 flags 0x11 failed: Cannot assign requested address Jan 24 00:55:40.014433 ntpd[1962]: unable to create socket on eth0 (5) for fe80::47e:e5ff:fe07:8a1b%2#123 Jan 24 00:55:40.014488 ntpd[1962]: 24 Jan 00:55:40 ntpd[1962]: unable to create socket on eth0 (5) for fe80::47e:e5ff:fe07:8a1b%2#123 Jan 24 00:55:40.014517 ntpd[1962]: failed to init interface for address fe80::47e:e5ff:fe07:8a1b%2 Jan 24 00:55:40.014555 ntpd[1962]: 24 Jan 00:55:40 ntpd[1962]: failed to init interface for address fe80::47e:e5ff:fe07:8a1b%2 Jan 24 00:55:40.014605 ntpd[1962]: Listening on routing socket on fd #21 for interface updates Jan 24 00:55:40.014645 ntpd[1962]: 24 Jan 00:55:40 ntpd[1962]: Listening on routing socket on fd #21 for interface updates Jan 24 00:55:40.016298 ntpd[1962]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 24 00:55:40.019817 ntpd[1962]: 24 Jan 00:55:40 ntpd[1962]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 24 00:55:40.019817 ntpd[1962]: 24 Jan 00:55:40 ntpd[1962]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 24 00:55:40.018866 ntpd[1962]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 24 00:55:40.021023 coreos-metadata[1957]: Jan 24 00:55:40.020 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 24 00:55:40.021268 extend-filesystems[1960]: Resized partition /dev/nvme0n1p9 Jan 24 00:55:40.025055 coreos-metadata[1957]: Jan 24 00:55:40.024 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jan 24 00:55:40.025458 extend-filesystems[2021]: resize2fs 1.47.1 (20-May-2024) Jan 24 00:55:40.026079 coreos-metadata[1957]: Jan 24 00:55:40.025 INFO Fetch successful Jan 24 00:55:40.026740 
coreos-metadata[1957]: Jan 24 00:55:40.026 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jan 24 00:55:40.030813 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 3587067 blocks Jan 24 00:55:40.030864 coreos-metadata[1957]: Jan 24 00:55:40.029 INFO Fetch successful Jan 24 00:55:40.030864 coreos-metadata[1957]: Jan 24 00:55:40.029 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jan 24 00:55:40.030864 coreos-metadata[1957]: Jan 24 00:55:40.030 INFO Fetch successful Jan 24 00:55:40.030864 coreos-metadata[1957]: Jan 24 00:55:40.030 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jan 24 00:55:40.031087 coreos-metadata[1957]: Jan 24 00:55:40.031 INFO Fetch successful Jan 24 00:55:40.031149 coreos-metadata[1957]: Jan 24 00:55:40.031 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jan 24 00:55:40.031666 coreos-metadata[1957]: Jan 24 00:55:40.031 INFO Fetch failed with 404: resource not found Jan 24 00:55:40.031715 coreos-metadata[1957]: Jan 24 00:55:40.031 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jan 24 00:55:40.033358 coreos-metadata[1957]: Jan 24 00:55:40.033 INFO Fetch successful Jan 24 00:55:40.033358 coreos-metadata[1957]: Jan 24 00:55:40.033 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jan 24 00:55:40.034894 coreos-metadata[1957]: Jan 24 00:55:40.034 INFO Fetch successful Jan 24 00:55:40.034975 coreos-metadata[1957]: Jan 24 00:55:40.034 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jan 24 00:55:40.035409 coreos-metadata[1957]: Jan 24 00:55:40.035 INFO Fetch successful Jan 24 00:55:40.035456 coreos-metadata[1957]: Jan 24 00:55:40.035 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jan 24 00:55:40.036081 coreos-metadata[1957]: Jan 24 00:55:40.036 INFO Fetch successful Jan 24 00:55:40.036148 coreos-metadata[1957]: Jan 24 00:55:40.036 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jan 24 00:55:40.037421 coreos-metadata[1957]: Jan 24 00:55:40.037 INFO Fetch successful Jan 24 00:55:40.098966 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 33 scanned by (udev-worker) (1615) Jan 24 00:55:40.124146 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 24 00:55:40.125647 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 24 00:55:40.161599 dbus-daemon[1958]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 24 00:55:40.161829 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 24 00:55:40.173321 dbus-daemon[1958]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1995 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 24 00:55:40.189168 systemd[1]: Starting polkit.service - Authorization Manager... Jan 24 00:55:40.201342 bash[2031]: Updated "/home/core/.ssh/authorized_keys" Jan 24 00:55:40.205477 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 24 00:55:40.224312 polkitd[2062]: Started polkitd version 121 Jan 24 00:55:40.226991 systemd[1]: Starting sshkeys.service... 
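
The coreos-metadata[1957] entries above follow the IMDSv2 pattern: a PUT to /latest/api/token to obtain a session token, then GETs against the 2021-01-03 meta-data tree with the token attached. A minimal sketch of that request sequence, assuming the standard IMDSv2 token headers; the agent itself is a separate implementation and differs in detail:

# Minimal sketch of the IMDSv2 fetch pattern seen in the coreos-metadata log:
# PUT a session token, then GET meta-data paths with the token header.
import urllib.request

IMDS = "http://169.254.169.254"

def imds_token(ttl=300):
    req = urllib.request.Request(
        f"{IMDS}/latest/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": str(ttl)},
    )
    with urllib.request.urlopen(req, timeout=2) as resp:
        return resp.read().decode()

def imds_get(path, token):
    req = urllib.request.Request(
        f"{IMDS}/2021-01-03/meta-data/{path}",
        headers={"X-aws-ec2-metadata-token": token},
    )
    with urllib.request.urlopen(req, timeout=2) as resp:
        return resp.read().decode()

if __name__ == "__main__":
    tok = imds_token()
    for path in ("instance-id", "instance-type", "local-ipv4", "placement/availability-zone"):
        print(path, "=", imds_get(path, tok))
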
Jan 24 00:55:40.230819 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 3587067 Jan 24 00:55:40.248384 extend-filesystems[2021]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jan 24 00:55:40.248384 extend-filesystems[2021]: old_desc_blocks = 1, new_desc_blocks = 2 Jan 24 00:55:40.248384 extend-filesystems[2021]: The filesystem on /dev/nvme0n1p9 is now 3587067 (4k) blocks long. Jan 24 00:55:40.253880 extend-filesystems[1960]: Resized filesystem in /dev/nvme0n1p9 Jan 24 00:55:40.253572 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 24 00:55:40.253845 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 24 00:55:40.261427 polkitd[2062]: Loading rules from directory /etc/polkit-1/rules.d Jan 24 00:55:40.263227 polkitd[2062]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 24 00:55:40.267099 polkitd[2062]: Finished loading, compiling and executing 2 rules Jan 24 00:55:40.272641 dbus-daemon[1958]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 24 00:55:40.272880 systemd[1]: Started polkit.service - Authorization Manager. Jan 24 00:55:40.276128 polkitd[2062]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 24 00:55:40.306673 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 24 00:55:40.315271 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 24 00:55:40.322584 locksmithd[1996]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 24 00:55:40.357857 systemd-hostnamed[1995]: Hostname set to (transient) Jan 24 00:55:40.358001 systemd-resolved[1916]: System hostname changed to 'ip-172-31-30-66'. Jan 24 00:55:40.385030 systemd-networkd[1619]: eth0: Gained IPv6LL Jan 24 00:55:40.393908 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 24 00:55:40.395446 systemd[1]: Reached target network-online.target - Network is Online. Jan 24 00:55:40.407286 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Jan 24 00:55:40.419639 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:55:40.432401 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 24 00:55:40.552759 coreos-metadata[2095]: Jan 24 00:55:40.552 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 24 00:55:40.567819 coreos-metadata[2095]: Jan 24 00:55:40.566 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jan 24 00:55:40.568321 coreos-metadata[2095]: Jan 24 00:55:40.568 INFO Fetch successful Jan 24 00:55:40.568321 coreos-metadata[2095]: Jan 24 00:55:40.568 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 24 00:55:40.571667 coreos-metadata[2095]: Jan 24 00:55:40.568 INFO Fetch successful Jan 24 00:55:40.587934 unknown[2095]: wrote ssh authorized keys file for user: core Jan 24 00:55:40.604722 amazon-ssm-agent[2123]: Initializing new seelog logger Jan 24 00:55:40.610074 amazon-ssm-agent[2123]: New Seelog Logger Creation Complete Jan 24 00:55:40.610074 amazon-ssm-agent[2123]: 2026/01/24 00:55:40 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 24 00:55:40.610074 amazon-ssm-agent[2123]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
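
The resize recorded above grows the root filesystem from 553472 to 3587067 blocks of 4 KiB, i.e. from roughly 2.1 GiB to roughly 13.7 GiB. A short check of that arithmetic:

# Sanity-check the resize numbers from the log: block counts are 4 KiB blocks.
OLD_BLOCKS, NEW_BLOCKS, BLOCK = 553472, 3587067, 4096
print(f"before: {OLD_BLOCKS * BLOCK / 2**30:.2f} GiB")   # ~2.11 GiB
print(f"after:  {NEW_BLOCKS * BLOCK / 2**30:.2f} GiB")   # ~13.68 GiB
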
Jan 24 00:55:40.610074 amazon-ssm-agent[2123]: 2026/01/24 00:55:40 processing appconfig overrides Jan 24 00:55:40.610866 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 24 00:55:40.611601 amazon-ssm-agent[2123]: 2026/01/24 00:55:40 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 24 00:55:40.611688 amazon-ssm-agent[2123]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 24 00:55:40.611934 amazon-ssm-agent[2123]: 2026/01/24 00:55:40 processing appconfig overrides Jan 24 00:55:40.616291 amazon-ssm-agent[2123]: 2026-01-24 00:55:40 INFO Proxy environment variables: Jan 24 00:55:40.616700 amazon-ssm-agent[2123]: 2026/01/24 00:55:40 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 24 00:55:40.616700 amazon-ssm-agent[2123]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 24 00:55:40.616700 amazon-ssm-agent[2123]: 2026/01/24 00:55:40 processing appconfig overrides Jan 24 00:55:40.622506 amazon-ssm-agent[2123]: 2026/01/24 00:55:40 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 24 00:55:40.622634 amazon-ssm-agent[2123]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 24 00:55:40.624814 amazon-ssm-agent[2123]: 2026/01/24 00:55:40 processing appconfig overrides Jan 24 00:55:40.672085 update-ssh-keys[2165]: Updated "/home/core/.ssh/authorized_keys" Jan 24 00:55:40.673671 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 24 00:55:40.677020 systemd[1]: Finished sshkeys.service. Jan 24 00:55:40.717355 amazon-ssm-agent[2123]: 2026-01-24 00:55:40 INFO http_proxy: Jan 24 00:55:40.818098 amazon-ssm-agent[2123]: 2026-01-24 00:55:40 INFO no_proxy: Jan 24 00:55:40.858417 containerd[1985]: time="2026-01-24T00:55:40.858315986Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 24 00:55:40.916293 amazon-ssm-agent[2123]: 2026-01-24 00:55:40 INFO https_proxy: Jan 24 00:55:40.962495 containerd[1985]: time="2026-01-24T00:55:40.962431711Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 24 00:55:40.971499 containerd[1985]: time="2026-01-24T00:55:40.971444395Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.119-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 24 00:55:40.971499 containerd[1985]: time="2026-01-24T00:55:40.971495961Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 24 00:55:40.971670 containerd[1985]: time="2026-01-24T00:55:40.971521533Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 24 00:55:40.971766 containerd[1985]: time="2026-01-24T00:55:40.971718847Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 24 00:55:40.971766 containerd[1985]: time="2026-01-24T00:55:40.971747673Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 24 00:55:40.971865 containerd[1985]: time="2026-01-24T00:55:40.971839640Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 24 00:55:40.971865 containerd[1985]: time="2026-01-24T00:55:40.971859995Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 24 00:55:40.972118 containerd[1985]: time="2026-01-24T00:55:40.972090080Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 24 00:55:40.972167 containerd[1985]: time="2026-01-24T00:55:40.972119584Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 24 00:55:40.972167 containerd[1985]: time="2026-01-24T00:55:40.972140318Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 24 00:55:40.972167 containerd[1985]: time="2026-01-24T00:55:40.972159714Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 24 00:55:40.972818 containerd[1985]: time="2026-01-24T00:55:40.972274117Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 24 00:55:40.972818 containerd[1985]: time="2026-01-24T00:55:40.972527617Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 24 00:55:40.972818 containerd[1985]: time="2026-01-24T00:55:40.972680548Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 24 00:55:40.972818 containerd[1985]: time="2026-01-24T00:55:40.972701615Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 24 00:55:40.975579 containerd[1985]: time="2026-01-24T00:55:40.974825011Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 24 00:55:40.975579 containerd[1985]: time="2026-01-24T00:55:40.974946744Z" level=info msg="metadata content store policy set" policy=shared Jan 24 00:55:40.980785 containerd[1985]: time="2026-01-24T00:55:40.980748146Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 24 00:55:40.980884 containerd[1985]: time="2026-01-24T00:55:40.980837727Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 24 00:55:40.980884 containerd[1985]: time="2026-01-24T00:55:40.980864115Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 24 00:55:40.980956 containerd[1985]: time="2026-01-24T00:55:40.980929387Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 24 00:55:40.981011 containerd[1985]: time="2026-01-24T00:55:40.980954543Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 24 00:55:40.981169 containerd[1985]: time="2026-01-24T00:55:40.981146443Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." 
type=io.containerd.monitor.v1 Jan 24 00:55:40.983086 containerd[1985]: time="2026-01-24T00:55:40.983062914Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 24 00:55:40.983246 containerd[1985]: time="2026-01-24T00:55:40.983226505Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 24 00:55:40.983292 containerd[1985]: time="2026-01-24T00:55:40.983254267Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 24 00:55:40.983292 containerd[1985]: time="2026-01-24T00:55:40.983276372Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 24 00:55:40.983367 containerd[1985]: time="2026-01-24T00:55:40.983300761Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 24 00:55:40.983367 containerd[1985]: time="2026-01-24T00:55:40.983321341Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 24 00:55:40.983367 containerd[1985]: time="2026-01-24T00:55:40.983340432Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 24 00:55:40.983367 containerd[1985]: time="2026-01-24T00:55:40.983361747Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 24 00:55:40.983519 containerd[1985]: time="2026-01-24T00:55:40.983383072Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 24 00:55:40.983519 containerd[1985]: time="2026-01-24T00:55:40.983406491Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 24 00:55:40.983519 containerd[1985]: time="2026-01-24T00:55:40.983428684Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 24 00:55:40.983519 containerd[1985]: time="2026-01-24T00:55:40.983447595Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 24 00:55:40.983519 containerd[1985]: time="2026-01-24T00:55:40.983475791Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 24 00:55:40.983519 containerd[1985]: time="2026-01-24T00:55:40.983498613Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 24 00:55:40.983736 containerd[1985]: time="2026-01-24T00:55:40.983517850Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 24 00:55:40.983736 containerd[1985]: time="2026-01-24T00:55:40.983538184Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 24 00:55:40.983736 containerd[1985]: time="2026-01-24T00:55:40.983556546Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 24 00:55:40.983736 containerd[1985]: time="2026-01-24T00:55:40.983576433Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 24 00:55:40.983736 containerd[1985]: time="2026-01-24T00:55:40.983594162Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1 Jan 24 00:55:40.983736 containerd[1985]: time="2026-01-24T00:55:40.983615173Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 24 00:55:40.983736 containerd[1985]: time="2026-01-24T00:55:40.983635242Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 24 00:55:40.983736 containerd[1985]: time="2026-01-24T00:55:40.983661823Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 24 00:55:40.983736 containerd[1985]: time="2026-01-24T00:55:40.983681088Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 24 00:55:40.983736 containerd[1985]: time="2026-01-24T00:55:40.983700148Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 24 00:55:40.983736 containerd[1985]: time="2026-01-24T00:55:40.983718535Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 24 00:55:40.984875 containerd[1985]: time="2026-01-24T00:55:40.983744670Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 24 00:55:40.984875 containerd[1985]: time="2026-01-24T00:55:40.983775772Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 24 00:55:40.984875 containerd[1985]: time="2026-01-24T00:55:40.983814525Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 24 00:55:40.984875 containerd[1985]: time="2026-01-24T00:55:40.983832134Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 24 00:55:40.984875 containerd[1985]: time="2026-01-24T00:55:40.983903253Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 24 00:55:40.984875 containerd[1985]: time="2026-01-24T00:55:40.983931124Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 24 00:55:40.984875 containerd[1985]: time="2026-01-24T00:55:40.984031085Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 24 00:55:40.984875 containerd[1985]: time="2026-01-24T00:55:40.984050029Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 24 00:55:40.984875 containerd[1985]: time="2026-01-24T00:55:40.984064844Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 24 00:55:40.984875 containerd[1985]: time="2026-01-24T00:55:40.984098340Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 24 00:55:40.984875 containerd[1985]: time="2026-01-24T00:55:40.984118107Z" level=info msg="NRI interface is disabled by configuration." Jan 24 00:55:40.984875 containerd[1985]: time="2026-01-24T00:55:40.984133998Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 24 00:55:40.985313 containerd[1985]: time="2026-01-24T00:55:40.984546788Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 24 00:55:40.985313 containerd[1985]: time="2026-01-24T00:55:40.984627802Z" level=info msg="Connect containerd service" Jan 24 00:55:40.985313 containerd[1985]: time="2026-01-24T00:55:40.984684901Z" level=info msg="using legacy CRI server" Jan 24 00:55:40.985313 containerd[1985]: time="2026-01-24T00:55:40.984695687Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 24 00:55:40.985313 containerd[1985]: time="2026-01-24T00:55:40.984899417Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 24 00:55:40.990324 containerd[1985]: time="2026-01-24T00:55:40.989161176Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 24 00:55:40.990324 
containerd[1985]: time="2026-01-24T00:55:40.989302944Z" level=info msg="Start subscribing containerd event" Jan 24 00:55:40.990324 containerd[1985]: time="2026-01-24T00:55:40.989361605Z" level=info msg="Start recovering state" Jan 24 00:55:40.990324 containerd[1985]: time="2026-01-24T00:55:40.989441557Z" level=info msg="Start event monitor" Jan 24 00:55:40.990324 containerd[1985]: time="2026-01-24T00:55:40.989456311Z" level=info msg="Start snapshots syncer" Jan 24 00:55:40.990324 containerd[1985]: time="2026-01-24T00:55:40.989469876Z" level=info msg="Start cni network conf syncer for default" Jan 24 00:55:40.990324 containerd[1985]: time="2026-01-24T00:55:40.989482265Z" level=info msg="Start streaming server" Jan 24 00:55:40.990324 containerd[1985]: time="2026-01-24T00:55:40.990082242Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 24 00:55:40.990324 containerd[1985]: time="2026-01-24T00:55:40.990151463Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 24 00:55:40.992956 sshd_keygen[2012]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 24 00:55:40.994596 containerd[1985]: time="2026-01-24T00:55:40.993876579Z" level=info msg="containerd successfully booted in 0.138538s" Jan 24 00:55:40.996940 systemd[1]: Started containerd.service - containerd container runtime. Jan 24 00:55:41.015095 amazon-ssm-agent[2123]: 2026-01-24 00:55:40 INFO Checking if agent identity type OnPrem can be assumed Jan 24 00:55:41.049123 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 24 00:55:41.058445 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 24 00:55:41.070134 systemd[1]: issuegen.service: Deactivated successfully. Jan 24 00:55:41.070395 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 24 00:55:41.080309 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 24 00:55:41.099188 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 24 00:55:41.111200 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 24 00:55:41.113362 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 24 00:55:41.114261 systemd[1]: Reached target getty.target - Login Prompts. Jan 24 00:55:41.114927 amazon-ssm-agent[2123]: 2026-01-24 00:55:40 INFO Checking if agent identity type EC2 can be assumed Jan 24 00:55:41.214225 amazon-ssm-agent[2123]: 2026-01-24 00:55:40 INFO Agent will take identity from EC2 Jan 24 00:55:41.313504 amazon-ssm-agent[2123]: 2026-01-24 00:55:40 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 24 00:55:41.413121 amazon-ssm-agent[2123]: 2026-01-24 00:55:40 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 24 00:55:41.478635 amazon-ssm-agent[2123]: 2026-01-24 00:55:40 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 24 00:55:41.478635 amazon-ssm-agent[2123]: 2026-01-24 00:55:40 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Jan 24 00:55:41.478635 amazon-ssm-agent[2123]: 2026-01-24 00:55:40 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Jan 24 00:55:41.478635 amazon-ssm-agent[2123]: 2026-01-24 00:55:40 INFO [amazon-ssm-agent] Starting Core Agent Jan 24 00:55:41.478635 amazon-ssm-agent[2123]: 2026-01-24 00:55:40 INFO [amazon-ssm-agent] registrar detected. 
Attempting registration Jan 24 00:55:41.478909 amazon-ssm-agent[2123]: 2026-01-24 00:55:40 INFO [Registrar] Starting registrar module Jan 24 00:55:41.478909 amazon-ssm-agent[2123]: 2026-01-24 00:55:40 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Jan 24 00:55:41.478909 amazon-ssm-agent[2123]: 2026-01-24 00:55:41 INFO [EC2Identity] EC2 registration was successful. Jan 24 00:55:41.478909 amazon-ssm-agent[2123]: 2026-01-24 00:55:41 INFO [CredentialRefresher] credentialRefresher has started Jan 24 00:55:41.478909 amazon-ssm-agent[2123]: 2026-01-24 00:55:41 INFO [CredentialRefresher] Starting credentials refresher loop Jan 24 00:55:41.478909 amazon-ssm-agent[2123]: 2026-01-24 00:55:41 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jan 24 00:55:41.512519 amazon-ssm-agent[2123]: 2026-01-24 00:55:41 INFO [CredentialRefresher] Next credential rotation will be in 30.1916613947 minutes Jan 24 00:55:42.492537 amazon-ssm-agent[2123]: 2026-01-24 00:55:42 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jan 24 00:55:42.592946 amazon-ssm-agent[2123]: 2026-01-24 00:55:42 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2194) started Jan 24 00:55:42.693445 amazon-ssm-agent[2123]: 2026-01-24 00:55:42 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jan 24 00:55:43.003851 ntpd[1962]: Listen normally on 6 eth0 [fe80::47e:e5ff:fe07:8a1b%2]:123 Jan 24 00:55:43.004262 ntpd[1962]: 24 Jan 00:55:43 ntpd[1962]: Listen normally on 6 eth0 [fe80::47e:e5ff:fe07:8a1b%2]:123 Jan 24 00:55:45.245898 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:55:45.246847 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 24 00:55:45.248175 systemd[1]: Startup finished in 584ms (kernel) + 10.810s (initrd) + 9.782s (userspace) = 21.177s. Jan 24 00:55:45.252482 (kubelet)[2210]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 24 00:55:46.643420 kubelet[2210]: E0124 00:55:46.643345 2210 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 24 00:55:46.646163 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 24 00:55:46.646326 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 24 00:55:47.991209 systemd-resolved[1916]: Clock change detected. Flushing caches. Jan 24 00:55:50.519593 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 24 00:55:50.529987 systemd[1]: Started sshd@0-172.31.30.66:22-4.153.228.146:46120.service - OpenSSH per-connection server daemon (4.153.228.146:46120). Jan 24 00:55:51.033342 sshd[2222]: Accepted publickey for core from 4.153.228.146 port 46120 ssh2: RSA SHA256:0D12HA53sI4/9PpTTH/bXSI7GIU12SGWaAG4pdts0Tg Jan 24 00:55:51.035489 sshd[2222]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:55:51.045621 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
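
The "Accepted publickey ... SHA256:0D12HA..." line above prints the standard OpenSSH fingerprint: the unpadded base64 encoding of the SHA-256 digest of the raw public-key blob. A small sketch of recomputing such a fingerprint from an authorized_keys entry (the key string in the comment is a placeholder, not this host's key):

# Recompute an OpenSSH-style fingerprint (SHA256:<base64, no padding>) from
# the base64 key blob in an authorized_keys line.
import base64
import hashlib

def ssh_fingerprint(authorized_keys_line: str) -> str:
    blob_b64 = authorized_keys_line.split()[1]      # "ssh-rsa AAAA... comment"
    blob = base64.b64decode(blob_b64)
    digest = hashlib.sha256(blob).digest()
    return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

# Example with a placeholder key (not the key from this host):
# print(ssh_fingerprint("ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAI... core@example"))
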
Jan 24 00:55:51.057913 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 24 00:55:51.062093 systemd-logind[1967]: New session 1 of user core. Jan 24 00:55:51.074213 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 24 00:55:51.079847 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 24 00:55:51.091242 (systemd)[2226]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 24 00:55:51.211576 systemd[2226]: Queued start job for default target default.target. Jan 24 00:55:51.226983 systemd[2226]: Created slice app.slice - User Application Slice. Jan 24 00:55:51.227031 systemd[2226]: Reached target paths.target - Paths. Jan 24 00:55:51.227052 systemd[2226]: Reached target timers.target - Timers. Jan 24 00:55:51.228613 systemd[2226]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 24 00:55:51.248039 systemd[2226]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 24 00:55:51.248175 systemd[2226]: Reached target sockets.target - Sockets. Jan 24 00:55:51.248190 systemd[2226]: Reached target basic.target - Basic System. Jan 24 00:55:51.248233 systemd[2226]: Reached target default.target - Main User Target. Jan 24 00:55:51.248263 systemd[2226]: Startup finished in 149ms. Jan 24 00:55:51.248553 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 24 00:55:51.258733 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 24 00:55:51.623877 systemd[1]: Started sshd@1-172.31.30.66:22-4.153.228.146:46126.service - OpenSSH per-connection server daemon (4.153.228.146:46126). Jan 24 00:55:52.102680 sshd[2237]: Accepted publickey for core from 4.153.228.146 port 46126 ssh2: RSA SHA256:0D12HA53sI4/9PpTTH/bXSI7GIU12SGWaAG4pdts0Tg Jan 24 00:55:52.104716 sshd[2237]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:55:52.109373 systemd-logind[1967]: New session 2 of user core. Jan 24 00:55:52.116773 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 24 00:55:52.452715 sshd[2237]: pam_unix(sshd:session): session closed for user core Jan 24 00:55:52.455168 systemd[1]: sshd@1-172.31.30.66:22-4.153.228.146:46126.service: Deactivated successfully. Jan 24 00:55:52.456969 systemd[1]: session-2.scope: Deactivated successfully. Jan 24 00:55:52.458229 systemd-logind[1967]: Session 2 logged out. Waiting for processes to exit. Jan 24 00:55:52.459462 systemd-logind[1967]: Removed session 2. Jan 24 00:55:52.540917 systemd[1]: Started sshd@2-172.31.30.66:22-4.153.228.146:46130.service - OpenSSH per-connection server daemon (4.153.228.146:46130). Jan 24 00:55:53.023041 sshd[2244]: Accepted publickey for core from 4.153.228.146 port 46130 ssh2: RSA SHA256:0D12HA53sI4/9PpTTH/bXSI7GIU12SGWaAG4pdts0Tg Jan 24 00:55:53.024603 sshd[2244]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:55:53.028965 systemd-logind[1967]: New session 3 of user core. Jan 24 00:55:53.039771 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 24 00:55:53.367076 sshd[2244]: pam_unix(sshd:session): session closed for user core Jan 24 00:55:53.370580 systemd[1]: sshd@2-172.31.30.66:22-4.153.228.146:46130.service: Deactivated successfully. Jan 24 00:55:53.372285 systemd[1]: session-3.scope: Deactivated successfully. Jan 24 00:55:53.373435 systemd-logind[1967]: Session 3 logged out. Waiting for processes to exit. 
Jan 24 00:55:53.374821 systemd-logind[1967]: Removed session 3. Jan 24 00:55:53.465684 systemd[1]: Started sshd@3-172.31.30.66:22-4.153.228.146:46142.service - OpenSSH per-connection server daemon (4.153.228.146:46142). Jan 24 00:55:53.985853 sshd[2251]: Accepted publickey for core from 4.153.228.146 port 46142 ssh2: RSA SHA256:0D12HA53sI4/9PpTTH/bXSI7GIU12SGWaAG4pdts0Tg Jan 24 00:55:53.987545 sshd[2251]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:55:53.993164 systemd-logind[1967]: New session 4 of user core. Jan 24 00:55:54.012535 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 24 00:55:54.360740 sshd[2251]: pam_unix(sshd:session): session closed for user core Jan 24 00:55:54.363848 systemd[1]: sshd@3-172.31.30.66:22-4.153.228.146:46142.service: Deactivated successfully. Jan 24 00:55:54.365463 systemd[1]: session-4.scope: Deactivated successfully. Jan 24 00:55:54.366582 systemd-logind[1967]: Session 4 logged out. Waiting for processes to exit. Jan 24 00:55:54.367676 systemd-logind[1967]: Removed session 4. Jan 24 00:55:54.452264 systemd[1]: Started sshd@4-172.31.30.66:22-4.153.228.146:46154.service - OpenSSH per-connection server daemon (4.153.228.146:46154). Jan 24 00:55:54.971558 sshd[2258]: Accepted publickey for core from 4.153.228.146 port 46154 ssh2: RSA SHA256:0D12HA53sI4/9PpTTH/bXSI7GIU12SGWaAG4pdts0Tg Jan 24 00:55:54.973181 sshd[2258]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:55:54.978276 systemd-logind[1967]: New session 5 of user core. Jan 24 00:55:54.984832 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 24 00:55:55.353395 sudo[2261]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 24 00:55:55.353816 sudo[2261]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 24 00:55:55.370620 sudo[2261]: pam_unix(sudo:session): session closed for user root Jan 24 00:55:55.453672 sshd[2258]: pam_unix(sshd:session): session closed for user core Jan 24 00:55:55.457155 systemd[1]: sshd@4-172.31.30.66:22-4.153.228.146:46154.service: Deactivated successfully. Jan 24 00:55:55.458842 systemd[1]: session-5.scope: Deactivated successfully. Jan 24 00:55:55.460181 systemd-logind[1967]: Session 5 logged out. Waiting for processes to exit. Jan 24 00:55:55.461701 systemd-logind[1967]: Removed session 5. Jan 24 00:55:55.537544 systemd[1]: Started sshd@5-172.31.30.66:22-4.153.228.146:54230.service - OpenSSH per-connection server daemon (4.153.228.146:54230). Jan 24 00:55:56.028279 sshd[2266]: Accepted publickey for core from 4.153.228.146 port 54230 ssh2: RSA SHA256:0D12HA53sI4/9PpTTH/bXSI7GIU12SGWaAG4pdts0Tg Jan 24 00:55:56.030277 sshd[2266]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:55:56.036581 systemd-logind[1967]: New session 6 of user core. Jan 24 00:55:56.046783 systemd[1]: Started session-6.scope - Session 6 of User core. 
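
The first sudo entry above runs /usr/sbin/setenforce 1, switching SELinux into enforcing mode. The resulting state can be read back through selinuxfs; a minimal sketch, assuming selinuxfs is mounted at the usual /sys/fs/selinux:

# Read the current SELinux state the way `getenforce` does, via selinuxfs.
# "1" means enforcing, "0" means permissive; a missing node suggests SELinux
# is disabled or selinuxfs is not mounted at this path.
def selinux_mode(path="/sys/fs/selinux/enforce"):
    try:
        with open(path) as f:
            return "Enforcing" if f.read().strip() == "1" else "Permissive"
    except FileNotFoundError:
        return "Disabled"

if __name__ == "__main__":
    print(selinux_mode())
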
Jan 24 00:55:56.306591 sudo[2270]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 24 00:55:56.306910 sudo[2270]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 24 00:55:56.311095 sudo[2270]: pam_unix(sudo:session): session closed for user root Jan 24 00:55:56.316916 sudo[2269]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 24 00:55:56.317317 sudo[2269]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 24 00:55:56.331900 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 24 00:55:56.334633 auditctl[2273]: No rules Jan 24 00:55:56.335046 systemd[1]: audit-rules.service: Deactivated successfully. Jan 24 00:55:56.335261 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 24 00:55:56.341979 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 24 00:55:56.369050 augenrules[2291]: No rules Jan 24 00:55:56.370549 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 24 00:55:56.372265 sudo[2269]: pam_unix(sudo:session): session closed for user root Jan 24 00:55:56.450308 sshd[2266]: pam_unix(sshd:session): session closed for user core Jan 24 00:55:56.453991 systemd[1]: sshd@5-172.31.30.66:22-4.153.228.146:54230.service: Deactivated successfully. Jan 24 00:55:56.455493 systemd[1]: session-6.scope: Deactivated successfully. Jan 24 00:55:56.456182 systemd-logind[1967]: Session 6 logged out. Waiting for processes to exit. Jan 24 00:55:56.457143 systemd-logind[1967]: Removed session 6. Jan 24 00:55:56.535388 systemd[1]: Started sshd@6-172.31.30.66:22-4.153.228.146:54242.service - OpenSSH per-connection server daemon (4.153.228.146:54242). Jan 24 00:55:57.017960 sshd[2299]: Accepted publickey for core from 4.153.228.146 port 54242 ssh2: RSA SHA256:0D12HA53sI4/9PpTTH/bXSI7GIU12SGWaAG4pdts0Tg Jan 24 00:55:57.019762 sshd[2299]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:55:57.025638 systemd-logind[1967]: New session 7 of user core. Jan 24 00:55:57.030742 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 24 00:55:57.291471 sudo[2302]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 24 00:55:57.291902 sudo[2302]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 24 00:55:57.883709 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 24 00:55:57.892879 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:55:58.091657 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 24 00:55:58.091771 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 24 00:55:58.092221 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:55:58.099917 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:55:58.135244 systemd[1]: Reloading requested from client PID 2339 ('systemctl') (unit session-7.scope)... Jan 24 00:55:58.135262 systemd[1]: Reloading... Jan 24 00:55:58.278540 zram_generator::config[2379]: No configuration found. Jan 24 00:55:58.423491 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
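
The docker.socket warning above is systemd rewriting ListenStream=/var/run/docker.sock to /run/docker.sock, since /var/run is only a compatibility symlink to /run on current systems. A quick illustration of that resolution (the results in the comments are the typical case, not guaranteed):

# Show why systemd rewrites /var/run/... paths: /var/run is normally a
# symlink to /run, so both names resolve to the same socket path.
import os

print(os.path.islink("/var/run"))                  # typically True
print(os.path.realpath("/var/run/docker.sock"))    # typically /run/docker.sock
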
Jan 24 00:55:58.509534 systemd[1]: Reloading finished in 373 ms. Jan 24 00:55:58.562449 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:55:58.576160 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:55:58.577027 systemd[1]: kubelet.service: Deactivated successfully. Jan 24 00:55:58.577361 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:55:58.579285 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:55:58.838180 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:55:58.854039 (kubelet)[2444]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 24 00:55:58.896769 kubelet[2444]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 24 00:55:58.896769 kubelet[2444]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 24 00:55:58.897153 kubelet[2444]: I0124 00:55:58.896828 2444 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 24 00:55:59.292000 kubelet[2444]: I0124 00:55:59.291857 2444 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Jan 24 00:55:59.292000 kubelet[2444]: I0124 00:55:59.291885 2444 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 24 00:55:59.292000 kubelet[2444]: I0124 00:55:59.291913 2444 watchdog_linux.go:95] "Systemd watchdog is not enabled" Jan 24 00:55:59.292000 kubelet[2444]: I0124 00:55:59.291925 2444 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 24 00:55:59.292259 kubelet[2444]: I0124 00:55:59.292239 2444 server.go:956] "Client rotation is on, will bootstrap in background" Jan 24 00:55:59.297793 kubelet[2444]: I0124 00:55:59.297461 2444 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 24 00:55:59.301415 kubelet[2444]: E0124 00:55:59.301361 2444 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 24 00:55:59.301552 kubelet[2444]: I0124 00:55:59.301441 2444 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Jan 24 00:55:59.306893 kubelet[2444]: I0124 00:55:59.305959 2444 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Jan 24 00:55:59.306893 kubelet[2444]: I0124 00:55:59.306236 2444 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 24 00:55:59.306893 kubelet[2444]: I0124 00:55:59.306266 2444 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172.31.30.66","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 24 00:55:59.306893 kubelet[2444]: I0124 00:55:59.306608 2444 topology_manager.go:138] "Creating topology manager with none policy" Jan 24 00:55:59.307138 kubelet[2444]: I0124 00:55:59.306624 2444 container_manager_linux.go:306] "Creating device plugin manager" Jan 24 00:55:59.307138 kubelet[2444]: I0124 00:55:59.306746 2444 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Jan 24 00:55:59.312299 kubelet[2444]: I0124 00:55:59.312259 2444 state_mem.go:36] "Initialized new in-memory state store" Jan 24 00:55:59.314643 kubelet[2444]: I0124 00:55:59.314010 2444 kubelet.go:475] "Attempting to sync node with API server" Jan 24 00:55:59.314643 kubelet[2444]: I0124 00:55:59.314039 2444 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 24 00:55:59.314643 kubelet[2444]: I0124 00:55:59.314064 2444 kubelet.go:387] "Adding apiserver pod source" Jan 24 00:55:59.314643 kubelet[2444]: I0124 00:55:59.314080 2444 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 24 00:55:59.314643 kubelet[2444]: E0124 00:55:59.314524 2444 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:55:59.314643 kubelet[2444]: E0124 00:55:59.314571 2444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:55:59.316628 kubelet[2444]: I0124 00:55:59.316602 2444 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 24 00:55:59.317193 kubelet[2444]: I0124 00:55:59.317155 2444 kubelet.go:940] "Not starting ClusterTrustBundle 
informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 24 00:55:59.317193 kubelet[2444]: I0124 00:55:59.317191 2444 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Jan 24 00:55:59.317360 kubelet[2444]: W0124 00:55:59.317286 2444 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 24 00:55:59.320135 kubelet[2444]: I0124 00:55:59.320118 2444 server.go:1262] "Started kubelet" Jan 24 00:55:59.321829 kubelet[2444]: I0124 00:55:59.321236 2444 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 24 00:55:59.326865 kubelet[2444]: E0124 00:55:59.326840 2444 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes \"172.31.30.66\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 24 00:55:59.327212 kubelet[2444]: E0124 00:55:59.327185 2444 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 24 00:55:59.329500 kubelet[2444]: E0124 00:55:59.327882 2444 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.31.30.66.188d84b923278cd7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.31.30.66,UID:172.31.30.66,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172.31.30.66,},FirstTimestamp:2026-01-24 00:55:59.320083671 +0000 UTC m=+0.462077306,LastTimestamp:2026-01-24 00:55:59.320083671 +0000 UTC m=+0.462077306,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.30.66,}" Jan 24 00:55:59.333514 kubelet[2444]: I0124 00:55:59.333470 2444 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 24 00:55:59.334280 kubelet[2444]: I0124 00:55:59.334252 2444 server.go:310] "Adding debug handlers to kubelet server" Jan 24 00:55:59.339287 kubelet[2444]: I0124 00:55:59.337703 2444 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 24 00:55:59.339287 kubelet[2444]: I0124 00:55:59.337749 2444 server_v1.go:49] "podresources" method="list" useActivePods=true Jan 24 00:55:59.339287 kubelet[2444]: I0124 00:55:59.337895 2444 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 24 00:55:59.339287 kubelet[2444]: I0124 00:55:59.338139 2444 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 24 00:55:59.339287 kubelet[2444]: I0124 00:55:59.338267 2444 volume_manager.go:313] "Starting Kubelet Volume Manager" Jan 24 00:55:59.339287 kubelet[2444]: E0124 00:55:59.338471 2444 kubelet_node_status.go:404] "Error getting the current node from 
lister" err="node \"172.31.30.66\" not found" Jan 24 00:55:59.339287 kubelet[2444]: I0124 00:55:59.338680 2444 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 24 00:55:59.339287 kubelet[2444]: I0124 00:55:59.338749 2444 reconciler.go:29] "Reconciler: start to sync state" Jan 24 00:55:59.339795 kubelet[2444]: E0124 00:55:59.339776 2444 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 24 00:55:59.340149 kubelet[2444]: E0124 00:55:59.340060 2444 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"172.31.30.66\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Jan 24 00:55:59.340555 kubelet[2444]: E0124 00:55:59.340539 2444 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 24 00:55:59.340833 kubelet[2444]: I0124 00:55:59.340814 2444 factory.go:223] Registration of the systemd container factory successfully Jan 24 00:55:59.340996 kubelet[2444]: I0124 00:55:59.340981 2444 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 24 00:55:59.342316 kubelet[2444]: I0124 00:55:59.342296 2444 factory.go:223] Registration of the containerd container factory successfully Jan 24 00:55:59.348355 kubelet[2444]: E0124 00:55:59.347696 2444 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.31.30.66.188d84b9245f72c4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.31.30.66,UID:172.31.30.66,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:172.31.30.66,},FirstTimestamp:2026-01-24 00:55:59.340524228 +0000 UTC m=+0.482517857,LastTimestamp:2026-01-24 00:55:59.340524228 +0000 UTC m=+0.482517857,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.30.66,}" Jan 24 00:55:59.350609 kubelet[2444]: I0124 00:55:59.350375 2444 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 24 00:55:59.350609 kubelet[2444]: I0124 00:55:59.350389 2444 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 24 00:55:59.350609 kubelet[2444]: I0124 00:55:59.350404 2444 state_mem.go:36] "Initialized new in-memory state store" Jan 24 00:55:59.354343 kubelet[2444]: I0124 00:55:59.354053 2444 policy_none.go:49] "None policy: Start" Jan 24 00:55:59.354343 kubelet[2444]: I0124 00:55:59.354078 2444 memory_manager.go:187] "Starting memorymanager" policy="None" Jan 24 00:55:59.354343 kubelet[2444]: I0124 00:55:59.354093 2444 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Jan 24 00:55:59.356990 kubelet[2444]: E0124 00:55:59.356738 2444 
event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.31.30.66.188d84b924e23a0e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.31.30.66,UID:172.31.30.66,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node 172.31.30.66 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:172.31.30.66,},FirstTimestamp:2026-01-24 00:55:59.349094926 +0000 UTC m=+0.491088539,LastTimestamp:2026-01-24 00:55:59.349094926 +0000 UTC m=+0.491088539,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.30.66,}" Jan 24 00:55:59.358562 kubelet[2444]: E0124 00:55:59.357838 2444 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.31.30.66.188d84b924e27253 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.31.30.66,UID:172.31.30.66,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node 172.31.30.66 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:172.31.30.66,},FirstTimestamp:2026-01-24 00:55:59.349109331 +0000 UTC m=+0.491102944,LastTimestamp:2026-01-24 00:55:59.349109331 +0000 UTC m=+0.491102944,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.30.66,}" Jan 24 00:55:59.361899 kubelet[2444]: I0124 00:55:59.361879 2444 policy_none.go:47] "Start" Jan 24 00:55:59.364881 kubelet[2444]: E0124 00:55:59.364390 2444 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.31.30.66.188d84b924e280e7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.31.30.66,UID:172.31.30.66,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node 172.31.30.66 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:172.31.30.66,},FirstTimestamp:2026-01-24 00:55:59.349113063 +0000 UTC m=+0.491106676,LastTimestamp:2026-01-24 00:55:59.349113063 +0000 UTC m=+0.491106676,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.30.66,}" Jan 24 00:55:59.370259 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 24 00:55:59.391172 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 24 00:55:59.398131 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
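The repeated "User \"system:anonymous\" cannot list/create ..." rejections above are RBAC denials: at this point in startup the kubelet is reaching the API server before its client credentials are usable, so its informers and event broadcaster are treated as anonymous and refused. Below is a minimal client-go sketch of the same kind of list call, showing how such a 403 surfaces as a Forbidden error; the kubeconfig path is an assumption for illustration and this is not the kubelet's own code.

```go
// list_nodes.go - minimal sketch of the node list call a reflector performs;
// an RBAC 403 surfaces as a Forbidden error, the same class of failure as
// the "Failed to watch" entries in the log above.
package main

import (
	"context"
	"fmt"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig path; the kubelet itself uses its bootstrap
	// and rotated certificates rather than a file like this.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/kubelet/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	_, err = cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if apierrors.IsForbidden(err) {
		fmt.Println("forbidden: the server rejected the caller, as in the reflector errors above")
		return
	}
	if err != nil {
		panic(err)
	}
	fmt.Println("node list succeeded")
}
```

Once the certificate rotation noted later in the log takes effect, the same calls are made with the node's own identity and these denials stop.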
Jan 24 00:55:59.402761 kubelet[2444]: E0124 00:55:59.402718 2444 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 24 00:55:59.402992 kubelet[2444]: I0124 00:55:59.402968 2444 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 24 00:55:59.403063 kubelet[2444]: I0124 00:55:59.402988 2444 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 24 00:55:59.403752 kubelet[2444]: I0124 00:55:59.403722 2444 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 24 00:55:59.407137 kubelet[2444]: E0124 00:55:59.407105 2444 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 24 00:55:59.407228 kubelet[2444]: E0124 00:55:59.407160 2444 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.31.30.66\" not found" Jan 24 00:55:59.483259 kubelet[2444]: I0124 00:55:59.483093 2444 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Jan 24 00:55:59.485113 kubelet[2444]: I0124 00:55:59.484651 2444 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Jan 24 00:55:59.485113 kubelet[2444]: I0124 00:55:59.484677 2444 status_manager.go:244] "Starting to sync pod status with apiserver" Jan 24 00:55:59.485113 kubelet[2444]: I0124 00:55:59.484701 2444 kubelet.go:2427] "Starting kubelet main sync loop" Jan 24 00:55:59.485113 kubelet[2444]: E0124 00:55:59.484790 2444 kubelet.go:2451] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Jan 24 00:55:59.509031 kubelet[2444]: I0124 00:55:59.508981 2444 kubelet_node_status.go:75] "Attempting to register node" node="172.31.30.66" Jan 24 00:55:59.514930 kubelet[2444]: I0124 00:55:59.514893 2444 kubelet_node_status.go:78] "Successfully registered node" node="172.31.30.66" Jan 24 00:55:59.514930 kubelet[2444]: E0124 00:55:59.514932 2444 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"172.31.30.66\": node \"172.31.30.66\" not found" Jan 24 00:55:59.536200 kubelet[2444]: E0124 00:55:59.536169 2444 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172.31.30.66\" not found" Jan 24 00:55:59.637348 kubelet[2444]: E0124 00:55:59.637222 2444 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172.31.30.66\" not found" Jan 24 00:55:59.737984 kubelet[2444]: E0124 00:55:59.737940 2444 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172.31.30.66\" not found" Jan 24 00:55:59.839152 kubelet[2444]: E0124 00:55:59.839113 2444 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172.31.30.66\" not found" Jan 24 00:55:59.939868 kubelet[2444]: E0124 00:55:59.939746 2444 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172.31.30.66\" not found" Jan 24 00:56:00.040447 kubelet[2444]: E0124 00:56:00.040391 2444 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172.31.30.66\" not found" Jan 24 00:56:00.141389 kubelet[2444]: E0124 00:56:00.141335 2444 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172.31.30.66\" not found" Jan 24 00:56:00.242387 kubelet[2444]: E0124 00:56:00.242265 2444 
kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172.31.30.66\" not found" Jan 24 00:56:00.294851 kubelet[2444]: I0124 00:56:00.294803 2444 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 24 00:56:00.295020 kubelet[2444]: I0124 00:56:00.294995 2444 reflector.go:568] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" err="very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received" Jan 24 00:56:00.315269 kubelet[2444]: E0124 00:56:00.315200 2444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:56:00.342673 kubelet[2444]: E0124 00:56:00.342615 2444 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172.31.30.66\" not found" Jan 24 00:56:00.357089 sudo[2302]: pam_unix(sudo:session): session closed for user root Jan 24 00:56:00.433561 sshd[2299]: pam_unix(sshd:session): session closed for user core Jan 24 00:56:00.437451 systemd[1]: sshd@6-172.31.30.66:22-4.153.228.146:54242.service: Deactivated successfully. Jan 24 00:56:00.439106 systemd[1]: session-7.scope: Deactivated successfully. Jan 24 00:56:00.440122 systemd-logind[1967]: Session 7 logged out. Waiting for processes to exit. Jan 24 00:56:00.441693 systemd-logind[1967]: Removed session 7. Jan 24 00:56:00.443162 kubelet[2444]: E0124 00:56:00.443064 2444 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172.31.30.66\" not found" Jan 24 00:56:00.544629 kubelet[2444]: I0124 00:56:00.544524 2444 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Jan 24 00:56:00.545412 kubelet[2444]: I0124 00:56:00.545186 2444 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Jan 24 00:56:00.545466 containerd[1985]: time="2026-01-24T00:56:00.544891382Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 24 00:56:01.315405 kubelet[2444]: I0124 00:56:01.315335 2444 apiserver.go:52] "Watching apiserver" Jan 24 00:56:01.315405 kubelet[2444]: E0124 00:56:01.315356 2444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:56:01.351109 kubelet[2444]: E0124 00:56:01.351059 2444 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2x7tg" podUID="46fbf850-4138-4783-94c3-ae492c179748" Jan 24 00:56:01.370633 systemd[1]: Created slice kubepods-besteffort-pod8ec96274_9ae9_407e_b53b_b5f4429ba00b.slice - libcontainer container kubepods-besteffort-pod8ec96274_9ae9_407e_b53b_b5f4429ba00b.slice. Jan 24 00:56:01.413104 systemd[1]: Created slice kubepods-besteffort-podc6623f57_8a3f_4d6a_8dba_5fd45c29df38.slice - libcontainer container kubepods-besteffort-podc6623f57_8a3f_4d6a_8dba_5fd45c29df38.slice. 
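The "cni plugin not initialized" and "No cni config template is specified, wait for other system components to drop the config" entries above describe the same condition: the runtime reports NetworkReady=false until a CNI network configuration appears in its conf directory (conventionally /etc/cni/net.d for containerd), which in this cluster Calico installs later. The sketch below is illustrative only, a stand-in conflist using the standard bridge/host-local plugins and the pod CIDR advertised in the log (192.168.1.0/24); it is not the configuration Calico actually generates.

```go
// write_cni_conflist.go - illustrative: write a minimal CNI conflist of the
// kind a network add-on drops into the runtime's conf dir so NetworkReady
// flips to true. Calico produces its own, different configuration.
package main

import "os"

const conflist = `{
  "cniVersion": "0.3.1",
  "name": "example-net",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "192.168.1.0/24"
      }
    }
  ]
}
`

func main() {
	// /etc/cni/net.d is containerd's conventional CNI conf dir; adjust the
	// path if the runtime is configured differently.
	if err := os.WriteFile("/etc/cni/net.d/10-example.conflist", []byte(conflist), 0o644); err != nil {
		panic(err)
	}
}
```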
Jan 24 00:56:01.439851 kubelet[2444]: I0124 00:56:01.439800 2444 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 24 00:56:01.453340 kubelet[2444]: I0124 00:56:01.452121 2444 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/8ec96274-9ae9-407e-b53b-b5f4429ba00b-flexvol-driver-host\") pod \"calico-node-rwvk2\" (UID: \"8ec96274-9ae9-407e-b53b-b5f4429ba00b\") " pod="calico-system/calico-node-rwvk2" Jan 24 00:56:01.453340 kubelet[2444]: I0124 00:56:01.452244 2444 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8ec96274-9ae9-407e-b53b-b5f4429ba00b-lib-modules\") pod \"calico-node-rwvk2\" (UID: \"8ec96274-9ae9-407e-b53b-b5f4429ba00b\") " pod="calico-system/calico-node-rwvk2" Jan 24 00:56:01.453340 kubelet[2444]: I0124 00:56:01.452332 2444 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/8ec96274-9ae9-407e-b53b-b5f4429ba00b-policysync\") pod \"calico-node-rwvk2\" (UID: \"8ec96274-9ae9-407e-b53b-b5f4429ba00b\") " pod="calico-system/calico-node-rwvk2" Jan 24 00:56:01.453340 kubelet[2444]: I0124 00:56:01.452462 2444 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8ec96274-9ae9-407e-b53b-b5f4429ba00b-var-lib-calico\") pod \"calico-node-rwvk2\" (UID: \"8ec96274-9ae9-407e-b53b-b5f4429ba00b\") " pod="calico-system/calico-node-rwvk2" Jan 24 00:56:01.453340 kubelet[2444]: I0124 00:56:01.452635 2444 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/8ec96274-9ae9-407e-b53b-b5f4429ba00b-var-run-calico\") pod \"calico-node-rwvk2\" (UID: \"8ec96274-9ae9-407e-b53b-b5f4429ba00b\") " pod="calico-system/calico-node-rwvk2" Jan 24 00:56:01.453842 kubelet[2444]: I0124 00:56:01.452728 2444 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/46fbf850-4138-4783-94c3-ae492c179748-varrun\") pod \"csi-node-driver-2x7tg\" (UID: \"46fbf850-4138-4783-94c3-ae492c179748\") " pod="calico-system/csi-node-driver-2x7tg" Jan 24 00:56:01.453842 kubelet[2444]: I0124 00:56:01.452871 2444 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c6623f57-8a3f-4d6a-8dba-5fd45c29df38-kube-proxy\") pod \"kube-proxy-fssg2\" (UID: \"c6623f57-8a3f-4d6a-8dba-5fd45c29df38\") " pod="kube-system/kube-proxy-fssg2" Jan 24 00:56:01.453842 kubelet[2444]: I0124 00:56:01.453200 2444 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c6623f57-8a3f-4d6a-8dba-5fd45c29df38-lib-modules\") pod \"kube-proxy-fssg2\" (UID: \"c6623f57-8a3f-4d6a-8dba-5fd45c29df38\") " pod="kube-system/kube-proxy-fssg2" Jan 24 00:56:01.453842 kubelet[2444]: I0124 00:56:01.453276 2444 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/8ec96274-9ae9-407e-b53b-b5f4429ba00b-cni-bin-dir\") pod \"calico-node-rwvk2\" (UID: 
\"8ec96274-9ae9-407e-b53b-b5f4429ba00b\") " pod="calico-system/calico-node-rwvk2" Jan 24 00:56:01.453842 kubelet[2444]: I0124 00:56:01.453304 2444 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/8ec96274-9ae9-407e-b53b-b5f4429ba00b-cni-net-dir\") pod \"calico-node-rwvk2\" (UID: \"8ec96274-9ae9-407e-b53b-b5f4429ba00b\") " pod="calico-system/calico-node-rwvk2" Jan 24 00:56:01.461806 kubelet[2444]: I0124 00:56:01.453338 2444 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/8ec96274-9ae9-407e-b53b-b5f4429ba00b-node-certs\") pod \"calico-node-rwvk2\" (UID: \"8ec96274-9ae9-407e-b53b-b5f4429ba00b\") " pod="calico-system/calico-node-rwvk2" Jan 24 00:56:01.461806 kubelet[2444]: I0124 00:56:01.453383 2444 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/46fbf850-4138-4783-94c3-ae492c179748-kubelet-dir\") pod \"csi-node-driver-2x7tg\" (UID: \"46fbf850-4138-4783-94c3-ae492c179748\") " pod="calico-system/csi-node-driver-2x7tg" Jan 24 00:56:01.461806 kubelet[2444]: I0124 00:56:01.453411 2444 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9kml\" (UniqueName: \"kubernetes.io/projected/46fbf850-4138-4783-94c3-ae492c179748-kube-api-access-f9kml\") pod \"csi-node-driver-2x7tg\" (UID: \"46fbf850-4138-4783-94c3-ae492c179748\") " pod="calico-system/csi-node-driver-2x7tg" Jan 24 00:56:01.461806 kubelet[2444]: I0124 00:56:01.453431 2444 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c6623f57-8a3f-4d6a-8dba-5fd45c29df38-xtables-lock\") pod \"kube-proxy-fssg2\" (UID: \"c6623f57-8a3f-4d6a-8dba-5fd45c29df38\") " pod="kube-system/kube-proxy-fssg2" Jan 24 00:56:01.461806 kubelet[2444]: I0124 00:56:01.453475 2444 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8ec96274-9ae9-407e-b53b-b5f4429ba00b-tigera-ca-bundle\") pod \"calico-node-rwvk2\" (UID: \"8ec96274-9ae9-407e-b53b-b5f4429ba00b\") " pod="calico-system/calico-node-rwvk2" Jan 24 00:56:01.462426 kubelet[2444]: I0124 00:56:01.453496 2444 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/46fbf850-4138-4783-94c3-ae492c179748-registration-dir\") pod \"csi-node-driver-2x7tg\" (UID: \"46fbf850-4138-4783-94c3-ae492c179748\") " pod="calico-system/csi-node-driver-2x7tg" Jan 24 00:56:01.462426 kubelet[2444]: I0124 00:56:01.453642 2444 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/8ec96274-9ae9-407e-b53b-b5f4429ba00b-cni-log-dir\") pod \"calico-node-rwvk2\" (UID: \"8ec96274-9ae9-407e-b53b-b5f4429ba00b\") " pod="calico-system/calico-node-rwvk2" Jan 24 00:56:01.462426 kubelet[2444]: I0124 00:56:01.453674 2444 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8ec96274-9ae9-407e-b53b-b5f4429ba00b-xtables-lock\") pod \"calico-node-rwvk2\" (UID: \"8ec96274-9ae9-407e-b53b-b5f4429ba00b\") " 
pod="calico-system/calico-node-rwvk2" Jan 24 00:56:01.462426 kubelet[2444]: I0124 00:56:01.453714 2444 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vvtk\" (UniqueName: \"kubernetes.io/projected/8ec96274-9ae9-407e-b53b-b5f4429ba00b-kube-api-access-5vvtk\") pod \"calico-node-rwvk2\" (UID: \"8ec96274-9ae9-407e-b53b-b5f4429ba00b\") " pod="calico-system/calico-node-rwvk2" Jan 24 00:56:01.462426 kubelet[2444]: I0124 00:56:01.453737 2444 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/46fbf850-4138-4783-94c3-ae492c179748-socket-dir\") pod \"csi-node-driver-2x7tg\" (UID: \"46fbf850-4138-4783-94c3-ae492c179748\") " pod="calico-system/csi-node-driver-2x7tg" Jan 24 00:56:01.462686 kubelet[2444]: I0124 00:56:01.453760 2444 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcnk6\" (UniqueName: \"kubernetes.io/projected/c6623f57-8a3f-4d6a-8dba-5fd45c29df38-kube-api-access-hcnk6\") pod \"kube-proxy-fssg2\" (UID: \"c6623f57-8a3f-4d6a-8dba-5fd45c29df38\") " pod="kube-system/kube-proxy-fssg2" Jan 24 00:56:01.618402 kubelet[2444]: E0124 00:56:01.616121 2444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:01.618402 kubelet[2444]: W0124 00:56:01.616353 2444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:01.618402 kubelet[2444]: E0124 00:56:01.617931 2444 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:01.643437 kubelet[2444]: E0124 00:56:01.643391 2444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:01.643713 kubelet[2444]: W0124 00:56:01.643629 2444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:01.643713 kubelet[2444]: E0124 00:56:01.643662 2444 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:01.658969 kubelet[2444]: E0124 00:56:01.658870 2444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:01.658969 kubelet[2444]: W0124 00:56:01.658897 2444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:01.658969 kubelet[2444]: E0124 00:56:01.658923 2444 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:56:01.705407 containerd[1985]: time="2026-01-24T00:56:01.705017708Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-rwvk2,Uid:8ec96274-9ae9-407e-b53b-b5f4429ba00b,Namespace:calico-system,Attempt:0,}" Jan 24 00:56:01.749408 containerd[1985]: time="2026-01-24T00:56:01.746099096Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fssg2,Uid:c6623f57-8a3f-4d6a-8dba-5fd45c29df38,Namespace:kube-system,Attempt:0,}" Jan 24 00:56:01.809559 kubelet[2444]: E0124 00:56:01.807100 2444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:01.809559 kubelet[2444]: W0124 00:56:01.807131 2444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:01.809559 kubelet[2444]: E0124 00:56:01.807160 2444 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:02.316041 kubelet[2444]: E0124 00:56:02.315834 2444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:56:02.490336 kubelet[2444]: E0124 00:56:02.488811 2444 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2x7tg" podUID="46fbf850-4138-4783-94c3-ae492c179748" Jan 24 00:56:02.772132 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1540764449.mount: Deactivated successfully. 
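The recurring pairs of "Failed to unmarshal output for command: init" and "executable file not found in $PATH" come from the kubelet's FlexVolume prober: for each directory under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ it execs the driver binary with the single argument init and parses a JSON status object from stdout. The nodeagent~uds/uds binary is absent on this node, so the output is empty and JSON decoding fails with "unexpected end of JSON input". A minimal sketch of the handshake such a driver is expected to implement follows; it is an illustrative stub, not the real nodeagent driver.

```go
// flexvolume_stub.go - illustrative FlexVolume driver stub. The kubelet runs
// "<driver> init" and expects a JSON status object on stdout; an empty reply
// is exactly the unmarshal failure seen in the log.
package main

import (
	"encoding/json"
	"os"
)

type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	if len(os.Args) > 1 && os.Args[1] == "init" {
		// Report success and tell the kubelet this driver does not
		// implement attach/detach.
		json.NewEncoder(os.Stdout).Encode(driverStatus{
			Status:       "Success",
			Capabilities: map[string]bool{"attach": false},
		})
		return
	}
	// Any other call is reported as unsupported by this stub.
	json.NewEncoder(os.Stdout).Encode(driverStatus{
		Status:  "Not supported",
		Message: "only init is implemented in this stub",
	})
	os.Exit(1)
}
```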
Jan 24 00:56:02.795275 containerd[1985]: time="2026-01-24T00:56:02.795206431Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 00:56:02.798962 containerd[1985]: time="2026-01-24T00:56:02.798195172Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 00:56:02.800813 containerd[1985]: time="2026-01-24T00:56:02.800743477Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 24 00:56:02.802857 containerd[1985]: time="2026-01-24T00:56:02.802799484Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 24 00:56:02.810071 containerd[1985]: time="2026-01-24T00:56:02.805133278Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 00:56:02.815874 containerd[1985]: time="2026-01-24T00:56:02.815822803Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 00:56:02.817942 containerd[1985]: time="2026-01-24T00:56:02.817674028Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.112545736s" Jan 24 00:56:02.819807 containerd[1985]: time="2026-01-24T00:56:02.819706224Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.072228303s" Jan 24 00:56:03.316496 kubelet[2444]: E0124 00:56:03.316383 2444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:56:03.422206 containerd[1985]: time="2026-01-24T00:56:03.422067044Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:56:03.422206 containerd[1985]: time="2026-01-24T00:56:03.422136854Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:56:03.422206 containerd[1985]: time="2026-01-24T00:56:03.422162091Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:56:03.422657 containerd[1985]: time="2026-01-24T00:56:03.422264170Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:56:03.433899 containerd[1985]: time="2026-01-24T00:56:03.433764073Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:56:03.433899 containerd[1985]: time="2026-01-24T00:56:03.433819105Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:56:03.433899 containerd[1985]: time="2026-01-24T00:56:03.433835404Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:56:03.435384 containerd[1985]: time="2026-01-24T00:56:03.433939321Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:56:03.671747 systemd[1]: Started cri-containerd-d73aee3d3ea494412703ee24ebfb3a838cf34daf2fa69ffcec0f4b5fb9f44174.scope - libcontainer container d73aee3d3ea494412703ee24ebfb3a838cf34daf2fa69ffcec0f4b5fb9f44174. Jan 24 00:56:03.675370 systemd[1]: Started cri-containerd-ea9e68bc19eef1a0d76536d974d487a2ba8d7b846f3dc1333e86e108fe187577.scope - libcontainer container ea9e68bc19eef1a0d76536d974d487a2ba8d7b846f3dc1333e86e108fe187577. Jan 24 00:56:03.730780 containerd[1985]: time="2026-01-24T00:56:03.730732142Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fssg2,Uid:c6623f57-8a3f-4d6a-8dba-5fd45c29df38,Namespace:kube-system,Attempt:0,} returns sandbox id \"ea9e68bc19eef1a0d76536d974d487a2ba8d7b846f3dc1333e86e108fe187577\"" Jan 24 00:56:03.735199 containerd[1985]: time="2026-01-24T00:56:03.734810697Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\"" Jan 24 00:56:03.735650 containerd[1985]: time="2026-01-24T00:56:03.735610901Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-rwvk2,Uid:8ec96274-9ae9-407e-b53b-b5f4429ba00b,Namespace:calico-system,Attempt:0,} returns sandbox id \"d73aee3d3ea494412703ee24ebfb3a838cf34daf2fa69ffcec0f4b5fb9f44174\"" Jan 24 00:56:04.318530 kubelet[2444]: E0124 00:56:04.316900 2444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:56:04.486043 kubelet[2444]: E0124 00:56:04.485998 2444 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2x7tg" podUID="46fbf850-4138-4783-94c3-ae492c179748" Jan 24 00:56:04.906549 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4014864171.mount: Deactivated successfully. 
Jan 24 00:56:05.313765 containerd[1985]: time="2026-01-24T00:56:05.313602162Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:56:05.314831 containerd[1985]: time="2026-01-24T00:56:05.314674345Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.3: active requests=0, bytes read=25965293" Jan 24 00:56:05.316498 containerd[1985]: time="2026-01-24T00:56:05.316239375Z" level=info msg="ImageCreate event name:\"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:56:05.317777 kubelet[2444]: E0124 00:56:05.317741 2444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:56:05.318959 containerd[1985]: time="2026-01-24T00:56:05.318926994Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:56:05.319841 containerd[1985]: time="2026-01-24T00:56:05.319352773Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.3\" with image id \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\", repo tag \"registry.k8s.io/kube-proxy:v1.34.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\", size \"25964312\" in 1.584397385s" Jan 24 00:56:05.319841 containerd[1985]: time="2026-01-24T00:56:05.319387601Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\" returns image reference \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\"" Jan 24 00:56:05.321572 containerd[1985]: time="2026-01-24T00:56:05.321541927Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 24 00:56:05.324916 containerd[1985]: time="2026-01-24T00:56:05.324863792Z" level=info msg="CreateContainer within sandbox \"ea9e68bc19eef1a0d76536d974d487a2ba8d7b846f3dc1333e86e108fe187577\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 24 00:56:05.340346 containerd[1985]: time="2026-01-24T00:56:05.340279963Z" level=info msg="CreateContainer within sandbox \"ea9e68bc19eef1a0d76536d974d487a2ba8d7b846f3dc1333e86e108fe187577\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"9fbfb999e7e82163ce7d5a19667c87e9b1537e63dc08e05879bbe66da3081576\"" Jan 24 00:56:05.341746 containerd[1985]: time="2026-01-24T00:56:05.341708570Z" level=info msg="StartContainer for \"9fbfb999e7e82163ce7d5a19667c87e9b1537e63dc08e05879bbe66da3081576\"" Jan 24 00:56:05.385823 systemd[1]: Started cri-containerd-9fbfb999e7e82163ce7d5a19667c87e9b1537e63dc08e05879bbe66da3081576.scope - libcontainer container 9fbfb999e7e82163ce7d5a19667c87e9b1537e63dc08e05879bbe66da3081576. 
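The "in 1.584397385s" figure above is containerd's own measurement of the kube-proxy image pull; it lines up with the gap between the PullImage request logged at 00:56:03.734810697Z and the Pulled event at 00:56:05.319352773Z. The small sketch below recomputes an approximate duration from those two quoted log timestamps; the exact value differs slightly because containerd times the pull internally rather than from log emission to log emission.

```go
// pull_duration.go - recompute an approximate image pull duration from the
// two containerd timestamps quoted in the log above.
package main

import (
	"fmt"
	"time"
)

func main() {
	start, err := time.Parse(time.RFC3339Nano, "2026-01-24T00:56:03.734810697Z")
	if err != nil {
		panic(err)
	}
	end, err := time.Parse(time.RFC3339Nano, "2026-01-24T00:56:05.319352773Z")
	if err != nil {
		panic(err)
	}
	// Prints roughly 1.584542076s, close to the 1.584397385s containerd
	// reports for the pull itself.
	fmt.Println(end.Sub(start))
}
```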
Jan 24 00:56:05.417861 containerd[1985]: time="2026-01-24T00:56:05.417816855Z" level=info msg="StartContainer for \"9fbfb999e7e82163ce7d5a19667c87e9b1537e63dc08e05879bbe66da3081576\" returns successfully" Jan 24 00:56:05.532749 kubelet[2444]: I0124 00:56:05.532495 2444 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-fssg2" podStartSLOduration=4.94604304 podStartE2EDuration="6.532476408s" podCreationTimestamp="2026-01-24 00:55:59 +0000 UTC" firstStartedPulling="2026-01-24 00:56:03.734011307 +0000 UTC m=+4.876004922" lastFinishedPulling="2026-01-24 00:56:05.320444677 +0000 UTC m=+6.462438290" observedRunningTime="2026-01-24 00:56:05.531901961 +0000 UTC m=+6.673895594" watchObservedRunningTime="2026-01-24 00:56:05.532476408 +0000 UTC m=+6.674470045" Jan 24 00:56:05.534733 kubelet[2444]: E0124 00:56:05.534706 2444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:05.534733 kubelet[2444]: W0124 00:56:05.534728 2444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:05.534946 kubelet[2444]: E0124 00:56:05.534751 2444 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:05.535016 kubelet[2444]: E0124 00:56:05.535003 2444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:05.535065 kubelet[2444]: W0124 00:56:05.535017 2444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:05.535065 kubelet[2444]: E0124 00:56:05.535031 2444 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:05.535264 kubelet[2444]: E0124 00:56:05.535248 2444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:05.535357 kubelet[2444]: W0124 00:56:05.535264 2444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:05.535357 kubelet[2444]: E0124 00:56:05.535276 2444 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:05.535617 kubelet[2444]: E0124 00:56:05.535592 2444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:05.535617 kubelet[2444]: W0124 00:56:05.535606 2444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:05.535828 kubelet[2444]: E0124 00:56:05.535619 2444 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:56:05.535887 kubelet[2444]: E0124 00:56:05.535860 2444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:05.535887 kubelet[2444]: W0124 00:56:05.535870 2444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:05.535887 kubelet[2444]: E0124 00:56:05.535882 2444 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:05.536301 kubelet[2444]: E0124 00:56:05.536279 2444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:05.536301 kubelet[2444]: W0124 00:56:05.536296 2444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:05.536426 kubelet[2444]: E0124 00:56:05.536312 2444 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:05.536585 kubelet[2444]: E0124 00:56:05.536566 2444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:05.536585 kubelet[2444]: W0124 00:56:05.536580 2444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:05.536785 kubelet[2444]: E0124 00:56:05.536593 2444 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:05.536845 kubelet[2444]: E0124 00:56:05.536807 2444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:05.536845 kubelet[2444]: W0124 00:56:05.536818 2444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:05.536845 kubelet[2444]: E0124 00:56:05.536829 2444 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:05.537096 kubelet[2444]: E0124 00:56:05.537069 2444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:05.537096 kubelet[2444]: W0124 00:56:05.537082 2444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:05.537096 kubelet[2444]: E0124 00:56:05.537095 2444 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:56:05.537332 kubelet[2444]: E0124 00:56:05.537297 2444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:05.537332 kubelet[2444]: W0124 00:56:05.537307 2444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:05.537332 kubelet[2444]: E0124 00:56:05.537319 2444 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:05.537572 kubelet[2444]: E0124 00:56:05.537535 2444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:05.537572 kubelet[2444]: W0124 00:56:05.537545 2444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:05.537572 kubelet[2444]: E0124 00:56:05.537557 2444 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:05.537845 kubelet[2444]: E0124 00:56:05.537769 2444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:05.537845 kubelet[2444]: W0124 00:56:05.537780 2444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:05.537845 kubelet[2444]: E0124 00:56:05.537792 2444 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:05.538046 kubelet[2444]: E0124 00:56:05.538023 2444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:05.538046 kubelet[2444]: W0124 00:56:05.538042 2444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:05.538179 kubelet[2444]: E0124 00:56:05.538055 2444 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:05.538279 kubelet[2444]: E0124 00:56:05.538261 2444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:05.538279 kubelet[2444]: W0124 00:56:05.538274 2444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:05.538406 kubelet[2444]: E0124 00:56:05.538288 2444 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:56:05.538528 kubelet[2444]: E0124 00:56:05.538491 2444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:05.538528 kubelet[2444]: W0124 00:56:05.538523 2444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:05.538655 kubelet[2444]: E0124 00:56:05.538536 2444 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:05.538758 kubelet[2444]: E0124 00:56:05.538746 2444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:05.538824 kubelet[2444]: W0124 00:56:05.538759 2444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:05.538824 kubelet[2444]: E0124 00:56:05.538771 2444 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:05.539011 kubelet[2444]: E0124 00:56:05.538990 2444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:05.539011 kubelet[2444]: W0124 00:56:05.539003 2444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:05.539122 kubelet[2444]: E0124 00:56:05.539015 2444 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:05.539228 kubelet[2444]: E0124 00:56:05.539212 2444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:05.539228 kubelet[2444]: W0124 00:56:05.539225 2444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:05.539334 kubelet[2444]: E0124 00:56:05.539237 2444 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:05.539447 kubelet[2444]: E0124 00:56:05.539427 2444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:05.539447 kubelet[2444]: W0124 00:56:05.539439 2444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:05.539619 kubelet[2444]: E0124 00:56:05.539450 2444 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:56:05.539709 kubelet[2444]: E0124 00:56:05.539691 2444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:05.539757 kubelet[2444]: W0124 00:56:05.539707 2444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:05.539757 kubelet[2444]: E0124 00:56:05.539720 2444 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:05.540183 kubelet[2444]: E0124 00:56:05.540162 2444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:05.540183 kubelet[2444]: W0124 00:56:05.540179 2444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:05.540301 kubelet[2444]: E0124 00:56:05.540193 2444 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:05.540495 kubelet[2444]: E0124 00:56:05.540477 2444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:05.540495 kubelet[2444]: W0124 00:56:05.540491 2444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:05.540619 kubelet[2444]: E0124 00:56:05.540530 2444 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:05.540808 kubelet[2444]: E0124 00:56:05.540790 2444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:05.540808 kubelet[2444]: W0124 00:56:05.540805 2444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:05.541026 kubelet[2444]: E0124 00:56:05.540819 2444 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:05.541090 kubelet[2444]: E0124 00:56:05.541073 2444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:05.541090 kubelet[2444]: W0124 00:56:05.541083 2444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:05.541229 kubelet[2444]: E0124 00:56:05.541095 2444 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:56:05.541315 kubelet[2444]: E0124 00:56:05.541297 2444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:05.541315 kubelet[2444]: W0124 00:56:05.541311 2444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:05.541431 kubelet[2444]: E0124 00:56:05.541323 2444 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:05.541896 kubelet[2444]: E0124 00:56:05.541583 2444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:05.541896 kubelet[2444]: W0124 00:56:05.541592 2444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:05.541896 kubelet[2444]: E0124 00:56:05.541604 2444 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:05.541896 kubelet[2444]: E0124 00:56:05.541817 2444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:05.541896 kubelet[2444]: W0124 00:56:05.541828 2444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:05.541896 kubelet[2444]: E0124 00:56:05.541843 2444 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:05.542203 kubelet[2444]: E0124 00:56:05.542047 2444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:05.542203 kubelet[2444]: W0124 00:56:05.542057 2444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:05.542203 kubelet[2444]: E0124 00:56:05.542069 2444 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:05.542325 kubelet[2444]: E0124 00:56:05.542256 2444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:05.542325 kubelet[2444]: W0124 00:56:05.542265 2444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:05.542325 kubelet[2444]: E0124 00:56:05.542275 2444 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:56:05.542527 kubelet[2444]: E0124 00:56:05.542490 2444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:05.542592 kubelet[2444]: W0124 00:56:05.542559 2444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:05.542592 kubelet[2444]: E0124 00:56:05.542574 2444 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:05.542880 kubelet[2444]: E0124 00:56:05.542858 2444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:05.542880 kubelet[2444]: W0124 00:56:05.542873 2444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:05.542990 kubelet[2444]: E0124 00:56:05.542886 2444 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:05.543235 kubelet[2444]: E0124 00:56:05.543218 2444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:05.543235 kubelet[2444]: W0124 00:56:05.543233 2444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:05.543317 kubelet[2444]: E0124 00:56:05.543245 2444 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:06.318896 kubelet[2444]: E0124 00:56:06.318858 2444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:56:06.435139 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount264217135.mount: Deactivated successfully. Jan 24 00:56:06.485968 kubelet[2444]: E0124 00:56:06.485921 2444 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2x7tg" podUID="46fbf850-4138-4783-94c3-ae492c179748" Jan 24 00:56:06.545570 kubelet[2444]: E0124 00:56:06.545536 2444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:06.545570 kubelet[2444]: W0124 00:56:06.545561 2444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:06.545570 kubelet[2444]: E0124 00:56:06.545581 2444 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:56:06.546073 kubelet[2444]: E0124 00:56:06.545874 2444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:06.546073 kubelet[2444]: W0124 00:56:06.545902 2444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:06.546073 kubelet[2444]: E0124 00:56:06.545918 2444 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:06.546193 kubelet[2444]: E0124 00:56:06.546173 2444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:06.546193 kubelet[2444]: W0124 00:56:06.546188 2444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:06.546244 kubelet[2444]: E0124 00:56:06.546200 2444 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:06.546566 kubelet[2444]: E0124 00:56:06.546538 2444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:06.546566 kubelet[2444]: W0124 00:56:06.546555 2444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:06.546566 kubelet[2444]: E0124 00:56:06.546568 2444 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:06.546834 kubelet[2444]: E0124 00:56:06.546813 2444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:06.546834 kubelet[2444]: W0124 00:56:06.546829 2444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:06.546916 kubelet[2444]: E0124 00:56:06.546841 2444 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:06.547044 kubelet[2444]: E0124 00:56:06.547037 2444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:06.547044 kubelet[2444]: W0124 00:56:06.547043 2444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:06.547092 kubelet[2444]: E0124 00:56:06.547054 2444 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:56:06.547234 kubelet[2444]: E0124 00:56:06.547221 2444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:06.547234 kubelet[2444]: W0124 00:56:06.547231 2444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:06.547291 kubelet[2444]: E0124 00:56:06.547239 2444 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:06.547443 kubelet[2444]: E0124 00:56:06.547429 2444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:06.547443 kubelet[2444]: W0124 00:56:06.547439 2444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:06.547500 kubelet[2444]: E0124 00:56:06.547446 2444 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:06.547700 kubelet[2444]: E0124 00:56:06.547686 2444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:06.547700 kubelet[2444]: W0124 00:56:06.547696 2444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:06.548675 kubelet[2444]: E0124 00:56:06.547704 2444 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:06.548675 kubelet[2444]: E0124 00:56:06.547866 2444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:06.548675 kubelet[2444]: W0124 00:56:06.547872 2444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:06.548675 kubelet[2444]: E0124 00:56:06.547880 2444 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:06.548675 kubelet[2444]: E0124 00:56:06.548171 2444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:06.548675 kubelet[2444]: W0124 00:56:06.548183 2444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:06.548675 kubelet[2444]: E0124 00:56:06.548195 2444 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:56:06.548675 kubelet[2444]: E0124 00:56:06.548395 2444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:06.548675 kubelet[2444]: W0124 00:56:06.548404 2444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:06.548675 kubelet[2444]: E0124 00:56:06.548412 2444 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:06.549037 kubelet[2444]: E0124 00:56:06.549019 2444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:06.549037 kubelet[2444]: W0124 00:56:06.549034 2444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:06.549242 kubelet[2444]: E0124 00:56:06.549048 2444 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:06.549304 kubelet[2444]: E0124 00:56:06.549265 2444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:06.549304 kubelet[2444]: W0124 00:56:06.549275 2444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:06.549304 kubelet[2444]: E0124 00:56:06.549288 2444 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:06.549609 kubelet[2444]: E0124 00:56:06.549593 2444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:06.549609 kubelet[2444]: W0124 00:56:06.549607 2444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:06.549827 kubelet[2444]: E0124 00:56:06.549621 2444 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:06.549884 kubelet[2444]: E0124 00:56:06.549829 2444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:06.549884 kubelet[2444]: W0124 00:56:06.549839 2444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:06.549884 kubelet[2444]: E0124 00:56:06.549851 2444 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:56:06.550141 kubelet[2444]: E0124 00:56:06.550122 2444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:06.550141 kubelet[2444]: W0124 00:56:06.550136 2444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:06.550270 kubelet[2444]: E0124 00:56:06.550151 2444 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:06.550637 kubelet[2444]: E0124 00:56:06.550384 2444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:06.550637 kubelet[2444]: W0124 00:56:06.550395 2444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:06.550637 kubelet[2444]: E0124 00:56:06.550406 2444 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:06.550784 kubelet[2444]: E0124 00:56:06.550689 2444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:06.550784 kubelet[2444]: W0124 00:56:06.550699 2444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:06.550784 kubelet[2444]: E0124 00:56:06.550711 2444 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:06.550934 kubelet[2444]: E0124 00:56:06.550921 2444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:06.550934 kubelet[2444]: W0124 00:56:06.550930 2444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:06.551011 kubelet[2444]: E0124 00:56:06.550942 2444 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:06.551247 kubelet[2444]: E0124 00:56:06.551229 2444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:06.551247 kubelet[2444]: W0124 00:56:06.551243 2444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:06.551352 kubelet[2444]: E0124 00:56:06.551255 2444 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:56:06.551605 kubelet[2444]: E0124 00:56:06.551588 2444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:06.551605 kubelet[2444]: W0124 00:56:06.551603 2444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:06.552844 kubelet[2444]: E0124 00:56:06.551617 2444 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:06.552844 kubelet[2444]: E0124 00:56:06.551852 2444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:06.552844 kubelet[2444]: W0124 00:56:06.551864 2444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:06.552844 kubelet[2444]: E0124 00:56:06.551874 2444 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:06.552844 kubelet[2444]: E0124 00:56:06.552218 2444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:06.552844 kubelet[2444]: W0124 00:56:06.552227 2444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:06.552844 kubelet[2444]: E0124 00:56:06.552237 2444 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:06.552844 kubelet[2444]: E0124 00:56:06.552436 2444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:06.552844 kubelet[2444]: W0124 00:56:06.552445 2444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:06.552844 kubelet[2444]: E0124 00:56:06.552454 2444 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:06.553387 kubelet[2444]: E0124 00:56:06.552720 2444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:06.553387 kubelet[2444]: W0124 00:56:06.552730 2444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:06.553387 kubelet[2444]: E0124 00:56:06.552742 2444 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:56:06.553387 kubelet[2444]: E0124 00:56:06.553145 2444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:06.553387 kubelet[2444]: W0124 00:56:06.553157 2444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:06.553387 kubelet[2444]: E0124 00:56:06.553172 2444 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:06.553771 kubelet[2444]: E0124 00:56:06.553434 2444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:06.553771 kubelet[2444]: W0124 00:56:06.553446 2444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:06.553771 kubelet[2444]: E0124 00:56:06.553458 2444 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:06.553771 kubelet[2444]: E0124 00:56:06.553698 2444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:06.553771 kubelet[2444]: W0124 00:56:06.553709 2444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:06.553771 kubelet[2444]: E0124 00:56:06.553722 2444 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:06.554060 kubelet[2444]: E0124 00:56:06.553928 2444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:06.554060 kubelet[2444]: W0124 00:56:06.553937 2444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:06.554060 kubelet[2444]: E0124 00:56:06.553949 2444 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:06.554207 kubelet[2444]: E0124 00:56:06.554186 2444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:06.554207 kubelet[2444]: W0124 00:56:06.554196 2444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:06.554279 kubelet[2444]: E0124 00:56:06.554208 2444 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:56:06.554673 kubelet[2444]: E0124 00:56:06.554659 2444 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:56:06.554673 kubelet[2444]: W0124 00:56:06.554673 2444 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:56:06.554763 kubelet[2444]: E0124 00:56:06.554685 2444 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:56:06.561414 containerd[1985]: time="2026-01-24T00:56:06.561358009Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:56:06.563460 containerd[1985]: time="2026-01-24T00:56:06.563382482Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=5941492" Jan 24 00:56:06.565802 containerd[1985]: time="2026-01-24T00:56:06.565739745Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:56:06.569531 containerd[1985]: time="2026-01-24T00:56:06.569397530Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:56:06.571114 containerd[1985]: time="2026-01-24T00:56:06.570165168Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.248581135s" Jan 24 00:56:06.571114 containerd[1985]: time="2026-01-24T00:56:06.570204522Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Jan 24 00:56:06.577264 containerd[1985]: time="2026-01-24T00:56:06.577219566Z" level=info msg="CreateContainer within sandbox \"d73aee3d3ea494412703ee24ebfb3a838cf34daf2fa69ffcec0f4b5fb9f44174\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 24 00:56:06.603214 containerd[1985]: time="2026-01-24T00:56:06.603170183Z" level=info msg="CreateContainer within sandbox \"d73aee3d3ea494412703ee24ebfb3a838cf34daf2fa69ffcec0f4b5fb9f44174\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"a6200b42cdde2e9afee76c9e61d67f162461605e0c25bf5db97252e747b6293f\"" Jan 24 00:56:06.603938 containerd[1985]: time="2026-01-24T00:56:06.603795845Z" level=info msg="StartContainer for \"a6200b42cdde2e9afee76c9e61d67f162461605e0c25bf5db97252e747b6293f\"" Jan 24 00:56:06.641744 systemd[1]: Started cri-containerd-a6200b42cdde2e9afee76c9e61d67f162461605e0c25bf5db97252e747b6293f.scope - libcontainer container a6200b42cdde2e9afee76c9e61d67f162461605e0c25bf5db97252e747b6293f. 
Jan 24 00:56:06.673344 containerd[1985]: time="2026-01-24T00:56:06.673293598Z" level=info msg="StartContainer for \"a6200b42cdde2e9afee76c9e61d67f162461605e0c25bf5db97252e747b6293f\" returns successfully" Jan 24 00:56:06.681059 systemd[1]: cri-containerd-a6200b42cdde2e9afee76c9e61d67f162461605e0c25bf5db97252e747b6293f.scope: Deactivated successfully. Jan 24 00:56:06.856814 containerd[1985]: time="2026-01-24T00:56:06.856673700Z" level=info msg="shim disconnected" id=a6200b42cdde2e9afee76c9e61d67f162461605e0c25bf5db97252e747b6293f namespace=k8s.io Jan 24 00:56:06.856814 containerd[1985]: time="2026-01-24T00:56:06.856723577Z" level=warning msg="cleaning up after shim disconnected" id=a6200b42cdde2e9afee76c9e61d67f162461605e0c25bf5db97252e747b6293f namespace=k8s.io Jan 24 00:56:06.856814 containerd[1985]: time="2026-01-24T00:56:06.856747279Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 24 00:56:07.319111 kubelet[2444]: E0124 00:56:07.318994 2444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:56:07.392359 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a6200b42cdde2e9afee76c9e61d67f162461605e0c25bf5db97252e747b6293f-rootfs.mount: Deactivated successfully. Jan 24 00:56:07.520572 containerd[1985]: time="2026-01-24T00:56:07.520535882Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 24 00:56:08.319760 kubelet[2444]: E0124 00:56:08.319693 2444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:56:08.485536 kubelet[2444]: E0124 00:56:08.485471 2444 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2x7tg" podUID="46fbf850-4138-4783-94c3-ae492c179748" Jan 24 00:56:09.320626 kubelet[2444]: E0124 00:56:09.320586 2444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:56:10.320927 kubelet[2444]: E0124 00:56:10.320866 2444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:56:10.371053 containerd[1985]: time="2026-01-24T00:56:10.370987509Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:56:10.373331 containerd[1985]: time="2026-01-24T00:56:10.373262790Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Jan 24 00:56:10.376149 containerd[1985]: time="2026-01-24T00:56:10.376085392Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:56:10.379657 containerd[1985]: time="2026-01-24T00:56:10.379598153Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:56:10.380711 containerd[1985]: time="2026-01-24T00:56:10.380673434Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag 
\"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 2.860093888s" Jan 24 00:56:10.380829 containerd[1985]: time="2026-01-24T00:56:10.380716944Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Jan 24 00:56:10.387195 containerd[1985]: time="2026-01-24T00:56:10.387155893Z" level=info msg="CreateContainer within sandbox \"d73aee3d3ea494412703ee24ebfb3a838cf34daf2fa69ffcec0f4b5fb9f44174\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 24 00:56:10.414198 containerd[1985]: time="2026-01-24T00:56:10.414142004Z" level=info msg="CreateContainer within sandbox \"d73aee3d3ea494412703ee24ebfb3a838cf34daf2fa69ffcec0f4b5fb9f44174\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"5c347fb65b972cc04baf315727a67c159974df9a2f3da5691584395b23da638c\"" Jan 24 00:56:10.414780 containerd[1985]: time="2026-01-24T00:56:10.414720397Z" level=info msg="StartContainer for \"5c347fb65b972cc04baf315727a67c159974df9a2f3da5691584395b23da638c\"" Jan 24 00:56:10.450803 systemd[1]: Started cri-containerd-5c347fb65b972cc04baf315727a67c159974df9a2f3da5691584395b23da638c.scope - libcontainer container 5c347fb65b972cc04baf315727a67c159974df9a2f3da5691584395b23da638c. Jan 24 00:56:10.485547 kubelet[2444]: E0124 00:56:10.485044 2444 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2x7tg" podUID="46fbf850-4138-4783-94c3-ae492c179748" Jan 24 00:56:10.487735 containerd[1985]: time="2026-01-24T00:56:10.487690313Z" level=info msg="StartContainer for \"5c347fb65b972cc04baf315727a67c159974df9a2f3da5691584395b23da638c\" returns successfully" Jan 24 00:56:11.214175 containerd[1985]: time="2026-01-24T00:56:11.210753504Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 24 00:56:11.213937 systemd[1]: cri-containerd-5c347fb65b972cc04baf315727a67c159974df9a2f3da5691584395b23da638c.scope: Deactivated successfully. Jan 24 00:56:11.243330 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5c347fb65b972cc04baf315727a67c159974df9a2f3da5691584395b23da638c-rootfs.mount: Deactivated successfully. Jan 24 00:56:11.303753 kubelet[2444]: I0124 00:56:11.303722 2444 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Jan 24 00:56:11.321947 kubelet[2444]: E0124 00:56:11.321904 2444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:56:11.358835 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Jan 24 00:56:11.837015 containerd[1985]: time="2026-01-24T00:56:11.836956530Z" level=info msg="shim disconnected" id=5c347fb65b972cc04baf315727a67c159974df9a2f3da5691584395b23da638c namespace=k8s.io Jan 24 00:56:11.837015 containerd[1985]: time="2026-01-24T00:56:11.837011342Z" level=warning msg="cleaning up after shim disconnected" id=5c347fb65b972cc04baf315727a67c159974df9a2f3da5691584395b23da638c namespace=k8s.io Jan 24 00:56:11.837015 containerd[1985]: time="2026-01-24T00:56:11.837019850Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 24 00:56:12.322198 kubelet[2444]: E0124 00:56:12.322069 2444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:56:12.491645 systemd[1]: Created slice kubepods-besteffort-pod46fbf850_4138_4783_94c3_ae492c179748.slice - libcontainer container kubepods-besteffort-pod46fbf850_4138_4783_94c3_ae492c179748.slice. Jan 24 00:56:12.506056 containerd[1985]: time="2026-01-24T00:56:12.506008670Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2x7tg,Uid:46fbf850-4138-4783-94c3-ae492c179748,Namespace:calico-system,Attempt:0,}" Jan 24 00:56:12.550396 containerd[1985]: time="2026-01-24T00:56:12.549771649Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 24 00:56:12.599065 containerd[1985]: time="2026-01-24T00:56:12.598932561Z" level=error msg="Failed to destroy network for sandbox \"b002355451aefb59672237a3fa0b0ac21a03a7fdbe9c92bd18a2bdf2992872d9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:56:12.599346 containerd[1985]: time="2026-01-24T00:56:12.599309592Z" level=error msg="encountered an error cleaning up failed sandbox \"b002355451aefb59672237a3fa0b0ac21a03a7fdbe9c92bd18a2bdf2992872d9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:56:12.599433 containerd[1985]: time="2026-01-24T00:56:12.599382116Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2x7tg,Uid:46fbf850-4138-4783-94c3-ae492c179748,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b002355451aefb59672237a3fa0b0ac21a03a7fdbe9c92bd18a2bdf2992872d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:56:12.602061 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b002355451aefb59672237a3fa0b0ac21a03a7fdbe9c92bd18a2bdf2992872d9-shm.mount: Deactivated successfully. 
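[editor's note] The RunPodSandbox failure for csi-node-driver-2x7tg above is not about the pod itself: the Calico CNI plugin stats /var/lib/calico/nodename, a file that calico-node writes once it is running with /var/lib/calico mounted, and at this point calico-node has not started, so both the add and the subsequent cleanup fail the same way. A tiny sketch of that check, written as a plain os.Stat rather than the plugin's real code path:

```go
package main

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
)

func main() {
	// Until calico-node writes this file, every sandbox add/delete that goes
	// through the calico plugin fails with the same stat error seen above.
	_, err := os.Stat("/var/lib/calico/nodename")
	if errors.Is(err, fs.ErrNotExist) {
		fmt.Println("calico plugin would bail out here:", err)
	} else {
		fmt.Println("nodename present; sandbox setup can proceed")
	}
}
```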
Jan 24 00:56:12.602358 kubelet[2444]: E0124 00:56:12.601713 2444 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b002355451aefb59672237a3fa0b0ac21a03a7fdbe9c92bd18a2bdf2992872d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:56:12.602358 kubelet[2444]: E0124 00:56:12.602142 2444 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b002355451aefb59672237a3fa0b0ac21a03a7fdbe9c92bd18a2bdf2992872d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2x7tg" Jan 24 00:56:12.602358 kubelet[2444]: E0124 00:56:12.602171 2444 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b002355451aefb59672237a3fa0b0ac21a03a7fdbe9c92bd18a2bdf2992872d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2x7tg" Jan 24 00:56:12.603556 kubelet[2444]: E0124 00:56:12.603459 2444 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-2x7tg_calico-system(46fbf850-4138-4783-94c3-ae492c179748)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-2x7tg_calico-system(46fbf850-4138-4783-94c3-ae492c179748)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b002355451aefb59672237a3fa0b0ac21a03a7fdbe9c92bd18a2bdf2992872d9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2x7tg" podUID="46fbf850-4138-4783-94c3-ae492c179748" Jan 24 00:56:13.322833 kubelet[2444]: E0124 00:56:13.322785 2444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:56:13.553101 kubelet[2444]: I0124 00:56:13.553071 2444 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b002355451aefb59672237a3fa0b0ac21a03a7fdbe9c92bd18a2bdf2992872d9" Jan 24 00:56:13.554151 containerd[1985]: time="2026-01-24T00:56:13.553641912Z" level=info msg="StopPodSandbox for \"b002355451aefb59672237a3fa0b0ac21a03a7fdbe9c92bd18a2bdf2992872d9\"" Jan 24 00:56:13.554151 containerd[1985]: time="2026-01-24T00:56:13.553822992Z" level=info msg="Ensure that sandbox b002355451aefb59672237a3fa0b0ac21a03a7fdbe9c92bd18a2bdf2992872d9 in task-service has been cleanup successfully" Jan 24 00:56:13.616638 containerd[1985]: time="2026-01-24T00:56:13.615897136Z" level=error msg="StopPodSandbox for \"b002355451aefb59672237a3fa0b0ac21a03a7fdbe9c92bd18a2bdf2992872d9\" failed" error="failed to destroy network for sandbox \"b002355451aefb59672237a3fa0b0ac21a03a7fdbe9c92bd18a2bdf2992872d9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:56:13.616795 kubelet[2444]: E0124 
00:56:13.616192 2444 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b002355451aefb59672237a3fa0b0ac21a03a7fdbe9c92bd18a2bdf2992872d9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b002355451aefb59672237a3fa0b0ac21a03a7fdbe9c92bd18a2bdf2992872d9" Jan 24 00:56:13.616795 kubelet[2444]: E0124 00:56:13.616254 2444 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b002355451aefb59672237a3fa0b0ac21a03a7fdbe9c92bd18a2bdf2992872d9"} Jan 24 00:56:13.616795 kubelet[2444]: E0124 00:56:13.616324 2444 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"46fbf850-4138-4783-94c3-ae492c179748\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b002355451aefb59672237a3fa0b0ac21a03a7fdbe9c92bd18a2bdf2992872d9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 00:56:13.616795 kubelet[2444]: E0124 00:56:13.616363 2444 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"46fbf850-4138-4783-94c3-ae492c179748\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b002355451aefb59672237a3fa0b0ac21a03a7fdbe9c92bd18a2bdf2992872d9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2x7tg" podUID="46fbf850-4138-4783-94c3-ae492c179748" Jan 24 00:56:14.323563 kubelet[2444]: E0124 00:56:14.323484 2444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:56:15.324474 kubelet[2444]: E0124 00:56:15.324407 2444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:56:16.325826 kubelet[2444]: E0124 00:56:16.325788 2444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:56:17.171380 systemd[1]: Created slice kubepods-besteffort-pod9fc6456f_b5ce_4052_90a7_2a6ce48af167.slice - libcontainer container kubepods-besteffort-pod9fc6456f_b5ce_4052_90a7_2a6ce48af167.slice. 
Jan 24 00:56:17.326710 kubelet[2444]: E0124 00:56:17.326581 2444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:56:17.332771 kubelet[2444]: I0124 00:56:17.332581 2444 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vn9rm\" (UniqueName: \"kubernetes.io/projected/9fc6456f-b5ce-4052-90a7-2a6ce48af167-kube-api-access-vn9rm\") pod \"nginx-deployment-bb8f74bfb-p9vgh\" (UID: \"9fc6456f-b5ce-4052-90a7-2a6ce48af167\") " pod="default/nginx-deployment-bb8f74bfb-p9vgh" Jan 24 00:56:17.480857 containerd[1985]: time="2026-01-24T00:56:17.480739875Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-bb8f74bfb-p9vgh,Uid:9fc6456f-b5ce-4052-90a7-2a6ce48af167,Namespace:default,Attempt:0,}" Jan 24 00:56:17.639181 containerd[1985]: time="2026-01-24T00:56:17.638885422Z" level=error msg="Failed to destroy network for sandbox \"135bb5312a1c76d8a06065fb509fabfda4611fe4e4ecfb664689e35c677f2386\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:56:17.640196 containerd[1985]: time="2026-01-24T00:56:17.640140606Z" level=error msg="encountered an error cleaning up failed sandbox \"135bb5312a1c76d8a06065fb509fabfda4611fe4e4ecfb664689e35c677f2386\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:56:17.640305 containerd[1985]: time="2026-01-24T00:56:17.640222830Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-bb8f74bfb-p9vgh,Uid:9fc6456f-b5ce-4052-90a7-2a6ce48af167,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"135bb5312a1c76d8a06065fb509fabfda4611fe4e4ecfb664689e35c677f2386\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:56:17.642653 kubelet[2444]: E0124 00:56:17.641738 2444 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"135bb5312a1c76d8a06065fb509fabfda4611fe4e4ecfb664689e35c677f2386\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:56:17.642653 kubelet[2444]: E0124 00:56:17.641941 2444 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"135bb5312a1c76d8a06065fb509fabfda4611fe4e4ecfb664689e35c677f2386\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-bb8f74bfb-p9vgh" Jan 24 00:56:17.642653 kubelet[2444]: E0124 00:56:17.642046 2444 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"135bb5312a1c76d8a06065fb509fabfda4611fe4e4ecfb664689e35c677f2386\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-bb8f74bfb-p9vgh" Jan 24 00:56:17.643427 kubelet[2444]: E0124 00:56:17.643268 2444 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-bb8f74bfb-p9vgh_default(9fc6456f-b5ce-4052-90a7-2a6ce48af167)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-bb8f74bfb-p9vgh_default(9fc6456f-b5ce-4052-90a7-2a6ce48af167)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"135bb5312a1c76d8a06065fb509fabfda4611fe4e4ecfb664689e35c677f2386\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-bb8f74bfb-p9vgh" podUID="9fc6456f-b5ce-4052-90a7-2a6ce48af167" Jan 24 00:56:17.643867 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-135bb5312a1c76d8a06065fb509fabfda4611fe4e4ecfb664689e35c677f2386-shm.mount: Deactivated successfully. Jan 24 00:56:18.201295 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1703599380.mount: Deactivated successfully. Jan 24 00:56:18.239941 containerd[1985]: time="2026-01-24T00:56:18.239888953Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:56:18.241941 containerd[1985]: time="2026-01-24T00:56:18.241885191Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Jan 24 00:56:18.244268 containerd[1985]: time="2026-01-24T00:56:18.244198076Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:56:18.247362 containerd[1985]: time="2026-01-24T00:56:18.247301253Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:56:18.248358 containerd[1985]: time="2026-01-24T00:56:18.247953890Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 5.698130855s" Jan 24 00:56:18.248358 containerd[1985]: time="2026-01-24T00:56:18.247994696Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 24 00:56:18.260921 containerd[1985]: time="2026-01-24T00:56:18.260806921Z" level=info msg="CreateContainer within sandbox \"d73aee3d3ea494412703ee24ebfb3a838cf34daf2fa69ffcec0f4b5fb9f44174\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 24 00:56:18.288500 containerd[1985]: time="2026-01-24T00:56:18.288426471Z" level=info msg="CreateContainer within sandbox \"d73aee3d3ea494412703ee24ebfb3a838cf34daf2fa69ffcec0f4b5fb9f44174\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"8350fd8ee950a463c21b0a00d4ec8aa66e782cbd8e3a99ec6d4572b9863ec304\"" Jan 24 00:56:18.289202 containerd[1985]: time="2026-01-24T00:56:18.289177696Z" 
level=info msg="StartContainer for \"8350fd8ee950a463c21b0a00d4ec8aa66e782cbd8e3a99ec6d4572b9863ec304\"" Jan 24 00:56:18.326934 kubelet[2444]: E0124 00:56:18.326877 2444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:56:18.364780 systemd[1]: Started cri-containerd-8350fd8ee950a463c21b0a00d4ec8aa66e782cbd8e3a99ec6d4572b9863ec304.scope - libcontainer container 8350fd8ee950a463c21b0a00d4ec8aa66e782cbd8e3a99ec6d4572b9863ec304. Jan 24 00:56:18.407553 containerd[1985]: time="2026-01-24T00:56:18.406794404Z" level=info msg="StartContainer for \"8350fd8ee950a463c21b0a00d4ec8aa66e782cbd8e3a99ec6d4572b9863ec304\" returns successfully" Jan 24 00:56:18.570522 kubelet[2444]: I0124 00:56:18.570458 2444 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="135bb5312a1c76d8a06065fb509fabfda4611fe4e4ecfb664689e35c677f2386" Jan 24 00:56:18.571272 containerd[1985]: time="2026-01-24T00:56:18.570866401Z" level=info msg="StopPodSandbox for \"135bb5312a1c76d8a06065fb509fabfda4611fe4e4ecfb664689e35c677f2386\"" Jan 24 00:56:18.571272 containerd[1985]: time="2026-01-24T00:56:18.571004471Z" level=info msg="Ensure that sandbox 135bb5312a1c76d8a06065fb509fabfda4611fe4e4ecfb664689e35c677f2386 in task-service has been cleanup successfully" Jan 24 00:56:18.590881 systemd[1]: run-containerd-runc-k8s.io-8350fd8ee950a463c21b0a00d4ec8aa66e782cbd8e3a99ec6d4572b9863ec304-runc.yLfVGJ.mount: Deactivated successfully. Jan 24 00:56:18.596552 kubelet[2444]: I0124 00:56:18.596477 2444 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-rwvk2" podStartSLOduration=5.084051533 podStartE2EDuration="19.59646053s" podCreationTimestamp="2026-01-24 00:55:59 +0000 UTC" firstStartedPulling="2026-01-24 00:56:03.736818632 +0000 UTC m=+4.878812248" lastFinishedPulling="2026-01-24 00:56:18.249227612 +0000 UTC m=+19.391221245" observedRunningTime="2026-01-24 00:56:18.59581835 +0000 UTC m=+19.737811986" watchObservedRunningTime="2026-01-24 00:56:18.59646053 +0000 UTC m=+19.738454166" Jan 24 00:56:18.638834 containerd[1985]: time="2026-01-24T00:56:18.638726700Z" level=error msg="StopPodSandbox for \"135bb5312a1c76d8a06065fb509fabfda4611fe4e4ecfb664689e35c677f2386\" failed" error="failed to destroy network for sandbox \"135bb5312a1c76d8a06065fb509fabfda4611fe4e4ecfb664689e35c677f2386\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:56:18.639109 kubelet[2444]: E0124 00:56:18.639069 2444 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"135bb5312a1c76d8a06065fb509fabfda4611fe4e4ecfb664689e35c677f2386\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="135bb5312a1c76d8a06065fb509fabfda4611fe4e4ecfb664689e35c677f2386" Jan 24 00:56:18.639209 kubelet[2444]: E0124 00:56:18.639124 2444 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"135bb5312a1c76d8a06065fb509fabfda4611fe4e4ecfb664689e35c677f2386"} Jan 24 00:56:18.639209 kubelet[2444]: E0124 00:56:18.639164 2444 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for 
\"9fc6456f-b5ce-4052-90a7-2a6ce48af167\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"135bb5312a1c76d8a06065fb509fabfda4611fe4e4ecfb664689e35c677f2386\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 00:56:18.639331 kubelet[2444]: E0124 00:56:18.639203 2444 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9fc6456f-b5ce-4052-90a7-2a6ce48af167\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"135bb5312a1c76d8a06065fb509fabfda4611fe4e4ecfb664689e35c677f2386\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-bb8f74bfb-p9vgh" podUID="9fc6456f-b5ce-4052-90a7-2a6ce48af167" Jan 24 00:56:18.697364 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 24 00:56:18.697476 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jan 24 00:56:19.314284 kubelet[2444]: E0124 00:56:19.314228 2444 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:56:19.327891 kubelet[2444]: E0124 00:56:19.327838 2444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:56:20.328267 kubelet[2444]: E0124 00:56:20.328224 2444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:56:20.373536 kernel: bpftool[3286]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 24 00:56:20.592162 (udev-worker)[3085]: Network interface NamePolicy= disabled on kernel command line. Jan 24 00:56:20.593763 systemd-networkd[1619]: vxlan.calico: Link UP Jan 24 00:56:20.593768 systemd-networkd[1619]: vxlan.calico: Gained carrier Jan 24 00:56:20.621005 (udev-worker)[3312]: Network interface NamePolicy= disabled on kernel command line. 
Jan 24 00:56:21.328913 kubelet[2444]: E0124 00:56:21.328793 2444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:56:22.329453 kubelet[2444]: E0124 00:56:22.329411 2444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:56:22.460162 systemd-networkd[1619]: vxlan.calico: Gained IPv6LL Jan 24 00:56:23.330049 kubelet[2444]: E0124 00:56:23.329998 2444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:56:24.331140 kubelet[2444]: E0124 00:56:24.331060 2444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:56:24.990480 ntpd[1962]: Listen normally on 7 vxlan.calico 192.168.65.128:123 Jan 24 00:56:24.990583 ntpd[1962]: Listen normally on 8 vxlan.calico [fe80::6497:1cff:fe38:1c8%3]:123 Jan 24 00:56:24.990925 ntpd[1962]: 24 Jan 00:56:24 ntpd[1962]: Listen normally on 7 vxlan.calico 192.168.65.128:123 Jan 24 00:56:24.990925 ntpd[1962]: 24 Jan 00:56:24 ntpd[1962]: Listen normally on 8 vxlan.calico [fe80::6497:1cff:fe38:1c8%3]:123 Jan 24 00:56:25.332353 kubelet[2444]: E0124 00:56:25.332231 2444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:56:25.486785 containerd[1985]: time="2026-01-24T00:56:25.486418961Z" level=info msg="StopPodSandbox for \"b002355451aefb59672237a3fa0b0ac21a03a7fdbe9c92bd18a2bdf2992872d9\"" Jan 24 00:56:25.677278 containerd[1985]: 2026-01-24 00:56:25.587 [INFO][3365] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b002355451aefb59672237a3fa0b0ac21a03a7fdbe9c92bd18a2bdf2992872d9" Jan 24 00:56:25.677278 containerd[1985]: 2026-01-24 00:56:25.588 [INFO][3365] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b002355451aefb59672237a3fa0b0ac21a03a7fdbe9c92bd18a2bdf2992872d9" iface="eth0" netns="/var/run/netns/cni-b1d4607c-150d-32d6-7271-f3e348208ce6" Jan 24 00:56:25.677278 containerd[1985]: 2026-01-24 00:56:25.589 [INFO][3365] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b002355451aefb59672237a3fa0b0ac21a03a7fdbe9c92bd18a2bdf2992872d9" iface="eth0" netns="/var/run/netns/cni-b1d4607c-150d-32d6-7271-f3e348208ce6" Jan 24 00:56:25.677278 containerd[1985]: 2026-01-24 00:56:25.590 [INFO][3365] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="b002355451aefb59672237a3fa0b0ac21a03a7fdbe9c92bd18a2bdf2992872d9" iface="eth0" netns="/var/run/netns/cni-b1d4607c-150d-32d6-7271-f3e348208ce6" Jan 24 00:56:25.677278 containerd[1985]: 2026-01-24 00:56:25.591 [INFO][3365] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b002355451aefb59672237a3fa0b0ac21a03a7fdbe9c92bd18a2bdf2992872d9" Jan 24 00:56:25.677278 containerd[1985]: 2026-01-24 00:56:25.591 [INFO][3365] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b002355451aefb59672237a3fa0b0ac21a03a7fdbe9c92bd18a2bdf2992872d9" Jan 24 00:56:25.677278 containerd[1985]: 2026-01-24 00:56:25.659 [INFO][3372] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b002355451aefb59672237a3fa0b0ac21a03a7fdbe9c92bd18a2bdf2992872d9" HandleID="k8s-pod-network.b002355451aefb59672237a3fa0b0ac21a03a7fdbe9c92bd18a2bdf2992872d9" Workload="172.31.30.66-k8s-csi--node--driver--2x7tg-eth0" Jan 24 00:56:25.677278 containerd[1985]: 2026-01-24 00:56:25.659 [INFO][3372] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:56:25.677278 containerd[1985]: 2026-01-24 00:56:25.659 [INFO][3372] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:56:25.677278 containerd[1985]: 2026-01-24 00:56:25.670 [WARNING][3372] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="b002355451aefb59672237a3fa0b0ac21a03a7fdbe9c92bd18a2bdf2992872d9" HandleID="k8s-pod-network.b002355451aefb59672237a3fa0b0ac21a03a7fdbe9c92bd18a2bdf2992872d9" Workload="172.31.30.66-k8s-csi--node--driver--2x7tg-eth0" Jan 24 00:56:25.677278 containerd[1985]: 2026-01-24 00:56:25.670 [INFO][3372] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b002355451aefb59672237a3fa0b0ac21a03a7fdbe9c92bd18a2bdf2992872d9" HandleID="k8s-pod-network.b002355451aefb59672237a3fa0b0ac21a03a7fdbe9c92bd18a2bdf2992872d9" Workload="172.31.30.66-k8s-csi--node--driver--2x7tg-eth0" Jan 24 00:56:25.677278 containerd[1985]: 2026-01-24 00:56:25.673 [INFO][3372] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:56:25.677278 containerd[1985]: 2026-01-24 00:56:25.675 [INFO][3365] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b002355451aefb59672237a3fa0b0ac21a03a7fdbe9c92bd18a2bdf2992872d9" Jan 24 00:56:25.680635 containerd[1985]: time="2026-01-24T00:56:25.680593522Z" level=info msg="TearDown network for sandbox \"b002355451aefb59672237a3fa0b0ac21a03a7fdbe9c92bd18a2bdf2992872d9\" successfully" Jan 24 00:56:25.680635 containerd[1985]: time="2026-01-24T00:56:25.680627731Z" level=info msg="StopPodSandbox for \"b002355451aefb59672237a3fa0b0ac21a03a7fdbe9c92bd18a2bdf2992872d9\" returns successfully" Jan 24 00:56:25.681081 systemd[1]: run-netns-cni\x2db1d4607c\x2d150d\x2d32d6\x2d7271\x2df3e348208ce6.mount: Deactivated successfully. Jan 24 00:56:25.685554 containerd[1985]: time="2026-01-24T00:56:25.685497609Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2x7tg,Uid:46fbf850-4138-4783-94c3-ae492c179748,Namespace:calico-system,Attempt:1,}" Jan 24 00:56:25.839254 systemd-networkd[1619]: cali71476208869: Link UP Jan 24 00:56:25.840307 systemd-networkd[1619]: cali71476208869: Gained carrier Jan 24 00:56:25.841475 (udev-worker)[3398]: Network interface NamePolicy= disabled on kernel command line. 
Jan 24 00:56:25.860057 containerd[1985]: 2026-01-24 00:56:25.747 [INFO][3380] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.30.66-k8s-csi--node--driver--2x7tg-eth0 csi-node-driver- calico-system 46fbf850-4138-4783-94c3-ae492c179748 1288 0 2026-01-24 00:55:59 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 172.31.30.66 csi-node-driver-2x7tg eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali71476208869 [] [] }} ContainerID="b193c6a456057a548b7fac1cb9ca09bdeef63d8884e1fd95ffad8a4d9f18a14a" Namespace="calico-system" Pod="csi-node-driver-2x7tg" WorkloadEndpoint="172.31.30.66-k8s-csi--node--driver--2x7tg-" Jan 24 00:56:25.860057 containerd[1985]: 2026-01-24 00:56:25.747 [INFO][3380] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b193c6a456057a548b7fac1cb9ca09bdeef63d8884e1fd95ffad8a4d9f18a14a" Namespace="calico-system" Pod="csi-node-driver-2x7tg" WorkloadEndpoint="172.31.30.66-k8s-csi--node--driver--2x7tg-eth0" Jan 24 00:56:25.860057 containerd[1985]: 2026-01-24 00:56:25.780 [INFO][3391] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b193c6a456057a548b7fac1cb9ca09bdeef63d8884e1fd95ffad8a4d9f18a14a" HandleID="k8s-pod-network.b193c6a456057a548b7fac1cb9ca09bdeef63d8884e1fd95ffad8a4d9f18a14a" Workload="172.31.30.66-k8s-csi--node--driver--2x7tg-eth0" Jan 24 00:56:25.860057 containerd[1985]: 2026-01-24 00:56:25.780 [INFO][3391] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b193c6a456057a548b7fac1cb9ca09bdeef63d8884e1fd95ffad8a4d9f18a14a" HandleID="k8s-pod-network.b193c6a456057a548b7fac1cb9ca09bdeef63d8884e1fd95ffad8a4d9f18a14a" Workload="172.31.30.66-k8s-csi--node--driver--2x7tg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00048efa0), Attrs:map[string]string{"namespace":"calico-system", "node":"172.31.30.66", "pod":"csi-node-driver-2x7tg", "timestamp":"2026-01-24 00:56:25.780053606 +0000 UTC"}, Hostname:"172.31.30.66", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:56:25.860057 containerd[1985]: 2026-01-24 00:56:25.780 [INFO][3391] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:56:25.860057 containerd[1985]: 2026-01-24 00:56:25.780 [INFO][3391] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:56:25.860057 containerd[1985]: 2026-01-24 00:56:25.780 [INFO][3391] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.30.66' Jan 24 00:56:25.860057 containerd[1985]: 2026-01-24 00:56:25.789 [INFO][3391] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b193c6a456057a548b7fac1cb9ca09bdeef63d8884e1fd95ffad8a4d9f18a14a" host="172.31.30.66" Jan 24 00:56:25.860057 containerd[1985]: 2026-01-24 00:56:25.799 [INFO][3391] ipam/ipam.go 394: Looking up existing affinities for host host="172.31.30.66" Jan 24 00:56:25.860057 containerd[1985]: 2026-01-24 00:56:25.806 [INFO][3391] ipam/ipam.go 511: Trying affinity for 192.168.65.128/26 host="172.31.30.66" Jan 24 00:56:25.860057 containerd[1985]: 2026-01-24 00:56:25.808 [INFO][3391] ipam/ipam.go 158: Attempting to load block cidr=192.168.65.128/26 host="172.31.30.66" Jan 24 00:56:25.860057 containerd[1985]: 2026-01-24 00:56:25.812 [INFO][3391] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.65.128/26 host="172.31.30.66" Jan 24 00:56:25.860057 containerd[1985]: 2026-01-24 00:56:25.812 [INFO][3391] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.65.128/26 handle="k8s-pod-network.b193c6a456057a548b7fac1cb9ca09bdeef63d8884e1fd95ffad8a4d9f18a14a" host="172.31.30.66" Jan 24 00:56:25.860057 containerd[1985]: 2026-01-24 00:56:25.814 [INFO][3391] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b193c6a456057a548b7fac1cb9ca09bdeef63d8884e1fd95ffad8a4d9f18a14a Jan 24 00:56:25.860057 containerd[1985]: 2026-01-24 00:56:25.821 [INFO][3391] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.65.128/26 handle="k8s-pod-network.b193c6a456057a548b7fac1cb9ca09bdeef63d8884e1fd95ffad8a4d9f18a14a" host="172.31.30.66" Jan 24 00:56:25.860057 containerd[1985]: 2026-01-24 00:56:25.832 [INFO][3391] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.65.129/26] block=192.168.65.128/26 handle="k8s-pod-network.b193c6a456057a548b7fac1cb9ca09bdeef63d8884e1fd95ffad8a4d9f18a14a" host="172.31.30.66" Jan 24 00:56:25.860057 containerd[1985]: 2026-01-24 00:56:25.832 [INFO][3391] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.65.129/26] handle="k8s-pod-network.b193c6a456057a548b7fac1cb9ca09bdeef63d8884e1fd95ffad8a4d9f18a14a" host="172.31.30.66" Jan 24 00:56:25.860057 containerd[1985]: 2026-01-24 00:56:25.832 [INFO][3391] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 24 00:56:25.860057 containerd[1985]: 2026-01-24 00:56:25.832 [INFO][3391] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.65.129/26] IPv6=[] ContainerID="b193c6a456057a548b7fac1cb9ca09bdeef63d8884e1fd95ffad8a4d9f18a14a" HandleID="k8s-pod-network.b193c6a456057a548b7fac1cb9ca09bdeef63d8884e1fd95ffad8a4d9f18a14a" Workload="172.31.30.66-k8s-csi--node--driver--2x7tg-eth0" Jan 24 00:56:25.861271 containerd[1985]: 2026-01-24 00:56:25.834 [INFO][3380] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b193c6a456057a548b7fac1cb9ca09bdeef63d8884e1fd95ffad8a4d9f18a14a" Namespace="calico-system" Pod="csi-node-driver-2x7tg" WorkloadEndpoint="172.31.30.66-k8s-csi--node--driver--2x7tg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.30.66-k8s-csi--node--driver--2x7tg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"46fbf850-4138-4783-94c3-ae492c179748", ResourceVersion:"1288", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 55, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.30.66", ContainerID:"", Pod:"csi-node-driver-2x7tg", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.65.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali71476208869", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:56:25.861271 containerd[1985]: 2026-01-24 00:56:25.834 [INFO][3380] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.65.129/32] ContainerID="b193c6a456057a548b7fac1cb9ca09bdeef63d8884e1fd95ffad8a4d9f18a14a" Namespace="calico-system" Pod="csi-node-driver-2x7tg" WorkloadEndpoint="172.31.30.66-k8s-csi--node--driver--2x7tg-eth0" Jan 24 00:56:25.861271 containerd[1985]: 2026-01-24 00:56:25.834 [INFO][3380] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali71476208869 ContainerID="b193c6a456057a548b7fac1cb9ca09bdeef63d8884e1fd95ffad8a4d9f18a14a" Namespace="calico-system" Pod="csi-node-driver-2x7tg" WorkloadEndpoint="172.31.30.66-k8s-csi--node--driver--2x7tg-eth0" Jan 24 00:56:25.861271 containerd[1985]: 2026-01-24 00:56:25.841 [INFO][3380] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b193c6a456057a548b7fac1cb9ca09bdeef63d8884e1fd95ffad8a4d9f18a14a" Namespace="calico-system" Pod="csi-node-driver-2x7tg" WorkloadEndpoint="172.31.30.66-k8s-csi--node--driver--2x7tg-eth0" Jan 24 00:56:25.861271 containerd[1985]: 2026-01-24 00:56:25.841 [INFO][3380] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b193c6a456057a548b7fac1cb9ca09bdeef63d8884e1fd95ffad8a4d9f18a14a" Namespace="calico-system" Pod="csi-node-driver-2x7tg" 
WorkloadEndpoint="172.31.30.66-k8s-csi--node--driver--2x7tg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.30.66-k8s-csi--node--driver--2x7tg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"46fbf850-4138-4783-94c3-ae492c179748", ResourceVersion:"1288", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 55, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.30.66", ContainerID:"b193c6a456057a548b7fac1cb9ca09bdeef63d8884e1fd95ffad8a4d9f18a14a", Pod:"csi-node-driver-2x7tg", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.65.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali71476208869", MAC:"d6:39:77:fb:15:8a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:56:25.861271 containerd[1985]: 2026-01-24 00:56:25.856 [INFO][3380] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b193c6a456057a548b7fac1cb9ca09bdeef63d8884e1fd95ffad8a4d9f18a14a" Namespace="calico-system" Pod="csi-node-driver-2x7tg" WorkloadEndpoint="172.31.30.66-k8s-csi--node--driver--2x7tg-eth0" Jan 24 00:56:25.889747 containerd[1985]: time="2026-01-24T00:56:25.889426830Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:56:25.889747 containerd[1985]: time="2026-01-24T00:56:25.889497000Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:56:25.889919 containerd[1985]: time="2026-01-24T00:56:25.889761366Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:56:25.890592 containerd[1985]: time="2026-01-24T00:56:25.890032724Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:56:25.923778 systemd[1]: Started cri-containerd-b193c6a456057a548b7fac1cb9ca09bdeef63d8884e1fd95ffad8a4d9f18a14a.scope - libcontainer container b193c6a456057a548b7fac1cb9ca09bdeef63d8884e1fd95ffad8a4d9f18a14a. 
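The ipam/ipam.go records above (00:56:25.780–25.832) trace Calico's block-based assignment for this node: take the host-wide IPAM lock, look up the host's block affinity (192.168.65.128/26 here), load the block, claim one free address, write the block back, then release the lock. The following is a minimal, self-contained sketch of that sequence, an illustration of the steps the log names rather than Calico's actual data model; the bitmap block representation and the mutex standing in for the host-wide lock are assumptions.

package main

import (
	"fmt"
	"net"
	"sync"
)

// block models a /26 allocation block like 192.168.65.128/26 from the log:
// 64 addresses tracked with a simple in-use bitmap.
type block struct {
	cidr  *net.IPNet
	inUse [64]bool
}

// ipam holds per-host block affinities behind a host-wide lock, mirroring the
// "About to acquire host-wide IPAM lock" / "Released host-wide IPAM lock" records.
type ipam struct {
	mu       sync.Mutex
	affinity map[string]*block // host -> affine block
}

// autoAssign claims one IPv4 address from the host's affine block, following the
// order the plugin logs: try affinity, load block, assign, write back, release.
func (p *ipam) autoAssign(host, handleID string) (net.IP, error) {
	p.mu.Lock()         // "Acquired host-wide IPAM lock."
	defer p.mu.Unlock() // "Released host-wide IPAM lock."

	blk, ok := p.affinity[host] // "Trying affinity for 192.168.65.128/26"
	if !ok {
		return nil, fmt.Errorf("no affine block for host %s", host)
	}
	base := blk.cidr.IP.To4()
	for i := range blk.inUse { // "Attempting to assign 1 addresses from block"
		if blk.inUse[i] {
			continue
		}
		blk.inUse[i] = true // "Writing block in order to claim IPs"
		ip := net.IPv4(base[0], base[1], base[2], base[3]+byte(i))
		fmt.Printf("handle %s claimed %s\n", handleID, ip)
		return ip, nil
	}
	return nil, fmt.Errorf("block %s exhausted", blk.cidr)
}

func main() {
	_, cidr, _ := net.ParseCIDR("192.168.65.128/26")
	p := &ipam{affinity: map[string]*block{"172.31.30.66": {cidr: cidr}}}
	// Toy assumption: reserve the block base so the first claim is .129,
	// matching the address handed out in the record above.
	p.affinity["172.31.30.66"].inUse[0] = true
	if ip, err := p.autoAssign("172.31.30.66", "k8s-pod-network.b193c6a4..."); err == nil {
		fmt.Println("assigned", ip) // prints 192.168.65.129
	}
}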
Jan 24 00:56:25.961419 containerd[1985]: time="2026-01-24T00:56:25.961388366Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2x7tg,Uid:46fbf850-4138-4783-94c3-ae492c179748,Namespace:calico-system,Attempt:1,} returns sandbox id \"b193c6a456057a548b7fac1cb9ca09bdeef63d8884e1fd95ffad8a4d9f18a14a\"" Jan 24 00:56:25.964032 containerd[1985]: time="2026-01-24T00:56:25.963771422Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 24 00:56:26.058742 update_engine[1968]: I20260124 00:56:26.058661 1968 update_attempter.cc:509] Updating boot flags... Jan 24 00:56:26.123642 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 33 scanned by (udev-worker) (3470) Jan 24 00:56:26.225751 containerd[1985]: time="2026-01-24T00:56:26.222001967Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:56:26.225751 containerd[1985]: time="2026-01-24T00:56:26.225332683Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 24 00:56:26.225751 containerd[1985]: time="2026-01-24T00:56:26.225466155Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 24 00:56:26.225974 kubelet[2444]: E0124 00:56:26.225720 2444 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:56:26.225974 kubelet[2444]: E0124 00:56:26.225771 2444 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:56:26.225974 kubelet[2444]: E0124 00:56:26.225873 2444 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-2x7tg_calico-system(46fbf850-4138-4783-94c3-ae492c179748): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 24 00:56:26.229566 containerd[1985]: time="2026-01-24T00:56:26.228175655Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 24 00:56:26.333049 kubelet[2444]: E0124 00:56:26.333007 2444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:56:26.486229 containerd[1985]: time="2026-01-24T00:56:26.485993884Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:56:26.489018 containerd[1985]: time="2026-01-24T00:56:26.488864599Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 24 00:56:26.489018 containerd[1985]: time="2026-01-24T00:56:26.488907953Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 24 00:56:26.489400 kubelet[2444]: E0124 00:56:26.489169 2444 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:56:26.489400 kubelet[2444]: E0124 00:56:26.489210 2444 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:56:26.489400 kubelet[2444]: E0124 00:56:26.489281 2444 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-2x7tg_calico-system(46fbf850-4138-4783-94c3-ae492c179748): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 24 00:56:26.489631 kubelet[2444]: E0124 00:56:26.489356 2444 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-2x7tg" podUID="46fbf850-4138-4783-94c3-ae492c179748" Jan 24 00:56:26.588657 kubelet[2444]: E0124 00:56:26.588584 2444 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-2x7tg" podUID="46fbf850-4138-4783-94c3-ae492c179748" Jan 24 00:56:27.323704 systemd-networkd[1619]: cali71476208869: Gained IPv6LL Jan 24 00:56:27.333888 kubelet[2444]: E0124 00:56:27.333829 2444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:56:27.590087 kubelet[2444]: E0124 00:56:27.589955 2444 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-2x7tg" podUID="46fbf850-4138-4783-94c3-ae492c179748" Jan 24 00:56:28.334141 kubelet[2444]: E0124 00:56:28.334086 2444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:56:29.334651 kubelet[2444]: E0124 00:56:29.334574 2444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:56:29.990490 ntpd[1962]: Listen normally on 9 cali71476208869 [fe80::ecee:eeff:feee:eeee%6]:123 Jan 24 00:56:29.990949 ntpd[1962]: 24 Jan 00:56:29 ntpd[1962]: Listen normally on 9 cali71476208869 [fe80::ecee:eeff:feee:eeee%6]:123 Jan 24 00:56:30.335305 kubelet[2444]: E0124 00:56:30.335157 2444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:56:31.335766 kubelet[2444]: E0124 00:56:31.335692 2444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:56:32.336405 kubelet[2444]: E0124 00:56:32.336353 2444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:56:33.337449 kubelet[2444]: E0124 00:56:33.337393 2444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:56:33.486936 containerd[1985]: time="2026-01-24T00:56:33.486639359Z" level=info msg="StopPodSandbox for \"135bb5312a1c76d8a06065fb509fabfda4611fe4e4ecfb664689e35c677f2386\"" Jan 24 00:56:33.570969 containerd[1985]: 2026-01-24 00:56:33.535 [INFO][3570] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="135bb5312a1c76d8a06065fb509fabfda4611fe4e4ecfb664689e35c677f2386" Jan 24 00:56:33.570969 containerd[1985]: 2026-01-24 00:56:33.535 [INFO][3570] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="135bb5312a1c76d8a06065fb509fabfda4611fe4e4ecfb664689e35c677f2386" iface="eth0" netns="/var/run/netns/cni-1be33f78-7ede-31e8-3227-d16f4036feef" Jan 24 00:56:33.570969 containerd[1985]: 2026-01-24 00:56:33.536 [INFO][3570] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="135bb5312a1c76d8a06065fb509fabfda4611fe4e4ecfb664689e35c677f2386" iface="eth0" netns="/var/run/netns/cni-1be33f78-7ede-31e8-3227-d16f4036feef" Jan 24 00:56:33.570969 containerd[1985]: 2026-01-24 00:56:33.536 [INFO][3570] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="135bb5312a1c76d8a06065fb509fabfda4611fe4e4ecfb664689e35c677f2386" iface="eth0" netns="/var/run/netns/cni-1be33f78-7ede-31e8-3227-d16f4036feef" Jan 24 00:56:33.570969 containerd[1985]: 2026-01-24 00:56:33.536 [INFO][3570] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="135bb5312a1c76d8a06065fb509fabfda4611fe4e4ecfb664689e35c677f2386" Jan 24 00:56:33.570969 containerd[1985]: 2026-01-24 00:56:33.536 [INFO][3570] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="135bb5312a1c76d8a06065fb509fabfda4611fe4e4ecfb664689e35c677f2386" Jan 24 00:56:33.570969 containerd[1985]: 2026-01-24 00:56:33.558 [INFO][3577] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="135bb5312a1c76d8a06065fb509fabfda4611fe4e4ecfb664689e35c677f2386" HandleID="k8s-pod-network.135bb5312a1c76d8a06065fb509fabfda4611fe4e4ecfb664689e35c677f2386" Workload="172.31.30.66-k8s-nginx--deployment--bb8f74bfb--p9vgh-eth0" Jan 24 00:56:33.570969 containerd[1985]: 2026-01-24 00:56:33.558 [INFO][3577] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:56:33.570969 containerd[1985]: 2026-01-24 00:56:33.558 [INFO][3577] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:56:33.570969 containerd[1985]: 2026-01-24 00:56:33.565 [WARNING][3577] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="135bb5312a1c76d8a06065fb509fabfda4611fe4e4ecfb664689e35c677f2386" HandleID="k8s-pod-network.135bb5312a1c76d8a06065fb509fabfda4611fe4e4ecfb664689e35c677f2386" Workload="172.31.30.66-k8s-nginx--deployment--bb8f74bfb--p9vgh-eth0" Jan 24 00:56:33.570969 containerd[1985]: 2026-01-24 00:56:33.565 [INFO][3577] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="135bb5312a1c76d8a06065fb509fabfda4611fe4e4ecfb664689e35c677f2386" HandleID="k8s-pod-network.135bb5312a1c76d8a06065fb509fabfda4611fe4e4ecfb664689e35c677f2386" Workload="172.31.30.66-k8s-nginx--deployment--bb8f74bfb--p9vgh-eth0" Jan 24 00:56:33.570969 containerd[1985]: 2026-01-24 00:56:33.567 [INFO][3577] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:56:33.570969 containerd[1985]: 2026-01-24 00:56:33.569 [INFO][3570] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="135bb5312a1c76d8a06065fb509fabfda4611fe4e4ecfb664689e35c677f2386" Jan 24 00:56:33.573236 containerd[1985]: time="2026-01-24T00:56:33.572609612Z" level=info msg="TearDown network for sandbox \"135bb5312a1c76d8a06065fb509fabfda4611fe4e4ecfb664689e35c677f2386\" successfully" Jan 24 00:56:33.573236 containerd[1985]: time="2026-01-24T00:56:33.572644682Z" level=info msg="StopPodSandbox for \"135bb5312a1c76d8a06065fb509fabfda4611fe4e4ecfb664689e35c677f2386\" returns successfully" Jan 24 00:56:33.573478 systemd[1]: run-netns-cni\x2d1be33f78\x2d7ede\x2d31e8\x2d3227\x2dd16f4036feef.mount: Deactivated successfully. 
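The ErrImagePull and ImagePullBackOff records at 00:56:26 above show containerd resolving ghcr.io/flatcar/calico/csi:v3.30.4 and ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4 to NotFound before kubelet backs off. A minimal sketch of reproducing such a pull directly against the containerd Go client is given below; the default socket path and the k8s.io namespace used by kubelet are assumptions, and this is an illustration, not part of the recorded sequence.

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Assumes the default containerd socket on this host.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// kubelet's CRI images live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	ref := "ghcr.io/flatcar/calico/csi:v3.30.4"
	img, err := client.Pull(ctx, ref, containerd.WithPullUnpack)
	if err != nil {
		// For the tag in the log this fails while resolving the reference,
		// mirroring the rpc NotFound that kubelet reports.
		log.Fatalf("pull %s: %v", ref, err)
	}
	fmt.Println("pulled", img.Name())
}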
Jan 24 00:56:33.579398 containerd[1985]: time="2026-01-24T00:56:33.579322412Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-bb8f74bfb-p9vgh,Uid:9fc6456f-b5ce-4052-90a7-2a6ce48af167,Namespace:default,Attempt:1,}" Jan 24 00:56:33.733012 systemd-networkd[1619]: cali25df0ca93a3: Link UP Jan 24 00:56:33.734614 (udev-worker)[3603]: Network interface NamePolicy= disabled on kernel command line. Jan 24 00:56:33.734788 systemd-networkd[1619]: cali25df0ca93a3: Gained carrier Jan 24 00:56:33.748860 containerd[1985]: 2026-01-24 00:56:33.654 [INFO][3584] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.30.66-k8s-nginx--deployment--bb8f74bfb--p9vgh-eth0 nginx-deployment-bb8f74bfb- default 9fc6456f-b5ce-4052-90a7-2a6ce48af167 1345 0 2026-01-24 00:56:17 +0000 UTC map[app:nginx pod-template-hash:bb8f74bfb projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 172.31.30.66 nginx-deployment-bb8f74bfb-p9vgh eth0 default [] [] [kns.default ksa.default.default] cali25df0ca93a3 [] [] }} ContainerID="c12a3cc455d6379d0519df3cc74502b4e1bcd3f21b35d7ff7d33ed4bf8e4c0ad" Namespace="default" Pod="nginx-deployment-bb8f74bfb-p9vgh" WorkloadEndpoint="172.31.30.66-k8s-nginx--deployment--bb8f74bfb--p9vgh-" Jan 24 00:56:33.748860 containerd[1985]: 2026-01-24 00:56:33.654 [INFO][3584] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c12a3cc455d6379d0519df3cc74502b4e1bcd3f21b35d7ff7d33ed4bf8e4c0ad" Namespace="default" Pod="nginx-deployment-bb8f74bfb-p9vgh" WorkloadEndpoint="172.31.30.66-k8s-nginx--deployment--bb8f74bfb--p9vgh-eth0" Jan 24 00:56:33.748860 containerd[1985]: 2026-01-24 00:56:33.684 [INFO][3596] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c12a3cc455d6379d0519df3cc74502b4e1bcd3f21b35d7ff7d33ed4bf8e4c0ad" HandleID="k8s-pod-network.c12a3cc455d6379d0519df3cc74502b4e1bcd3f21b35d7ff7d33ed4bf8e4c0ad" Workload="172.31.30.66-k8s-nginx--deployment--bb8f74bfb--p9vgh-eth0" Jan 24 00:56:33.748860 containerd[1985]: 2026-01-24 00:56:33.684 [INFO][3596] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c12a3cc455d6379d0519df3cc74502b4e1bcd3f21b35d7ff7d33ed4bf8e4c0ad" HandleID="k8s-pod-network.c12a3cc455d6379d0519df3cc74502b4e1bcd3f21b35d7ff7d33ed4bf8e4c0ad" Workload="172.31.30.66-k8s-nginx--deployment--bb8f74bfb--p9vgh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f7b0), Attrs:map[string]string{"namespace":"default", "node":"172.31.30.66", "pod":"nginx-deployment-bb8f74bfb-p9vgh", "timestamp":"2026-01-24 00:56:33.68403811 +0000 UTC"}, Hostname:"172.31.30.66", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:56:33.748860 containerd[1985]: 2026-01-24 00:56:33.684 [INFO][3596] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:56:33.748860 containerd[1985]: 2026-01-24 00:56:33.684 [INFO][3596] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:56:33.748860 containerd[1985]: 2026-01-24 00:56:33.684 [INFO][3596] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.30.66' Jan 24 00:56:33.748860 containerd[1985]: 2026-01-24 00:56:33.693 [INFO][3596] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c12a3cc455d6379d0519df3cc74502b4e1bcd3f21b35d7ff7d33ed4bf8e4c0ad" host="172.31.30.66" Jan 24 00:56:33.748860 containerd[1985]: 2026-01-24 00:56:33.698 [INFO][3596] ipam/ipam.go 394: Looking up existing affinities for host host="172.31.30.66" Jan 24 00:56:33.748860 containerd[1985]: 2026-01-24 00:56:33.705 [INFO][3596] ipam/ipam.go 511: Trying affinity for 192.168.65.128/26 host="172.31.30.66" Jan 24 00:56:33.748860 containerd[1985]: 2026-01-24 00:56:33.707 [INFO][3596] ipam/ipam.go 158: Attempting to load block cidr=192.168.65.128/26 host="172.31.30.66" Jan 24 00:56:33.748860 containerd[1985]: 2026-01-24 00:56:33.710 [INFO][3596] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.65.128/26 host="172.31.30.66" Jan 24 00:56:33.748860 containerd[1985]: 2026-01-24 00:56:33.710 [INFO][3596] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.65.128/26 handle="k8s-pod-network.c12a3cc455d6379d0519df3cc74502b4e1bcd3f21b35d7ff7d33ed4bf8e4c0ad" host="172.31.30.66" Jan 24 00:56:33.748860 containerd[1985]: 2026-01-24 00:56:33.713 [INFO][3596] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c12a3cc455d6379d0519df3cc74502b4e1bcd3f21b35d7ff7d33ed4bf8e4c0ad Jan 24 00:56:33.748860 containerd[1985]: 2026-01-24 00:56:33.721 [INFO][3596] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.65.128/26 handle="k8s-pod-network.c12a3cc455d6379d0519df3cc74502b4e1bcd3f21b35d7ff7d33ed4bf8e4c0ad" host="172.31.30.66" Jan 24 00:56:33.748860 containerd[1985]: 2026-01-24 00:56:33.727 [INFO][3596] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.65.130/26] block=192.168.65.128/26 handle="k8s-pod-network.c12a3cc455d6379d0519df3cc74502b4e1bcd3f21b35d7ff7d33ed4bf8e4c0ad" host="172.31.30.66" Jan 24 00:56:33.748860 containerd[1985]: 2026-01-24 00:56:33.727 [INFO][3596] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.65.130/26] handle="k8s-pod-network.c12a3cc455d6379d0519df3cc74502b4e1bcd3f21b35d7ff7d33ed4bf8e4c0ad" host="172.31.30.66" Jan 24 00:56:33.748860 containerd[1985]: 2026-01-24 00:56:33.727 [INFO][3596] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 24 00:56:33.748860 containerd[1985]: 2026-01-24 00:56:33.727 [INFO][3596] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.65.130/26] IPv6=[] ContainerID="c12a3cc455d6379d0519df3cc74502b4e1bcd3f21b35d7ff7d33ed4bf8e4c0ad" HandleID="k8s-pod-network.c12a3cc455d6379d0519df3cc74502b4e1bcd3f21b35d7ff7d33ed4bf8e4c0ad" Workload="172.31.30.66-k8s-nginx--deployment--bb8f74bfb--p9vgh-eth0" Jan 24 00:56:33.750387 containerd[1985]: 2026-01-24 00:56:33.729 [INFO][3584] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c12a3cc455d6379d0519df3cc74502b4e1bcd3f21b35d7ff7d33ed4bf8e4c0ad" Namespace="default" Pod="nginx-deployment-bb8f74bfb-p9vgh" WorkloadEndpoint="172.31.30.66-k8s-nginx--deployment--bb8f74bfb--p9vgh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.30.66-k8s-nginx--deployment--bb8f74bfb--p9vgh-eth0", GenerateName:"nginx-deployment-bb8f74bfb-", Namespace:"default", SelfLink:"", UID:"9fc6456f-b5ce-4052-90a7-2a6ce48af167", ResourceVersion:"1345", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 56, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"bb8f74bfb", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.30.66", ContainerID:"", Pod:"nginx-deployment-bb8f74bfb-p9vgh", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.65.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali25df0ca93a3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:56:33.750387 containerd[1985]: 2026-01-24 00:56:33.729 [INFO][3584] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.65.130/32] ContainerID="c12a3cc455d6379d0519df3cc74502b4e1bcd3f21b35d7ff7d33ed4bf8e4c0ad" Namespace="default" Pod="nginx-deployment-bb8f74bfb-p9vgh" WorkloadEndpoint="172.31.30.66-k8s-nginx--deployment--bb8f74bfb--p9vgh-eth0" Jan 24 00:56:33.750387 containerd[1985]: 2026-01-24 00:56:33.729 [INFO][3584] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali25df0ca93a3 ContainerID="c12a3cc455d6379d0519df3cc74502b4e1bcd3f21b35d7ff7d33ed4bf8e4c0ad" Namespace="default" Pod="nginx-deployment-bb8f74bfb-p9vgh" WorkloadEndpoint="172.31.30.66-k8s-nginx--deployment--bb8f74bfb--p9vgh-eth0" Jan 24 00:56:33.750387 containerd[1985]: 2026-01-24 00:56:33.734 [INFO][3584] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c12a3cc455d6379d0519df3cc74502b4e1bcd3f21b35d7ff7d33ed4bf8e4c0ad" Namespace="default" Pod="nginx-deployment-bb8f74bfb-p9vgh" WorkloadEndpoint="172.31.30.66-k8s-nginx--deployment--bb8f74bfb--p9vgh-eth0" Jan 24 00:56:33.750387 containerd[1985]: 2026-01-24 00:56:33.736 [INFO][3584] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c12a3cc455d6379d0519df3cc74502b4e1bcd3f21b35d7ff7d33ed4bf8e4c0ad" Namespace="default" Pod="nginx-deployment-bb8f74bfb-p9vgh" 
WorkloadEndpoint="172.31.30.66-k8s-nginx--deployment--bb8f74bfb--p9vgh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.30.66-k8s-nginx--deployment--bb8f74bfb--p9vgh-eth0", GenerateName:"nginx-deployment-bb8f74bfb-", Namespace:"default", SelfLink:"", UID:"9fc6456f-b5ce-4052-90a7-2a6ce48af167", ResourceVersion:"1345", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 56, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"bb8f74bfb", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.30.66", ContainerID:"c12a3cc455d6379d0519df3cc74502b4e1bcd3f21b35d7ff7d33ed4bf8e4c0ad", Pod:"nginx-deployment-bb8f74bfb-p9vgh", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.65.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali25df0ca93a3", MAC:"7e:73:81:51:f4:a9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:56:33.750387 containerd[1985]: 2026-01-24 00:56:33.745 [INFO][3584] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c12a3cc455d6379d0519df3cc74502b4e1bcd3f21b35d7ff7d33ed4bf8e4c0ad" Namespace="default" Pod="nginx-deployment-bb8f74bfb-p9vgh" WorkloadEndpoint="172.31.30.66-k8s-nginx--deployment--bb8f74bfb--p9vgh-eth0" Jan 24 00:56:33.776740 containerd[1985]: time="2026-01-24T00:56:33.776622705Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:56:33.776740 containerd[1985]: time="2026-01-24T00:56:33.776695244Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:56:33.776740 containerd[1985]: time="2026-01-24T00:56:33.776719310Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:56:33.777919 containerd[1985]: time="2026-01-24T00:56:33.777850973Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:56:33.811754 systemd[1]: Started cri-containerd-c12a3cc455d6379d0519df3cc74502b4e1bcd3f21b35d7ff7d33ed4bf8e4c0ad.scope - libcontainer container c12a3cc455d6379d0519df3cc74502b4e1bcd3f21b35d7ff7d33ed4bf8e4c0ad. 
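The WorkloadEndpoint names in the records above (172.31.30.66-k8s-csi--node--driver--2x7tg-eth0, 172.31.30.66-k8s-nginx--deployment--bb8f74bfb--p9vgh-eth0) follow a visible node-k8s-pod-iface pattern in which dashes inside the pod name are doubled. A small helper reproducing that naming, inferred from the log output rather than taken from Calico's source:

package main

import (
	"fmt"
	"strings"
)

// workloadEndpointName rebuilds the endpoint name pattern seen in the log:
// <node>-k8s-<pod with "-" doubled>-<iface>.
func workloadEndpointName(node, pod, iface string) string {
	escapedPod := strings.ReplaceAll(pod, "-", "--")
	return fmt.Sprintf("%s-k8s-%s-%s", node, escapedPod, iface)
}

func main() {
	// Matches "172.31.30.66-k8s-nginx--deployment--bb8f74bfb--p9vgh-eth0" from the records above.
	fmt.Println(workloadEndpointName("172.31.30.66", "nginx-deployment-bb8f74bfb-p9vgh", "eth0"))
}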
Jan 24 00:56:33.854630 containerd[1985]: time="2026-01-24T00:56:33.854561749Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-bb8f74bfb-p9vgh,Uid:9fc6456f-b5ce-4052-90a7-2a6ce48af167,Namespace:default,Attempt:1,} returns sandbox id \"c12a3cc455d6379d0519df3cc74502b4e1bcd3f21b35d7ff7d33ed4bf8e4c0ad\"" Jan 24 00:56:33.856772 containerd[1985]: time="2026-01-24T00:56:33.856740905Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 24 00:56:34.338200 kubelet[2444]: E0124 00:56:34.338159 2444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:56:34.876291 systemd-networkd[1619]: cali25df0ca93a3: Gained IPv6LL Jan 24 00:56:35.339246 kubelet[2444]: E0124 00:56:35.339154 2444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:56:36.339305 kubelet[2444]: E0124 00:56:36.339267 2444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:56:36.643132 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1176584741.mount: Deactivated successfully. Jan 24 00:56:36.990524 ntpd[1962]: Listen normally on 10 cali25df0ca93a3 [fe80::ecee:eeff:feee:eeee%7]:123 Jan 24 00:56:36.990916 ntpd[1962]: 24 Jan 00:56:36 ntpd[1962]: Listen normally on 10 cali25df0ca93a3 [fe80::ecee:eeff:feee:eeee%7]:123 Jan 24 00:56:37.341142 kubelet[2444]: E0124 00:56:37.341024 2444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:56:37.741870 containerd[1985]: time="2026-01-24T00:56:37.741817271Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:56:37.743871 containerd[1985]: time="2026-01-24T00:56:37.743814547Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=63836480" Jan 24 00:56:37.745894 containerd[1985]: time="2026-01-24T00:56:37.745833616Z" level=info msg="ImageCreate event name:\"sha256:94a30ed18e45363486a55379bd1fbb8c479f36524ae816779f3553afe6b787ed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:56:37.750582 containerd[1985]: time="2026-01-24T00:56:37.749583142Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:32c5137cb8c7cf61e75836f150e983b9be21fecc642ada89fd936c8cd6c0faa0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:56:37.750582 containerd[1985]: time="2026-01-24T00:56:37.750440981Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:94a30ed18e45363486a55379bd1fbb8c479f36524ae816779f3553afe6b787ed\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:32c5137cb8c7cf61e75836f150e983b9be21fecc642ada89fd936c8cd6c0faa0\", size \"63836358\" in 3.893666616s" Jan 24 00:56:37.750582 containerd[1985]: time="2026-01-24T00:56:37.750471765Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:94a30ed18e45363486a55379bd1fbb8c479f36524ae816779f3553afe6b787ed\"" Jan 24 00:56:37.757628 containerd[1985]: time="2026-01-24T00:56:37.757577558Z" level=info msg="CreateContainer within sandbox \"c12a3cc455d6379d0519df3cc74502b4e1bcd3f21b35d7ff7d33ed4bf8e4c0ad\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Jan 24 00:56:37.779766 containerd[1985]: 
time="2026-01-24T00:56:37.779714783Z" level=info msg="CreateContainer within sandbox \"c12a3cc455d6379d0519df3cc74502b4e1bcd3f21b35d7ff7d33ed4bf8e4c0ad\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"165cb37a3b2a1726550ae894a7f5343312259f2b867b821c277abe30616cc8dc\"" Jan 24 00:56:37.781611 containerd[1985]: time="2026-01-24T00:56:37.780789375Z" level=info msg="StartContainer for \"165cb37a3b2a1726550ae894a7f5343312259f2b867b821c277abe30616cc8dc\"" Jan 24 00:56:37.813781 systemd[1]: Started cri-containerd-165cb37a3b2a1726550ae894a7f5343312259f2b867b821c277abe30616cc8dc.scope - libcontainer container 165cb37a3b2a1726550ae894a7f5343312259f2b867b821c277abe30616cc8dc. Jan 24 00:56:37.843336 containerd[1985]: time="2026-01-24T00:56:37.843210724Z" level=info msg="StartContainer for \"165cb37a3b2a1726550ae894a7f5343312259f2b867b821c277abe30616cc8dc\" returns successfully" Jan 24 00:56:38.341938 kubelet[2444]: E0124 00:56:38.341879 2444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:56:38.625698 kubelet[2444]: I0124 00:56:38.625543 2444 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-bb8f74bfb-p9vgh" podStartSLOduration=17.729496749 podStartE2EDuration="21.625502396s" podCreationTimestamp="2026-01-24 00:56:17 +0000 UTC" firstStartedPulling="2026-01-24 00:56:33.855858871 +0000 UTC m=+34.997852486" lastFinishedPulling="2026-01-24 00:56:37.75186452 +0000 UTC m=+38.893858133" observedRunningTime="2026-01-24 00:56:38.625363832 +0000 UTC m=+39.767357468" watchObservedRunningTime="2026-01-24 00:56:38.625502396 +0000 UTC m=+39.767496029" Jan 24 00:56:39.314269 kubelet[2444]: E0124 00:56:39.314210 2444 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:56:39.342900 kubelet[2444]: E0124 00:56:39.342829 2444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:56:40.343957 kubelet[2444]: E0124 00:56:40.343861 2444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:56:40.486628 containerd[1985]: time="2026-01-24T00:56:40.486454575Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 24 00:56:40.733827 containerd[1985]: time="2026-01-24T00:56:40.733776542Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:56:40.735976 containerd[1985]: time="2026-01-24T00:56:40.735919975Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 24 00:56:40.736653 containerd[1985]: time="2026-01-24T00:56:40.736006376Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 24 00:56:40.736728 kubelet[2444]: E0124 00:56:40.736283 2444 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:56:40.736728 kubelet[2444]: E0124 00:56:40.736332 
2444 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:56:40.736728 kubelet[2444]: E0124 00:56:40.736424 2444 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-2x7tg_calico-system(46fbf850-4138-4783-94c3-ae492c179748): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 24 00:56:40.737581 containerd[1985]: time="2026-01-24T00:56:40.737534569Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 24 00:56:41.020645 containerd[1985]: time="2026-01-24T00:56:41.020457162Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:56:41.022655 containerd[1985]: time="2026-01-24T00:56:41.022590935Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 24 00:56:41.022796 containerd[1985]: time="2026-01-24T00:56:41.022690388Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 24 00:56:41.022921 kubelet[2444]: E0124 00:56:41.022874 2444 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:56:41.023003 kubelet[2444]: E0124 00:56:41.022924 2444 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:56:41.023059 kubelet[2444]: E0124 00:56:41.023014 2444 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-2x7tg_calico-system(46fbf850-4138-4783-94c3-ae492c179748): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 24 00:56:41.023157 kubelet[2444]: E0124 00:56:41.023073 2444 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-2x7tg" podUID="46fbf850-4138-4783-94c3-ae492c179748" Jan 24 00:56:41.344533 kubelet[2444]: E0124 00:56:41.344376 2444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:56:42.345014 kubelet[2444]: E0124 00:56:42.344932 2444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:56:43.345820 kubelet[2444]: E0124 00:56:43.345735 2444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:56:44.346825 kubelet[2444]: E0124 00:56:44.346779 2444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:56:45.347985 kubelet[2444]: E0124 00:56:45.347946 2444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:56:45.913732 systemd[1]: Created slice kubepods-besteffort-pod6cb6dbe4_423e_4a89_905d_7c25e711e178.slice - libcontainer container kubepods-besteffort-pod6cb6dbe4_423e_4a89_905d_7c25e711e178.slice. Jan 24 00:56:45.922240 kubelet[2444]: I0124 00:56:45.922107 2444 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fhs6\" (UniqueName: \"kubernetes.io/projected/6cb6dbe4-423e-4a89-905d-7c25e711e178-kube-api-access-5fhs6\") pod \"nfs-server-provisioner-0\" (UID: \"6cb6dbe4-423e-4a89-905d-7c25e711e178\") " pod="default/nfs-server-provisioner-0" Jan 24 00:56:45.922240 kubelet[2444]: I0124 00:56:45.922154 2444 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/6cb6dbe4-423e-4a89-905d-7c25e711e178-data\") pod \"nfs-server-provisioner-0\" (UID: \"6cb6dbe4-423e-4a89-905d-7c25e711e178\") " pod="default/nfs-server-provisioner-0" Jan 24 00:56:46.222152 containerd[1985]: time="2026-01-24T00:56:46.222108366Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:6cb6dbe4-423e-4a89-905d-7c25e711e178,Namespace:default,Attempt:0,}" Jan 24 00:56:46.348766 kubelet[2444]: E0124 00:56:46.348718 2444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:56:46.382147 systemd-networkd[1619]: cali60e51b789ff: Link UP Jan 24 00:56:46.383631 systemd-networkd[1619]: cali60e51b789ff: Gained carrier Jan 24 00:56:46.384080 (udev-worker)[3775]: Network interface NamePolicy= disabled on kernel command line. 
Jan 24 00:56:46.399206 containerd[1985]: 2026-01-24 00:56:46.289 [INFO][3756] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.30.66-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default 6cb6dbe4-423e-4a89-905d-7c25e711e178 1424 0 2026-01-24 00:56:45 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-7c9b4c458c heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 172.31.30.66 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] [] }} ContainerID="d4342b9f5ca8794f9107e7e37e2968b99b2f0b2657c961454b67f40ad9d20e38" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.30.66-k8s-nfs--server--provisioner--0-" Jan 24 00:56:46.399206 containerd[1985]: 2026-01-24 00:56:46.290 [INFO][3756] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d4342b9f5ca8794f9107e7e37e2968b99b2f0b2657c961454b67f40ad9d20e38" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.30.66-k8s-nfs--server--provisioner--0-eth0" Jan 24 00:56:46.399206 containerd[1985]: 2026-01-24 00:56:46.323 [INFO][3767] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d4342b9f5ca8794f9107e7e37e2968b99b2f0b2657c961454b67f40ad9d20e38" HandleID="k8s-pod-network.d4342b9f5ca8794f9107e7e37e2968b99b2f0b2657c961454b67f40ad9d20e38" Workload="172.31.30.66-k8s-nfs--server--provisioner--0-eth0" Jan 24 00:56:46.399206 containerd[1985]: 2026-01-24 00:56:46.323 [INFO][3767] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d4342b9f5ca8794f9107e7e37e2968b99b2f0b2657c961454b67f40ad9d20e38" HandleID="k8s-pod-network.d4342b9f5ca8794f9107e7e37e2968b99b2f0b2657c961454b67f40ad9d20e38" Workload="172.31.30.66-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024efe0), Attrs:map[string]string{"namespace":"default", "node":"172.31.30.66", "pod":"nfs-server-provisioner-0", "timestamp":"2026-01-24 00:56:46.323591671 +0000 UTC"}, Hostname:"172.31.30.66", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:56:46.399206 containerd[1985]: 2026-01-24 00:56:46.323 [INFO][3767] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:56:46.399206 containerd[1985]: 2026-01-24 00:56:46.323 [INFO][3767] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:56:46.399206 containerd[1985]: 2026-01-24 00:56:46.323 [INFO][3767] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.30.66' Jan 24 00:56:46.399206 containerd[1985]: 2026-01-24 00:56:46.332 [INFO][3767] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d4342b9f5ca8794f9107e7e37e2968b99b2f0b2657c961454b67f40ad9d20e38" host="172.31.30.66" Jan 24 00:56:46.399206 containerd[1985]: 2026-01-24 00:56:46.338 [INFO][3767] ipam/ipam.go 394: Looking up existing affinities for host host="172.31.30.66" Jan 24 00:56:46.399206 containerd[1985]: 2026-01-24 00:56:46.347 [INFO][3767] ipam/ipam.go 511: Trying affinity for 192.168.65.128/26 host="172.31.30.66" Jan 24 00:56:46.399206 containerd[1985]: 2026-01-24 00:56:46.350 [INFO][3767] ipam/ipam.go 158: Attempting to load block cidr=192.168.65.128/26 host="172.31.30.66" Jan 24 00:56:46.399206 containerd[1985]: 2026-01-24 00:56:46.353 [INFO][3767] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.65.128/26 host="172.31.30.66" Jan 24 00:56:46.399206 containerd[1985]: 2026-01-24 00:56:46.353 [INFO][3767] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.65.128/26 handle="k8s-pod-network.d4342b9f5ca8794f9107e7e37e2968b99b2f0b2657c961454b67f40ad9d20e38" host="172.31.30.66" Jan 24 00:56:46.399206 containerd[1985]: 2026-01-24 00:56:46.356 [INFO][3767] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d4342b9f5ca8794f9107e7e37e2968b99b2f0b2657c961454b67f40ad9d20e38 Jan 24 00:56:46.399206 containerd[1985]: 2026-01-24 00:56:46.366 [INFO][3767] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.65.128/26 handle="k8s-pod-network.d4342b9f5ca8794f9107e7e37e2968b99b2f0b2657c961454b67f40ad9d20e38" host="172.31.30.66" Jan 24 00:56:46.399206 containerd[1985]: 2026-01-24 00:56:46.374 [INFO][3767] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.65.131/26] block=192.168.65.128/26 handle="k8s-pod-network.d4342b9f5ca8794f9107e7e37e2968b99b2f0b2657c961454b67f40ad9d20e38" host="172.31.30.66" Jan 24 00:56:46.399206 containerd[1985]: 2026-01-24 00:56:46.374 [INFO][3767] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.65.131/26] handle="k8s-pod-network.d4342b9f5ca8794f9107e7e37e2968b99b2f0b2657c961454b67f40ad9d20e38" host="172.31.30.66" Jan 24 00:56:46.399206 containerd[1985]: 2026-01-24 00:56:46.374 [INFO][3767] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 24 00:56:46.399206 containerd[1985]: 2026-01-24 00:56:46.374 [INFO][3767] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.65.131/26] IPv6=[] ContainerID="d4342b9f5ca8794f9107e7e37e2968b99b2f0b2657c961454b67f40ad9d20e38" HandleID="k8s-pod-network.d4342b9f5ca8794f9107e7e37e2968b99b2f0b2657c961454b67f40ad9d20e38" Workload="172.31.30.66-k8s-nfs--server--provisioner--0-eth0" Jan 24 00:56:46.399900 containerd[1985]: 2026-01-24 00:56:46.376 [INFO][3756] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d4342b9f5ca8794f9107e7e37e2968b99b2f0b2657c961454b67f40ad9d20e38" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.30.66-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.30.66-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"6cb6dbe4-423e-4a89-905d-7c25e711e178", ResourceVersion:"1424", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 56, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-7c9b4c458c", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.30.66", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.65.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, 
HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:56:46.399900 containerd[1985]: 2026-01-24 00:56:46.376 [INFO][3756] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.65.131/32] ContainerID="d4342b9f5ca8794f9107e7e37e2968b99b2f0b2657c961454b67f40ad9d20e38" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.30.66-k8s-nfs--server--provisioner--0-eth0" Jan 24 00:56:46.399900 containerd[1985]: 2026-01-24 00:56:46.377 [INFO][3756] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="d4342b9f5ca8794f9107e7e37e2968b99b2f0b2657c961454b67f40ad9d20e38" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.30.66-k8s-nfs--server--provisioner--0-eth0" Jan 24 00:56:46.399900 containerd[1985]: 2026-01-24 00:56:46.385 [INFO][3756] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d4342b9f5ca8794f9107e7e37e2968b99b2f0b2657c961454b67f40ad9d20e38" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.30.66-k8s-nfs--server--provisioner--0-eth0" Jan 24 00:56:46.400065 containerd[1985]: 2026-01-24 00:56:46.385 [INFO][3756] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d4342b9f5ca8794f9107e7e37e2968b99b2f0b2657c961454b67f40ad9d20e38" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.30.66-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.30.66-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"6cb6dbe4-423e-4a89-905d-7c25e711e178", ResourceVersion:"1424", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 56, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-7c9b4c458c", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.30.66", ContainerID:"d4342b9f5ca8794f9107e7e37e2968b99b2f0b2657c961454b67f40ad9d20e38", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.65.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"5a:f9:51:16:88:bf", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:56:46.400065 containerd[1985]: 2026-01-24 00:56:46.397 [INFO][3756] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d4342b9f5ca8794f9107e7e37e2968b99b2f0b2657c961454b67f40ad9d20e38" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.30.66-k8s-nfs--server--provisioner--0-eth0" Jan 24 00:56:46.431277 containerd[1985]: time="2026-01-24T00:56:46.431162415Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:56:46.431277 containerd[1985]: time="2026-01-24T00:56:46.431218579Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:56:46.431277 containerd[1985]: time="2026-01-24T00:56:46.431233081Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:56:46.433288 containerd[1985]: time="2026-01-24T00:56:46.431333924Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:56:46.462739 systemd[1]: Started cri-containerd-d4342b9f5ca8794f9107e7e37e2968b99b2f0b2657c961454b67f40ad9d20e38.scope - libcontainer container d4342b9f5ca8794f9107e7e37e2968b99b2f0b2657c961454b67f40ad9d20e38. 
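In the WorkloadEndpoint dump above, the container ports declared by the nfs-server-provisioner chart are printed as Go hex literals, once per protocol (TCP and UDP). Decoded, they are the usual port set for an in-cluster NFS server. The short program below only converts the values; the decimal numbers in the comments are the single thing added here.

    // Decode the hex Port values from the WorkloadEndpoint dump above.
    package main

    import "fmt"

    func main() {
        ports := []struct {
            name string
            port uint16
        }{
            {"nfs", 0x801},       // 2049
            {"nlockmgr", 0x8023}, // 32803
            {"mountd", 0x4e50},   // 20048
            {"rquotad", 0x36b},   // 875
            {"rpcbind", 0x6f},    // 111
            {"statd", 0x296},     // 662
        }
        for _, p := range ports {
            fmt.Printf("%-9s %#06x = %d (tcp and udp)\n", p.name, p.port, p.port)
        }
    }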
Jan 24 00:56:46.509502 containerd[1985]: time="2026-01-24T00:56:46.509408456Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:6cb6dbe4-423e-4a89-905d-7c25e711e178,Namespace:default,Attempt:0,} returns sandbox id \"d4342b9f5ca8794f9107e7e37e2968b99b2f0b2657c961454b67f40ad9d20e38\"" Jan 24 00:56:46.511839 containerd[1985]: time="2026-01-24T00:56:46.511616827Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Jan 24 00:56:47.350249 kubelet[2444]: E0124 00:56:47.349833 2444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:56:48.350524 kubelet[2444]: E0124 00:56:48.350464 2444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:56:48.443858 systemd-networkd[1619]: cali60e51b789ff: Gained IPv6LL Jan 24 00:56:49.295297 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3777580884.mount: Deactivated successfully. Jan 24 00:56:49.351613 kubelet[2444]: E0124 00:56:49.351570 2444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:56:50.352907 kubelet[2444]: E0124 00:56:50.352664 2444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:56:50.990486 ntpd[1962]: Listen normally on 11 cali60e51b789ff [fe80::ecee:eeff:feee:eeee%8]:123 Jan 24 00:56:50.991102 ntpd[1962]: 24 Jan 00:56:50 ntpd[1962]: Listen normally on 11 cali60e51b789ff [fe80::ecee:eeff:feee:eeee%8]:123 Jan 24 00:56:51.353907 kubelet[2444]: E0124 00:56:51.353583 2444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:56:51.583130 containerd[1985]: time="2026-01-24T00:56:51.583075340Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:56:51.585336 containerd[1985]: time="2026-01-24T00:56:51.585282887Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039406" Jan 24 00:56:51.588529 containerd[1985]: time="2026-01-24T00:56:51.587159366Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:56:51.590983 containerd[1985]: time="2026-01-24T00:56:51.590940524Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:56:51.592107 containerd[1985]: time="2026-01-24T00:56:51.592063805Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 5.080411357s" Jan 24 00:56:51.592313 containerd[1985]: time="2026-01-24T00:56:51.592113954Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference 
\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Jan 24 00:56:51.598464 containerd[1985]: time="2026-01-24T00:56:51.598426452Z" level=info msg="CreateContainer within sandbox \"d4342b9f5ca8794f9107e7e37e2968b99b2f0b2657c961454b67f40ad9d20e38\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Jan 24 00:56:51.626147 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4098274858.mount: Deactivated successfully. Jan 24 00:56:51.632980 containerd[1985]: time="2026-01-24T00:56:51.632930908Z" level=info msg="CreateContainer within sandbox \"d4342b9f5ca8794f9107e7e37e2968b99b2f0b2657c961454b67f40ad9d20e38\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"8e579fde75e7c07add1fd268eaab7e069c60dceec07cf04a328f7decf0611a52\"" Jan 24 00:56:51.633737 containerd[1985]: time="2026-01-24T00:56:51.633692377Z" level=info msg="StartContainer for \"8e579fde75e7c07add1fd268eaab7e069c60dceec07cf04a328f7decf0611a52\"" Jan 24 00:56:51.688747 systemd[1]: Started cri-containerd-8e579fde75e7c07add1fd268eaab7e069c60dceec07cf04a328f7decf0611a52.scope - libcontainer container 8e579fde75e7c07add1fd268eaab7e069c60dceec07cf04a328f7decf0611a52. Jan 24 00:56:51.719857 containerd[1985]: time="2026-01-24T00:56:51.719800381Z" level=info msg="StartContainer for \"8e579fde75e7c07add1fd268eaab7e069c60dceec07cf04a328f7decf0611a52\" returns successfully" Jan 24 00:56:52.355006 kubelet[2444]: E0124 00:56:52.354953 2444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:56:52.487100 kubelet[2444]: E0124 00:56:52.487054 2444 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-2x7tg" podUID="46fbf850-4138-4783-94c3-ae492c179748" Jan 24 00:56:52.669052 kubelet[2444]: I0124 00:56:52.668891 2444 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=2.586658016 podStartE2EDuration="7.668874177s" podCreationTimestamp="2026-01-24 00:56:45 +0000 UTC" firstStartedPulling="2026-01-24 00:56:46.511151178 +0000 UTC m=+47.653144805" lastFinishedPulling="2026-01-24 00:56:51.59336735 +0000 UTC m=+52.735360966" observedRunningTime="2026-01-24 00:56:52.668666423 +0000 UTC m=+53.810660058" watchObservedRunningTime="2026-01-24 00:56:52.668874177 +0000 UTC m=+53.810867811" Jan 24 00:56:53.355929 kubelet[2444]: E0124 00:56:53.355864 2444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:56:54.357103 kubelet[2444]: E0124 00:56:54.357051 2444 file_linux.go:61] 
"Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:56:55.358044 kubelet[2444]: E0124 00:56:55.357981 2444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:56:56.358531 kubelet[2444]: E0124 00:56:56.358472 2444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:56:57.358908 kubelet[2444]: E0124 00:56:57.358856 2444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:56:58.359829 kubelet[2444]: E0124 00:56:58.359777 2444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:56:59.314579 kubelet[2444]: E0124 00:56:59.314500 2444 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:56:59.341721 containerd[1985]: time="2026-01-24T00:56:59.341673092Z" level=info msg="StopPodSandbox for \"b002355451aefb59672237a3fa0b0ac21a03a7fdbe9c92bd18a2bdf2992872d9\"" Jan 24 00:56:59.360678 kubelet[2444]: E0124 00:56:59.360499 2444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:56:59.453902 containerd[1985]: 2026-01-24 00:56:59.393 [WARNING][3948] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b002355451aefb59672237a3fa0b0ac21a03a7fdbe9c92bd18a2bdf2992872d9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.30.66-k8s-csi--node--driver--2x7tg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"46fbf850-4138-4783-94c3-ae492c179748", ResourceVersion:"1461", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 55, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.30.66", ContainerID:"b193c6a456057a548b7fac1cb9ca09bdeef63d8884e1fd95ffad8a4d9f18a14a", Pod:"csi-node-driver-2x7tg", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.65.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali71476208869", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:56:59.453902 containerd[1985]: 2026-01-24 00:56:59.393 [INFO][3948] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b002355451aefb59672237a3fa0b0ac21a03a7fdbe9c92bd18a2bdf2992872d9" Jan 24 00:56:59.453902 containerd[1985]: 2026-01-24 00:56:59.393 [INFO][3948] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="b002355451aefb59672237a3fa0b0ac21a03a7fdbe9c92bd18a2bdf2992872d9" iface="eth0" netns="" Jan 24 00:56:59.453902 containerd[1985]: 2026-01-24 00:56:59.393 [INFO][3948] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b002355451aefb59672237a3fa0b0ac21a03a7fdbe9c92bd18a2bdf2992872d9" Jan 24 00:56:59.453902 containerd[1985]: 2026-01-24 00:56:59.393 [INFO][3948] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b002355451aefb59672237a3fa0b0ac21a03a7fdbe9c92bd18a2bdf2992872d9" Jan 24 00:56:59.453902 containerd[1985]: 2026-01-24 00:56:59.439 [INFO][3955] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b002355451aefb59672237a3fa0b0ac21a03a7fdbe9c92bd18a2bdf2992872d9" HandleID="k8s-pod-network.b002355451aefb59672237a3fa0b0ac21a03a7fdbe9c92bd18a2bdf2992872d9" Workload="172.31.30.66-k8s-csi--node--driver--2x7tg-eth0" Jan 24 00:56:59.453902 containerd[1985]: 2026-01-24 00:56:59.439 [INFO][3955] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:56:59.453902 containerd[1985]: 2026-01-24 00:56:59.439 [INFO][3955] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:56:59.453902 containerd[1985]: 2026-01-24 00:56:59.448 [WARNING][3955] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="b002355451aefb59672237a3fa0b0ac21a03a7fdbe9c92bd18a2bdf2992872d9" HandleID="k8s-pod-network.b002355451aefb59672237a3fa0b0ac21a03a7fdbe9c92bd18a2bdf2992872d9" Workload="172.31.30.66-k8s-csi--node--driver--2x7tg-eth0" Jan 24 00:56:59.453902 containerd[1985]: 2026-01-24 00:56:59.448 [INFO][3955] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b002355451aefb59672237a3fa0b0ac21a03a7fdbe9c92bd18a2bdf2992872d9" HandleID="k8s-pod-network.b002355451aefb59672237a3fa0b0ac21a03a7fdbe9c92bd18a2bdf2992872d9" Workload="172.31.30.66-k8s-csi--node--driver--2x7tg-eth0" Jan 24 00:56:59.453902 containerd[1985]: 2026-01-24 00:56:59.450 [INFO][3955] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:56:59.453902 containerd[1985]: 2026-01-24 00:56:59.452 [INFO][3948] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b002355451aefb59672237a3fa0b0ac21a03a7fdbe9c92bd18a2bdf2992872d9" Jan 24 00:56:59.454410 containerd[1985]: time="2026-01-24T00:56:59.453951943Z" level=info msg="TearDown network for sandbox \"b002355451aefb59672237a3fa0b0ac21a03a7fdbe9c92bd18a2bdf2992872d9\" successfully" Jan 24 00:56:59.454410 containerd[1985]: time="2026-01-24T00:56:59.453976486Z" level=info msg="StopPodSandbox for \"b002355451aefb59672237a3fa0b0ac21a03a7fdbe9c92bd18a2bdf2992872d9\" returns successfully" Jan 24 00:56:59.461740 containerd[1985]: time="2026-01-24T00:56:59.461679688Z" level=info msg="RemovePodSandbox for \"b002355451aefb59672237a3fa0b0ac21a03a7fdbe9c92bd18a2bdf2992872d9\"" Jan 24 00:56:59.461740 containerd[1985]: time="2026-01-24T00:56:59.461715790Z" level=info msg="Forcibly stopping sandbox \"b002355451aefb59672237a3fa0b0ac21a03a7fdbe9c92bd18a2bdf2992872d9\"" Jan 24 00:56:59.559735 containerd[1985]: 2026-01-24 00:56:59.514 [WARNING][3969] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b002355451aefb59672237a3fa0b0ac21a03a7fdbe9c92bd18a2bdf2992872d9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.30.66-k8s-csi--node--driver--2x7tg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"46fbf850-4138-4783-94c3-ae492c179748", ResourceVersion:"1461", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 55, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.30.66", ContainerID:"b193c6a456057a548b7fac1cb9ca09bdeef63d8884e1fd95ffad8a4d9f18a14a", Pod:"csi-node-driver-2x7tg", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.65.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali71476208869", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:56:59.559735 containerd[1985]: 2026-01-24 00:56:59.514 [INFO][3969] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b002355451aefb59672237a3fa0b0ac21a03a7fdbe9c92bd18a2bdf2992872d9" Jan 24 00:56:59.559735 containerd[1985]: 2026-01-24 00:56:59.514 [INFO][3969] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b002355451aefb59672237a3fa0b0ac21a03a7fdbe9c92bd18a2bdf2992872d9" iface="eth0" netns="" Jan 24 00:56:59.559735 containerd[1985]: 2026-01-24 00:56:59.514 [INFO][3969] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b002355451aefb59672237a3fa0b0ac21a03a7fdbe9c92bd18a2bdf2992872d9" Jan 24 00:56:59.559735 containerd[1985]: 2026-01-24 00:56:59.514 [INFO][3969] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b002355451aefb59672237a3fa0b0ac21a03a7fdbe9c92bd18a2bdf2992872d9" Jan 24 00:56:59.559735 containerd[1985]: 2026-01-24 00:56:59.544 [INFO][3980] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b002355451aefb59672237a3fa0b0ac21a03a7fdbe9c92bd18a2bdf2992872d9" HandleID="k8s-pod-network.b002355451aefb59672237a3fa0b0ac21a03a7fdbe9c92bd18a2bdf2992872d9" Workload="172.31.30.66-k8s-csi--node--driver--2x7tg-eth0" Jan 24 00:56:59.559735 containerd[1985]: 2026-01-24 00:56:59.544 [INFO][3980] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:56:59.559735 containerd[1985]: 2026-01-24 00:56:59.544 [INFO][3980] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:56:59.559735 containerd[1985]: 2026-01-24 00:56:59.553 [WARNING][3980] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b002355451aefb59672237a3fa0b0ac21a03a7fdbe9c92bd18a2bdf2992872d9" HandleID="k8s-pod-network.b002355451aefb59672237a3fa0b0ac21a03a7fdbe9c92bd18a2bdf2992872d9" Workload="172.31.30.66-k8s-csi--node--driver--2x7tg-eth0" Jan 24 00:56:59.559735 containerd[1985]: 2026-01-24 00:56:59.553 [INFO][3980] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b002355451aefb59672237a3fa0b0ac21a03a7fdbe9c92bd18a2bdf2992872d9" HandleID="k8s-pod-network.b002355451aefb59672237a3fa0b0ac21a03a7fdbe9c92bd18a2bdf2992872d9" Workload="172.31.30.66-k8s-csi--node--driver--2x7tg-eth0" Jan 24 00:56:59.559735 containerd[1985]: 2026-01-24 00:56:59.557 [INFO][3980] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:56:59.559735 containerd[1985]: 2026-01-24 00:56:59.558 [INFO][3969] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b002355451aefb59672237a3fa0b0ac21a03a7fdbe9c92bd18a2bdf2992872d9" Jan 24 00:56:59.559735 containerd[1985]: time="2026-01-24T00:56:59.559729582Z" level=info msg="TearDown network for sandbox \"b002355451aefb59672237a3fa0b0ac21a03a7fdbe9c92bd18a2bdf2992872d9\" successfully" Jan 24 00:56:59.592589 containerd[1985]: time="2026-01-24T00:56:59.592431773Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b002355451aefb59672237a3fa0b0ac21a03a7fdbe9c92bd18a2bdf2992872d9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 00:56:59.592589 containerd[1985]: time="2026-01-24T00:56:59.592531199Z" level=info msg="RemovePodSandbox \"b002355451aefb59672237a3fa0b0ac21a03a7fdbe9c92bd18a2bdf2992872d9\" returns successfully" Jan 24 00:56:59.593955 containerd[1985]: time="2026-01-24T00:56:59.593913196Z" level=info msg="StopPodSandbox for \"135bb5312a1c76d8a06065fb509fabfda4611fe4e4ecfb664689e35c677f2386\"" Jan 24 00:56:59.694182 containerd[1985]: 2026-01-24 00:56:59.645 [WARNING][3994] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="135bb5312a1c76d8a06065fb509fabfda4611fe4e4ecfb664689e35c677f2386" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.30.66-k8s-nginx--deployment--bb8f74bfb--p9vgh-eth0", GenerateName:"nginx-deployment-bb8f74bfb-", Namespace:"default", SelfLink:"", UID:"9fc6456f-b5ce-4052-90a7-2a6ce48af167", ResourceVersion:"1370", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 56, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"bb8f74bfb", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.30.66", ContainerID:"c12a3cc455d6379d0519df3cc74502b4e1bcd3f21b35d7ff7d33ed4bf8e4c0ad", Pod:"nginx-deployment-bb8f74bfb-p9vgh", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.65.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali25df0ca93a3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:56:59.694182 containerd[1985]: 2026-01-24 00:56:59.645 [INFO][3994] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="135bb5312a1c76d8a06065fb509fabfda4611fe4e4ecfb664689e35c677f2386" Jan 24 00:56:59.694182 containerd[1985]: 2026-01-24 00:56:59.645 [INFO][3994] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="135bb5312a1c76d8a06065fb509fabfda4611fe4e4ecfb664689e35c677f2386" iface="eth0" netns="" Jan 24 00:56:59.694182 containerd[1985]: 2026-01-24 00:56:59.645 [INFO][3994] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="135bb5312a1c76d8a06065fb509fabfda4611fe4e4ecfb664689e35c677f2386" Jan 24 00:56:59.694182 containerd[1985]: 2026-01-24 00:56:59.645 [INFO][3994] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="135bb5312a1c76d8a06065fb509fabfda4611fe4e4ecfb664689e35c677f2386" Jan 24 00:56:59.694182 containerd[1985]: 2026-01-24 00:56:59.673 [INFO][4001] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="135bb5312a1c76d8a06065fb509fabfda4611fe4e4ecfb664689e35c677f2386" HandleID="k8s-pod-network.135bb5312a1c76d8a06065fb509fabfda4611fe4e4ecfb664689e35c677f2386" Workload="172.31.30.66-k8s-nginx--deployment--bb8f74bfb--p9vgh-eth0" Jan 24 00:56:59.694182 containerd[1985]: 2026-01-24 00:56:59.673 [INFO][4001] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:56:59.694182 containerd[1985]: 2026-01-24 00:56:59.673 [INFO][4001] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:56:59.694182 containerd[1985]: 2026-01-24 00:56:59.686 [WARNING][4001] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="135bb5312a1c76d8a06065fb509fabfda4611fe4e4ecfb664689e35c677f2386" HandleID="k8s-pod-network.135bb5312a1c76d8a06065fb509fabfda4611fe4e4ecfb664689e35c677f2386" Workload="172.31.30.66-k8s-nginx--deployment--bb8f74bfb--p9vgh-eth0" Jan 24 00:56:59.694182 containerd[1985]: 2026-01-24 00:56:59.686 [INFO][4001] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="135bb5312a1c76d8a06065fb509fabfda4611fe4e4ecfb664689e35c677f2386" HandleID="k8s-pod-network.135bb5312a1c76d8a06065fb509fabfda4611fe4e4ecfb664689e35c677f2386" Workload="172.31.30.66-k8s-nginx--deployment--bb8f74bfb--p9vgh-eth0" Jan 24 00:56:59.694182 containerd[1985]: 2026-01-24 00:56:59.691 [INFO][4001] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:56:59.694182 containerd[1985]: 2026-01-24 00:56:59.692 [INFO][3994] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="135bb5312a1c76d8a06065fb509fabfda4611fe4e4ecfb664689e35c677f2386" Jan 24 00:56:59.694992 containerd[1985]: time="2026-01-24T00:56:59.694221084Z" level=info msg="TearDown network for sandbox \"135bb5312a1c76d8a06065fb509fabfda4611fe4e4ecfb664689e35c677f2386\" successfully" Jan 24 00:56:59.694992 containerd[1985]: time="2026-01-24T00:56:59.694250291Z" level=info msg="StopPodSandbox for \"135bb5312a1c76d8a06065fb509fabfda4611fe4e4ecfb664689e35c677f2386\" returns successfully" Jan 24 00:56:59.695347 containerd[1985]: time="2026-01-24T00:56:59.695305806Z" level=info msg="RemovePodSandbox for \"135bb5312a1c76d8a06065fb509fabfda4611fe4e4ecfb664689e35c677f2386\"" Jan 24 00:56:59.695347 containerd[1985]: time="2026-01-24T00:56:59.695343816Z" level=info msg="Forcibly stopping sandbox \"135bb5312a1c76d8a06065fb509fabfda4611fe4e4ecfb664689e35c677f2386\"" Jan 24 00:56:59.783262 containerd[1985]: 2026-01-24 00:56:59.746 [WARNING][4015] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="135bb5312a1c76d8a06065fb509fabfda4611fe4e4ecfb664689e35c677f2386" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.30.66-k8s-nginx--deployment--bb8f74bfb--p9vgh-eth0", GenerateName:"nginx-deployment-bb8f74bfb-", Namespace:"default", SelfLink:"", UID:"9fc6456f-b5ce-4052-90a7-2a6ce48af167", ResourceVersion:"1370", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 56, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"bb8f74bfb", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.30.66", ContainerID:"c12a3cc455d6379d0519df3cc74502b4e1bcd3f21b35d7ff7d33ed4bf8e4c0ad", Pod:"nginx-deployment-bb8f74bfb-p9vgh", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.65.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali25df0ca93a3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:56:59.783262 containerd[1985]: 2026-01-24 00:56:59.747 [INFO][4015] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="135bb5312a1c76d8a06065fb509fabfda4611fe4e4ecfb664689e35c677f2386" Jan 24 00:56:59.783262 containerd[1985]: 2026-01-24 00:56:59.747 [INFO][4015] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="135bb5312a1c76d8a06065fb509fabfda4611fe4e4ecfb664689e35c677f2386" iface="eth0" netns="" Jan 24 00:56:59.783262 containerd[1985]: 2026-01-24 00:56:59.747 [INFO][4015] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="135bb5312a1c76d8a06065fb509fabfda4611fe4e4ecfb664689e35c677f2386" Jan 24 00:56:59.783262 containerd[1985]: 2026-01-24 00:56:59.747 [INFO][4015] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="135bb5312a1c76d8a06065fb509fabfda4611fe4e4ecfb664689e35c677f2386" Jan 24 00:56:59.783262 containerd[1985]: 2026-01-24 00:56:59.770 [INFO][4023] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="135bb5312a1c76d8a06065fb509fabfda4611fe4e4ecfb664689e35c677f2386" HandleID="k8s-pod-network.135bb5312a1c76d8a06065fb509fabfda4611fe4e4ecfb664689e35c677f2386" Workload="172.31.30.66-k8s-nginx--deployment--bb8f74bfb--p9vgh-eth0" Jan 24 00:56:59.783262 containerd[1985]: 2026-01-24 00:56:59.770 [INFO][4023] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:56:59.783262 containerd[1985]: 2026-01-24 00:56:59.770 [INFO][4023] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:56:59.783262 containerd[1985]: 2026-01-24 00:56:59.778 [WARNING][4023] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="135bb5312a1c76d8a06065fb509fabfda4611fe4e4ecfb664689e35c677f2386" HandleID="k8s-pod-network.135bb5312a1c76d8a06065fb509fabfda4611fe4e4ecfb664689e35c677f2386" Workload="172.31.30.66-k8s-nginx--deployment--bb8f74bfb--p9vgh-eth0" Jan 24 00:56:59.783262 containerd[1985]: 2026-01-24 00:56:59.778 [INFO][4023] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="135bb5312a1c76d8a06065fb509fabfda4611fe4e4ecfb664689e35c677f2386" HandleID="k8s-pod-network.135bb5312a1c76d8a06065fb509fabfda4611fe4e4ecfb664689e35c677f2386" Workload="172.31.30.66-k8s-nginx--deployment--bb8f74bfb--p9vgh-eth0" Jan 24 00:56:59.783262 containerd[1985]: 2026-01-24 00:56:59.780 [INFO][4023] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:56:59.783262 containerd[1985]: 2026-01-24 00:56:59.781 [INFO][4015] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="135bb5312a1c76d8a06065fb509fabfda4611fe4e4ecfb664689e35c677f2386" Jan 24 00:56:59.783745 containerd[1985]: time="2026-01-24T00:56:59.783307173Z" level=info msg="TearDown network for sandbox \"135bb5312a1c76d8a06065fb509fabfda4611fe4e4ecfb664689e35c677f2386\" successfully" Jan 24 00:56:59.788428 containerd[1985]: time="2026-01-24T00:56:59.788196627Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"135bb5312a1c76d8a06065fb509fabfda4611fe4e4ecfb664689e35c677f2386\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 00:56:59.788428 containerd[1985]: time="2026-01-24T00:56:59.788353148Z" level=info msg="RemovePodSandbox \"135bb5312a1c76d8a06065fb509fabfda4611fe4e4ecfb664689e35c677f2386\" returns successfully" Jan 24 00:57:00.361762 kubelet[2444]: E0124 00:57:00.361690 2444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:57:01.362705 kubelet[2444]: E0124 00:57:01.362489 2444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:57:02.363905 kubelet[2444]: E0124 00:57:02.363859 2444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:57:03.364708 kubelet[2444]: E0124 00:57:03.364663 2444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:57:03.486828 containerd[1985]: time="2026-01-24T00:57:03.486793639Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 24 00:57:03.729245 containerd[1985]: time="2026-01-24T00:57:03.729197342Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:57:03.731205 containerd[1985]: time="2026-01-24T00:57:03.731158730Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 24 00:57:03.731343 containerd[1985]: time="2026-01-24T00:57:03.731185927Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 24 00:57:03.731475 kubelet[2444]: E0124 00:57:03.731436 2444 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve 
reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:57:03.731554 kubelet[2444]: E0124 00:57:03.731483 2444 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:57:03.731584 kubelet[2444]: E0124 00:57:03.731571 2444 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-2x7tg_calico-system(46fbf850-4138-4783-94c3-ae492c179748): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 24 00:57:03.733121 containerd[1985]: time="2026-01-24T00:57:03.733077186Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 24 00:57:04.011544 containerd[1985]: time="2026-01-24T00:57:04.011408304Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:57:04.013622 containerd[1985]: time="2026-01-24T00:57:04.013520042Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 24 00:57:04.013748 containerd[1985]: time="2026-01-24T00:57:04.013531431Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 24 00:57:04.014296 kubelet[2444]: E0124 00:57:04.013799 2444 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:57:04.014296 kubelet[2444]: E0124 00:57:04.014259 2444 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:57:04.014407 kubelet[2444]: E0124 00:57:04.014330 2444 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-2x7tg_calico-system(46fbf850-4138-4783-94c3-ae492c179748): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 24 00:57:04.014407 kubelet[2444]: E0124 00:57:04.014371 2444 pod_workers.go:1324] 
"Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-2x7tg" podUID="46fbf850-4138-4783-94c3-ae492c179748" Jan 24 00:57:04.365697 kubelet[2444]: E0124 00:57:04.365573 2444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:57:05.365959 kubelet[2444]: E0124 00:57:05.365903 2444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:57:06.366497 kubelet[2444]: E0124 00:57:06.366415 2444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:57:07.366622 kubelet[2444]: E0124 00:57:07.366580 2444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:57:08.367275 kubelet[2444]: E0124 00:57:08.367151 2444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:57:09.367521 kubelet[2444]: E0124 00:57:09.367462 2444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:57:10.367872 kubelet[2444]: E0124 00:57:10.367814 2444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:57:11.368637 kubelet[2444]: E0124 00:57:11.368567 2444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:57:12.023090 systemd[1]: Created slice kubepods-besteffort-pod45d395e8_e9fb_4426_a6ea_16a95a19123b.slice - libcontainer container kubepods-besteffort-pod45d395e8_e9fb_4426_a6ea_16a95a19123b.slice. Jan 24 00:57:12.101320 kubelet[2444]: I0124 00:57:12.101276 2444 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-a0f5283d-958b-44a7-b998-2abf800910dc\" (UniqueName: \"kubernetes.io/nfs/45d395e8-e9fb-4426-a6ea-16a95a19123b-pvc-a0f5283d-958b-44a7-b998-2abf800910dc\") pod \"test-pod-1\" (UID: \"45d395e8-e9fb-4426-a6ea-16a95a19123b\") " pod="default/test-pod-1" Jan 24 00:57:12.101320 kubelet[2444]: I0124 00:57:12.101318 2444 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwvq7\" (UniqueName: \"kubernetes.io/projected/45d395e8-e9fb-4426-a6ea-16a95a19123b-kube-api-access-pwvq7\") pod \"test-pod-1\" (UID: \"45d395e8-e9fb-4426-a6ea-16a95a19123b\") " pod="default/test-pod-1" Jan 24 00:57:12.242601 kernel: FS-Cache: Loaded Jan 24 00:57:12.312760 kernel: RPC: Registered named UNIX socket transport module. Jan 24 00:57:12.312848 kernel: RPC: Registered udp transport module. Jan 24 00:57:12.313903 kernel: RPC: Registered tcp transport module. 
Jan 24 00:57:12.313977 kernel: RPC: Registered tcp-with-tls transport module. Jan 24 00:57:12.314812 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Jan 24 00:57:12.368959 kubelet[2444]: E0124 00:57:12.368892 2444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:57:12.609672 kernel: NFS: Registering the id_resolver key type Jan 24 00:57:12.609796 kernel: Key type id_resolver registered Jan 24 00:57:12.610784 kernel: Key type id_legacy registered Jan 24 00:57:12.644930 nfsidmap[4069]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal' Jan 24 00:57:12.649400 nfsidmap[4070]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal' Jan 24 00:57:12.930992 containerd[1985]: time="2026-01-24T00:57:12.930873095Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:45d395e8-e9fb-4426-a6ea-16a95a19123b,Namespace:default,Attempt:0,}" Jan 24 00:57:13.095984 (udev-worker)[4063]: Network interface NamePolicy= disabled on kernel command line. Jan 24 00:57:13.097949 systemd-networkd[1619]: cali5ec59c6bf6e: Link UP Jan 24 00:57:13.099220 systemd-networkd[1619]: cali5ec59c6bf6e: Gained carrier Jan 24 00:57:13.118009 containerd[1985]: 2026-01-24 00:57:12.996 [INFO][4071] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.30.66-k8s-test--pod--1-eth0 default 45d395e8-e9fb-4426-a6ea-16a95a19123b 1564 0 2026-01-24 00:56:47 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 172.31.30.66 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] [] }} ContainerID="bbb77f72eb14a769dcc3da4d347e1cb52a504fe740ed59d2ddb8766669cd1556" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.30.66-k8s-test--pod--1-" Jan 24 00:57:13.118009 containerd[1985]: 2026-01-24 00:57:12.996 [INFO][4071] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bbb77f72eb14a769dcc3da4d347e1cb52a504fe740ed59d2ddb8766669cd1556" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.30.66-k8s-test--pod--1-eth0" Jan 24 00:57:13.118009 containerd[1985]: 2026-01-24 00:57:13.031 [INFO][4083] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bbb77f72eb14a769dcc3da4d347e1cb52a504fe740ed59d2ddb8766669cd1556" HandleID="k8s-pod-network.bbb77f72eb14a769dcc3da4d347e1cb52a504fe740ed59d2ddb8766669cd1556" Workload="172.31.30.66-k8s-test--pod--1-eth0" Jan 24 00:57:13.118009 containerd[1985]: 2026-01-24 00:57:13.031 [INFO][4083] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="bbb77f72eb14a769dcc3da4d347e1cb52a504fe740ed59d2ddb8766669cd1556" HandleID="k8s-pod-network.bbb77f72eb14a769dcc3da4d347e1cb52a504fe740ed59d2ddb8766669cd1556" Workload="172.31.30.66-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4fe0), Attrs:map[string]string{"namespace":"default", "node":"172.31.30.66", "pod":"test-pod-1", "timestamp":"2026-01-24 00:57:13.03165471 +0000 UTC"}, Hostname:"172.31.30.66", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:57:13.118009 
containerd[1985]: 2026-01-24 00:57:13.032 [INFO][4083] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:57:13.118009 containerd[1985]: 2026-01-24 00:57:13.032 [INFO][4083] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:57:13.118009 containerd[1985]: 2026-01-24 00:57:13.032 [INFO][4083] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.30.66' Jan 24 00:57:13.118009 containerd[1985]: 2026-01-24 00:57:13.045 [INFO][4083] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.bbb77f72eb14a769dcc3da4d347e1cb52a504fe740ed59d2ddb8766669cd1556" host="172.31.30.66" Jan 24 00:57:13.118009 containerd[1985]: 2026-01-24 00:57:13.052 [INFO][4083] ipam/ipam.go 394: Looking up existing affinities for host host="172.31.30.66" Jan 24 00:57:13.118009 containerd[1985]: 2026-01-24 00:57:13.058 [INFO][4083] ipam/ipam.go 511: Trying affinity for 192.168.65.128/26 host="172.31.30.66" Jan 24 00:57:13.118009 containerd[1985]: 2026-01-24 00:57:13.061 [INFO][4083] ipam/ipam.go 158: Attempting to load block cidr=192.168.65.128/26 host="172.31.30.66" Jan 24 00:57:13.118009 containerd[1985]: 2026-01-24 00:57:13.065 [INFO][4083] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.65.128/26 host="172.31.30.66" Jan 24 00:57:13.118009 containerd[1985]: 2026-01-24 00:57:13.065 [INFO][4083] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.65.128/26 handle="k8s-pod-network.bbb77f72eb14a769dcc3da4d347e1cb52a504fe740ed59d2ddb8766669cd1556" host="172.31.30.66" Jan 24 00:57:13.118009 containerd[1985]: 2026-01-24 00:57:13.067 [INFO][4083] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.bbb77f72eb14a769dcc3da4d347e1cb52a504fe740ed59d2ddb8766669cd1556 Jan 24 00:57:13.118009 containerd[1985]: 2026-01-24 00:57:13.073 [INFO][4083] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.65.128/26 handle="k8s-pod-network.bbb77f72eb14a769dcc3da4d347e1cb52a504fe740ed59d2ddb8766669cd1556" host="172.31.30.66" Jan 24 00:57:13.118009 containerd[1985]: 2026-01-24 00:57:13.090 [INFO][4083] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.65.132/26] block=192.168.65.128/26 handle="k8s-pod-network.bbb77f72eb14a769dcc3da4d347e1cb52a504fe740ed59d2ddb8766669cd1556" host="172.31.30.66" Jan 24 00:57:13.118009 containerd[1985]: 2026-01-24 00:57:13.090 [INFO][4083] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.65.132/26] handle="k8s-pod-network.bbb77f72eb14a769dcc3da4d347e1cb52a504fe740ed59d2ddb8766669cd1556" host="172.31.30.66" Jan 24 00:57:13.118009 containerd[1985]: 2026-01-24 00:57:13.091 [INFO][4083] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
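The ipam.AutoAssignArgs value printed a few lines up is the request the CNI plugin hands to IPAM for test-pod-1: one IPv4 address, no IPv6, a per-sandbox HandleID, attributes recording namespace, node, pod, and timestamp, and the node name as Hostname. The standalone struct below mirrors just the fields visible in that log line, filled with the same values, so the request is easier to read; it is not an import of Calico's real IPAM package, and HandleID is simplified to a plain string.

    // A local mirror of the AutoAssignArgs fields printed in the log above,
    // filled with the values from the test-pod-1 request. Illustrative only.
    package main

    import "fmt"

    type autoAssignArgs struct {
        Num4        int               // 1: one IPv4 address requested
        Num6        int               // 0: no IPv6
        HandleID    string            // per-sandbox handle (a *string in the real type)
        Attrs       map[string]string // namespace/node/pod/timestamp attributes
        Hostname    string            // block affinity is looked up by this name
        IntendedUse string            // "Workload"
    }

    func main() {
        args := autoAssignArgs{
            Num4:     1,
            Num6:     0,
            HandleID: "k8s-pod-network.bbb77f72eb14a769dcc3da4d347e1cb52a504fe740ed59d2ddb8766669cd1556",
            Attrs: map[string]string{
                "namespace": "default",
                "node":      "172.31.30.66",
                "pod":       "test-pod-1",
                "timestamp": "2026-01-24 00:57:13.03165471 +0000 UTC",
            },
            Hostname:    "172.31.30.66",
            IntendedUse: "Workload",
        }
        fmt.Printf("%+v\n", args)
    }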
Jan 24 00:57:13.118009 containerd[1985]: 2026-01-24 00:57:13.091 [INFO][4083] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.65.132/26] IPv6=[] ContainerID="bbb77f72eb14a769dcc3da4d347e1cb52a504fe740ed59d2ddb8766669cd1556" HandleID="k8s-pod-network.bbb77f72eb14a769dcc3da4d347e1cb52a504fe740ed59d2ddb8766669cd1556" Workload="172.31.30.66-k8s-test--pod--1-eth0" Jan 24 00:57:13.118009 containerd[1985]: 2026-01-24 00:57:13.093 [INFO][4071] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bbb77f72eb14a769dcc3da4d347e1cb52a504fe740ed59d2ddb8766669cd1556" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.30.66-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.30.66-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"45d395e8-e9fb-4426-a6ea-16a95a19123b", ResourceVersion:"1564", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 56, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.30.66", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.65.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:57:13.118953 containerd[1985]: 2026-01-24 00:57:13.093 [INFO][4071] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.65.132/32] ContainerID="bbb77f72eb14a769dcc3da4d347e1cb52a504fe740ed59d2ddb8766669cd1556" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.30.66-k8s-test--pod--1-eth0" Jan 24 00:57:13.118953 containerd[1985]: 2026-01-24 00:57:13.093 [INFO][4071] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="bbb77f72eb14a769dcc3da4d347e1cb52a504fe740ed59d2ddb8766669cd1556" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.30.66-k8s-test--pod--1-eth0" Jan 24 00:57:13.118953 containerd[1985]: 2026-01-24 00:57:13.098 [INFO][4071] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bbb77f72eb14a769dcc3da4d347e1cb52a504fe740ed59d2ddb8766669cd1556" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.30.66-k8s-test--pod--1-eth0" Jan 24 00:57:13.118953 containerd[1985]: 2026-01-24 00:57:13.100 [INFO][4071] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="bbb77f72eb14a769dcc3da4d347e1cb52a504fe740ed59d2ddb8766669cd1556" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.30.66-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.30.66-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"45d395e8-e9fb-4426-a6ea-16a95a19123b", ResourceVersion:"1564", Generation:0, CreationTimestamp:time.Date(2026, 
time.January, 24, 0, 56, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.30.66", ContainerID:"bbb77f72eb14a769dcc3da4d347e1cb52a504fe740ed59d2ddb8766669cd1556", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.65.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"22:53:51:36:46:a3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:57:13.118953 containerd[1985]: 2026-01-24 00:57:13.116 [INFO][4071] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bbb77f72eb14a769dcc3da4d347e1cb52a504fe740ed59d2ddb8766669cd1556" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.30.66-k8s-test--pod--1-eth0" Jan 24 00:57:13.144613 containerd[1985]: time="2026-01-24T00:57:13.144423423Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:57:13.146579 containerd[1985]: time="2026-01-24T00:57:13.144579157Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:57:13.146775 containerd[1985]: time="2026-01-24T00:57:13.146604778Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:57:13.147308 containerd[1985]: time="2026-01-24T00:57:13.146756174Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:57:13.170752 systemd[1]: Started cri-containerd-bbb77f72eb14a769dcc3da4d347e1cb52a504fe740ed59d2ddb8766669cd1556.scope - libcontainer container bbb77f72eb14a769dcc3da4d347e1cb52a504fe740ed59d2ddb8766669cd1556. 
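Two naming conventions are visible in the systemd lines around the test-pod-1 sandbox: the pod's cgroup slice (kubepods-besteffort-pod45d395e8_e9fb_4426_a6ea_16a95a19123b.slice, created a few lines earlier) is the QoS class plus the pod UID with dashes mapped to underscores, and each sandbox or container runs in a transient cri-containerd-<id>.scope unit. The helper below rebuilds both names from the identifiers in the log; the function names are invented for the example, but the output strings match the units reported above.

    // Reproduce the systemd unit names seen in the log from the pod UID and
    // the sandbox/container IDs. Helper names here are invented for the example.
    package main

    import (
        "fmt"
        "strings"
    )

    // besteffortSlice builds the pod slice name for a BestEffort pod:
    // dashes in the pod UID become underscores inside the unit name.
    func besteffortSlice(podUID string) string {
        return "kubepods-besteffort-pod" + strings.ReplaceAll(podUID, "-", "_") + ".slice"
    }

    // criScope builds the transient scope unit used for a sandbox or container.
    func criScope(id string) string {
        return "cri-containerd-" + id + ".scope"
    }

    func main() {
        fmt.Println(besteffortSlice("45d395e8-e9fb-4426-a6ea-16a95a19123b"))
        // kubepods-besteffort-pod45d395e8_e9fb_4426_a6ea_16a95a19123b.slice
        fmt.Println(criScope("bbb77f72eb14a769dcc3da4d347e1cb52a504fe740ed59d2ddb8766669cd1556"))
        // cri-containerd-bbb77f72eb14a769dcc3da4d347e1cb52a504fe740ed59d2ddb8766669cd1556.scope
    }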
Jan 24 00:57:13.212258 containerd[1985]: time="2026-01-24T00:57:13.212123976Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:45d395e8-e9fb-4426-a6ea-16a95a19123b,Namespace:default,Attempt:0,} returns sandbox id \"bbb77f72eb14a769dcc3da4d347e1cb52a504fe740ed59d2ddb8766669cd1556\"" Jan 24 00:57:13.213830 containerd[1985]: time="2026-01-24T00:57:13.213787052Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 24 00:57:13.369371 kubelet[2444]: E0124 00:57:13.369249 2444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:57:13.511399 containerd[1985]: time="2026-01-24T00:57:13.511288856Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:57:13.513273 containerd[1985]: time="2026-01-24T00:57:13.513207001Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Jan 24 00:57:13.516044 containerd[1985]: time="2026-01-24T00:57:13.515980210Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:94a30ed18e45363486a55379bd1fbb8c479f36524ae816779f3553afe6b787ed\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:32c5137cb8c7cf61e75836f150e983b9be21fecc642ada89fd936c8cd6c0faa0\", size \"63836358\" in 302.151994ms" Jan 24 00:57:13.516044 containerd[1985]: time="2026-01-24T00:57:13.516020250Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:94a30ed18e45363486a55379bd1fbb8c479f36524ae816779f3553afe6b787ed\"" Jan 24 00:57:13.523840 containerd[1985]: time="2026-01-24T00:57:13.523783936Z" level=info msg="CreateContainer within sandbox \"bbb77f72eb14a769dcc3da4d347e1cb52a504fe740ed59d2ddb8766669cd1556\" for container &ContainerMetadata{Name:test,Attempt:0,}" Jan 24 00:57:13.546496 containerd[1985]: time="2026-01-24T00:57:13.546451273Z" level=info msg="CreateContainer within sandbox \"bbb77f72eb14a769dcc3da4d347e1cb52a504fe740ed59d2ddb8766669cd1556\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"f72bfb3527db3a8c8fad5e6e9ca31a51315adb6bb5d80bd760ff946f0423ea26\"" Jan 24 00:57:13.550225 containerd[1985]: time="2026-01-24T00:57:13.550179202Z" level=info msg="StartContainer for \"f72bfb3527db3a8c8fad5e6e9ca31a51315adb6bb5d80bd760ff946f0423ea26\"" Jan 24 00:57:13.587773 systemd[1]: Started cri-containerd-f72bfb3527db3a8c8fad5e6e9ca31a51315adb6bb5d80bd760ff946f0423ea26.scope - libcontainer container f72bfb3527db3a8c8fad5e6e9ca31a51315adb6bb5d80bd760ff946f0423ea26. 
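The three containerd steps logged above for test-pod-1 (PullImage, CreateContainer within the sandbox, StartContainer) are driven by the kubelet through the CRI plugin. The sketch below shows roughly the same pull / create / start sequence against containerd's public Go client, as an illustration only, not the CRI code path: the socket path, container ID and snapshot name are assumptions, the pod sandbox and cleanup are skipped, and only the image reference and the "k8s.io" namespace come from the log.

package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// The CRI plugin keeps Kubernetes workloads in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// PullImage: fetch and unpack the image the log shows being pulled.
	image, err := client.Pull(ctx, "ghcr.io/flatcar/nginx:latest", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}

	// CreateContainer: build a container from the image's own OCI config.
	container, err := client.NewContainer(ctx, "test-example",
		containerd.WithNewSnapshot("test-example-snapshot", image),
		containerd.WithNewSpec(oci.WithImageConfig(image)),
	)
	if err != nil {
		log.Fatal(err)
	}

	// StartContainer: create the runc task and start it; in the CRI flow this
	// is what ends up inside the cri-containerd-<id>.scope unit that systemd
	// reports starting above.
	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		log.Fatal(err)
	}
	if err := task.Start(ctx); err != nil {
		log.Fatal(err)
	}
	log.Println("container started, pid", task.Pid())
}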
Jan 24 00:57:13.624974 containerd[1985]: time="2026-01-24T00:57:13.624931723Z" level=info msg="StartContainer for \"f72bfb3527db3a8c8fad5e6e9ca31a51315adb6bb5d80bd760ff946f0423ea26\" returns successfully" Jan 24 00:57:13.715877 kubelet[2444]: I0124 00:57:13.715812 2444 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=26.412211321 podStartE2EDuration="26.715792765s" podCreationTimestamp="2026-01-24 00:56:47 +0000 UTC" firstStartedPulling="2026-01-24 00:57:13.213212222 +0000 UTC m=+74.355205835" lastFinishedPulling="2026-01-24 00:57:13.516793666 +0000 UTC m=+74.658787279" observedRunningTime="2026-01-24 00:57:13.715553569 +0000 UTC m=+74.857547206" watchObservedRunningTime="2026-01-24 00:57:13.715792765 +0000 UTC m=+74.857786402" Jan 24 00:57:14.369810 kubelet[2444]: E0124 00:57:14.369762 2444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:57:14.747743 systemd-networkd[1619]: cali5ec59c6bf6e: Gained IPv6LL Jan 24 00:57:15.370194 kubelet[2444]: E0124 00:57:15.370115 2444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:57:16.370989 kubelet[2444]: E0124 00:57:16.370847 2444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:57:16.990541 ntpd[1962]: Listen normally on 12 cali5ec59c6bf6e [fe80::ecee:eeff:feee:eeee%9]:123 Jan 24 00:57:16.990933 ntpd[1962]: 24 Jan 00:57:16 ntpd[1962]: Listen normally on 12 cali5ec59c6bf6e [fe80::ecee:eeff:feee:eeee%9]:123 Jan 24 00:57:17.372058 kubelet[2444]: E0124 00:57:17.371902 2444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:57:18.372956 kubelet[2444]: E0124 00:57:18.372888 2444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:57:19.314807 kubelet[2444]: E0124 00:57:19.314730 2444 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:57:19.373633 kubelet[2444]: E0124 00:57:19.373593 2444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:57:19.487916 kubelet[2444]: E0124 00:57:19.487816 2444 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-2x7tg" podUID="46fbf850-4138-4783-94c3-ae492c179748" Jan 24 00:57:19.597488 systemd[1]: 
run-containerd-runc-k8s.io-8350fd8ee950a463c21b0a00d4ec8aa66e782cbd8e3a99ec6d4572b9863ec304-runc.pG9SzM.mount: Deactivated successfully. Jan 24 00:57:20.374434 kubelet[2444]: E0124 00:57:20.374388 2444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:57:21.375371 kubelet[2444]: E0124 00:57:21.375318 2444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 24 00:57:22.376081 kubelet[2444]: E0124 00:57:22.376021 2444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
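The pod_startup_latency_tracker entry above can be reproduced from the timestamps it prints: podStartE2EDuration is the observed running time minus the pod creation time, and podStartSLOduration additionally subtracts the image-pull window (firstStartedPulling to lastFinishedPulling). A small Go sketch with the values copied from the log:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Layout matching Go's default time.Time.String() output used in the log.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}

	created := parse("2026-01-24 00:56:47 +0000 UTC")            // podCreationTimestamp
	firstPull := parse("2026-01-24 00:57:13.213212222 +0000 UTC") // firstStartedPulling
	lastPull := parse("2026-01-24 00:57:13.516793666 +0000 UTC")  // lastFinishedPulling
	running := parse("2026-01-24 00:57:13.715792765 +0000 UTC")   // observedRunningTime

	e2e := running.Sub(created)          // podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull) // podStartSLOduration, pull time excluded

	fmt.Println("E2E:", e2e) // 26.715792765s
	fmt.Println("SLO:", slo) // 26.412211321s
}

The recurring kubelet "Unable to read config path" errors interleaved above indicate only that the configured static-pod directory /etc/kubernetes/manifests does not exist on this node; as the message itself says, the kubelet ignores it on each sync and continues.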