Jan 17 00:01:47.217309 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Jan 17 00:01:47.217362 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Jan 16 22:28:08 -00 2026
Jan 17 00:01:47.217388 kernel: KASLR disabled due to lack of seed
Jan 17 00:01:47.217405 kernel: efi: EFI v2.7 by EDK II
Jan 17 00:01:47.217421 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7b001a98 MEMRESERVE=0x7852ee18
Jan 17 00:01:47.217437 kernel: ACPI: Early table checksum verification disabled
Jan 17 00:01:47.217455 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Jan 17 00:01:47.217471 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Jan 17 00:01:47.217487 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Jan 17 00:01:47.217503 kernel: ACPI: DSDT 0x0000000078640000 0013D2 (v02 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Jan 17 00:01:47.217524 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Jan 17 00:01:47.217540 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Jan 17 00:01:47.217556 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Jan 17 00:01:47.217572 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Jan 17 00:01:47.217592 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Jan 17 00:01:47.217613 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Jan 17 00:01:47.217631 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Jan 17 00:01:47.217647 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Jan 17 00:01:47.217664 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Jan 17 00:01:47.217680 kernel: printk: bootconsole [uart0] enabled
Jan 17 00:01:47.217697 kernel: NUMA: Failed to initialise from firmware
Jan 17 00:01:47.217714 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Jan 17 00:01:47.217730 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Jan 17 00:01:47.217747 kernel: Zone ranges:
Jan 17 00:01:47.217763 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Jan 17 00:01:47.217780 kernel: DMA32 empty
Jan 17 00:01:47.217801 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Jan 17 00:01:47.217818 kernel: Movable zone start for each node
Jan 17 00:01:47.217834 kernel: Early memory node ranges
Jan 17 00:01:47.217851 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Jan 17 00:01:47.217867 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Jan 17 00:01:47.217884 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Jan 17 00:01:47.217900 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Jan 17 00:01:47.217942 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Jan 17 00:01:47.217990 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Jan 17 00:01:47.218008 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Jan 17 00:01:47.218025 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Jan 17 00:01:47.218042 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Jan 17 00:01:47.218065 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Jan 17 00:01:47.218083 kernel: psci: probing for conduit method from ACPI.
Jan 17 00:01:47.218107 kernel: psci: PSCIv1.0 detected in firmware.
Jan 17 00:01:47.218125 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 17 00:01:47.218143 kernel: psci: Trusted OS migration not required
Jan 17 00:01:47.218165 kernel: psci: SMC Calling Convention v1.1
Jan 17 00:01:47.218183 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001)
Jan 17 00:01:47.218201 kernel: percpu: Embedded 30 pages/cpu s85672 r8192 d29016 u122880
Jan 17 00:01:47.218218 kernel: pcpu-alloc: s85672 r8192 d29016 u122880 alloc=30*4096
Jan 17 00:01:47.218236 kernel: pcpu-alloc: [0] 0 [0] 1
Jan 17 00:01:47.218254 kernel: Detected PIPT I-cache on CPU0
Jan 17 00:01:47.218272 kernel: CPU features: detected: GIC system register CPU interface
Jan 17 00:01:47.218289 kernel: CPU features: detected: Spectre-v2
Jan 17 00:01:47.218306 kernel: CPU features: detected: Spectre-v3a
Jan 17 00:01:47.218324 kernel: CPU features: detected: Spectre-BHB
Jan 17 00:01:47.218341 kernel: CPU features: detected: ARM erratum 1742098
Jan 17 00:01:47.218363 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Jan 17 00:01:47.218380 kernel: alternatives: applying boot alternatives
Jan 17 00:01:47.218400 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=d499dc3f7d5d4118d4e4300ad00f17ad72271d2a2f6bb9119457036ac5212c83
Jan 17 00:01:47.218419 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 17 00:01:47.218436 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 17 00:01:47.218453 kernel: Fallback order for Node 0: 0
Jan 17 00:01:47.218471 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Jan 17 00:01:47.218488 kernel: Policy zone: Normal
Jan 17 00:01:47.218506 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 17 00:01:47.218523 kernel: software IO TLB: area num 2.
Jan 17 00:01:47.218540 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Jan 17 00:01:47.218563 kernel: Memory: 3820096K/4030464K available (10304K kernel code, 2180K rwdata, 8112K rodata, 39424K init, 897K bss, 210368K reserved, 0K cma-reserved)
Jan 17 00:01:47.218581 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 17 00:01:47.218598 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 17 00:01:47.218616 kernel: rcu: RCU event tracing is enabled.
Jan 17 00:01:47.218634 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 17 00:01:47.218652 kernel: Trampoline variant of Tasks RCU enabled.
Jan 17 00:01:47.218670 kernel: Tracing variant of Tasks RCU enabled.
Jan 17 00:01:47.218688 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 17 00:01:47.218705 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 17 00:01:47.218741 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 17 00:01:47.218765 kernel: GICv3: 96 SPIs implemented
Jan 17 00:01:47.218789 kernel: GICv3: 0 Extended SPIs implemented
Jan 17 00:01:47.218806 kernel: Root IRQ handler: gic_handle_irq
Jan 17 00:01:47.218824 kernel: GICv3: GICv3 features: 16 PPIs
Jan 17 00:01:47.218842 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Jan 17 00:01:47.218860 kernel: ITS [mem 0x10080000-0x1009ffff]
Jan 17 00:01:47.222814 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1)
Jan 17 00:01:47.222838 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1)
Jan 17 00:01:47.222856 kernel: GICv3: using LPI property table @0x00000004000d0000
Jan 17 00:01:47.222874 kernel: ITS: Using hypervisor restricted LPI range [128]
Jan 17 00:01:47.222892 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000
Jan 17 00:01:47.222909 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 17 00:01:47.222949 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Jan 17 00:01:47.222977 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Jan 17 00:01:47.222995 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Jan 17 00:01:47.223013 kernel: Console: colour dummy device 80x25
Jan 17 00:01:47.223031 kernel: printk: console [tty1] enabled
Jan 17 00:01:47.223049 kernel: ACPI: Core revision 20230628
Jan 17 00:01:47.223067 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Jan 17 00:01:47.223085 kernel: pid_max: default: 32768 minimum: 301
Jan 17 00:01:47.223104 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 17 00:01:47.223121 kernel: landlock: Up and running.
Jan 17 00:01:47.223144 kernel: SELinux: Initializing.
Jan 17 00:01:47.223162 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 17 00:01:47.223179 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 17 00:01:47.223197 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 17 00:01:47.223215 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 17 00:01:47.223233 kernel: rcu: Hierarchical SRCU implementation.
Jan 17 00:01:47.223252 kernel: rcu: Max phase no-delay instances is 400.
Jan 17 00:01:47.223270 kernel: Platform MSI: ITS@0x10080000 domain created
Jan 17 00:01:47.223288 kernel: PCI/MSI: ITS@0x10080000 domain created
Jan 17 00:01:47.223310 kernel: Remapping and enabling EFI services.
Jan 17 00:01:47.223328 kernel: smp: Bringing up secondary CPUs ...
Jan 17 00:01:47.223346 kernel: Detected PIPT I-cache on CPU1
Jan 17 00:01:47.223363 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Jan 17 00:01:47.223381 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000
Jan 17 00:01:47.223399 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Jan 17 00:01:47.223417 kernel: smp: Brought up 1 node, 2 CPUs
Jan 17 00:01:47.223435 kernel: SMP: Total of 2 processors activated.
Jan 17 00:01:47.223452 kernel: CPU features: detected: 32-bit EL0 Support
Jan 17 00:01:47.223474 kernel: CPU features: detected: 32-bit EL1 Support
Jan 17 00:01:47.223492 kernel: CPU features: detected: CRC32 instructions
Jan 17 00:01:47.223510 kernel: CPU: All CPU(s) started at EL1
Jan 17 00:01:47.223539 kernel: alternatives: applying system-wide alternatives
Jan 17 00:01:47.223562 kernel: devtmpfs: initialized
Jan 17 00:01:47.223580 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 17 00:01:47.223599 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 17 00:01:47.223617 kernel: pinctrl core: initialized pinctrl subsystem
Jan 17 00:01:47.223636 kernel: SMBIOS 3.0.0 present.
Jan 17 00:01:47.223659 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Jan 17 00:01:47.223677 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 17 00:01:47.223696 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 17 00:01:47.223715 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 17 00:01:47.223734 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 17 00:01:47.223752 kernel: audit: initializing netlink subsys (disabled)
Jan 17 00:01:47.223771 kernel: audit: type=2000 audit(0.285:1): state=initialized audit_enabled=0 res=1
Jan 17 00:01:47.223792 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 17 00:01:47.223816 kernel: cpuidle: using governor menu
Jan 17 00:01:47.223836 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 17 00:01:47.223856 kernel: ASID allocator initialised with 65536 entries
Jan 17 00:01:47.223875 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 17 00:01:47.223893 kernel: Serial: AMBA PL011 UART driver
Jan 17 00:01:47.223913 kernel: Modules: 17488 pages in range for non-PLT usage
Jan 17 00:01:47.223955 kernel: Modules: 509008 pages in range for PLT usage
Jan 17 00:01:47.223975 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 17 00:01:47.223994 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 17 00:01:47.224020 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 17 00:01:47.224040 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 17 00:01:47.224059 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 17 00:01:47.224078 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 17 00:01:47.224097 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 17 00:01:47.224130 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 17 00:01:47.224151 kernel: ACPI: Added _OSI(Module Device)
Jan 17 00:01:47.224171 kernel: ACPI: Added _OSI(Processor Device)
Jan 17 00:01:47.224189 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 17 00:01:47.224214 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 17 00:01:47.224233 kernel: ACPI: Interpreter enabled
Jan 17 00:01:47.224251 kernel: ACPI: Using GIC for interrupt routing
Jan 17 00:01:47.224270 kernel: ACPI: MCFG table detected, 1 entries
Jan 17 00:01:47.224288 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00])
Jan 17 00:01:47.224604 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 17 00:01:47.224839 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 17 00:01:47.225101 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 17 00:01:47.225328 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x200fffff] reserved by PNP0C02:00
Jan 17 00:01:47.225535 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x200fffff] for [bus 00]
Jan 17 00:01:47.225561 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Jan 17 00:01:47.225592 kernel: acpiphp: Slot [1] registered
Jan 17 00:01:47.225615 kernel: acpiphp: Slot [2] registered
Jan 17 00:01:47.225635 kernel: acpiphp: Slot [3] registered
Jan 17 00:01:47.225654 kernel: acpiphp: Slot [4] registered
Jan 17 00:01:47.225672 kernel: acpiphp: Slot [5] registered
Jan 17 00:01:47.225698 kernel: acpiphp: Slot [6] registered
Jan 17 00:01:47.225717 kernel: acpiphp: Slot [7] registered
Jan 17 00:01:47.225736 kernel: acpiphp: Slot [8] registered
Jan 17 00:01:47.225762 kernel: acpiphp: Slot [9] registered
Jan 17 00:01:47.225797 kernel: acpiphp: Slot [10] registered
Jan 17 00:01:47.225845 kernel: acpiphp: Slot [11] registered
Jan 17 00:01:47.225888 kernel: acpiphp: Slot [12] registered
Jan 17 00:01:47.225934 kernel: acpiphp: Slot [13] registered
Jan 17 00:01:47.225960 kernel: acpiphp: Slot [14] registered
Jan 17 00:01:47.225980 kernel: acpiphp: Slot [15] registered
Jan 17 00:01:47.226005 kernel: acpiphp: Slot [16] registered
Jan 17 00:01:47.226024 kernel: acpiphp: Slot [17] registered
Jan 17 00:01:47.226042 kernel: acpiphp: Slot [18] registered
Jan 17 00:01:47.226061 kernel: acpiphp: Slot [19] registered
Jan 17 00:01:47.226080 kernel: acpiphp: Slot [20] registered
Jan 17 00:01:47.226098 kernel: acpiphp: Slot [21] registered
Jan 17 00:01:47.226117 kernel: acpiphp: Slot [22] registered
Jan 17 00:01:47.226136 kernel: acpiphp: Slot [23] registered
Jan 17 00:01:47.226154 kernel: acpiphp: Slot [24] registered
Jan 17 00:01:47.226177 kernel: acpiphp: Slot [25] registered
Jan 17 00:01:47.226196 kernel: acpiphp: Slot [26] registered
Jan 17 00:01:47.226214 kernel: acpiphp: Slot [27] registered
Jan 17 00:01:47.226233 kernel: acpiphp: Slot [28] registered
Jan 17 00:01:47.226262 kernel: acpiphp: Slot [29] registered
Jan 17 00:01:47.226281 kernel: acpiphp: Slot [30] registered
Jan 17 00:01:47.226300 kernel: acpiphp: Slot [31] registered
Jan 17 00:01:47.226318 kernel: PCI host bridge to bus 0000:00
Jan 17 00:01:47.226573 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Jan 17 00:01:47.226819 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jan 17 00:01:47.227070 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Jan 17 00:01:47.227261 kernel: pci_bus 0000:00: root bus resource [bus 00]
Jan 17 00:01:47.227498 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Jan 17 00:01:47.227725 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Jan 17 00:01:47.228033 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Jan 17 00:01:47.228278 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Jan 17 00:01:47.228490 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Jan 17 00:01:47.228706 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Jan 17 00:01:47.229021 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Jan 17 00:01:47.229245 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Jan 17 00:01:47.229451 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Jan 17 00:01:47.229655 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Jan 17 00:01:47.229867 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Jan 17 00:01:47.230105 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Jan 17 00:01:47.230288 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jan 17 00:01:47.230470 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Jan 17 00:01:47.230496 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jan 17 00:01:47.230516 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jan 17 00:01:47.230535 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jan 17 00:01:47.230554 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jan 17 00:01:47.230578 kernel: iommu: Default domain type: Translated
Jan 17 00:01:47.230597 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 17 00:01:47.230616 kernel: efivars: Registered efivars operations
Jan 17 00:01:47.230634 kernel: vgaarb: loaded
Jan 17 00:01:47.230653 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 17 00:01:47.230671 kernel: VFS: Disk quotas dquot_6.6.0
Jan 17 00:01:47.230690 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 17 00:01:47.230709 kernel: pnp: PnP ACPI init
Jan 17 00:01:47.230967 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Jan 17 00:01:47.231002 kernel: pnp: PnP ACPI: found 1 devices
Jan 17 00:01:47.231022 kernel: NET: Registered PF_INET protocol family
Jan 17 00:01:47.231040 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 17 00:01:47.231059 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 17 00:01:47.231078 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 17 00:01:47.231098 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 17 00:01:47.231157 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 17 00:01:47.231210 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 17 00:01:47.231236 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 17 00:01:47.231256 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 17 00:01:47.231274 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 17 00:01:47.231293 kernel: PCI: CLS 0 bytes, default 64
Jan 17 00:01:47.231311 kernel: kvm [1]: HYP mode not available
Jan 17 00:01:47.231331 kernel: Initialise system trusted keyrings
Jan 17 00:01:47.231349 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 17 00:01:47.231368 kernel: Key type asymmetric registered
Jan 17 00:01:47.231386 kernel: Asymmetric key parser 'x509' registered
Jan 17 00:01:47.231409 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 17 00:01:47.231447 kernel: io scheduler mq-deadline registered
Jan 17 00:01:47.231470 kernel: io scheduler kyber registered
Jan 17 00:01:47.231489 kernel: io scheduler bfq registered
Jan 17 00:01:47.231738 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Jan 17 00:01:47.231767 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jan 17 00:01:47.231807 kernel: ACPI: button: Power Button [PWRB]
Jan 17 00:01:47.231847 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Jan 17 00:01:47.231882 kernel: ACPI: button: Sleep Button [SLPB]
Jan 17 00:01:47.231911 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 17 00:01:47.231974 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Jan 17 00:01:47.232205 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Jan 17 00:01:47.232232 kernel: printk: console [ttyS0] disabled
Jan 17 00:01:47.232252 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Jan 17 00:01:47.232271 kernel: printk: console [ttyS0] enabled
Jan 17 00:01:47.232290 kernel: printk: bootconsole [uart0] disabled
Jan 17 00:01:47.232308 kernel: thunder_xcv, ver 1.0
Jan 17 00:01:47.232327 kernel: thunder_bgx, ver 1.0
Jan 17 00:01:47.232352 kernel: nicpf, ver 1.0
Jan 17 00:01:47.232371 kernel: nicvf, ver 1.0
Jan 17 00:01:47.232588 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jan 17 00:01:47.232789 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-01-17T00:01:46 UTC (1768608106)
Jan 17 00:01:47.232814 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 17 00:01:47.232834 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Jan 17 00:01:47.232852 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jan 17 00:01:47.232871 kernel: watchdog: Hard watchdog permanently disabled
Jan 17 00:01:47.232895 kernel: NET: Registered PF_INET6 protocol family
Jan 17 00:01:47.232946 kernel: Segment Routing with IPv6
Jan 17 00:01:47.232971 kernel: In-situ OAM (IOAM) with IPv6
Jan 17 00:01:47.232990 kernel: NET: Registered PF_PACKET protocol family
Jan 17 00:01:47.233009 kernel: Key type dns_resolver registered
Jan 17 00:01:47.233027 kernel: registered taskstats version 1
Jan 17 00:01:47.233045 kernel: Loading compiled-in X.509 certificates
Jan 17 00:01:47.233064 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: 0aabad27df82424bfffc9b1a502a9ae84b35bad4'
Jan 17 00:01:47.233082 kernel: Key type .fscrypt registered
Jan 17 00:01:47.233107 kernel: Key type fscrypt-provisioning registered
Jan 17 00:01:47.233126 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 17 00:01:47.233144 kernel: ima: Allocated hash algorithm: sha1
Jan 17 00:01:47.233163 kernel: ima: No architecture policies found
Jan 17 00:01:47.233181 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jan 17 00:01:47.233200 kernel: clk: Disabling unused clocks
Jan 17 00:01:47.233218 kernel: Freeing unused kernel memory: 39424K
Jan 17 00:01:47.233245 kernel: Run /init as init process
Jan 17 00:01:47.233285 kernel: with arguments:
Jan 17 00:01:47.233332 kernel: /init
Jan 17 00:01:47.233352 kernel: with environment:
Jan 17 00:01:47.233370 kernel: HOME=/
Jan 17 00:01:47.233389 kernel: TERM=linux
Jan 17 00:01:47.233413 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 17 00:01:47.233436 systemd[1]: Detected virtualization amazon.
Jan 17 00:01:47.233457 systemd[1]: Detected architecture arm64.
Jan 17 00:01:47.233477 systemd[1]: Running in initrd.
Jan 17 00:01:47.233503 systemd[1]: No hostname configured, using default hostname.
Jan 17 00:01:47.233523 systemd[1]: Hostname set to .
Jan 17 00:01:47.233544 systemd[1]: Initializing machine ID from VM UUID.
Jan 17 00:01:47.233565 systemd[1]: Queued start job for default target initrd.target.
Jan 17 00:01:47.237633 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 17 00:01:47.237689 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 17 00:01:47.237713 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 17 00:01:47.237734 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 17 00:01:47.237766 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 17 00:01:47.237787 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 17 00:01:47.237811 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 17 00:01:47.237833 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 17 00:01:47.237853 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 17 00:01:47.237874 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 17 00:01:47.237899 systemd[1]: Reached target paths.target - Path Units.
Jan 17 00:01:47.237955 systemd[1]: Reached target slices.target - Slice Units.
Jan 17 00:01:47.237979 systemd[1]: Reached target swap.target - Swaps.
Jan 17 00:01:47.237999 systemd[1]: Reached target timers.target - Timer Units.
Jan 17 00:01:47.238020 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 17 00:01:47.238040 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 17 00:01:47.238061 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 17 00:01:47.238082 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 17 00:01:47.238102 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 17 00:01:47.238128 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 17 00:01:47.238149 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 17 00:01:47.238170 systemd[1]: Reached target sockets.target - Socket Units.
Jan 17 00:01:47.238190 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 17 00:01:47.238210 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 17 00:01:47.238231 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 17 00:01:47.238251 systemd[1]: Starting systemd-fsck-usr.service...
Jan 17 00:01:47.238271 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 17 00:01:47.238292 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 17 00:01:47.238317 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 00:01:47.238380 systemd-journald[251]: Collecting audit messages is disabled.
Jan 17 00:01:47.238426 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 17 00:01:47.238447 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 17 00:01:47.238473 systemd[1]: Finished systemd-fsck-usr.service.
Jan 17 00:01:47.238495 systemd-journald[251]: Journal started
Jan 17 00:01:47.238537 systemd-journald[251]: Runtime Journal (/run/log/journal/ec29bf583b860ec4709bd73778cce38d) is 8.0M, max 75.3M, 67.3M free.
Jan 17 00:01:47.226326 systemd-modules-load[252]: Inserted module 'overlay'
Jan 17 00:01:47.255886 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 17 00:01:47.269132 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 17 00:01:47.269204 kernel: Bridge firewalling registered
Jan 17 00:01:47.269844 systemd-modules-load[252]: Inserted module 'br_netfilter'
Jan 17 00:01:47.272214 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 17 00:01:47.285281 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 17 00:01:47.293988 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 17 00:01:47.294559 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 00:01:47.319358 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 00:01:47.325768 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 17 00:01:47.328614 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 17 00:01:47.337256 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 17 00:01:47.350189 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 17 00:01:47.374036 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 17 00:01:47.387212 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 17 00:01:47.397032 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 17 00:01:47.417007 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 00:01:47.436263 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 17 00:01:47.461318 dracut-cmdline[290]: dracut-dracut-053
Jan 17 00:01:47.472410 dracut-cmdline[290]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=d499dc3f7d5d4118d4e4300ad00f17ad72271d2a2f6bb9119457036ac5212c83
Jan 17 00:01:47.496366 systemd-resolved[279]: Positive Trust Anchors:
Jan 17 00:01:47.496401 systemd-resolved[279]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 17 00:01:47.496466 systemd-resolved[279]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 17 00:01:47.645941 kernel: SCSI subsystem initialized
Jan 17 00:01:47.652956 kernel: Loading iSCSI transport class v2.0-870.
Jan 17 00:01:47.664959 kernel: iscsi: registered transport (tcp)
Jan 17 00:01:47.687959 kernel: iscsi: registered transport (qla4xxx)
Jan 17 00:01:47.688030 kernel: QLogic iSCSI HBA Driver
Jan 17 00:01:47.749976 kernel: random: crng init done
Jan 17 00:01:47.750247 systemd-resolved[279]: Defaulting to hostname 'linux'.
Jan 17 00:01:47.754305 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 17 00:01:47.757958 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 17 00:01:47.785016 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 17 00:01:47.797318 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 17 00:01:47.835116 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 17 00:01:47.835201 kernel: device-mapper: uevent: version 1.0.3
Jan 17 00:01:47.837366 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 17 00:01:47.918950 kernel: raid6: neonx8 gen() 6706 MB/s
Jan 17 00:01:47.920980 kernel: raid6: neonx4 gen() 6503 MB/s
Jan 17 00:01:47.937962 kernel: raid6: neonx2 gen() 5440 MB/s
Jan 17 00:01:47.954954 kernel: raid6: neonx1 gen() 3949 MB/s
Jan 17 00:01:47.971952 kernel: raid6: int64x8 gen() 3805 MB/s
Jan 17 00:01:47.988957 kernel: raid6: int64x4 gen() 3698 MB/s
Jan 17 00:01:48.005953 kernel: raid6: int64x2 gen() 3578 MB/s
Jan 17 00:01:48.024017 kernel: raid6: int64x1 gen() 2752 MB/s
Jan 17 00:01:48.024056 kernel: raid6: using algorithm neonx8 gen() 6706 MB/s
Jan 17 00:01:48.042996 kernel: raid6: .... xor() 4828 MB/s, rmw enabled
Jan 17 00:01:48.043041 kernel: raid6: using neon recovery algorithm
Jan 17 00:01:48.050955 kernel: xor: measuring software checksum speed
Jan 17 00:01:48.053333 kernel: 8regs : 10276 MB/sec
Jan 17 00:01:48.053367 kernel: 32regs : 11571 MB/sec
Jan 17 00:01:48.054623 kernel: arm64_neon : 9557 MB/sec
Jan 17 00:01:48.054655 kernel: xor: using function: 32regs (11571 MB/sec)
Jan 17 00:01:48.140973 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 17 00:01:48.161981 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 17 00:01:48.184201 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 17 00:01:48.217468 systemd-udevd[471]: Using default interface naming scheme 'v255'.
Jan 17 00:01:48.227691 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 17 00:01:48.239224 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 17 00:01:48.284861 dracut-pre-trigger[475]: rd.md=0: removing MD RAID activation
Jan 17 00:01:48.341399 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 17 00:01:48.353406 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 17 00:01:48.478288 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 17 00:01:48.492550 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 17 00:01:48.543413 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 17 00:01:48.549656 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 17 00:01:48.558324 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 17 00:01:48.562943 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 17 00:01:48.578184 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 17 00:01:48.618608 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 17 00:01:48.685270 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jan 17 00:01:48.685334 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Jan 17 00:01:48.692094 kernel: ena 0000:00:05.0: ENA device version: 0.10
Jan 17 00:01:48.692499 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Jan 17 00:01:48.700057 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80110000, mac addr 06:1f:32:9d:bd:71
Jan 17 00:01:48.706257 (udev-worker)[530]: Network interface NamePolicy= disabled on kernel command line.
Jan 17 00:01:48.709852 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 17 00:01:48.712323 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 00:01:48.719165 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 00:01:48.725445 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 17 00:01:48.725738 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 00:01:48.745737 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 00:01:48.756350 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 00:01:48.775623 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Jan 17 00:01:48.775686 kernel: nvme nvme0: pci function 0000:00:04.0
Jan 17 00:01:48.787951 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Jan 17 00:01:48.798168 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 17 00:01:48.798245 kernel: GPT:9289727 != 33554431
Jan 17 00:01:48.798272 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 17 00:01:48.800222 kernel: GPT:9289727 != 33554431
Jan 17 00:01:48.800267 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 17 00:01:48.802286 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 17 00:01:48.804010 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 00:01:48.816262 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 00:01:48.880288 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 00:01:48.897986 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/nvme0n1p6 scanned by (udev-worker) (517)
Jan 17 00:01:48.936964 kernel: BTRFS: device fsid 257557f7-4bf9-4b29-86df-93ad67770d31 devid 1 transid 37 /dev/nvme0n1p3 scanned by (udev-worker) (529)
Jan 17 00:01:49.016268 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Jan 17 00:01:49.047529 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Jan 17 00:01:49.065991 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Jan 17 00:01:49.082933 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Jan 17 00:01:49.088897 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Jan 17 00:01:49.103222 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 17 00:01:49.123472 disk-uuid[661]: Primary Header is updated.
Jan 17 00:01:49.123472 disk-uuid[661]: Secondary Entries is updated.
Jan 17 00:01:49.123472 disk-uuid[661]: Secondary Header is updated.
Jan 17 00:01:49.130940 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 17 00:01:49.157983 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 17 00:01:50.157990 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 17 00:01:50.160807 disk-uuid[662]: The operation has completed successfully.
Jan 17 00:01:50.334701 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 17 00:01:50.334979 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 17 00:01:50.397235 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 17 00:01:50.422611 sh[920]: Success
Jan 17 00:01:50.450978 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jan 17 00:01:50.544131 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 17 00:01:50.574137 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 17 00:01:50.582857 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 17 00:01:50.621608 kernel: BTRFS info (device dm-0): first mount of filesystem 257557f7-4bf9-4b29-86df-93ad67770d31
Jan 17 00:01:50.621670 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jan 17 00:01:50.623643 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 17 00:01:50.625095 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 17 00:01:50.626363 kernel: BTRFS info (device dm-0): using free space tree
Jan 17 00:01:50.708973 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jan 17 00:01:50.734584 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 17 00:01:50.740801 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 17 00:01:50.750347 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 17 00:01:50.757262 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 17 00:01:50.793743 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 629d412e-8b84-495a-b9b7-c361e81b0700
Jan 17 00:01:50.793815 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jan 17 00:01:50.795260 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 17 00:01:50.801976 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 17 00:01:50.823764 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 17 00:01:50.826049 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 629d412e-8b84-495a-b9b7-c361e81b0700
Jan 17 00:01:50.837090 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 17 00:01:50.849277 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 17 00:01:50.944185 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 17 00:01:50.962317 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 17 00:01:51.024983 systemd-networkd[1113]: lo: Link UP
Jan 17 00:01:51.025004 systemd-networkd[1113]: lo: Gained carrier
Jan 17 00:01:51.027871 systemd-networkd[1113]: Enumeration completed
Jan 17 00:01:51.028827 systemd-networkd[1113]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 00:01:51.028834 systemd-networkd[1113]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 17 00:01:51.030159 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 17 00:01:51.034286 systemd[1]: Reached target network.target - Network.
Jan 17 00:01:51.051634 systemd-networkd[1113]: eth0: Link UP
Jan 17 00:01:51.051647 systemd-networkd[1113]: eth0: Gained carrier
Jan 17 00:01:51.051665 systemd-networkd[1113]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 00:01:51.074008 systemd-networkd[1113]: eth0: DHCPv4 address 172.31.23.180/20, gateway 172.31.16.1 acquired from 172.31.16.1
Jan 17 00:01:51.302493 ignition[1037]: Ignition 2.19.0
Jan 17 00:01:51.302976 ignition[1037]: Stage: fetch-offline
Jan 17 00:01:51.305007 ignition[1037]: no configs at "/usr/lib/ignition/base.d"
Jan 17 00:01:51.305033 ignition[1037]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 17 00:01:51.305901 ignition[1037]: Ignition finished successfully
Jan 17 00:01:51.316001 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 17 00:01:51.328258 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 17 00:01:51.353997 ignition[1125]: Ignition 2.19.0
Jan 17 00:01:51.354024 ignition[1125]: Stage: fetch
Jan 17 00:01:51.355966 ignition[1125]: no configs at "/usr/lib/ignition/base.d"
Jan 17 00:01:51.356034 ignition[1125]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 17 00:01:51.356208 ignition[1125]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 17 00:01:51.376674 ignition[1125]: PUT result: OK
Jan 17 00:01:51.380490 ignition[1125]: parsed url from cmdline: ""
Jan 17 00:01:51.380636 ignition[1125]: no config URL provided
Jan 17 00:01:51.380665 ignition[1125]: reading system config file "/usr/lib/ignition/user.ign"
Jan 17 00:01:51.380691 ignition[1125]: no config at "/usr/lib/ignition/user.ign"
Jan 17 00:01:51.380723 ignition[1125]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 17 00:01:51.385187 ignition[1125]: PUT result: OK
Jan 17 00:01:51.385393 ignition[1125]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Jan 17 00:01:51.393893 ignition[1125]: GET result: OK
Jan 17 00:01:51.395475 ignition[1125]: parsing config with SHA512: 29d51f9b1f701549dd160f6289746750111f56981c0fffff6acafa8eee48a1ae6b628b172c09e443979630c06d50afc9e983efe24b139de15a801e877d9fd54d
Jan 17 00:01:51.401240 unknown[1125]: fetched base config from "system"
Jan 17 00:01:51.401262 unknown[1125]: fetched base config from "system"
Jan 17 00:01:51.402190 ignition[1125]: fetch: fetch complete
Jan 17 00:01:51.401276 unknown[1125]: fetched user config from "aws"
Jan 17 00:01:51.402202 ignition[1125]: fetch: fetch passed
Jan 17 00:01:51.402304 ignition[1125]: Ignition finished successfully
Jan 17 00:01:51.415506 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 17 00:01:51.426301 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 17 00:01:51.463212 ignition[1132]: Ignition 2.19.0
Jan 17 00:01:51.463242 ignition[1132]: Stage: kargs
Jan 17 00:01:51.465148 ignition[1132]: no configs at "/usr/lib/ignition/base.d"
Jan 17 00:01:51.465214 ignition[1132]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 17 00:01:51.466478 ignition[1132]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 17 00:01:51.471178 ignition[1132]: PUT result: OK
Jan 17 00:01:51.476954 ignition[1132]: kargs: kargs passed
Jan 17 00:01:51.477062 ignition[1132]: Ignition finished successfully
Jan 17 00:01:51.479377 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 17 00:01:51.491399 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 17 00:01:51.522336 ignition[1138]: Ignition 2.19.0
Jan 17 00:01:51.522372 ignition[1138]: Stage: disks
Jan 17 00:01:51.524278 ignition[1138]: no configs at "/usr/lib/ignition/base.d"
Jan 17 00:01:51.524304 ignition[1138]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 17 00:01:51.525566 ignition[1138]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 17 00:01:51.528342 ignition[1138]: PUT result: OK
Jan 17 00:01:51.537317 ignition[1138]: disks: disks passed
Jan 17 00:01:51.538932 ignition[1138]: Ignition finished successfully
Jan 17 00:01:51.542498 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 17 00:01:51.547721 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 17 00:01:51.550857 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 17 00:01:51.559145 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 17 00:01:51.561506 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 17 00:01:51.564255 systemd[1]: Reached target basic.target - Basic System.
Jan 17 00:01:51.573365 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 17 00:01:51.620839 systemd-fsck[1146]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 17 00:01:51.625024 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 17 00:01:51.637297 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 17 00:01:51.736254 kernel: EXT4-fs (nvme0n1p9): mounted filesystem b70ce012-b356-4603-a688-ee0b3b7de551 r/w with ordered data mode. Quota mode: none.
Jan 17 00:01:51.738608 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 17 00:01:51.742765 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 17 00:01:51.762090 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 17 00:01:51.771079 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 17 00:01:51.773662 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 17 00:01:51.773740 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 17 00:01:51.773786 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 17 00:01:51.796955 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/nvme0n1p6 scanned by mount (1165)
Jan 17 00:01:51.801955 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 629d412e-8b84-495a-b9b7-c361e81b0700
Jan 17 00:01:51.802017 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jan 17 00:01:51.802057 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 17 00:01:51.808955 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 17 00:01:51.811668 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 17 00:01:51.821228 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 17 00:01:51.831297 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 17 00:01:52.150901 initrd-setup-root[1189]: cut: /sysroot/etc/passwd: No such file or directory
Jan 17 00:01:52.172910 initrd-setup-root[1196]: cut: /sysroot/etc/group: No such file or directory
Jan 17 00:01:52.181792 initrd-setup-root[1203]: cut: /sysroot/etc/shadow: No such file or directory
Jan 17 00:01:52.191427 initrd-setup-root[1210]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 17 00:01:52.581468 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 17 00:01:52.592188 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 17 00:01:52.601209 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 17 00:01:52.619646 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 17 00:01:52.622519 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 629d412e-8b84-495a-b9b7-c361e81b0700
Jan 17 00:01:52.656118 systemd-networkd[1113]: eth0: Gained IPv6LL
Jan 17 00:01:52.672098 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 17 00:01:52.683317 ignition[1277]: INFO : Ignition 2.19.0
Jan 17 00:01:52.686383 ignition[1277]: INFO : Stage: mount
Jan 17 00:01:52.686383 ignition[1277]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 17 00:01:52.686383 ignition[1277]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 17 00:01:52.686383 ignition[1277]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 17 00:01:52.695861 ignition[1277]: INFO : PUT result: OK
Jan 17 00:01:52.700653 ignition[1277]: INFO : mount: mount passed
Jan 17 00:01:52.704374 ignition[1277]: INFO : Ignition finished successfully
Jan 17 00:01:52.702548 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 17 00:01:52.717129 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 17 00:01:52.745349 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 17 00:01:52.769949 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by mount (1290)
Jan 17 00:01:52.774044 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 629d412e-8b84-495a-b9b7-c361e81b0700
Jan 17 00:01:52.774087 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jan 17 00:01:52.775416 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 17 00:01:52.780957 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 17 00:01:52.784623 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 17 00:01:52.821884 ignition[1307]: INFO : Ignition 2.19.0
Jan 17 00:01:52.824031 ignition[1307]: INFO : Stage: files
Jan 17 00:01:52.824031 ignition[1307]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 17 00:01:52.824031 ignition[1307]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 17 00:01:52.824031 ignition[1307]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 17 00:01:52.834095 ignition[1307]: INFO : PUT result: OK
Jan 17 00:01:52.838532 ignition[1307]: DEBUG : files: compiled without relabeling support, skipping
Jan 17 00:01:52.842196 ignition[1307]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 17 00:01:52.842196 ignition[1307]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 17 00:01:52.863187 ignition[1307]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 17 00:01:52.867114 ignition[1307]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 17 00:01:52.870753 unknown[1307]: wrote ssh authorized keys file for user: core
Jan 17 00:01:52.873709 ignition[1307]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 17 00:01:52.877988 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh"
Jan 17 00:01:52.877988 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh"
Jan 17 00:01:52.877988 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 17 00:01:52.877988 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 17 00:01:52.877988 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jan 17 00:01:52.899895 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jan 17 00:01:52.899895 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jan 17 00:01:52.899895 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1
Jan 17 00:01:53.389716 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Jan 17 00:01:53.849485 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jan 17 00:01:53.854969 ignition[1307]: INFO : files: createResultFile: createFiles: op(7): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 17 00:01:53.854969 ignition[1307]: INFO : files: createResultFile: createFiles: op(7): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 17 00:01:53.854969 ignition[1307]: INFO : files: files passed
Jan 17 00:01:53.854969 ignition[1307]: INFO : Ignition finished successfully
Jan 17 00:01:53.861767 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 17 00:01:53.879252 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 17 00:01:53.884348 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 17 00:01:53.912502 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 17 00:01:53.915099 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 17 00:01:53.935518 initrd-setup-root-after-ignition[1340]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 17 00:01:53.939459 initrd-setup-root-after-ignition[1336]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 17 00:01:53.939459 initrd-setup-root-after-ignition[1336]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 17 00:01:53.949993 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 17 00:01:53.955842 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 17 00:01:53.967288 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 17 00:01:54.018663 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 17 00:01:54.019093 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 17 00:01:54.028556 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 17 00:01:54.031045 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 17 00:01:54.033441 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 17 00:01:54.046230 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 17 00:01:54.077883 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 17 00:01:54.089382 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 17 00:01:54.117106 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 17 00:01:54.117522 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 17 00:01:54.119384 systemd[1]: Stopped target timers.target - Timer Units.
Jan 17 00:01:54.120134 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 17 00:01:54.120446 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 17 00:01:54.121687 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 17 00:01:54.122608 systemd[1]: Stopped target basic.target - Basic System.
Jan 17 00:01:54.123455 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 17 00:01:54.124264 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 17 00:01:54.125101 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 17 00:01:54.125938 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 17 00:01:54.126765 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 17 00:01:54.127632 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 17 00:01:54.128439 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 17 00:01:54.129257 systemd[1]: Stopped target swap.target - Swaps.
Jan 17 00:01:54.129954 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 17 00:01:54.130176 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 17 00:01:54.132010 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 17 00:01:54.132459 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 17 00:01:54.132713 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 17 00:01:54.163548 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 17 00:01:54.211676 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 17 00:01:54.211956 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 17 00:01:54.218941 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 17 00:01:54.219393 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 17 00:01:54.227820 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 17 00:01:54.230198 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 17 00:01:54.242301 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 17 00:01:54.245256 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 17 00:01:54.245532 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 17 00:01:54.269365 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 17 00:01:54.274179 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 17 00:01:54.275190 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 17 00:01:54.283406 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 17 00:01:54.284884 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 17 00:01:54.313975 ignition[1360]: INFO : Ignition 2.19.0
Jan 17 00:01:54.313975 ignition[1360]: INFO : Stage: umount
Jan 17 00:01:54.316418 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 17 00:01:54.319633 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 17 00:01:54.326067 ignition[1360]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 17 00:01:54.326067 ignition[1360]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 17 00:01:54.326067 ignition[1360]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 17 00:01:54.335180 ignition[1360]: INFO : PUT result: OK
Jan 17 00:01:54.339779 ignition[1360]: INFO : umount: umount passed
Jan 17 00:01:54.339779 ignition[1360]: INFO : Ignition finished successfully
Jan 17 00:01:54.348882 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 17 00:01:54.350175 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 17 00:01:54.358081 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 17 00:01:54.358601 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 17 00:01:54.366086 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 17 00:01:54.366195 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 17 00:01:54.368667 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 17 00:01:54.368751 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 17 00:01:54.371205 systemd[1]: Stopped target network.target - Network.
Jan 17 00:01:54.373241 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 17 00:01:54.373327 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 17 00:01:54.376674 systemd[1]: Stopped target paths.target - Path Units.
Jan 17 00:01:54.378758 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 17 00:01:54.380975 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 17 00:01:54.383803 systemd[1]: Stopped target slices.target - Slice Units.
Jan 17 00:01:54.385899 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 17 00:01:54.388405 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 17 00:01:54.388492 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 17 00:01:54.390863 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 17 00:01:54.390980 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 17 00:01:54.393371 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 17 00:01:54.393461 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 17 00:01:54.396296 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 17 00:01:54.396386 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 17 00:01:54.399255 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 17 00:01:54.402030 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 17 00:01:54.411995 systemd-networkd[1113]: eth0: DHCPv6 lease lost
Jan 17 00:01:54.413862 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 17 00:01:54.418843 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 17 00:01:54.426895 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 17 00:01:54.437206 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 17 00:01:54.438908 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 17 00:01:54.444141 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 17 00:01:54.444218 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 17 00:01:54.456178 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 17 00:01:54.463023 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 17 00:01:54.463641 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 17 00:01:54.470663 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 17 00:01:54.470788 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 17 00:01:54.473385 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 17 00:01:54.473705 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 17 00:01:54.479989 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 17 00:01:54.480086 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 00:01:54.485179 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 00:01:54.532905 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 17 00:01:54.538053 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 00:01:54.547746 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 17 00:01:54.548003 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 17 00:01:54.559802 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 17 00:01:54.559959 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 17 00:01:54.572095 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 17 00:01:54.572178 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 00:01:54.593894 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 17 00:01:54.594690 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 17 00:01:54.599020 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 17 00:01:54.599119 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 17 00:01:54.608258 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 17 00:01:54.608350 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 00:01:54.611300 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 17 00:01:54.611390 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. 
Jan 17 00:01:54.634315 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 17 00:01:54.637113 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 17 00:01:54.637224 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 00:01:54.648656 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 00:01:54.648763 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:01:54.652185 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 17 00:01:54.652371 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 17 00:01:54.665865 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 17 00:01:54.666981 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 17 00:01:54.677779 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 17 00:01:54.687659 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 17 00:01:54.705388 systemd[1]: Switching root. Jan 17 00:01:54.762009 systemd-journald[251]: Journal stopped Jan 17 00:01:57.071688 systemd-journald[251]: Received SIGTERM from PID 1 (systemd). 
Jan 17 00:01:57.071834 kernel: SELinux: policy capability network_peer_controls=1 Jan 17 00:01:57.071882 kernel: SELinux: policy capability open_perms=1 Jan 17 00:01:57.074277 kernel: SELinux: policy capability extended_socket_class=1 Jan 17 00:01:57.074316 kernel: SELinux: policy capability always_check_network=0 Jan 17 00:01:57.074347 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 17 00:01:57.074379 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 17 00:01:57.074410 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 17 00:01:57.074439 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 17 00:01:57.074471 kernel: audit: type=1403 audit(1768608115.177:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 17 00:01:57.074503 systemd[1]: Successfully loaded SELinux policy in 82.823ms. Jan 17 00:01:57.074555 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 23.386ms. Jan 17 00:01:57.074595 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 17 00:01:57.074629 systemd[1]: Detected virtualization amazon. Jan 17 00:01:57.074661 systemd[1]: Detected architecture arm64. Jan 17 00:01:57.074693 systemd[1]: Detected first boot. Jan 17 00:01:57.074749 systemd[1]: Initializing machine ID from VM UUID. Jan 17 00:01:57.074784 zram_generator::config[1403]: No configuration found. Jan 17 00:01:57.074818 systemd[1]: Populated /etc with preset unit settings. Jan 17 00:01:57.074849 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 17 00:01:57.074887 systemd[1]: Stopped initrd-switch-root.service - Switch Root. 
Jan 17 00:01:57.075967 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 17 00:01:57.076034 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 17 00:01:57.076069 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 17 00:01:57.076104 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 17 00:01:57.076145 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 17 00:01:57.076180 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 17 00:01:57.076215 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 17 00:01:57.076249 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 17 00:01:57.076596 systemd[1]: Created slice user.slice - User and Session Slice. Jan 17 00:01:57.076644 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 00:01:57.076690 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 00:01:57.076724 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 17 00:01:57.076754 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 17 00:01:57.076794 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 17 00:01:57.076826 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 17 00:01:57.076856 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 17 00:01:57.076887 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 00:01:57.076935 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. 
Jan 17 00:01:57.076971 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 17 00:01:57.077002 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 17 00:01:57.077034 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 17 00:01:57.077071 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 00:01:57.077104 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 17 00:01:57.077135 systemd[1]: Reached target slices.target - Slice Units. Jan 17 00:01:57.077169 systemd[1]: Reached target swap.target - Swaps. Jan 17 00:01:57.077199 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 17 00:01:57.077235 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 17 00:01:57.077265 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 17 00:01:57.077295 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 17 00:01:57.077328 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 00:01:57.077361 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 17 00:01:57.077391 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 17 00:01:57.077421 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 17 00:01:57.077452 systemd[1]: Mounting media.mount - External Media Directory... Jan 17 00:01:57.077484 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 17 00:01:57.077515 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 17 00:01:57.077546 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
Jan 17 00:01:57.077577 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 17 00:01:57.077606 systemd[1]: Reached target machines.target - Containers. Jan 17 00:01:57.077641 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 17 00:01:57.077671 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 00:01:57.077701 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 17 00:01:57.077730 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 17 00:01:57.077763 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 00:01:57.077793 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 17 00:01:57.077823 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 00:01:57.077852 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 17 00:01:57.077886 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 00:01:57.081976 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 17 00:01:57.082055 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 17 00:01:57.082087 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 17 00:01:57.082120 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 17 00:01:57.082152 systemd[1]: Stopped systemd-fsck-usr.service. Jan 17 00:01:57.082185 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 17 00:01:57.082218 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
Jan 17 00:01:57.082249 kernel: fuse: init (API version 7.39) Jan 17 00:01:57.082291 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 17 00:01:57.082323 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 17 00:01:57.082358 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 17 00:01:57.082393 systemd[1]: verity-setup.service: Deactivated successfully. Jan 17 00:01:57.082426 systemd[1]: Stopped verity-setup.service. Jan 17 00:01:57.082458 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 17 00:01:57.082489 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 17 00:01:57.082520 systemd[1]: Mounted media.mount - External Media Directory. Jan 17 00:01:57.082558 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 17 00:01:57.082595 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 17 00:01:57.082628 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 17 00:01:57.082663 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 00:01:57.082696 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 17 00:01:57.082765 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 17 00:01:57.082801 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 00:01:57.082832 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 00:01:57.082862 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 00:01:57.082891 kernel: loop: module loaded Jan 17 00:01:57.087976 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 00:01:57.088049 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 17 00:01:57.088084 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. 
Jan 17 00:01:57.088117 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 00:01:57.088155 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 00:01:57.088188 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 17 00:01:57.088224 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 17 00:01:57.088255 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 17 00:01:57.088285 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 17 00:01:57.088368 systemd-journald[1485]: Collecting audit messages is disabled. Jan 17 00:01:57.088439 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 17 00:01:57.088471 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 17 00:01:57.088502 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 17 00:01:57.088531 systemd-journald[1485]: Journal started Jan 17 00:01:57.088595 systemd-journald[1485]: Runtime Journal (/run/log/journal/ec29bf583b860ec4709bd73778cce38d) is 8.0M, max 75.3M, 67.3M free. Jan 17 00:01:56.394142 systemd[1]: Queued start job for default target multi-user.target. Jan 17 00:01:56.445757 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jan 17 00:01:56.446582 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 17 00:01:57.100782 kernel: ACPI: bus type drm_connector registered Jan 17 00:01:57.100847 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 17 00:01:57.100889 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 17 00:01:57.127981 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... 
Jan 17 00:01:57.140038 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 17 00:01:57.140134 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 00:01:57.159326 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 17 00:01:57.159416 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 00:01:57.178044 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 17 00:01:57.178146 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 00:01:57.178185 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 17 00:01:57.197987 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 17 00:01:57.214796 systemd[1]: Started systemd-journald.service - Journal Service. Jan 17 00:01:57.219020 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 17 00:01:57.223681 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 17 00:01:57.224063 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 17 00:01:57.226973 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 17 00:01:57.230063 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 17 00:01:57.236056 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 17 00:01:57.254977 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 17 00:01:57.311172 kernel: loop0: detected capacity change from 0 to 211168 Jan 17 00:01:57.309716 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. 
Jan 17 00:01:57.319298 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 17 00:01:57.334224 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 17 00:01:57.341288 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 17 00:01:57.378496 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 17 00:01:57.389385 systemd-journald[1485]: Time spent on flushing to /var/log/journal/ec29bf583b860ec4709bd73778cce38d is 118.774ms for 887 entries. Jan 17 00:01:57.389385 systemd-journald[1485]: System Journal (/var/log/journal/ec29bf583b860ec4709bd73778cce38d) is 8.0M, max 195.6M, 187.6M free. Jan 17 00:01:57.518542 systemd-journald[1485]: Received client request to flush runtime journal. Jan 17 00:01:57.518631 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 17 00:01:57.518668 kernel: loop1: detected capacity change from 0 to 114328 Jan 17 00:01:57.452138 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 00:01:57.465373 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 17 00:01:57.470512 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 17 00:01:57.472997 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 17 00:01:57.526038 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 17 00:01:57.535133 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 17 00:01:57.550176 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 17 00:01:57.554027 udevadm[1547]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 17 00:01:57.602348 systemd-tmpfiles[1553]: ACLs are not supported, ignoring. 
Jan 17 00:01:57.602388 systemd-tmpfiles[1553]: ACLs are not supported, ignoring. Jan 17 00:01:57.613074 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 00:01:57.639963 kernel: loop2: detected capacity change from 0 to 52536 Jan 17 00:01:57.709970 kernel: loop3: detected capacity change from 0 to 114432 Jan 17 00:01:57.831968 kernel: loop4: detected capacity change from 0 to 211168 Jan 17 00:01:57.859965 kernel: loop5: detected capacity change from 0 to 114328 Jan 17 00:01:57.872965 kernel: loop6: detected capacity change from 0 to 52536 Jan 17 00:01:57.889956 kernel: loop7: detected capacity change from 0 to 114432 Jan 17 00:01:57.907303 (sd-merge)[1559]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Jan 17 00:01:57.908308 (sd-merge)[1559]: Merged extensions into '/usr'. Jan 17 00:01:57.918022 systemd[1]: Reloading requested from client PID 1515 ('systemd-sysext') (unit systemd-sysext.service)... Jan 17 00:01:57.918056 systemd[1]: Reloading... Jan 17 00:01:58.100970 zram_generator::config[1585]: No configuration found. Jan 17 00:01:58.377850 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 00:01:58.496290 systemd[1]: Reloading finished in 576 ms. Jan 17 00:01:58.535996 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 17 00:01:58.540688 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 17 00:01:58.560211 systemd[1]: Starting ensure-sysext.service... Jan 17 00:01:58.566270 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 17 00:01:58.580321 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Jan 17 00:01:58.588301 systemd[1]: Reloading requested from client PID 1637 ('systemctl') (unit ensure-sysext.service)... Jan 17 00:01:58.588336 systemd[1]: Reloading... Jan 17 00:01:58.653379 systemd-tmpfiles[1638]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 17 00:01:58.656187 systemd-tmpfiles[1638]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 17 00:01:58.658052 systemd-tmpfiles[1638]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 17 00:01:58.658598 systemd-tmpfiles[1638]: ACLs are not supported, ignoring. Jan 17 00:01:58.658759 systemd-tmpfiles[1638]: ACLs are not supported, ignoring. Jan 17 00:01:58.667233 systemd-tmpfiles[1638]: Detected autofs mount point /boot during canonicalization of boot. Jan 17 00:01:58.667261 systemd-tmpfiles[1638]: Skipping /boot Jan 17 00:01:58.705380 systemd-tmpfiles[1638]: Detected autofs mount point /boot during canonicalization of boot. Jan 17 00:01:58.705411 systemd-tmpfiles[1638]: Skipping /boot Jan 17 00:01:58.727823 systemd-udevd[1639]: Using default interface naming scheme 'v255'. Jan 17 00:01:58.806971 zram_generator::config[1672]: No configuration found. Jan 17 00:01:58.827474 ldconfig[1511]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 17 00:01:59.025516 (udev-worker)[1677]: Network interface NamePolicy= disabled on kernel command line. Jan 17 00:01:59.176592 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 00:01:59.328973 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (1703) Jan 17 00:01:59.340474 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. 
Jan 17 00:01:59.341267 systemd[1]: Reloading finished in 752 ms. Jan 17 00:01:59.380630 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 00:01:59.384800 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 17 00:01:59.389575 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 00:01:59.459396 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 17 00:01:59.474950 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 17 00:01:59.483297 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 17 00:01:59.493308 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 17 00:01:59.503282 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 17 00:01:59.514244 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 17 00:01:59.525454 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 00:01:59.548454 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 17 00:01:59.555151 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 00:01:59.559615 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 00:01:59.568362 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 00:01:59.575663 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 00:01:59.583052 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 00:01:59.587606 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Jan 17 00:01:59.588265 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 00:01:59.602199 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 00:01:59.611000 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 17 00:01:59.613568 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 00:01:59.614010 systemd[1]: Reached target time-set.target - System Time Set. Jan 17 00:01:59.620762 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 17 00:01:59.624760 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 00:01:59.625188 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 00:01:59.656242 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 17 00:01:59.659545 systemd[1]: Finished ensure-sysext.service. Jan 17 00:01:59.668954 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 00:01:59.669378 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 00:01:59.687231 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 17 00:01:59.709092 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 00:01:59.772572 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 17 00:01:59.773307 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 17 00:01:59.780974 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 17 00:01:59.788895 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Jan 17 00:01:59.790116 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 00:01:59.794329 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 17 00:01:59.805153 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 00:01:59.805257 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 17 00:01:59.856411 augenrules[1873]: No rules Jan 17 00:01:59.856860 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 17 00:01:59.883372 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jan 17 00:01:59.897349 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 17 00:01:59.900280 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 17 00:01:59.907040 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 17 00:01:59.919839 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 17 00:01:59.955460 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:02:00.011136 lvm[1886]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 17 00:02:00.013049 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 17 00:02:00.056230 systemd-networkd[1823]: lo: Link UP Jan 17 00:02:00.056672 systemd-networkd[1823]: lo: Gained carrier Jan 17 00:02:00.059451 systemd-networkd[1823]: Enumeration completed Jan 17 00:02:00.060236 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Jan 17 00:02:00.065736 systemd-networkd[1823]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 00:02:00.065849 systemd-networkd[1823]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 17 00:02:00.068260 systemd-networkd[1823]: eth0: Link UP Jan 17 00:02:00.068560 systemd-networkd[1823]: eth0: Gained carrier Jan 17 00:02:00.068593 systemd-networkd[1823]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 00:02:00.072261 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 17 00:02:00.076959 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 17 00:02:00.084843 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 17 00:02:00.088108 systemd-networkd[1823]: eth0: DHCPv4 address 172.31.23.180/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jan 17 00:02:00.097269 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 17 00:02:00.120117 lvm[1896]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 17 00:02:00.128592 systemd-resolved[1825]: Positive Trust Anchors: Jan 17 00:02:00.128623 systemd-resolved[1825]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 17 00:02:00.128685 systemd-resolved[1825]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 17 00:02:00.143494 systemd-resolved[1825]: Defaulting to hostname 'linux'. Jan 17 00:02:00.146820 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 17 00:02:00.149754 systemd[1]: Reached target network.target - Network. Jan 17 00:02:00.151905 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 17 00:02:00.154758 systemd[1]: Reached target sysinit.target - System Initialization. Jan 17 00:02:00.157390 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 17 00:02:00.160509 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 17 00:02:00.163785 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 17 00:02:00.166485 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 17 00:02:00.169442 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 17 00:02:00.172394 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 17 00:02:00.172454 systemd[1]: Reached target paths.target - Path Units. 
Jan 17 00:02:00.174619 systemd[1]: Reached target timers.target - Timer Units. Jan 17 00:02:00.177535 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 17 00:02:00.182809 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 17 00:02:00.202196 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 17 00:02:00.205948 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 17 00:02:00.209336 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 17 00:02:00.213193 systemd[1]: Reached target sockets.target - Socket Units. Jan 17 00:02:00.215826 systemd[1]: Reached target basic.target - Basic System. Jan 17 00:02:00.218149 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 17 00:02:00.218223 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 17 00:02:00.225126 systemd[1]: Starting containerd.service - containerd container runtime... Jan 17 00:02:00.232300 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 17 00:02:00.240388 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 17 00:02:00.247136 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 17 00:02:00.252127 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 17 00:02:00.254494 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 17 00:02:00.260366 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 17 00:02:00.275377 systemd[1]: Started ntpd.service - Network Time Service. Jan 17 00:02:00.283194 systemd[1]: Starting setup-oem.service - Setup OEM... 
Jan 17 00:02:00.292272 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 17 00:02:00.302599 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 17 00:02:00.314726 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 17 00:02:00.323340 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 17 00:02:00.324232 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 17 00:02:00.333166 systemd[1]: Starting update-engine.service - Update Engine... Jan 17 00:02:00.341035 jq[1903]: false Jan 17 00:02:00.341200 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 17 00:02:00.351448 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 17 00:02:00.353887 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 17 00:02:00.407785 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 17 00:02:00.408221 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 17 00:02:00.446981 jq[1912]: true Jan 17 00:02:00.471685 dbus-daemon[1902]: [system] SELinux support is enabled Jan 17 00:02:00.476307 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 17 00:02:00.485731 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 17 00:02:00.485802 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
Jan 17 00:02:00.489982 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 17 00:02:00.490023 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 17 00:02:00.499466 dbus-daemon[1902]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1823 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 17 00:02:00.502563 dbus-daemon[1902]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 17 00:02:00.524244 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jan 17 00:02:00.542942 extend-filesystems[1904]: Found loop4 Jan 17 00:02:00.542942 extend-filesystems[1904]: Found loop5 Jan 17 00:02:00.542942 extend-filesystems[1904]: Found loop6 Jan 17 00:02:00.542942 extend-filesystems[1904]: Found loop7 Jan 17 00:02:00.542942 extend-filesystems[1904]: Found nvme0n1 Jan 17 00:02:00.542942 extend-filesystems[1904]: Found nvme0n1p1 Jan 17 00:02:00.542942 extend-filesystems[1904]: Found nvme0n1p2 Jan 17 00:02:00.542942 extend-filesystems[1904]: Found nvme0n1p3 Jan 17 00:02:00.542942 extend-filesystems[1904]: Found usr Jan 17 00:02:00.542942 extend-filesystems[1904]: Found nvme0n1p4 Jan 17 00:02:00.542942 extend-filesystems[1904]: Found nvme0n1p6 Jan 17 00:02:00.542942 extend-filesystems[1904]: Found nvme0n1p7 Jan 17 00:02:00.542942 extend-filesystems[1904]: Found nvme0n1p9 Jan 17 00:02:00.542942 extend-filesystems[1904]: Checking size of /dev/nvme0n1p9 Jan 17 00:02:00.567744 systemd[1]: motdgen.service: Deactivated successfully. Jan 17 00:02:00.581196 jq[1931]: true Jan 17 00:02:00.568145 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Jan 17 00:02:00.601858 (ntainerd)[1933]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 17 00:02:00.604025 ntpd[1906]: 17 Jan 00:02:00 ntpd[1906]: ntpd 4.2.8p17@1.4004-o Fri Jan 16 21:54:20 UTC 2026 (1): Starting Jan 17 00:02:00.604025 ntpd[1906]: 17 Jan 00:02:00 ntpd[1906]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 17 00:02:00.604025 ntpd[1906]: 17 Jan 00:02:00 ntpd[1906]: ---------------------------------------------------- Jan 17 00:02:00.604025 ntpd[1906]: 17 Jan 00:02:00 ntpd[1906]: ntp-4 is maintained by Network Time Foundation, Jan 17 00:02:00.604025 ntpd[1906]: 17 Jan 00:02:00 ntpd[1906]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 17 00:02:00.604025 ntpd[1906]: 17 Jan 00:02:00 ntpd[1906]: corporation. Support and training for ntp-4 are Jan 17 00:02:00.604025 ntpd[1906]: 17 Jan 00:02:00 ntpd[1906]: available at https://www.nwtime.org/support Jan 17 00:02:00.604025 ntpd[1906]: 17 Jan 00:02:00 ntpd[1906]: ---------------------------------------------------- Jan 17 00:02:00.602078 ntpd[1906]: ntpd 4.2.8p17@1.4004-o Fri Jan 16 21:54:20 UTC 2026 (1): Starting Jan 17 00:02:00.602124 ntpd[1906]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 17 00:02:00.602145 ntpd[1906]: ---------------------------------------------------- Jan 17 00:02:00.602165 ntpd[1906]: ntp-4 is maintained by Network Time Foundation, Jan 17 00:02:00.602185 ntpd[1906]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 17 00:02:00.602204 ntpd[1906]: corporation. 
Support and training for ntp-4 are Jan 17 00:02:00.602223 ntpd[1906]: available at https://www.nwtime.org/support Jan 17 00:02:00.602241 ntpd[1906]: ---------------------------------------------------- Jan 17 00:02:00.609855 ntpd[1906]: proto: precision = 0.096 usec (-23) Jan 17 00:02:00.610243 ntpd[1906]: 17 Jan 00:02:00 ntpd[1906]: proto: precision = 0.096 usec (-23) Jan 17 00:02:00.611115 ntpd[1906]: basedate set to 2026-01-04 Jan 17 00:02:00.616948 ntpd[1906]: 17 Jan 00:02:00 ntpd[1906]: basedate set to 2026-01-04 Jan 17 00:02:00.616948 ntpd[1906]: 17 Jan 00:02:00 ntpd[1906]: gps base set to 2026-01-04 (week 2400) Jan 17 00:02:00.615018 ntpd[1906]: gps base set to 2026-01-04 (week 2400) Jan 17 00:02:00.619691 ntpd[1906]: Listen and drop on 0 v6wildcard [::]:123 Jan 17 00:02:00.620051 ntpd[1906]: 17 Jan 00:02:00 ntpd[1906]: Listen and drop on 0 v6wildcard [::]:123 Jan 17 00:02:00.620051 ntpd[1906]: 17 Jan 00:02:00 ntpd[1906]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 17 00:02:00.619782 ntpd[1906]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 17 00:02:00.620391 ntpd[1906]: Listen normally on 2 lo 127.0.0.1:123 Jan 17 00:02:00.620523 ntpd[1906]: 17 Jan 00:02:00 ntpd[1906]: Listen normally on 2 lo 127.0.0.1:123 Jan 17 00:02:00.620684 ntpd[1906]: Listen normally on 3 eth0 172.31.23.180:123 Jan 17 00:02:00.623072 ntpd[1906]: Listen normally on 4 lo [::1]:123 Jan 17 00:02:00.624154 ntpd[1906]: 17 Jan 00:02:00 ntpd[1906]: Listen normally on 3 eth0 172.31.23.180:123 Jan 17 00:02:00.624154 ntpd[1906]: 17 Jan 00:02:00 ntpd[1906]: Listen normally on 4 lo [::1]:123 Jan 17 00:02:00.624154 ntpd[1906]: 17 Jan 00:02:00 ntpd[1906]: bind(21) AF_INET6 fe80::41f:32ff:fe9d:bd71%2#123 flags 0x11 failed: Cannot assign requested address Jan 17 00:02:00.624154 ntpd[1906]: 17 Jan 00:02:00 ntpd[1906]: unable to create socket on eth0 (5) for fe80::41f:32ff:fe9d:bd71%2#123 Jan 17 00:02:00.624154 ntpd[1906]: 17 Jan 00:02:00 ntpd[1906]: failed to init interface for address 
fe80::41f:32ff:fe9d:bd71%2 Jan 17 00:02:00.624154 ntpd[1906]: 17 Jan 00:02:00 ntpd[1906]: Listening on routing socket on fd #21 for interface updates Jan 17 00:02:00.623156 ntpd[1906]: bind(21) AF_INET6 fe80::41f:32ff:fe9d:bd71%2#123 flags 0x11 failed: Cannot assign requested address Jan 17 00:02:00.623196 ntpd[1906]: unable to create socket on eth0 (5) for fe80::41f:32ff:fe9d:bd71%2#123 Jan 17 00:02:00.623225 ntpd[1906]: failed to init interface for address fe80::41f:32ff:fe9d:bd71%2 Jan 17 00:02:00.623283 ntpd[1906]: Listening on routing socket on fd #21 for interface updates Jan 17 00:02:00.643221 ntpd[1906]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 17 00:02:00.646180 ntpd[1906]: 17 Jan 00:02:00 ntpd[1906]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 17 00:02:00.646180 ntpd[1906]: 17 Jan 00:02:00 ntpd[1906]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 17 00:02:00.643272 ntpd[1906]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 17 00:02:00.653748 systemd[1]: Finished setup-oem.service - Setup OEM. Jan 17 00:02:00.674243 extend-filesystems[1904]: Resized partition /dev/nvme0n1p9 Jan 17 00:02:00.683619 extend-filesystems[1951]: resize2fs 1.47.1 (20-May-2024) Jan 17 00:02:00.695529 update_engine[1911]: I20260117 00:02:00.695118 1911 main.cc:92] Flatcar Update Engine starting Jan 17 00:02:00.702747 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 3587067 blocks Jan 17 00:02:00.713720 systemd[1]: Started update-engine.service - Update Engine. Jan 17 00:02:00.720584 update_engine[1911]: I20260117 00:02:00.720228 1911 update_check_scheduler.cc:74] Next update check in 9m47s Jan 17 00:02:00.737552 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Jan 17 00:02:00.757216 coreos-metadata[1901]: Jan 17 00:02:00.756 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 17 00:02:00.760318 coreos-metadata[1901]: Jan 17 00:02:00.759 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jan 17 00:02:00.769649 coreos-metadata[1901]: Jan 17 00:02:00.768 INFO Fetch successful Jan 17 00:02:00.769649 coreos-metadata[1901]: Jan 17 00:02:00.768 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jan 17 00:02:00.773993 coreos-metadata[1901]: Jan 17 00:02:00.773 INFO Fetch successful Jan 17 00:02:00.773993 coreos-metadata[1901]: Jan 17 00:02:00.773 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jan 17 00:02:00.775224 coreos-metadata[1901]: Jan 17 00:02:00.774 INFO Fetch successful Jan 17 00:02:00.775224 coreos-metadata[1901]: Jan 17 00:02:00.774 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jan 17 00:02:00.776556 coreos-metadata[1901]: Jan 17 00:02:00.775 INFO Fetch successful Jan 17 00:02:00.778156 coreos-metadata[1901]: Jan 17 00:02:00.776 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jan 17 00:02:00.778674 coreos-metadata[1901]: Jan 17 00:02:00.778 INFO Fetch failed with 404: resource not found Jan 17 00:02:00.778674 coreos-metadata[1901]: Jan 17 00:02:00.778 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jan 17 00:02:00.779901 coreos-metadata[1901]: Jan 17 00:02:00.779 INFO Fetch successful Jan 17 00:02:00.779901 coreos-metadata[1901]: Jan 17 00:02:00.779 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jan 17 00:02:00.782308 coreos-metadata[1901]: Jan 17 00:02:00.782 INFO Fetch successful Jan 17 00:02:00.782308 coreos-metadata[1901]: Jan 17 00:02:00.782 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jan 17 
00:02:00.783309 coreos-metadata[1901]: Jan 17 00:02:00.783 INFO Fetch successful Jan 17 00:02:00.783309 coreos-metadata[1901]: Jan 17 00:02:00.783 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jan 17 00:02:00.785090 coreos-metadata[1901]: Jan 17 00:02:00.784 INFO Fetch successful Jan 17 00:02:00.785090 coreos-metadata[1901]: Jan 17 00:02:00.785 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jan 17 00:02:00.792088 coreos-metadata[1901]: Jan 17 00:02:00.786 INFO Fetch successful Jan 17 00:02:00.799144 systemd-logind[1910]: Watching system buttons on /dev/input/event0 (Power Button) Jan 17 00:02:00.799197 systemd-logind[1910]: Watching system buttons on /dev/input/event1 (Sleep Button) Jan 17 00:02:00.799632 systemd-logind[1910]: New seat seat0. Jan 17 00:02:00.808076 systemd[1]: Started systemd-logind.service - User Login Management. Jan 17 00:02:00.877701 bash[1967]: Updated "/home/core/.ssh/authorized_keys" Jan 17 00:02:00.886191 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 17 00:02:00.894973 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 3587067 Jan 17 00:02:00.896432 systemd[1]: Starting sshkeys.service... Jan 17 00:02:00.912960 extend-filesystems[1951]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jan 17 00:02:00.912960 extend-filesystems[1951]: old_desc_blocks = 1, new_desc_blocks = 2 Jan 17 00:02:00.912960 extend-filesystems[1951]: The filesystem on /dev/nvme0n1p9 is now 3587067 (4k) blocks long. Jan 17 00:02:00.918444 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 17 00:02:00.953112 extend-filesystems[1904]: Resized filesystem in /dev/nvme0n1p9 Jan 17 00:02:00.920034 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 17 00:02:00.965667 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. 
Jan 17 00:02:00.974211 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 17 00:02:01.031037 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (1714) Jan 17 00:02:01.029460 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 17 00:02:01.054515 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 17 00:02:01.115136 dbus-daemon[1902]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 17 00:02:01.115534 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 17 00:02:01.119401 dbus-daemon[1902]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1938 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 17 00:02:01.140825 systemd[1]: Starting polkit.service - Authorization Manager... Jan 17 00:02:01.187727 locksmithd[1957]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 17 00:02:01.219133 polkitd[2013]: Started polkitd version 121 Jan 17 00:02:01.264503 polkitd[2013]: Loading rules from directory /etc/polkit-1/rules.d Jan 17 00:02:01.264644 polkitd[2013]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 17 00:02:01.273977 polkitd[2013]: Finished loading, compiling and executing 2 rules Jan 17 00:02:01.274813 dbus-daemon[1902]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 17 00:02:01.275226 systemd[1]: Started polkit.service - Authorization Manager. Jan 17 00:02:01.284071 polkitd[2013]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 17 00:02:01.363110 systemd-hostnamed[1938]: Hostname set to (transient) Jan 17 00:02:01.364013 systemd-resolved[1825]: System hostname changed to 'ip-172-31-23-180'. 
Jan 17 00:02:01.380949 coreos-metadata[1991]: Jan 17 00:02:01.375 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 17 00:02:01.381970 coreos-metadata[1991]: Jan 17 00:02:01.381 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jan 17 00:02:01.383791 coreos-metadata[1991]: Jan 17 00:02:01.383 INFO Fetch successful Jan 17 00:02:01.383791 coreos-metadata[1991]: Jan 17 00:02:01.383 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 17 00:02:01.389503 coreos-metadata[1991]: Jan 17 00:02:01.388 INFO Fetch successful Jan 17 00:02:01.392202 unknown[1991]: wrote ssh authorized keys file for user: core Jan 17 00:02:01.438300 update-ssh-keys[2069]: Updated "/home/core/.ssh/authorized_keys" Jan 17 00:02:01.442236 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 17 00:02:01.455364 systemd[1]: Finished sshkeys.service. Jan 17 00:02:01.493038 containerd[1933]: time="2026-01-17T00:02:01.492776710Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 17 00:02:01.603226 ntpd[1906]: bind(24) AF_INET6 fe80::41f:32ff:fe9d:bd71%2#123 flags 0x11 failed: Cannot assign requested address Jan 17 00:02:01.603299 ntpd[1906]: unable to create socket on eth0 (6) for fe80::41f:32ff:fe9d:bd71%2#123 Jan 17 00:02:01.603737 ntpd[1906]: 17 Jan 00:02:01 ntpd[1906]: bind(24) AF_INET6 fe80::41f:32ff:fe9d:bd71%2#123 flags 0x11 failed: Cannot assign requested address Jan 17 00:02:01.603737 ntpd[1906]: 17 Jan 00:02:01 ntpd[1906]: unable to create socket on eth0 (6) for fe80::41f:32ff:fe9d:bd71%2#123 Jan 17 00:02:01.603737 ntpd[1906]: 17 Jan 00:02:01 ntpd[1906]: failed to init interface for address fe80::41f:32ff:fe9d:bd71%2 Jan 17 00:02:01.603329 ntpd[1906]: failed to init interface for address fe80::41f:32ff:fe9d:bd71%2 Jan 17 00:02:01.606976 containerd[1933]: time="2026-01-17T00:02:01.606571487Z" 
level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 17 00:02:01.613946 containerd[1933]: time="2026-01-17T00:02:01.612319787Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.119-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:02:01.613946 containerd[1933]: time="2026-01-17T00:02:01.612383435Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 17 00:02:01.613946 containerd[1933]: time="2026-01-17T00:02:01.612418259Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 17 00:02:01.613946 containerd[1933]: time="2026-01-17T00:02:01.612743183Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 17 00:02:01.613946 containerd[1933]: time="2026-01-17T00:02:01.612779387Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 17 00:02:01.613946 containerd[1933]: time="2026-01-17T00:02:01.612894767Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:02:01.613946 containerd[1933]: time="2026-01-17T00:02:01.612954107Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 17 00:02:01.613946 containerd[1933]: time="2026-01-17T00:02:01.613234007Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:02:01.613946 containerd[1933]: time="2026-01-17T00:02:01.613265279Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 17 00:02:01.613946 containerd[1933]: time="2026-01-17T00:02:01.613295039Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:02:01.613946 containerd[1933]: time="2026-01-17T00:02:01.613323947Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 17 00:02:01.614455 containerd[1933]: time="2026-01-17T00:02:01.613480535Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 17 00:02:01.614455 containerd[1933]: time="2026-01-17T00:02:01.613868987Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 17 00:02:01.614806 containerd[1933]: time="2026-01-17T00:02:01.614767367Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:02:01.614967 containerd[1933]: time="2026-01-17T00:02:01.614887523Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 17 00:02:01.615246 containerd[1933]: time="2026-01-17T00:02:01.615217571Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Jan 17 00:02:01.615431 containerd[1933]: time="2026-01-17T00:02:01.615405359Z" level=info msg="metadata content store policy set" policy=shared Jan 17 00:02:01.616221 systemd-networkd[1823]: eth0: Gained IPv6LL Jan 17 00:02:01.622721 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 17 00:02:01.628718 containerd[1933]: time="2026-01-17T00:02:01.628490723Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 17 00:02:01.628718 containerd[1933]: time="2026-01-17T00:02:01.628601339Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 17 00:02:01.628718 containerd[1933]: time="2026-01-17T00:02:01.628638599Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 17 00:02:01.634470 containerd[1933]: time="2026-01-17T00:02:01.628683503Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 17 00:02:01.629255 systemd[1]: Reached target network-online.target - Network is Online. Jan 17 00:02:01.637232 containerd[1933]: time="2026-01-17T00:02:01.634403051Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 17 00:02:01.637808 containerd[1933]: time="2026-01-17T00:02:01.637578287Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 17 00:02:01.638384 containerd[1933]: time="2026-01-17T00:02:01.638338247Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 17 00:02:01.639075 containerd[1933]: time="2026-01-17T00:02:01.638785715Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." 
type=io.containerd.runtime.v2 Jan 17 00:02:01.639075 containerd[1933]: time="2026-01-17T00:02:01.638836043Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 17 00:02:01.639075 containerd[1933]: time="2026-01-17T00:02:01.638869103Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 17 00:02:01.639426 containerd[1933]: time="2026-01-17T00:02:01.638901551Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 17 00:02:01.639426 containerd[1933]: time="2026-01-17T00:02:01.639285635Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 17 00:02:01.639426 containerd[1933]: time="2026-01-17T00:02:01.639363659Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 17 00:02:01.639743 containerd[1933]: time="2026-01-17T00:02:01.639400055Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 17 00:02:01.639743 containerd[1933]: time="2026-01-17T00:02:01.639606047Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 17 00:02:01.639743 containerd[1933]: time="2026-01-17T00:02:01.639663311Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 17 00:02:01.639743 containerd[1933]: time="2026-01-17T00:02:01.639698435Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 17 00:02:01.640253 containerd[1933]: time="2026-01-17T00:02:01.639999251Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 Jan 17 00:02:01.640253 containerd[1933]: time="2026-01-17T00:02:01.640052915Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 17 00:02:01.640253 containerd[1933]: time="2026-01-17T00:02:01.640127483Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 17 00:02:01.640253 containerd[1933]: time="2026-01-17T00:02:01.640184735Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 17 00:02:01.640739 containerd[1933]: time="2026-01-17T00:02:01.640228751Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 17 00:02:01.640739 containerd[1933]: time="2026-01-17T00:02:01.640515551Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 17 00:02:01.640739 containerd[1933]: time="2026-01-17T00:02:01.640575191Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 17 00:02:01.640739 containerd[1933]: time="2026-01-17T00:02:01.640609235Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 17 00:02:01.640739 containerd[1933]: time="2026-01-17T00:02:01.640692299Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 17 00:02:01.641254 containerd[1933]: time="2026-01-17T00:02:01.641035223Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 17 00:02:01.641254 containerd[1933]: time="2026-01-17T00:02:01.641077811Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 17 00:02:01.641254 containerd[1933]: time="2026-01-17T00:02:01.641137319Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." 
type=io.containerd.grpc.v1 Jan 17 00:02:01.641674 containerd[1933]: time="2026-01-17T00:02:01.641169059Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 17 00:02:01.641674 containerd[1933]: time="2026-01-17T00:02:01.641457587Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 17 00:02:01.641674 containerd[1933]: time="2026-01-17T00:02:01.641518775Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 17 00:02:01.641674 containerd[1933]: time="2026-01-17T00:02:01.641609051Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 17 00:02:01.641674 containerd[1933]: time="2026-01-17T00:02:01.641641427Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 17 00:02:01.642375 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Jan 17 00:02:01.649590 containerd[1933]: time="2026-01-17T00:02:01.641905871Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 17 00:02:01.650255 containerd[1933]: time="2026-01-17T00:02:01.649772111Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 17 00:02:01.650255 containerd[1933]: time="2026-01-17T00:02:01.650050331Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 17 00:02:01.650255 containerd[1933]: time="2026-01-17T00:02:01.650105663Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 17 00:02:01.650255 containerd[1933]: time="2026-01-17T00:02:01.650137643Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 17 00:02:01.650255 containerd[1933]: time="2026-01-17T00:02:01.650193383Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 17 00:02:01.651090 containerd[1933]: time="2026-01-17T00:02:01.650225663Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 17 00:02:01.651616 containerd[1933]: time="2026-01-17T00:02:01.650741891Z" level=info msg="NRI interface is disabled by configuration." Jan 17 00:02:01.651616 containerd[1933]: time="2026-01-17T00:02:01.651311471Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 17 00:02:01.652561 containerd[1933]: time="2026-01-17T00:02:01.652398683Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} 
CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 17 00:02:01.653090 containerd[1933]: time="2026-01-17T00:02:01.652880987Z" level=info msg="Connect containerd service" Jan 17 00:02:01.653090 containerd[1933]: time="2026-01-17T00:02:01.652983791Z" level=info msg="using legacy CRI server" Jan 17 00:02:01.653090 containerd[1933]: time="2026-01-17T00:02:01.653004251Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 17 00:02:01.653472 containerd[1933]: time="2026-01-17T00:02:01.653328347Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 17 00:02:01.655394 containerd[1933]: 
time="2026-01-17T00:02:01.654715619Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 17 00:02:01.655773 containerd[1933]: time="2026-01-17T00:02:01.655734143Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 17 00:02:01.656278 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:02:01.661552 containerd[1933]: time="2026-01-17T00:02:01.661478519Z" level=info msg="Start subscribing containerd event" Jan 17 00:02:01.661750 containerd[1933]: time="2026-01-17T00:02:01.661723223Z" level=info msg="Start recovering state" Jan 17 00:02:01.662041 containerd[1933]: time="2026-01-17T00:02:01.662001683Z" level=info msg="Start event monitor" Jan 17 00:02:01.662149 containerd[1933]: time="2026-01-17T00:02:01.662123555Z" level=info msg="Start snapshots syncer" Jan 17 00:02:01.662398 containerd[1933]: time="2026-01-17T00:02:01.662367131Z" level=info msg="Start cni network conf syncer for default" Jan 17 00:02:01.662509 containerd[1933]: time="2026-01-17T00:02:01.662483423Z" level=info msg="Start streaming server" Jan 17 00:02:01.662981 containerd[1933]: time="2026-01-17T00:02:01.662903831Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 17 00:02:01.664512 containerd[1933]: time="2026-01-17T00:02:01.664456283Z" level=info msg="containerd successfully booted in 0.175824s" Jan 17 00:02:01.673486 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 17 00:02:01.684249 systemd[1]: Started containerd.service - containerd container runtime. Jan 17 00:02:01.774714 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Jan 17 00:02:01.793184 amazon-ssm-agent[2102]: Initializing new seelog logger Jan 17 00:02:01.793626 amazon-ssm-agent[2102]: New Seelog Logger Creation Complete Jan 17 00:02:01.793626 amazon-ssm-agent[2102]: 2026/01/17 00:02:01 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 17 00:02:01.793626 amazon-ssm-agent[2102]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 17 00:02:01.794971 amazon-ssm-agent[2102]: 2026/01/17 00:02:01 processing appconfig overrides Jan 17 00:02:01.795306 amazon-ssm-agent[2102]: 2026-01-17 00:02:01 INFO Proxy environment variables: Jan 17 00:02:01.795448 amazon-ssm-agent[2102]: 2026/01/17 00:02:01 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 17 00:02:01.796109 amazon-ssm-agent[2102]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 17 00:02:01.797218 amazon-ssm-agent[2102]: 2026/01/17 00:02:01 processing appconfig overrides Jan 17 00:02:01.798508 amazon-ssm-agent[2102]: 2026/01/17 00:02:01 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 17 00:02:01.798508 amazon-ssm-agent[2102]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 17 00:02:01.798703 amazon-ssm-agent[2102]: 2026/01/17 00:02:01 processing appconfig overrides Jan 17 00:02:01.806949 amazon-ssm-agent[2102]: 2026/01/17 00:02:01 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 17 00:02:01.806949 amazon-ssm-agent[2102]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 17 00:02:01.806949 amazon-ssm-agent[2102]: 2026/01/17 00:02:01 processing appconfig overrides Jan 17 00:02:01.896939 amazon-ssm-agent[2102]: 2026-01-17 00:02:01 INFO https_proxy: Jan 17 00:02:01.961714 sshd_keygen[1944]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 17 00:02:01.984606 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
Jan 17 00:02:01.999063 amazon-ssm-agent[2102]: 2026-01-17 00:02:01 INFO http_proxy: Jan 17 00:02:02.039291 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 17 00:02:02.053523 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 17 00:02:02.070469 systemd[1]: Started sshd@0-172.31.23.180:22-68.220.241.50:43380.service - OpenSSH per-connection server daemon (68.220.241.50:43380). Jan 17 00:02:02.097449 amazon-ssm-agent[2102]: 2026-01-17 00:02:01 INFO no_proxy: Jan 17 00:02:02.102132 systemd[1]: issuegen.service: Deactivated successfully. Jan 17 00:02:02.105036 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 17 00:02:02.121453 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 17 00:02:02.162629 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 17 00:02:02.175633 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 17 00:02:02.194678 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 17 00:02:02.196542 amazon-ssm-agent[2102]: 2026-01-17 00:02:01 INFO Checking if agent identity type OnPrem can be assumed Jan 17 00:02:02.203562 systemd[1]: Reached target getty.target - Login Prompts. 
Jan 17 00:02:02.294872 amazon-ssm-agent[2102]: 2026-01-17 00:02:01 INFO Checking if agent identity type EC2 can be assumed Jan 17 00:02:02.394379 amazon-ssm-agent[2102]: 2026-01-17 00:02:01 INFO Agent will take identity from EC2 Jan 17 00:02:02.493789 amazon-ssm-agent[2102]: 2026-01-17 00:02:01 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 17 00:02:02.593278 amazon-ssm-agent[2102]: 2026-01-17 00:02:01 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 17 00:02:02.620475 sshd[2129]: Accepted publickey for core from 68.220.241.50 port 43380 ssh2: RSA SHA256:sDAGzlGx6Edt6Gaawxj4VUM2Wx61RszNQ552ADQfH20 Jan 17 00:02:02.627127 sshd[2129]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:02:02.652346 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 17 00:02:02.667783 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 17 00:02:02.679269 systemd-logind[1910]: New session 1 of user core. Jan 17 00:02:02.692807 amazon-ssm-agent[2102]: 2026-01-17 00:02:01 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 17 00:02:02.716586 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 17 00:02:02.732645 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 17 00:02:02.760011 (systemd)[2142]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 17 00:02:02.792888 amazon-ssm-agent[2102]: 2026-01-17 00:02:01 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Jan 17 00:02:02.893043 amazon-ssm-agent[2102]: 2026-01-17 00:02:01 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Jan 17 00:02:02.898963 amazon-ssm-agent[2102]: 2026-01-17 00:02:01 INFO [amazon-ssm-agent] Starting Core Agent Jan 17 00:02:02.898963 amazon-ssm-agent[2102]: 2026-01-17 00:02:01 INFO [amazon-ssm-agent] registrar detected. 
Attempting registration Jan 17 00:02:02.898963 amazon-ssm-agent[2102]: 2026-01-17 00:02:01 INFO [Registrar] Starting registrar module Jan 17 00:02:02.898963 amazon-ssm-agent[2102]: 2026-01-17 00:02:01 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Jan 17 00:02:02.898963 amazon-ssm-agent[2102]: 2026-01-17 00:02:02 INFO [EC2Identity] EC2 registration was successful. Jan 17 00:02:02.898963 amazon-ssm-agent[2102]: 2026-01-17 00:02:02 INFO [CredentialRefresher] credentialRefresher has started Jan 17 00:02:02.898963 amazon-ssm-agent[2102]: 2026-01-17 00:02:02 INFO [CredentialRefresher] Starting credentials refresher loop Jan 17 00:02:02.898963 amazon-ssm-agent[2102]: 2026-01-17 00:02:02 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jan 17 00:02:02.992366 amazon-ssm-agent[2102]: 2026-01-17 00:02:02 INFO [CredentialRefresher] Next credential rotation will be in 31.7499801454 minutes Jan 17 00:02:03.012796 systemd[2142]: Queued start job for default target default.target. Jan 17 00:02:03.025078 systemd[2142]: Created slice app.slice - User Application Slice. Jan 17 00:02:03.025285 systemd[2142]: Reached target paths.target - Paths. Jan 17 00:02:03.025318 systemd[2142]: Reached target timers.target - Timers. Jan 17 00:02:03.027993 systemd[2142]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 17 00:02:03.058726 systemd[2142]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 17 00:02:03.058999 systemd[2142]: Reached target sockets.target - Sockets. Jan 17 00:02:03.060268 systemd[2142]: Reached target basic.target - Basic System. Jan 17 00:02:03.060403 systemd[2142]: Reached target default.target - Main User Target. Jan 17 00:02:03.060470 systemd[2142]: Startup finished in 288ms. Jan 17 00:02:03.060495 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 17 00:02:03.072256 systemd[1]: Started session-1.scope - Session 1 of User core. 
Jan 17 00:02:03.463485 systemd[1]: Started sshd@1-172.31.23.180:22-68.220.241.50:49130.service - OpenSSH per-connection server daemon (68.220.241.50:49130). Jan 17 00:02:03.933246 amazon-ssm-agent[2102]: 2026-01-17 00:02:03 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jan 17 00:02:04.002065 sshd[2154]: Accepted publickey for core from 68.220.241.50 port 49130 ssh2: RSA SHA256:sDAGzlGx6Edt6Gaawxj4VUM2Wx61RszNQ552ADQfH20 Jan 17 00:02:04.004394 sshd[2154]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:02:04.018344 systemd-logind[1910]: New session 2 of user core. Jan 17 00:02:04.034558 amazon-ssm-agent[2102]: 2026-01-17 00:02:03 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2157) started Jan 17 00:02:04.036295 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 17 00:02:04.056248 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:02:04.061147 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 17 00:02:04.065347 systemd[1]: Startup finished in 1.179s (kernel) + 8.330s (initrd) + 8.970s (userspace) = 18.480s. Jan 17 00:02:04.076565 (kubelet)[2166]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:02:04.134840 amazon-ssm-agent[2102]: 2026-01-17 00:02:03 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jan 17 00:02:04.382864 sshd[2154]: pam_unix(sshd:session): session closed for user core Jan 17 00:02:04.389031 systemd-logind[1910]: Session 2 logged out. Waiting for processes to exit. Jan 17 00:02:04.390477 systemd[1]: sshd@1-172.31.23.180:22-68.220.241.50:49130.service: Deactivated successfully. Jan 17 00:02:04.394333 systemd[1]: session-2.scope: Deactivated successfully. 
Jan 17 00:02:04.398897 systemd-logind[1910]: Removed session 2. Jan 17 00:02:04.483911 systemd[1]: Started sshd@2-172.31.23.180:22-68.220.241.50:49132.service - OpenSSH per-connection server daemon (68.220.241.50:49132). Jan 17 00:02:04.603217 ntpd[1906]: Listen normally on 7 eth0 [fe80::41f:32ff:fe9d:bd71%2]:123 Jan 17 00:02:04.603980 ntpd[1906]: 17 Jan 00:02:04 ntpd[1906]: Listen normally on 7 eth0 [fe80::41f:32ff:fe9d:bd71%2]:123 Jan 17 00:02:05.026042 sshd[2186]: Accepted publickey for core from 68.220.241.50 port 49132 ssh2: RSA SHA256:sDAGzlGx6Edt6Gaawxj4VUM2Wx61RszNQ552ADQfH20 Jan 17 00:02:05.029158 sshd[2186]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:02:05.040134 systemd-logind[1910]: New session 3 of user core. Jan 17 00:02:05.048282 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 17 00:02:05.240870 kubelet[2166]: E0117 00:02:05.240780 2166 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:02:05.245371 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:02:05.245704 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 00:02:05.247089 systemd[1]: kubelet.service: Consumed 1.394s CPU time. Jan 17 00:02:05.406276 sshd[2186]: pam_unix(sshd:session): session closed for user core Jan 17 00:02:05.412347 systemd[1]: sshd@2-172.31.23.180:22-68.220.241.50:49132.service: Deactivated successfully. Jan 17 00:02:05.415423 systemd[1]: session-3.scope: Deactivated successfully. Jan 17 00:02:05.418450 systemd-logind[1910]: Session 3 logged out. Waiting for processes to exit. Jan 17 00:02:05.420541 systemd-logind[1910]: Removed session 3. 
Jan 17 00:02:05.491455 systemd[1]: Started sshd@3-172.31.23.180:22-68.220.241.50:49146.service - OpenSSH per-connection server daemon (68.220.241.50:49146). Jan 17 00:02:05.985747 sshd[2195]: Accepted publickey for core from 68.220.241.50 port 49146 ssh2: RSA SHA256:sDAGzlGx6Edt6Gaawxj4VUM2Wx61RszNQ552ADQfH20 Jan 17 00:02:05.988478 sshd[2195]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:02:05.996781 systemd-logind[1910]: New session 4 of user core. Jan 17 00:02:06.006176 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 17 00:02:06.341233 sshd[2195]: pam_unix(sshd:session): session closed for user core Jan 17 00:02:06.347494 systemd[1]: sshd@3-172.31.23.180:22-68.220.241.50:49146.service: Deactivated successfully. Jan 17 00:02:06.350505 systemd[1]: session-4.scope: Deactivated successfully. Jan 17 00:02:06.351858 systemd-logind[1910]: Session 4 logged out. Waiting for processes to exit. Jan 17 00:02:06.354871 systemd-logind[1910]: Removed session 4. Jan 17 00:02:06.450421 systemd[1]: Started sshd@4-172.31.23.180:22-68.220.241.50:49162.service - OpenSSH per-connection server daemon (68.220.241.50:49162). Jan 17 00:02:06.986609 sshd[2202]: Accepted publickey for core from 68.220.241.50 port 49162 ssh2: RSA SHA256:sDAGzlGx6Edt6Gaawxj4VUM2Wx61RszNQ552ADQfH20 Jan 17 00:02:06.989278 sshd[2202]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:02:06.999037 systemd-logind[1910]: New session 5 of user core. Jan 17 00:02:07.009188 systemd[1]: Started session-5.scope - Session 5 of User core. 
Jan 17 00:02:07.346187 sudo[2205]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 17 00:02:07.347912 sudo[2205]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 00:02:07.367533 sudo[2205]: pam_unix(sudo:session): session closed for user root Jan 17 00:02:07.453350 sshd[2202]: pam_unix(sshd:session): session closed for user core Jan 17 00:02:07.460270 systemd[1]: sshd@4-172.31.23.180:22-68.220.241.50:49162.service: Deactivated successfully. Jan 17 00:02:07.463653 systemd[1]: session-5.scope: Deactivated successfully. Jan 17 00:02:07.465719 systemd-logind[1910]: Session 5 logged out. Waiting for processes to exit. Jan 17 00:02:07.467574 systemd-logind[1910]: Removed session 5. Jan 17 00:02:07.553450 systemd[1]: Started sshd@5-172.31.23.180:22-68.220.241.50:49164.service - OpenSSH per-connection server daemon (68.220.241.50:49164). Jan 17 00:02:08.037635 systemd-resolved[1825]: Clock change detected. Flushing caches. Jan 17 00:02:08.516936 sshd[2210]: Accepted publickey for core from 68.220.241.50 port 49164 ssh2: RSA SHA256:sDAGzlGx6Edt6Gaawxj4VUM2Wx61RszNQ552ADQfH20 Jan 17 00:02:08.519992 sshd[2210]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:02:08.527710 systemd-logind[1910]: New session 6 of user core. Jan 17 00:02:08.539790 systemd[1]: Started session-6.scope - Session 6 of User core. 
Jan 17 00:02:08.815001 sudo[2214]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 17 00:02:08.816275 sudo[2214]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 00:02:08.822409 sudo[2214]: pam_unix(sudo:session): session closed for user root Jan 17 00:02:08.832453 sudo[2213]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 17 00:02:08.833117 sudo[2213]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 00:02:08.858022 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 17 00:02:08.861138 auditctl[2217]: No rules Jan 17 00:02:08.863282 systemd[1]: audit-rules.service: Deactivated successfully. Jan 17 00:02:08.863823 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 17 00:02:08.874173 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 17 00:02:08.928839 augenrules[2235]: No rules Jan 17 00:02:08.931493 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 17 00:02:08.933804 sudo[2213]: pam_unix(sudo:session): session closed for user root Jan 17 00:02:09.017978 sshd[2210]: pam_unix(sshd:session): session closed for user core Jan 17 00:02:09.022798 systemd[1]: sshd@5-172.31.23.180:22-68.220.241.50:49164.service: Deactivated successfully. Jan 17 00:02:09.026557 systemd[1]: session-6.scope: Deactivated successfully. Jan 17 00:02:09.030971 systemd-logind[1910]: Session 6 logged out. Waiting for processes to exit. Jan 17 00:02:09.033480 systemd-logind[1910]: Removed session 6. Jan 17 00:02:09.121062 systemd[1]: Started sshd@6-172.31.23.180:22-68.220.241.50:49180.service - OpenSSH per-connection server daemon (68.220.241.50:49180). 
Jan 17 00:02:09.652173 sshd[2243]: Accepted publickey for core from 68.220.241.50 port 49180 ssh2: RSA SHA256:sDAGzlGx6Edt6Gaawxj4VUM2Wx61RszNQ552ADQfH20 Jan 17 00:02:09.654739 sshd[2243]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:02:09.662155 systemd-logind[1910]: New session 7 of user core. Jan 17 00:02:09.671769 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 17 00:02:09.949072 sudo[2246]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 17 00:02:09.949765 sudo[2246]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 00:02:11.245013 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:02:11.245343 systemd[1]: kubelet.service: Consumed 1.394s CPU time. Jan 17 00:02:11.263147 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:02:11.315271 systemd[1]: Reloading requested from client PID 2282 ('systemctl') (unit session-7.scope)... Jan 17 00:02:11.315479 systemd[1]: Reloading... Jan 17 00:02:11.554625 zram_generator::config[2325]: No configuration found. Jan 17 00:02:11.792594 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 00:02:11.967258 systemd[1]: Reloading finished in 650 ms. Jan 17 00:02:12.069313 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 17 00:02:12.069591 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 17 00:02:12.070198 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:02:12.077402 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:02:12.404842 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 17 00:02:12.413136 (kubelet)[2386]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 00:02:12.485438 kubelet[2386]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 00:02:12.485949 kubelet[2386]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 17 00:02:12.486048 kubelet[2386]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 00:02:12.486287 kubelet[2386]: I0117 00:02:12.486235 2386 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 00:02:13.390098 kubelet[2386]: I0117 00:02:13.390029 2386 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 17 00:02:13.390098 kubelet[2386]: I0117 00:02:13.390077 2386 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 00:02:13.390490 kubelet[2386]: I0117 00:02:13.390442 2386 server.go:956] "Client rotation is on, will bootstrap in background" Jan 17 00:02:13.433607 kubelet[2386]: I0117 00:02:13.433066 2386 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 00:02:13.446496 kubelet[2386]: E0117 00:02:13.444923 2386 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 17 00:02:13.446496 kubelet[2386]: I0117 00:02:13.444976 2386 server.go:1423] "CRI 
implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 17 00:02:13.449918 kubelet[2386]: I0117 00:02:13.449884 2386 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 17 00:02:13.452630 kubelet[2386]: I0117 00:02:13.452579 2386 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 00:02:13.453092 kubelet[2386]: I0117 00:02:13.452794 2386 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172.31.23.180","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemo
ry":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 17 00:02:13.453439 kubelet[2386]: I0117 00:02:13.453415 2386 topology_manager.go:138] "Creating topology manager with none policy" Jan 17 00:02:13.453561 kubelet[2386]: I0117 00:02:13.453525 2386 container_manager_linux.go:303] "Creating device plugin manager" Jan 17 00:02:13.454029 kubelet[2386]: I0117 00:02:13.454007 2386 state_mem.go:36] "Initialized new in-memory state store" Jan 17 00:02:13.461082 kubelet[2386]: I0117 00:02:13.461043 2386 kubelet.go:480] "Attempting to sync node with API server" Jan 17 00:02:13.461289 kubelet[2386]: I0117 00:02:13.461268 2386 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 17 00:02:13.461416 kubelet[2386]: I0117 00:02:13.461398 2386 kubelet.go:386] "Adding apiserver pod source" Jan 17 00:02:13.463875 kubelet[2386]: I0117 00:02:13.463848 2386 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 00:02:13.465319 kubelet[2386]: E0117 00:02:13.465208 2386 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:02:13.465455 kubelet[2386]: E0117 00:02:13.465356 2386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:02:13.466778 kubelet[2386]: I0117 00:02:13.466691 2386 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 00:02:13.468067 kubelet[2386]: I0117 00:02:13.468018 2386 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 17 00:02:13.468295 kubelet[2386]: W0117 00:02:13.468254 2386 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does 
not exist. Recreating. Jan 17 00:02:13.473085 kubelet[2386]: I0117 00:02:13.473034 2386 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 17 00:02:13.473204 kubelet[2386]: I0117 00:02:13.473121 2386 server.go:1289] "Started kubelet" Jan 17 00:02:13.474652 kubelet[2386]: I0117 00:02:13.473299 2386 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 00:02:13.475973 kubelet[2386]: I0117 00:02:13.475943 2386 server.go:317] "Adding debug handlers to kubelet server" Jan 17 00:02:13.478935 kubelet[2386]: I0117 00:02:13.478777 2386 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 17 00:02:13.479643 kubelet[2386]: I0117 00:02:13.479411 2386 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 00:02:13.484946 kubelet[2386]: I0117 00:02:13.484840 2386 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 00:02:13.489681 kubelet[2386]: E0117 00:02:13.489638 2386 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 17 00:02:13.493620 kubelet[2386]: I0117 00:02:13.492057 2386 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 17 00:02:13.506031 kubelet[2386]: E0117 00:02:13.505994 2386 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172.31.23.180\" not found" Jan 17 00:02:13.506212 kubelet[2386]: I0117 00:02:13.506194 2386 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 17 00:02:13.510770 kubelet[2386]: I0117 00:02:13.510735 2386 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 17 00:02:13.511036 kubelet[2386]: I0117 00:02:13.511016 2386 reconciler.go:26] "Reconciler: start to sync state" Jan 17 00:02:13.516753 kubelet[2386]: I0117 00:02:13.516716 2386 factory.go:223] Registration of the containerd container factory successfully Jan 17 00:02:13.517620 kubelet[2386]: I0117 00:02:13.517597 2386 factory.go:223] Registration of the systemd container factory successfully Jan 17 00:02:13.517870 kubelet[2386]: I0117 00:02:13.517839 2386 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 00:02:13.524863 kubelet[2386]: E0117 00:02:13.519777 2386 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.31.23.180.188b5bba16875b5f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.31.23.180,UID:172.31.23.180,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:172.31.23.180,},FirstTimestamp:2026-01-17 00:02:13.473065823 +0000 UTC m=+1.051541419,LastTimestamp:2026-01-17 00:02:13.473065823 +0000 UTC m=+1.051541419,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.23.180,}" Jan 17 00:02:13.525555 kubelet[2386]: E0117 00:02:13.525484 2386 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 17 00:02:13.525962 kubelet[2386]: E0117 00:02:13.525925 2386 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"172.31.23.180\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 17 00:02:13.557557 kubelet[2386]: I0117 00:02:13.557464 2386 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 17 00:02:13.557557 kubelet[2386]: I0117 00:02:13.557497 2386 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 17 00:02:13.557775 kubelet[2386]: I0117 00:02:13.557638 2386 state_mem.go:36] "Initialized new in-memory state store" Jan 17 00:02:13.560420 kubelet[2386]: I0117 00:02:13.560361 2386 policy_none.go:49] "None policy: Start" Jan 17 00:02:13.560420 kubelet[2386]: I0117 00:02:13.560413 2386 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 17 00:02:13.560623 kubelet[2386]: I0117 00:02:13.560441 2386 state_mem.go:35] "Initializing new in-memory state store" Jan 17 00:02:13.561068 kubelet[2386]: E0117 00:02:13.561010 2386 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172.31.23.180\" not found" 
node="172.31.23.180" Jan 17 00:02:13.574741 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 17 00:02:13.590971 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 17 00:02:13.598204 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 17 00:02:13.606836 kubelet[2386]: E0117 00:02:13.606781 2386 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172.31.23.180\" not found" Jan 17 00:02:13.610040 kubelet[2386]: E0117 00:02:13.608781 2386 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 17 00:02:13.610040 kubelet[2386]: I0117 00:02:13.609093 2386 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 17 00:02:13.610040 kubelet[2386]: I0117 00:02:13.609111 2386 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 17 00:02:13.611639 kubelet[2386]: I0117 00:02:13.611602 2386 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 00:02:13.614708 kubelet[2386]: E0117 00:02:13.614671 2386 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 17 00:02:13.615122 kubelet[2386]: E0117 00:02:13.615037 2386 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.31.23.180\" not found" Jan 17 00:02:13.683450 kubelet[2386]: I0117 00:02:13.683043 2386 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 17 00:02:13.686340 kubelet[2386]: I0117 00:02:13.686302 2386 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Jan 17 00:02:13.687376 kubelet[2386]: I0117 00:02:13.686523 2386 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 17 00:02:13.687376 kubelet[2386]: I0117 00:02:13.686913 2386 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 17 00:02:13.687376 kubelet[2386]: I0117 00:02:13.686931 2386 kubelet.go:2436] "Starting kubelet main sync loop" Jan 17 00:02:13.687376 kubelet[2386]: E0117 00:02:13.686997 2386 kubelet.go:2460] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Jan 17 00:02:13.711677 kubelet[2386]: I0117 00:02:13.711645 2386 kubelet_node_status.go:75] "Attempting to register node" node="172.31.23.180" Jan 17 00:02:13.721955 kubelet[2386]: I0117 00:02:13.721922 2386 kubelet_node_status.go:78] "Successfully registered node" node="172.31.23.180" Jan 17 00:02:13.722220 kubelet[2386]: E0117 00:02:13.722128 2386 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"172.31.23.180\": node \"172.31.23.180\" not found" Jan 17 00:02:13.765072 kubelet[2386]: E0117 00:02:13.765026 2386 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172.31.23.180\" not found" Jan 17 00:02:13.865419 kubelet[2386]: E0117 00:02:13.865360 2386 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172.31.23.180\" not found" Jan 17 00:02:13.966100 kubelet[2386]: E0117 00:02:13.965947 2386 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172.31.23.180\" not found" Jan 17 00:02:13.971756 sudo[2246]: pam_unix(sudo:session): session closed for user root Jan 17 00:02:14.055635 sshd[2243]: pam_unix(sshd:session): session closed for user core Jan 17 00:02:14.060852 systemd[1]: sshd@6-172.31.23.180:22-68.220.241.50:49180.service: Deactivated successfully. 
Jan 17 00:02:14.064162 systemd[1]: session-7.scope: Deactivated successfully. Jan 17 00:02:14.068828 kubelet[2386]: E0117 00:02:14.068726 2386 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172.31.23.180\" not found" Jan 17 00:02:14.068749 systemd-logind[1910]: Session 7 logged out. Waiting for processes to exit. Jan 17 00:02:14.071328 systemd-logind[1910]: Removed session 7. Jan 17 00:02:14.169114 kubelet[2386]: E0117 00:02:14.169059 2386 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172.31.23.180\" not found" Jan 17 00:02:14.270008 kubelet[2386]: E0117 00:02:14.269861 2386 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172.31.23.180\" not found" Jan 17 00:02:14.370726 kubelet[2386]: E0117 00:02:14.370678 2386 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172.31.23.180\" not found" Jan 17 00:02:14.393953 kubelet[2386]: I0117 00:02:14.393901 2386 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 17 00:02:14.394204 kubelet[2386]: I0117 00:02:14.394113 2386 reflector.go:556] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" err="very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received" Jan 17 00:02:14.394204 kubelet[2386]: I0117 00:02:14.394113 2386 reflector.go:556] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" err="very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received" Jan 17 00:02:14.466010 kubelet[2386]: E0117 00:02:14.465939 2386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 
00:02:14.471462 kubelet[2386]: E0117 00:02:14.471423 2386 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172.31.23.180\" not found" Jan 17 00:02:14.572187 kubelet[2386]: E0117 00:02:14.572051 2386 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172.31.23.180\" not found" Jan 17 00:02:14.673942 kubelet[2386]: I0117 00:02:14.673902 2386 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Jan 17 00:02:14.674677 containerd[1933]: time="2026-01-17T00:02:14.674553564Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 17 00:02:14.676001 kubelet[2386]: I0117 00:02:14.675680 2386 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Jan 17 00:02:15.466424 kubelet[2386]: E0117 00:02:15.466364 2386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:02:15.466424 kubelet[2386]: I0117 00:02:15.466378 2386 apiserver.go:52] "Watching apiserver" Jan 17 00:02:15.483284 kubelet[2386]: E0117 00:02:15.482756 2386 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-n79dp" podUID="9c8a3b18-b170-4670-a5f2-08284d1de243" Jan 17 00:02:15.502394 systemd[1]: Created slice kubepods-besteffort-poda69425d5_cabf_49e2_adc7_0fe767e12855.slice - libcontainer container kubepods-besteffort-poda69425d5_cabf_49e2_adc7_0fe767e12855.slice. 
Jan 17 00:02:15.513327 kubelet[2386]: I0117 00:02:15.512394 2386 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 17 00:02:15.523692 kubelet[2386]: I0117 00:02:15.523651 2386 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/a69425d5-cabf-49e2-adc7-0fe767e12855-var-run-calico\") pod \"calico-node-ms9zg\" (UID: \"a69425d5-cabf-49e2-adc7-0fe767e12855\") " pod="calico-system/calico-node-ms9zg" Jan 17 00:02:15.524137 kubelet[2386]: I0117 00:02:15.523938 2386 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/a69425d5-cabf-49e2-adc7-0fe767e12855-cni-log-dir\") pod \"calico-node-ms9zg\" (UID: \"a69425d5-cabf-49e2-adc7-0fe767e12855\") " pod="calico-system/calico-node-ms9zg" Jan 17 00:02:15.524137 kubelet[2386]: I0117 00:02:15.524019 2386 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/a69425d5-cabf-49e2-adc7-0fe767e12855-flexvol-driver-host\") pod \"calico-node-ms9zg\" (UID: \"a69425d5-cabf-49e2-adc7-0fe767e12855\") " pod="calico-system/calico-node-ms9zg" Jan 17 00:02:15.524467 kubelet[2386]: I0117 00:02:15.524114 2386 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a69425d5-cabf-49e2-adc7-0fe767e12855-lib-modules\") pod \"calico-node-ms9zg\" (UID: \"a69425d5-cabf-49e2-adc7-0fe767e12855\") " pod="calico-system/calico-node-ms9zg" Jan 17 00:02:15.524467 kubelet[2386]: I0117 00:02:15.524356 2386 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/a69425d5-cabf-49e2-adc7-0fe767e12855-node-certs\") pod \"calico-node-ms9zg\" (UID: 
\"a69425d5-cabf-49e2-adc7-0fe767e12855\") " pod="calico-system/calico-node-ms9zg" Jan 17 00:02:15.524467 kubelet[2386]: I0117 00:02:15.524423 2386 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a69425d5-cabf-49e2-adc7-0fe767e12855-var-lib-calico\") pod \"calico-node-ms9zg\" (UID: \"a69425d5-cabf-49e2-adc7-0fe767e12855\") " pod="calico-system/calico-node-ms9zg" Jan 17 00:02:15.524886 kubelet[2386]: I0117 00:02:15.524711 2386 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/9c8a3b18-b170-4670-a5f2-08284d1de243-varrun\") pod \"csi-node-driver-n79dp\" (UID: \"9c8a3b18-b170-4670-a5f2-08284d1de243\") " pod="calico-system/csi-node-driver-n79dp" Jan 17 00:02:15.524886 kubelet[2386]: I0117 00:02:15.524785 2386 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pztz5\" (UniqueName: \"kubernetes.io/projected/932842cb-786c-41e2-a7bb-1e51928cd86d-kube-api-access-pztz5\") pod \"kube-proxy-ftpp9\" (UID: \"932842cb-786c-41e2-a7bb-1e51928cd86d\") " pod="kube-system/kube-proxy-ftpp9" Jan 17 00:02:15.524886 kubelet[2386]: I0117 00:02:15.524827 2386 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/9c8a3b18-b170-4670-a5f2-08284d1de243-socket-dir\") pod \"csi-node-driver-n79dp\" (UID: \"9c8a3b18-b170-4670-a5f2-08284d1de243\") " pod="calico-system/csi-node-driver-n79dp" Jan 17 00:02:15.525146 kubelet[2386]: I0117 00:02:15.524937 2386 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rm7rz\" (UniqueName: \"kubernetes.io/projected/9c8a3b18-b170-4670-a5f2-08284d1de243-kube-api-access-rm7rz\") pod \"csi-node-driver-n79dp\" (UID: \"9c8a3b18-b170-4670-a5f2-08284d1de243\") " 
pod="calico-system/csi-node-driver-n79dp" Jan 17 00:02:15.525146 kubelet[2386]: I0117 00:02:15.524982 2386 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/932842cb-786c-41e2-a7bb-1e51928cd86d-kube-proxy\") pod \"kube-proxy-ftpp9\" (UID: \"932842cb-786c-41e2-a7bb-1e51928cd86d\") " pod="kube-system/kube-proxy-ftpp9" Jan 17 00:02:15.525146 kubelet[2386]: I0117 00:02:15.525022 2386 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/932842cb-786c-41e2-a7bb-1e51928cd86d-xtables-lock\") pod \"kube-proxy-ftpp9\" (UID: \"932842cb-786c-41e2-a7bb-1e51928cd86d\") " pod="kube-system/kube-proxy-ftpp9" Jan 17 00:02:15.525146 kubelet[2386]: I0117 00:02:15.525057 2386 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/a69425d5-cabf-49e2-adc7-0fe767e12855-cni-bin-dir\") pod \"calico-node-ms9zg\" (UID: \"a69425d5-cabf-49e2-adc7-0fe767e12855\") " pod="calico-system/calico-node-ms9zg" Jan 17 00:02:15.525146 kubelet[2386]: I0117 00:02:15.525096 2386 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/a69425d5-cabf-49e2-adc7-0fe767e12855-policysync\") pod \"calico-node-ms9zg\" (UID: \"a69425d5-cabf-49e2-adc7-0fe767e12855\") " pod="calico-system/calico-node-ms9zg" Jan 17 00:02:15.525391 kubelet[2386]: I0117 00:02:15.525130 2386 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a69425d5-cabf-49e2-adc7-0fe767e12855-tigera-ca-bundle\") pod \"calico-node-ms9zg\" (UID: \"a69425d5-cabf-49e2-adc7-0fe767e12855\") " pod="calico-system/calico-node-ms9zg" Jan 17 00:02:15.525391 kubelet[2386]: I0117 
00:02:15.525163 2386 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/932842cb-786c-41e2-a7bb-1e51928cd86d-lib-modules\") pod \"kube-proxy-ftpp9\" (UID: \"932842cb-786c-41e2-a7bb-1e51928cd86d\") " pod="kube-system/kube-proxy-ftpp9" Jan 17 00:02:15.525391 kubelet[2386]: I0117 00:02:15.525195 2386 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/a69425d5-cabf-49e2-adc7-0fe767e12855-cni-net-dir\") pod \"calico-node-ms9zg\" (UID: \"a69425d5-cabf-49e2-adc7-0fe767e12855\") " pod="calico-system/calico-node-ms9zg" Jan 17 00:02:15.525391 kubelet[2386]: I0117 00:02:15.525237 2386 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a69425d5-cabf-49e2-adc7-0fe767e12855-xtables-lock\") pod \"calico-node-ms9zg\" (UID: \"a69425d5-cabf-49e2-adc7-0fe767e12855\") " pod="calico-system/calico-node-ms9zg" Jan 17 00:02:15.525391 kubelet[2386]: I0117 00:02:15.525272 2386 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkm7d\" (UniqueName: \"kubernetes.io/projected/a69425d5-cabf-49e2-adc7-0fe767e12855-kube-api-access-nkm7d\") pod \"calico-node-ms9zg\" (UID: \"a69425d5-cabf-49e2-adc7-0fe767e12855\") " pod="calico-system/calico-node-ms9zg" Jan 17 00:02:15.527063 kubelet[2386]: I0117 00:02:15.525306 2386 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9c8a3b18-b170-4670-a5f2-08284d1de243-kubelet-dir\") pod \"csi-node-driver-n79dp\" (UID: \"9c8a3b18-b170-4670-a5f2-08284d1de243\") " pod="calico-system/csi-node-driver-n79dp" Jan 17 00:02:15.527063 kubelet[2386]: I0117 00:02:15.525340 2386 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/9c8a3b18-b170-4670-a5f2-08284d1de243-registration-dir\") pod \"csi-node-driver-n79dp\" (UID: \"9c8a3b18-b170-4670-a5f2-08284d1de243\") " pod="calico-system/csi-node-driver-n79dp" Jan 17 00:02:15.534264 systemd[1]: Created slice kubepods-besteffort-pod932842cb_786c_41e2_a7bb_1e51928cd86d.slice - libcontainer container kubepods-besteffort-pod932842cb_786c_41e2_a7bb_1e51928cd86d.slice. Jan 17 00:02:15.629262 kubelet[2386]: E0117 00:02:15.629218 2386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:02:15.629987 kubelet[2386]: W0117 00:02:15.629912 2386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:02:15.630201 kubelet[2386]: E0117 00:02:15.629953 2386 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:02:15.630732 kubelet[2386]: E0117 00:02:15.630692 2386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:02:15.630932 kubelet[2386]: W0117 00:02:15.630856 2386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:02:15.630932 kubelet[2386]: E0117 00:02:15.630889 2386 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:02:15.631469 kubelet[2386]: E0117 00:02:15.631434 2386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:02:15.631624 kubelet[2386]: W0117 00:02:15.631479 2386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:02:15.631624 kubelet[2386]: E0117 00:02:15.631507 2386 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:02:15.632393 kubelet[2386]: E0117 00:02:15.632206 2386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:02:15.632393 kubelet[2386]: W0117 00:02:15.632236 2386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:02:15.632393 kubelet[2386]: E0117 00:02:15.632302 2386 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:02:15.633436 kubelet[2386]: E0117 00:02:15.633263 2386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:02:15.633436 kubelet[2386]: W0117 00:02:15.633296 2386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:02:15.633436 kubelet[2386]: E0117 00:02:15.633349 2386 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:02:15.634300 kubelet[2386]: E0117 00:02:15.634129 2386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:02:15.634300 kubelet[2386]: W0117 00:02:15.634158 2386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:02:15.634300 kubelet[2386]: E0117 00:02:15.634184 2386 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:02:15.635303 kubelet[2386]: E0117 00:02:15.634979 2386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:02:15.635517 kubelet[2386]: W0117 00:02:15.635388 2386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:02:15.635517 kubelet[2386]: E0117 00:02:15.635425 2386 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:02:15.636219 kubelet[2386]: E0117 00:02:15.636098 2386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:02:15.636219 kubelet[2386]: W0117 00:02:15.636151 2386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:02:15.636219 kubelet[2386]: E0117 00:02:15.636180 2386 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:02:15.637240 kubelet[2386]: E0117 00:02:15.636987 2386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:02:15.637240 kubelet[2386]: W0117 00:02:15.637014 2386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:02:15.637240 kubelet[2386]: E0117 00:02:15.637039 2386 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:02:15.638000 kubelet[2386]: E0117 00:02:15.637734 2386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:02:15.638000 kubelet[2386]: W0117 00:02:15.637782 2386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:02:15.638000 kubelet[2386]: E0117 00:02:15.637809 2386 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:02:15.638745 kubelet[2386]: E0117 00:02:15.638717 2386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:02:15.639056 kubelet[2386]: W0117 00:02:15.638851 2386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:02:15.639056 kubelet[2386]: E0117 00:02:15.638885 2386 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:02:15.639422 kubelet[2386]: E0117 00:02:15.639397 2386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:02:15.639587 kubelet[2386]: W0117 00:02:15.639562 2386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:02:15.639696 kubelet[2386]: E0117 00:02:15.639673 2386 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:02:15.641884 kubelet[2386]: E0117 00:02:15.641676 2386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:02:15.641884 kubelet[2386]: W0117 00:02:15.641704 2386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:02:15.641884 kubelet[2386]: E0117 00:02:15.641733 2386 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:02:15.642520 kubelet[2386]: E0117 00:02:15.642285 2386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:02:15.642520 kubelet[2386]: W0117 00:02:15.642306 2386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:02:15.642520 kubelet[2386]: E0117 00:02:15.642325 2386 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:02:15.645586 kubelet[2386]: E0117 00:02:15.645289 2386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:02:15.645586 kubelet[2386]: W0117 00:02:15.645320 2386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:02:15.645586 kubelet[2386]: E0117 00:02:15.645352 2386 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:02:15.649135 kubelet[2386]: E0117 00:02:15.648808 2386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:02:15.649135 kubelet[2386]: W0117 00:02:15.648843 2386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:02:15.649135 kubelet[2386]: E0117 00:02:15.648878 2386 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:02:15.650773 kubelet[2386]: E0117 00:02:15.650031 2386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:02:15.650773 kubelet[2386]: W0117 00:02:15.650060 2386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:02:15.650773 kubelet[2386]: E0117 00:02:15.650089 2386 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:02:15.652742 kubelet[2386]: E0117 00:02:15.652706 2386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:02:15.652929 kubelet[2386]: W0117 00:02:15.652901 2386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:02:15.653045 kubelet[2386]: E0117 00:02:15.653021 2386 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:02:15.657416 kubelet[2386]: E0117 00:02:15.657370 2386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:02:15.657416 kubelet[2386]: W0117 00:02:15.657404 2386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:02:15.657631 kubelet[2386]: E0117 00:02:15.657438 2386 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:02:15.662592 kubelet[2386]: E0117 00:02:15.661721 2386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:02:15.662592 kubelet[2386]: W0117 00:02:15.661760 2386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:02:15.662592 kubelet[2386]: E0117 00:02:15.661790 2386 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:02:15.669505 kubelet[2386]: E0117 00:02:15.669466 2386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:02:15.670692 kubelet[2386]: W0117 00:02:15.670639 2386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:02:15.670847 kubelet[2386]: E0117 00:02:15.670821 2386 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:02:15.678216 kubelet[2386]: E0117 00:02:15.678183 2386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:02:15.679487 kubelet[2386]: W0117 00:02:15.679434 2386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:02:15.679604 kubelet[2386]: E0117 00:02:15.679500 2386 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:02:15.830199 containerd[1933]: time="2026-01-17T00:02:15.829996130Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-ms9zg,Uid:a69425d5-cabf-49e2-adc7-0fe767e12855,Namespace:calico-system,Attempt:0,}" Jan 17 00:02:15.840697 containerd[1933]: time="2026-01-17T00:02:15.840611618Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ftpp9,Uid:932842cb-786c-41e2-a7bb-1e51928cd86d,Namespace:kube-system,Attempt:0,}" Jan 17 00:02:16.434382 containerd[1933]: time="2026-01-17T00:02:16.434010925Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:02:16.439265 containerd[1933]: time="2026-01-17T00:02:16.439194793Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Jan 17 00:02:16.440960 containerd[1933]: time="2026-01-17T00:02:16.440893525Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:02:16.444566 containerd[1933]: time="2026-01-17T00:02:16.443512429Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:02:16.445174 containerd[1933]: time="2026-01-17T00:02:16.445115017Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 00:02:16.451572 containerd[1933]: time="2026-01-17T00:02:16.451480129Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} 
labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:02:16.453679 containerd[1933]: time="2026-01-17T00:02:16.453632401Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 612.910671ms" Jan 17 00:02:16.455783 containerd[1933]: time="2026-01-17T00:02:16.455710861Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 625.591479ms" Jan 17 00:02:16.466955 kubelet[2386]: E0117 00:02:16.466899 2386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:02:16.641675 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount504873208.mount: Deactivated successfully. Jan 17 00:02:16.739710 containerd[1933]: time="2026-01-17T00:02:16.739062831Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:02:16.739710 containerd[1933]: time="2026-01-17T00:02:16.739202343Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:02:16.739710 containerd[1933]: time="2026-01-17T00:02:16.739270623Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:02:16.739710 containerd[1933]: time="2026-01-17T00:02:16.739475523Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:02:16.741279 containerd[1933]: time="2026-01-17T00:02:16.739955559Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:02:16.741279 containerd[1933]: time="2026-01-17T00:02:16.740044167Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:02:16.741279 containerd[1933]: time="2026-01-17T00:02:16.740120631Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:02:16.741279 containerd[1933]: time="2026-01-17T00:02:16.740667339Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:02:16.915876 systemd[1]: Started cri-containerd-4588e90335dae89aabee686be77ba759bcf754395c86b431f10c95b813d95200.scope - libcontainer container 4588e90335dae89aabee686be77ba759bcf754395c86b431f10c95b813d95200. Jan 17 00:02:16.920228 systemd[1]: Started cri-containerd-8543cde034c5692fa75f940e4f579a159766f89d5307e70e135259c8c26037aa.scope - libcontainer container 8543cde034c5692fa75f940e4f579a159766f89d5307e70e135259c8c26037aa. 
Jan 17 00:02:16.990231 containerd[1933]: time="2026-01-17T00:02:16.989716000Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-ms9zg,Uid:a69425d5-cabf-49e2-adc7-0fe767e12855,Namespace:calico-system,Attempt:0,} returns sandbox id \"4588e90335dae89aabee686be77ba759bcf754395c86b431f10c95b813d95200\"" Jan 17 00:02:16.998234 containerd[1933]: time="2026-01-17T00:02:16.998183140Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 17 00:02:17.008078 containerd[1933]: time="2026-01-17T00:02:17.007987176Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ftpp9,Uid:932842cb-786c-41e2-a7bb-1e51928cd86d,Namespace:kube-system,Attempt:0,} returns sandbox id \"8543cde034c5692fa75f940e4f579a159766f89d5307e70e135259c8c26037aa\"" Jan 17 00:02:17.467337 kubelet[2386]: E0117 00:02:17.467011 2386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:02:17.689910 kubelet[2386]: E0117 00:02:17.689840 2386 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-n79dp" podUID="9c8a3b18-b170-4670-a5f2-08284d1de243" Jan 17 00:02:18.192043 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2103747396.mount: Deactivated successfully. 
Jan 17 00:02:18.305918 containerd[1933]: time="2026-01-17T00:02:18.305849715Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:02:18.308941 containerd[1933]: time="2026-01-17T00:02:18.308896347Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=5636570" Jan 17 00:02:18.311411 containerd[1933]: time="2026-01-17T00:02:18.311369991Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:02:18.317563 containerd[1933]: time="2026-01-17T00:02:18.316142715Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:02:18.317770 containerd[1933]: time="2026-01-17T00:02:18.317725107Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.319120107s" Jan 17 00:02:18.317885 containerd[1933]: time="2026-01-17T00:02:18.317855475Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\"" Jan 17 00:02:18.320917 containerd[1933]: time="2026-01-17T00:02:18.320871183Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\"" Jan 17 00:02:18.327789 containerd[1933]: time="2026-01-17T00:02:18.327729855Z" level=info msg="CreateContainer within sandbox 
\"4588e90335dae89aabee686be77ba759bcf754395c86b431f10c95b813d95200\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 17 00:02:18.368224 containerd[1933]: time="2026-01-17T00:02:18.368145771Z" level=info msg="CreateContainer within sandbox \"4588e90335dae89aabee686be77ba759bcf754395c86b431f10c95b813d95200\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"e4b29bb4079f6cf3ce078bd37b5e49a51a49b5b310441e78c4093689770fcece\"" Jan 17 00:02:18.370244 containerd[1933]: time="2026-01-17T00:02:18.369705507Z" level=info msg="StartContainer for \"e4b29bb4079f6cf3ce078bd37b5e49a51a49b5b310441e78c4093689770fcece\"" Jan 17 00:02:18.440840 systemd[1]: Started cri-containerd-e4b29bb4079f6cf3ce078bd37b5e49a51a49b5b310441e78c4093689770fcece.scope - libcontainer container e4b29bb4079f6cf3ce078bd37b5e49a51a49b5b310441e78c4093689770fcece. Jan 17 00:02:18.468100 kubelet[2386]: E0117 00:02:18.467950 2386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:02:18.496863 containerd[1933]: time="2026-01-17T00:02:18.496670211Z" level=info msg="StartContainer for \"e4b29bb4079f6cf3ce078bd37b5e49a51a49b5b310441e78c4093689770fcece\" returns successfully" Jan 17 00:02:18.529153 systemd[1]: cri-containerd-e4b29bb4079f6cf3ce078bd37b5e49a51a49b5b310441e78c4093689770fcece.scope: Deactivated successfully. 
Jan 17 00:02:18.607879 containerd[1933]: time="2026-01-17T00:02:18.607780216Z" level=info msg="shim disconnected" id=e4b29bb4079f6cf3ce078bd37b5e49a51a49b5b310441e78c4093689770fcece namespace=k8s.io Jan 17 00:02:18.607879 containerd[1933]: time="2026-01-17T00:02:18.607858180Z" level=warning msg="cleaning up after shim disconnected" id=e4b29bb4079f6cf3ce078bd37b5e49a51a49b5b310441e78c4093689770fcece namespace=k8s.io Jan 17 00:02:18.607879 containerd[1933]: time="2026-01-17T00:02:18.607880380Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:02:19.154086 systemd[1]: run-containerd-runc-k8s.io-e4b29bb4079f6cf3ce078bd37b5e49a51a49b5b310441e78c4093689770fcece-runc.59qJLx.mount: Deactivated successfully. Jan 17 00:02:19.154807 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e4b29bb4079f6cf3ce078bd37b5e49a51a49b5b310441e78c4093689770fcece-rootfs.mount: Deactivated successfully. Jan 17 00:02:19.470262 kubelet[2386]: E0117 00:02:19.470105 2386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:02:19.672376 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2546466936.mount: Deactivated successfully. 
Jan 17 00:02:19.690039 kubelet[2386]: E0117 00:02:19.689986 2386 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-n79dp" podUID="9c8a3b18-b170-4670-a5f2-08284d1de243" Jan 17 00:02:20.331830 containerd[1933]: time="2026-01-17T00:02:20.331752089Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:02:20.333621 containerd[1933]: time="2026-01-17T00:02:20.333553901Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=28258673" Jan 17 00:02:20.336068 containerd[1933]: time="2026-01-17T00:02:20.335996813Z" level=info msg="ImageCreate event name:\"sha256:78ccb937011a53894db229033fd54e237d478ec85315f8b08e5dcaa0f737111b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:02:20.342400 containerd[1933]: time="2026-01-17T00:02:20.340521989Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:02:20.342835 containerd[1933]: time="2026-01-17T00:02:20.342764129Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:78ccb937011a53894db229033fd54e237d478ec85315f8b08e5dcaa0f737111b\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"28257692\" in 2.021683234s" Jan 17 00:02:20.342951 containerd[1933]: time="2026-01-17T00:02:20.342826649Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference 
\"sha256:78ccb937011a53894db229033fd54e237d478ec85315f8b08e5dcaa0f737111b\"" Jan 17 00:02:20.344260 containerd[1933]: time="2026-01-17T00:02:20.344192141Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 17 00:02:20.350999 containerd[1933]: time="2026-01-17T00:02:20.350948105Z" level=info msg="CreateContainer within sandbox \"8543cde034c5692fa75f940e4f579a159766f89d5307e70e135259c8c26037aa\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 17 00:02:20.400252 containerd[1933]: time="2026-01-17T00:02:20.400168553Z" level=info msg="CreateContainer within sandbox \"8543cde034c5692fa75f940e4f579a159766f89d5307e70e135259c8c26037aa\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"cb6655ce52ce7579de74f104f8bc162b9a3f3d0607d4728ccd0c1f51114649bf\"" Jan 17 00:02:20.401626 containerd[1933]: time="2026-01-17T00:02:20.401556989Z" level=info msg="StartContainer for \"cb6655ce52ce7579de74f104f8bc162b9a3f3d0607d4728ccd0c1f51114649bf\"" Jan 17 00:02:20.451856 systemd[1]: Started cri-containerd-cb6655ce52ce7579de74f104f8bc162b9a3f3d0607d4728ccd0c1f51114649bf.scope - libcontainer container cb6655ce52ce7579de74f104f8bc162b9a3f3d0607d4728ccd0c1f51114649bf. 
Jan 17 00:02:20.471164 kubelet[2386]: E0117 00:02:20.471060 2386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:02:20.509179 containerd[1933]: time="2026-01-17T00:02:20.509007521Z" level=info msg="StartContainer for \"cb6655ce52ce7579de74f104f8bc162b9a3f3d0607d4728ccd0c1f51114649bf\" returns successfully" Jan 17 00:02:21.471231 kubelet[2386]: E0117 00:02:21.471166 2386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:02:21.687708 kubelet[2386]: E0117 00:02:21.687456 2386 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-n79dp" podUID="9c8a3b18-b170-4670-a5f2-08284d1de243" Jan 17 00:02:22.472195 kubelet[2386]: E0117 00:02:22.472150 2386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:02:23.207710 containerd[1933]: time="2026-01-17T00:02:23.207633199Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:02:23.209904 containerd[1933]: time="2026-01-17T00:02:23.209840695Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65925816" Jan 17 00:02:23.214580 containerd[1933]: time="2026-01-17T00:02:23.213281071Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:02:23.219988 containerd[1933]: time="2026-01-17T00:02:23.219930007Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:02:23.221608 containerd[1933]: time="2026-01-17T00:02:23.221524483Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 2.877268094s" Jan 17 00:02:23.221743 containerd[1933]: time="2026-01-17T00:02:23.221608711Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\"" Jan 17 00:02:23.227522 containerd[1933]: time="2026-01-17T00:02:23.227466739Z" level=info msg="CreateContainer within sandbox \"4588e90335dae89aabee686be77ba759bcf754395c86b431f10c95b813d95200\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 17 00:02:23.245643 containerd[1933]: time="2026-01-17T00:02:23.245582107Z" level=info msg="CreateContainer within sandbox \"4588e90335dae89aabee686be77ba759bcf754395c86b431f10c95b813d95200\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"7810ab657f808ba5920769db61090a0c78fdc3cc2cfd9cffced38320e4f98004\"" Jan 17 00:02:23.248603 containerd[1933]: time="2026-01-17T00:02:23.248553139Z" level=info msg="StartContainer for \"7810ab657f808ba5920769db61090a0c78fdc3cc2cfd9cffced38320e4f98004\"" Jan 17 00:02:23.319858 systemd[1]: Started cri-containerd-7810ab657f808ba5920769db61090a0c78fdc3cc2cfd9cffced38320e4f98004.scope - libcontainer container 7810ab657f808ba5920769db61090a0c78fdc3cc2cfd9cffced38320e4f98004. 
Jan 17 00:02:23.371247 containerd[1933]: time="2026-01-17T00:02:23.371086088Z" level=info msg="StartContainer for \"7810ab657f808ba5920769db61090a0c78fdc3cc2cfd9cffced38320e4f98004\" returns successfully" Jan 17 00:02:23.473481 kubelet[2386]: E0117 00:02:23.473328 2386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:02:23.688485 kubelet[2386]: E0117 00:02:23.688232 2386 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-n79dp" podUID="9c8a3b18-b170-4670-a5f2-08284d1de243" Jan 17 00:02:23.784466 kubelet[2386]: I0117 00:02:23.784266 2386 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-ftpp9" podStartSLOduration=7.453231509 podStartE2EDuration="10.78423217s" podCreationTimestamp="2026-01-17 00:02:13 +0000 UTC" firstStartedPulling="2026-01-17 00:02:17.0130203 +0000 UTC m=+4.591495860" lastFinishedPulling="2026-01-17 00:02:20.344020961 +0000 UTC m=+7.922496521" observedRunningTime="2026-01-17 00:02:20.755936731 +0000 UTC m=+8.334412327" watchObservedRunningTime="2026-01-17 00:02:23.78423217 +0000 UTC m=+11.362707730" Jan 17 00:02:24.473635 kubelet[2386]: E0117 00:02:24.473509 2386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:02:25.220151 containerd[1933]: time="2026-01-17T00:02:25.220078869Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 17 00:02:25.224147 systemd[1]: 
cri-containerd-7810ab657f808ba5920769db61090a0c78fdc3cc2cfd9cffced38320e4f98004.scope: Deactivated successfully. Jan 17 00:02:25.235699 kubelet[2386]: I0117 00:02:25.234581 2386 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 17 00:02:25.267653 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7810ab657f808ba5920769db61090a0c78fdc3cc2cfd9cffced38320e4f98004-rootfs.mount: Deactivated successfully. Jan 17 00:02:25.474067 kubelet[2386]: E0117 00:02:25.473882 2386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:02:25.698921 systemd[1]: Created slice kubepods-besteffort-pod9c8a3b18_b170_4670_a5f2_08284d1de243.slice - libcontainer container kubepods-besteffort-pod9c8a3b18_b170_4670_a5f2_08284d1de243.slice. Jan 17 00:02:25.703830 containerd[1933]: time="2026-01-17T00:02:25.703730771Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-n79dp,Uid:9c8a3b18-b170-4670-a5f2-08284d1de243,Namespace:calico-system,Attempt:0,}" Jan 17 00:02:26.256741 containerd[1933]: time="2026-01-17T00:02:26.256202206Z" level=info msg="shim disconnected" id=7810ab657f808ba5920769db61090a0c78fdc3cc2cfd9cffced38320e4f98004 namespace=k8s.io Jan 17 00:02:26.256741 containerd[1933]: time="2026-01-17T00:02:26.256305766Z" level=warning msg="cleaning up after shim disconnected" id=7810ab657f808ba5920769db61090a0c78fdc3cc2cfd9cffced38320e4f98004 namespace=k8s.io Jan 17 00:02:26.256741 containerd[1933]: time="2026-01-17T00:02:26.256328014Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:02:26.383112 containerd[1933]: time="2026-01-17T00:02:26.383019875Z" level=error msg="Failed to destroy network for sandbox \"3c3405eb2945bb7c85de31343d5746d077d6578dc3cb8d57d39a57f8acb862cc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Jan 17 00:02:26.385615 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3c3405eb2945bb7c85de31343d5746d077d6578dc3cb8d57d39a57f8acb862cc-shm.mount: Deactivated successfully. Jan 17 00:02:26.387688 containerd[1933]: time="2026-01-17T00:02:26.386482931Z" level=error msg="encountered an error cleaning up failed sandbox \"3c3405eb2945bb7c85de31343d5746d077d6578dc3cb8d57d39a57f8acb862cc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:02:26.387688 containerd[1933]: time="2026-01-17T00:02:26.386603291Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-n79dp,Uid:9c8a3b18-b170-4670-a5f2-08284d1de243,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3c3405eb2945bb7c85de31343d5746d077d6578dc3cb8d57d39a57f8acb862cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:02:26.388008 kubelet[2386]: E0117 00:02:26.387920 2386 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c3405eb2945bb7c85de31343d5746d077d6578dc3cb8d57d39a57f8acb862cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:02:26.388107 kubelet[2386]: E0117 00:02:26.388034 2386 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c3405eb2945bb7c85de31343d5746d077d6578dc3cb8d57d39a57f8acb862cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-n79dp" Jan 17 00:02:26.388107 kubelet[2386]: E0117 00:02:26.388072 2386 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c3405eb2945bb7c85de31343d5746d077d6578dc3cb8d57d39a57f8acb862cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-n79dp" Jan 17 00:02:26.388234 kubelet[2386]: E0117 00:02:26.388159 2386 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-n79dp_calico-system(9c8a3b18-b170-4670-a5f2-08284d1de243)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-n79dp_calico-system(9c8a3b18-b170-4670-a5f2-08284d1de243)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3c3405eb2945bb7c85de31343d5746d077d6578dc3cb8d57d39a57f8acb862cc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-n79dp" podUID="9c8a3b18-b170-4670-a5f2-08284d1de243" Jan 17 00:02:26.474728 kubelet[2386]: E0117 00:02:26.474667 2386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:02:26.751693 containerd[1933]: time="2026-01-17T00:02:26.751476240Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 17 00:02:26.752813 kubelet[2386]: I0117 00:02:26.752754 2386 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3c3405eb2945bb7c85de31343d5746d077d6578dc3cb8d57d39a57f8acb862cc" Jan 17 00:02:26.754023 
containerd[1933]: time="2026-01-17T00:02:26.753972324Z" level=info msg="StopPodSandbox for \"3c3405eb2945bb7c85de31343d5746d077d6578dc3cb8d57d39a57f8acb862cc\"" Jan 17 00:02:26.754615 containerd[1933]: time="2026-01-17T00:02:26.754401888Z" level=info msg="Ensure that sandbox 3c3405eb2945bb7c85de31343d5746d077d6578dc3cb8d57d39a57f8acb862cc in task-service has been cleanup successfully" Jan 17 00:02:26.799771 containerd[1933]: time="2026-01-17T00:02:26.799606129Z" level=error msg="StopPodSandbox for \"3c3405eb2945bb7c85de31343d5746d077d6578dc3cb8d57d39a57f8acb862cc\" failed" error="failed to destroy network for sandbox \"3c3405eb2945bb7c85de31343d5746d077d6578dc3cb8d57d39a57f8acb862cc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:02:26.800039 kubelet[2386]: E0117 00:02:26.799908 2386 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3c3405eb2945bb7c85de31343d5746d077d6578dc3cb8d57d39a57f8acb862cc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3c3405eb2945bb7c85de31343d5746d077d6578dc3cb8d57d39a57f8acb862cc" Jan 17 00:02:26.800039 kubelet[2386]: E0117 00:02:26.799982 2386 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3c3405eb2945bb7c85de31343d5746d077d6578dc3cb8d57d39a57f8acb862cc"} Jan 17 00:02:26.800258 kubelet[2386]: E0117 00:02:26.800066 2386 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9c8a3b18-b170-4670-a5f2-08284d1de243\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"3c3405eb2945bb7c85de31343d5746d077d6578dc3cb8d57d39a57f8acb862cc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:02:26.800258 kubelet[2386]: E0117 00:02:26.800105 2386 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9c8a3b18-b170-4670-a5f2-08284d1de243\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3c3405eb2945bb7c85de31343d5746d077d6578dc3cb8d57d39a57f8acb862cc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-n79dp" podUID="9c8a3b18-b170-4670-a5f2-08284d1de243" Jan 17 00:02:27.475151 kubelet[2386]: E0117 00:02:27.475090 2386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:02:28.476117 kubelet[2386]: E0117 00:02:28.476029 2386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:02:29.021078 systemd[1]: Created slice kubepods-besteffort-poda293579e_6316_406d_98a2_565c24e14f2c.slice - libcontainer container kubepods-besteffort-poda293579e_6316_406d_98a2_565c24e14f2c.slice. 
Jan 17 00:02:29.024619 kubelet[2386]: I0117 00:02:29.023052 2386 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nsv9\" (UniqueName: \"kubernetes.io/projected/a293579e-6316-406d-98a2-565c24e14f2c-kube-api-access-5nsv9\") pod \"nginx-deployment-7fcdb87857-4ztbm\" (UID: \"a293579e-6316-406d-98a2-565c24e14f2c\") " pod="default/nginx-deployment-7fcdb87857-4ztbm" Jan 17 00:02:29.328718 containerd[1933]: time="2026-01-17T00:02:29.328645885Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-4ztbm,Uid:a293579e-6316-406d-98a2-565c24e14f2c,Namespace:default,Attempt:0,}" Jan 17 00:02:29.470307 containerd[1933]: time="2026-01-17T00:02:29.470220326Z" level=error msg="Failed to destroy network for sandbox \"a88be691a081553393ebfc54c446d1e2760eb70b173ab76beb59ca7763a9c171\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:02:29.473953 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a88be691a081553393ebfc54c446d1e2760eb70b173ab76beb59ca7763a9c171-shm.mount: Deactivated successfully. 
Jan 17 00:02:29.474376 containerd[1933]: time="2026-01-17T00:02:29.473981630Z" level=error msg="encountered an error cleaning up failed sandbox \"a88be691a081553393ebfc54c446d1e2760eb70b173ab76beb59ca7763a9c171\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:02:29.474376 containerd[1933]: time="2026-01-17T00:02:29.474088910Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-4ztbm,Uid:a293579e-6316-406d-98a2-565c24e14f2c,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a88be691a081553393ebfc54c446d1e2760eb70b173ab76beb59ca7763a9c171\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:02:29.474579 kubelet[2386]: E0117 00:02:29.474379 2386 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a88be691a081553393ebfc54c446d1e2760eb70b173ab76beb59ca7763a9c171\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:02:29.474579 kubelet[2386]: E0117 00:02:29.474453 2386 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a88be691a081553393ebfc54c446d1e2760eb70b173ab76beb59ca7763a9c171\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-4ztbm" Jan 17 00:02:29.474579 kubelet[2386]: E0117 00:02:29.474487 2386 
kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a88be691a081553393ebfc54c446d1e2760eb70b173ab76beb59ca7763a9c171\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-4ztbm" Jan 17 00:02:29.475891 kubelet[2386]: E0117 00:02:29.475012 2386 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-7fcdb87857-4ztbm_default(a293579e-6316-406d-98a2-565c24e14f2c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-7fcdb87857-4ztbm_default(a293579e-6316-406d-98a2-565c24e14f2c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a88be691a081553393ebfc54c446d1e2760eb70b173ab76beb59ca7763a9c171\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-7fcdb87857-4ztbm" podUID="a293579e-6316-406d-98a2-565c24e14f2c" Jan 17 00:02:29.476770 kubelet[2386]: E0117 00:02:29.476722 2386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:02:29.764988 kubelet[2386]: I0117 00:02:29.763754 2386 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a88be691a081553393ebfc54c446d1e2760eb70b173ab76beb59ca7763a9c171" Jan 17 00:02:29.765810 containerd[1933]: time="2026-01-17T00:02:29.765603519Z" level=info msg="StopPodSandbox for \"a88be691a081553393ebfc54c446d1e2760eb70b173ab76beb59ca7763a9c171\"" Jan 17 00:02:29.766627 containerd[1933]: time="2026-01-17T00:02:29.766091043Z" level=info msg="Ensure that sandbox 
a88be691a081553393ebfc54c446d1e2760eb70b173ab76beb59ca7763a9c171 in task-service has been cleanup successfully" Jan 17 00:02:29.830268 containerd[1933]: time="2026-01-17T00:02:29.830075260Z" level=error msg="StopPodSandbox for \"a88be691a081553393ebfc54c446d1e2760eb70b173ab76beb59ca7763a9c171\" failed" error="failed to destroy network for sandbox \"a88be691a081553393ebfc54c446d1e2760eb70b173ab76beb59ca7763a9c171\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:02:29.830959 kubelet[2386]: E0117 00:02:29.830899 2386 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a88be691a081553393ebfc54c446d1e2760eb70b173ab76beb59ca7763a9c171\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a88be691a081553393ebfc54c446d1e2760eb70b173ab76beb59ca7763a9c171" Jan 17 00:02:29.831081 kubelet[2386]: E0117 00:02:29.830977 2386 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a88be691a081553393ebfc54c446d1e2760eb70b173ab76beb59ca7763a9c171"} Jan 17 00:02:29.831081 kubelet[2386]: E0117 00:02:29.831031 2386 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a293579e-6316-406d-98a2-565c24e14f2c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a88be691a081553393ebfc54c446d1e2760eb70b173ab76beb59ca7763a9c171\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:02:29.831262 kubelet[2386]: E0117 00:02:29.831071 2386 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a293579e-6316-406d-98a2-565c24e14f2c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a88be691a081553393ebfc54c446d1e2760eb70b173ab76beb59ca7763a9c171\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-7fcdb87857-4ztbm" podUID="a293579e-6316-406d-98a2-565c24e14f2c" Jan 17 00:02:30.478563 kubelet[2386]: E0117 00:02:30.477477 2386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:02:31.478321 kubelet[2386]: E0117 00:02:31.478276 2386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:02:31.810071 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jan 17 00:02:32.439340 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3617252161.mount: Deactivated successfully. 
Jan 17 00:02:32.479597 kubelet[2386]: E0117 00:02:32.479486 2386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:02:32.495612 containerd[1933]: time="2026-01-17T00:02:32.494667965Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:02:32.498729 containerd[1933]: time="2026-01-17T00:02:32.498460061Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150934562" Jan 17 00:02:32.501564 containerd[1933]: time="2026-01-17T00:02:32.501140969Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:02:32.507331 containerd[1933]: time="2026-01-17T00:02:32.505964561Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:02:32.507331 containerd[1933]: time="2026-01-17T00:02:32.507160745Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 5.755494449s" Jan 17 00:02:32.507331 containerd[1933]: time="2026-01-17T00:02:32.507202781Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Jan 17 00:02:32.533459 containerd[1933]: time="2026-01-17T00:02:32.533397989Z" level=info msg="CreateContainer within sandbox 
\"4588e90335dae89aabee686be77ba759bcf754395c86b431f10c95b813d95200\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 17 00:02:32.566452 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount683706073.mount: Deactivated successfully. Jan 17 00:02:32.571592 containerd[1933]: time="2026-01-17T00:02:32.571480001Z" level=info msg="CreateContainer within sandbox \"4588e90335dae89aabee686be77ba759bcf754395c86b431f10c95b813d95200\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"38e34546abd6012f0bb0318da96a916cf7dfcf039bd4b814e7f9a477234a93f0\"" Jan 17 00:02:32.574616 containerd[1933]: time="2026-01-17T00:02:32.572590109Z" level=info msg="StartContainer for \"38e34546abd6012f0bb0318da96a916cf7dfcf039bd4b814e7f9a477234a93f0\"" Jan 17 00:02:32.619863 systemd[1]: Started cri-containerd-38e34546abd6012f0bb0318da96a916cf7dfcf039bd4b814e7f9a477234a93f0.scope - libcontainer container 38e34546abd6012f0bb0318da96a916cf7dfcf039bd4b814e7f9a477234a93f0. Jan 17 00:02:32.675078 containerd[1933]: time="2026-01-17T00:02:32.674919642Z" level=info msg="StartContainer for \"38e34546abd6012f0bb0318da96a916cf7dfcf039bd4b814e7f9a477234a93f0\" returns successfully" Jan 17 00:02:33.000072 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 17 00:02:33.000250 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jan 17 00:02:33.461832 kubelet[2386]: E0117 00:02:33.461773 2386 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:02:33.480605 kubelet[2386]: E0117 00:02:33.480572 2386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:02:34.481012 kubelet[2386]: E0117 00:02:34.480936 2386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:02:34.949584 kernel: bpftool[3196]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 17 00:02:35.247698 systemd-networkd[1823]: vxlan.calico: Link UP Jan 17 00:02:35.247719 systemd-networkd[1823]: vxlan.calico: Gained carrier Jan 17 00:02:35.248452 (udev-worker)[3028]: Network interface NamePolicy= disabled on kernel command line. Jan 17 00:02:35.306354 (udev-worker)[3029]: Network interface NamePolicy= disabled on kernel command line. 
Jan 17 00:02:35.481773 kubelet[2386]: E0117 00:02:35.481686 2386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:02:36.481882 kubelet[2386]: E0117 00:02:36.481818 2386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:02:37.058289 systemd-networkd[1823]: vxlan.calico: Gained IPv6LL Jan 17 00:02:37.482873 kubelet[2386]: E0117 00:02:37.482732 2386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:02:38.483288 kubelet[2386]: E0117 00:02:38.483226 2386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:02:39.483606 kubelet[2386]: E0117 00:02:39.483554 2386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:02:40.036803 ntpd[1906]: Listen normally on 8 vxlan.calico 192.168.19.0:123 Jan 17 00:02:40.036937 ntpd[1906]: Listen normally on 9 vxlan.calico [fe80::6423:d3ff:fe15:703%3]:123 Jan 17 00:02:40.037415 ntpd[1906]: 17 Jan 00:02:40 ntpd[1906]: Listen normally on 8 vxlan.calico 192.168.19.0:123 Jan 17 00:02:40.037415 ntpd[1906]: 17 Jan 00:02:40 ntpd[1906]: Listen normally on 9 vxlan.calico [fe80::6423:d3ff:fe15:703%3]:123 Jan 17 00:02:40.484134 kubelet[2386]: E0117 00:02:40.484075 2386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:02:41.485003 kubelet[2386]: E0117 00:02:41.484938 2386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:02:41.690705 containerd[1933]: time="2026-01-17T00:02:41.689497971Z" level=info msg="StopPodSandbox for \"3c3405eb2945bb7c85de31343d5746d077d6578dc3cb8d57d39a57f8acb862cc\"" Jan 17 00:02:41.774706 kubelet[2386]: 
I0117 00:02:41.774406 2386 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-ms9zg" podStartSLOduration=13.260043346 podStartE2EDuration="28.774382971s" podCreationTimestamp="2026-01-17 00:02:13 +0000 UTC" firstStartedPulling="2026-01-17 00:02:16.994350412 +0000 UTC m=+4.572825972" lastFinishedPulling="2026-01-17 00:02:32.508690037 +0000 UTC m=+20.087165597" observedRunningTime="2026-01-17 00:02:32.809231755 +0000 UTC m=+20.387707327" watchObservedRunningTime="2026-01-17 00:02:41.774382971 +0000 UTC m=+29.352858555" Jan 17 00:02:41.882524 containerd[1933]: 2026-01-17 00:02:41.774 [INFO][3288] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3c3405eb2945bb7c85de31343d5746d077d6578dc3cb8d57d39a57f8acb862cc" Jan 17 00:02:41.882524 containerd[1933]: 2026-01-17 00:02:41.775 [INFO][3288] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3c3405eb2945bb7c85de31343d5746d077d6578dc3cb8d57d39a57f8acb862cc" iface="eth0" netns="/var/run/netns/cni-af7556fa-8267-a36d-a3b7-4a5d805115cf" Jan 17 00:02:41.882524 containerd[1933]: 2026-01-17 00:02:41.775 [INFO][3288] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3c3405eb2945bb7c85de31343d5746d077d6578dc3cb8d57d39a57f8acb862cc" iface="eth0" netns="/var/run/netns/cni-af7556fa-8267-a36d-a3b7-4a5d805115cf" Jan 17 00:02:41.882524 containerd[1933]: 2026-01-17 00:02:41.778 [INFO][3288] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="3c3405eb2945bb7c85de31343d5746d077d6578dc3cb8d57d39a57f8acb862cc" iface="eth0" netns="/var/run/netns/cni-af7556fa-8267-a36d-a3b7-4a5d805115cf" Jan 17 00:02:41.882524 containerd[1933]: 2026-01-17 00:02:41.778 [INFO][3288] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3c3405eb2945bb7c85de31343d5746d077d6578dc3cb8d57d39a57f8acb862cc" Jan 17 00:02:41.882524 containerd[1933]: 2026-01-17 00:02:41.778 [INFO][3288] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3c3405eb2945bb7c85de31343d5746d077d6578dc3cb8d57d39a57f8acb862cc" Jan 17 00:02:41.882524 containerd[1933]: 2026-01-17 00:02:41.854 [INFO][3296] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="3c3405eb2945bb7c85de31343d5746d077d6578dc3cb8d57d39a57f8acb862cc" HandleID="k8s-pod-network.3c3405eb2945bb7c85de31343d5746d077d6578dc3cb8d57d39a57f8acb862cc" Workload="172.31.23.180-k8s-csi--node--driver--n79dp-eth0" Jan 17 00:02:41.882524 containerd[1933]: 2026-01-17 00:02:41.854 [INFO][3296] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:02:41.882524 containerd[1933]: 2026-01-17 00:02:41.854 [INFO][3296] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:02:41.882524 containerd[1933]: 2026-01-17 00:02:41.871 [WARNING][3296] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3c3405eb2945bb7c85de31343d5746d077d6578dc3cb8d57d39a57f8acb862cc" HandleID="k8s-pod-network.3c3405eb2945bb7c85de31343d5746d077d6578dc3cb8d57d39a57f8acb862cc" Workload="172.31.23.180-k8s-csi--node--driver--n79dp-eth0" Jan 17 00:02:41.882524 containerd[1933]: 2026-01-17 00:02:41.871 [INFO][3296] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="3c3405eb2945bb7c85de31343d5746d077d6578dc3cb8d57d39a57f8acb862cc" HandleID="k8s-pod-network.3c3405eb2945bb7c85de31343d5746d077d6578dc3cb8d57d39a57f8acb862cc" Workload="172.31.23.180-k8s-csi--node--driver--n79dp-eth0" Jan 17 00:02:41.882524 containerd[1933]: 2026-01-17 00:02:41.874 [INFO][3296] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:02:41.882524 containerd[1933]: 2026-01-17 00:02:41.879 [INFO][3288] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3c3405eb2945bb7c85de31343d5746d077d6578dc3cb8d57d39a57f8acb862cc" Jan 17 00:02:41.883470 containerd[1933]: time="2026-01-17T00:02:41.882931684Z" level=info msg="TearDown network for sandbox \"3c3405eb2945bb7c85de31343d5746d077d6578dc3cb8d57d39a57f8acb862cc\" successfully" Jan 17 00:02:41.883470 containerd[1933]: time="2026-01-17T00:02:41.882996172Z" level=info msg="StopPodSandbox for \"3c3405eb2945bb7c85de31343d5746d077d6578dc3cb8d57d39a57f8acb862cc\" returns successfully" Jan 17 00:02:41.886515 containerd[1933]: time="2026-01-17T00:02:41.886383796Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-n79dp,Uid:9c8a3b18-b170-4670-a5f2-08284d1de243,Namespace:calico-system,Attempt:1,}" Jan 17 00:02:41.886716 systemd[1]: run-netns-cni\x2daf7556fa\x2d8267\x2da36d\x2da3b7\x2d4a5d805115cf.mount: Deactivated successfully. Jan 17 00:02:42.077352 systemd-networkd[1823]: calie4668e9419e: Link UP Jan 17 00:02:42.080400 systemd-networkd[1823]: calie4668e9419e: Gained carrier Jan 17 00:02:42.081164 (udev-worker)[3322]: Network interface NamePolicy= disabled on kernel command line. 
Jan 17 00:02:42.113658 containerd[1933]: 2026-01-17 00:02:41.969 [INFO][3303] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.23.180-k8s-csi--node--driver--n79dp-eth0 csi-node-driver- calico-system 9c8a3b18-b170-4670-a5f2-08284d1de243 1275 0 2026-01-17 00:02:13 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 172.31.23.180 csi-node-driver-n79dp eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calie4668e9419e [] [] }} ContainerID="996ea61e9b481a9794de6c239ba69be99a4d53ec169b2a2913f6be8586a3d5d8" Namespace="calico-system" Pod="csi-node-driver-n79dp" WorkloadEndpoint="172.31.23.180-k8s-csi--node--driver--n79dp-" Jan 17 00:02:42.113658 containerd[1933]: 2026-01-17 00:02:41.970 [INFO][3303] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="996ea61e9b481a9794de6c239ba69be99a4d53ec169b2a2913f6be8586a3d5d8" Namespace="calico-system" Pod="csi-node-driver-n79dp" WorkloadEndpoint="172.31.23.180-k8s-csi--node--driver--n79dp-eth0" Jan 17 00:02:42.113658 containerd[1933]: 2026-01-17 00:02:42.013 [INFO][3315] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="996ea61e9b481a9794de6c239ba69be99a4d53ec169b2a2913f6be8586a3d5d8" HandleID="k8s-pod-network.996ea61e9b481a9794de6c239ba69be99a4d53ec169b2a2913f6be8586a3d5d8" Workload="172.31.23.180-k8s-csi--node--driver--n79dp-eth0" Jan 17 00:02:42.113658 containerd[1933]: 2026-01-17 00:02:42.014 [INFO][3315] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="996ea61e9b481a9794de6c239ba69be99a4d53ec169b2a2913f6be8586a3d5d8" HandleID="k8s-pod-network.996ea61e9b481a9794de6c239ba69be99a4d53ec169b2a2913f6be8586a3d5d8" 
Workload="172.31.23.180-k8s-csi--node--driver--n79dp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b590), Attrs:map[string]string{"namespace":"calico-system", "node":"172.31.23.180", "pod":"csi-node-driver-n79dp", "timestamp":"2026-01-17 00:02:42.013775172 +0000 UTC"}, Hostname:"172.31.23.180", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:02:42.113658 containerd[1933]: 2026-01-17 00:02:42.014 [INFO][3315] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:02:42.113658 containerd[1933]: 2026-01-17 00:02:42.014 [INFO][3315] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:02:42.113658 containerd[1933]: 2026-01-17 00:02:42.014 [INFO][3315] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.23.180' Jan 17 00:02:42.113658 containerd[1933]: 2026-01-17 00:02:42.027 [INFO][3315] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.996ea61e9b481a9794de6c239ba69be99a4d53ec169b2a2913f6be8586a3d5d8" host="172.31.23.180" Jan 17 00:02:42.113658 containerd[1933]: 2026-01-17 00:02:42.034 [INFO][3315] ipam/ipam.go 394: Looking up existing affinities for host host="172.31.23.180" Jan 17 00:02:42.113658 containerd[1933]: 2026-01-17 00:02:42.041 [INFO][3315] ipam/ipam.go 511: Trying affinity for 192.168.19.0/26 host="172.31.23.180" Jan 17 00:02:42.113658 containerd[1933]: 2026-01-17 00:02:42.044 [INFO][3315] ipam/ipam.go 158: Attempting to load block cidr=192.168.19.0/26 host="172.31.23.180" Jan 17 00:02:42.113658 containerd[1933]: 2026-01-17 00:02:42.048 [INFO][3315] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.19.0/26 host="172.31.23.180" Jan 17 00:02:42.113658 containerd[1933]: 2026-01-17 00:02:42.048 [INFO][3315] ipam/ipam.go 1219: Attempting to assign 1 addresses from 
block block=192.168.19.0/26 handle="k8s-pod-network.996ea61e9b481a9794de6c239ba69be99a4d53ec169b2a2913f6be8586a3d5d8" host="172.31.23.180" Jan 17 00:02:42.113658 containerd[1933]: 2026-01-17 00:02:42.051 [INFO][3315] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.996ea61e9b481a9794de6c239ba69be99a4d53ec169b2a2913f6be8586a3d5d8 Jan 17 00:02:42.113658 containerd[1933]: 2026-01-17 00:02:42.059 [INFO][3315] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.19.0/26 handle="k8s-pod-network.996ea61e9b481a9794de6c239ba69be99a4d53ec169b2a2913f6be8586a3d5d8" host="172.31.23.180" Jan 17 00:02:42.113658 containerd[1933]: 2026-01-17 00:02:42.067 [INFO][3315] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.19.1/26] block=192.168.19.0/26 handle="k8s-pod-network.996ea61e9b481a9794de6c239ba69be99a4d53ec169b2a2913f6be8586a3d5d8" host="172.31.23.180" Jan 17 00:02:42.113658 containerd[1933]: 2026-01-17 00:02:42.067 [INFO][3315] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.19.1/26] handle="k8s-pod-network.996ea61e9b481a9794de6c239ba69be99a4d53ec169b2a2913f6be8586a3d5d8" host="172.31.23.180" Jan 17 00:02:42.113658 containerd[1933]: 2026-01-17 00:02:42.067 [INFO][3315] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 17 00:02:42.113658 containerd[1933]: 2026-01-17 00:02:42.067 [INFO][3315] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.19.1/26] IPv6=[] ContainerID="996ea61e9b481a9794de6c239ba69be99a4d53ec169b2a2913f6be8586a3d5d8" HandleID="k8s-pod-network.996ea61e9b481a9794de6c239ba69be99a4d53ec169b2a2913f6be8586a3d5d8" Workload="172.31.23.180-k8s-csi--node--driver--n79dp-eth0" Jan 17 00:02:42.115969 containerd[1933]: 2026-01-17 00:02:42.071 [INFO][3303] cni-plugin/k8s.go 418: Populated endpoint ContainerID="996ea61e9b481a9794de6c239ba69be99a4d53ec169b2a2913f6be8586a3d5d8" Namespace="calico-system" Pod="csi-node-driver-n79dp" WorkloadEndpoint="172.31.23.180-k8s-csi--node--driver--n79dp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.23.180-k8s-csi--node--driver--n79dp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9c8a3b18-b170-4670-a5f2-08284d1de243", ResourceVersion:"1275", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 2, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.23.180", ContainerID:"", Pod:"csi-node-driver-n79dp", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.19.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie4668e9419e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:02:42.115969 containerd[1933]: 2026-01-17 00:02:42.071 [INFO][3303] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.19.1/32] ContainerID="996ea61e9b481a9794de6c239ba69be99a4d53ec169b2a2913f6be8586a3d5d8" Namespace="calico-system" Pod="csi-node-driver-n79dp" WorkloadEndpoint="172.31.23.180-k8s-csi--node--driver--n79dp-eth0" Jan 17 00:02:42.115969 containerd[1933]: 2026-01-17 00:02:42.071 [INFO][3303] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie4668e9419e ContainerID="996ea61e9b481a9794de6c239ba69be99a4d53ec169b2a2913f6be8586a3d5d8" Namespace="calico-system" Pod="csi-node-driver-n79dp" WorkloadEndpoint="172.31.23.180-k8s-csi--node--driver--n79dp-eth0" Jan 17 00:02:42.115969 containerd[1933]: 2026-01-17 00:02:42.082 [INFO][3303] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="996ea61e9b481a9794de6c239ba69be99a4d53ec169b2a2913f6be8586a3d5d8" Namespace="calico-system" Pod="csi-node-driver-n79dp" WorkloadEndpoint="172.31.23.180-k8s-csi--node--driver--n79dp-eth0" Jan 17 00:02:42.115969 containerd[1933]: 2026-01-17 00:02:42.083 [INFO][3303] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="996ea61e9b481a9794de6c239ba69be99a4d53ec169b2a2913f6be8586a3d5d8" Namespace="calico-system" Pod="csi-node-driver-n79dp" WorkloadEndpoint="172.31.23.180-k8s-csi--node--driver--n79dp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.23.180-k8s-csi--node--driver--n79dp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9c8a3b18-b170-4670-a5f2-08284d1de243", ResourceVersion:"1275", Generation:0, 
CreationTimestamp:time.Date(2026, time.January, 17, 0, 2, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.23.180", ContainerID:"996ea61e9b481a9794de6c239ba69be99a4d53ec169b2a2913f6be8586a3d5d8", Pod:"csi-node-driver-n79dp", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.19.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie4668e9419e", MAC:"b2:2d:87:93:0e:30", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:02:42.115969 containerd[1933]: 2026-01-17 00:02:42.108 [INFO][3303] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="996ea61e9b481a9794de6c239ba69be99a4d53ec169b2a2913f6be8586a3d5d8" Namespace="calico-system" Pod="csi-node-driver-n79dp" WorkloadEndpoint="172.31.23.180-k8s-csi--node--driver--n79dp-eth0" Jan 17 00:02:42.150925 containerd[1933]: time="2026-01-17T00:02:42.150758101Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:02:42.150925 containerd[1933]: time="2026-01-17T00:02:42.150855301Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:02:42.150925 containerd[1933]: time="2026-01-17T00:02:42.150912829Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:02:42.151348 containerd[1933]: time="2026-01-17T00:02:42.151091305Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:02:42.204856 systemd[1]: Started cri-containerd-996ea61e9b481a9794de6c239ba69be99a4d53ec169b2a2913f6be8586a3d5d8.scope - libcontainer container 996ea61e9b481a9794de6c239ba69be99a4d53ec169b2a2913f6be8586a3d5d8. Jan 17 00:02:42.245624 containerd[1933]: time="2026-01-17T00:02:42.245516641Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-n79dp,Uid:9c8a3b18-b170-4670-a5f2-08284d1de243,Namespace:calico-system,Attempt:1,} returns sandbox id \"996ea61e9b481a9794de6c239ba69be99a4d53ec169b2a2913f6be8586a3d5d8\"" Jan 17 00:02:42.249468 containerd[1933]: time="2026-01-17T00:02:42.249423013Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 17 00:02:42.486064 kubelet[2386]: E0117 00:02:42.485890 2386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:02:42.508774 containerd[1933]: time="2026-01-17T00:02:42.508703103Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:02:42.510877 containerd[1933]: time="2026-01-17T00:02:42.510810315Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 17 00:02:42.510877 containerd[1933]: time="2026-01-17T00:02:42.510849531Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 17 00:02:42.511158 kubelet[2386]: E0117 00:02:42.511076 2386 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:02:42.511301 kubelet[2386]: E0117 00:02:42.511177 2386 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:02:42.511773 kubelet[2386]: E0117 00:02:42.511526 2386 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rm7rz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-n79dp_calico-system(9c8a3b18-b170-4670-a5f2-08284d1de243): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 17 00:02:42.514112 containerd[1933]: time="2026-01-17T00:02:42.514058379Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 17 00:02:42.808194 containerd[1933]: time="2026-01-17T00:02:42.808114288Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:02:42.810384 containerd[1933]: time="2026-01-17T00:02:42.810289540Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 17 00:02:42.810499 containerd[1933]: time="2026-01-17T00:02:42.810424600Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 17 00:02:42.811474 kubelet[2386]: E0117 00:02:42.810699 2386 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:02:42.811474 kubelet[2386]: E0117 00:02:42.810760 2386 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:02:42.811474 kubelet[2386]: 
E0117 00:02:42.810967 2386 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rm7rz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-n79dp_calico-system(9c8a3b18-b170-4670-a5f2-08284d1de243): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 17 00:02:42.812957 kubelet[2386]: E0117 00:02:42.812876 2386 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-n79dp" podUID="9c8a3b18-b170-4670-a5f2-08284d1de243" Jan 17 00:02:43.266087 systemd-networkd[1823]: calie4668e9419e: Gained IPv6LL Jan 17 00:02:43.486998 kubelet[2386]: E0117 00:02:43.486922 2386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:02:43.811099 kubelet[2386]: E0117 00:02:43.811003 2386 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-n79dp" podUID="9c8a3b18-b170-4670-a5f2-08284d1de243" Jan 17 00:02:44.487761 kubelet[2386]: E0117 00:02:44.487700 2386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:02:44.689347 containerd[1933]: time="2026-01-17T00:02:44.689270130Z" level=info msg="StopPodSandbox for \"a88be691a081553393ebfc54c446d1e2760eb70b173ab76beb59ca7763a9c171\"" Jan 17 00:02:44.818521 containerd[1933]: 2026-01-17 00:02:44.757 [INFO][3388] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a88be691a081553393ebfc54c446d1e2760eb70b173ab76beb59ca7763a9c171" Jan 17 00:02:44.818521 containerd[1933]: 2026-01-17 00:02:44.757 [INFO][3388] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a88be691a081553393ebfc54c446d1e2760eb70b173ab76beb59ca7763a9c171" iface="eth0" netns="/var/run/netns/cni-d98f0d9c-e2b6-3a7b-3e03-720a26281218" Jan 17 00:02:44.818521 containerd[1933]: 2026-01-17 00:02:44.758 [INFO][3388] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a88be691a081553393ebfc54c446d1e2760eb70b173ab76beb59ca7763a9c171" iface="eth0" netns="/var/run/netns/cni-d98f0d9c-e2b6-3a7b-3e03-720a26281218" Jan 17 00:02:44.818521 containerd[1933]: 2026-01-17 00:02:44.759 [INFO][3388] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="a88be691a081553393ebfc54c446d1e2760eb70b173ab76beb59ca7763a9c171" iface="eth0" netns="/var/run/netns/cni-d98f0d9c-e2b6-3a7b-3e03-720a26281218" Jan 17 00:02:44.818521 containerd[1933]: 2026-01-17 00:02:44.759 [INFO][3388] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a88be691a081553393ebfc54c446d1e2760eb70b173ab76beb59ca7763a9c171" Jan 17 00:02:44.818521 containerd[1933]: 2026-01-17 00:02:44.759 [INFO][3388] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a88be691a081553393ebfc54c446d1e2760eb70b173ab76beb59ca7763a9c171" Jan 17 00:02:44.818521 containerd[1933]: 2026-01-17 00:02:44.797 [INFO][3396] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a88be691a081553393ebfc54c446d1e2760eb70b173ab76beb59ca7763a9c171" HandleID="k8s-pod-network.a88be691a081553393ebfc54c446d1e2760eb70b173ab76beb59ca7763a9c171" Workload="172.31.23.180-k8s-nginx--deployment--7fcdb87857--4ztbm-eth0" Jan 17 00:02:44.818521 containerd[1933]: 2026-01-17 00:02:44.797 [INFO][3396] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:02:44.818521 containerd[1933]: 2026-01-17 00:02:44.797 [INFO][3396] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:02:44.818521 containerd[1933]: 2026-01-17 00:02:44.811 [WARNING][3396] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a88be691a081553393ebfc54c446d1e2760eb70b173ab76beb59ca7763a9c171" HandleID="k8s-pod-network.a88be691a081553393ebfc54c446d1e2760eb70b173ab76beb59ca7763a9c171" Workload="172.31.23.180-k8s-nginx--deployment--7fcdb87857--4ztbm-eth0" Jan 17 00:02:44.818521 containerd[1933]: 2026-01-17 00:02:44.811 [INFO][3396] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a88be691a081553393ebfc54c446d1e2760eb70b173ab76beb59ca7763a9c171" HandleID="k8s-pod-network.a88be691a081553393ebfc54c446d1e2760eb70b173ab76beb59ca7763a9c171" Workload="172.31.23.180-k8s-nginx--deployment--7fcdb87857--4ztbm-eth0" Jan 17 00:02:44.818521 containerd[1933]: 2026-01-17 00:02:44.813 [INFO][3396] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:02:44.818521 containerd[1933]: 2026-01-17 00:02:44.816 [INFO][3388] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a88be691a081553393ebfc54c446d1e2760eb70b173ab76beb59ca7763a9c171" Jan 17 00:02:44.821095 containerd[1933]: time="2026-01-17T00:02:44.821008326Z" level=info msg="TearDown network for sandbox \"a88be691a081553393ebfc54c446d1e2760eb70b173ab76beb59ca7763a9c171\" successfully" Jan 17 00:02:44.821095 containerd[1933]: time="2026-01-17T00:02:44.821068194Z" level=info msg="StopPodSandbox for \"a88be691a081553393ebfc54c446d1e2760eb70b173ab76beb59ca7763a9c171\" returns successfully" Jan 17 00:02:44.822476 systemd[1]: run-netns-cni\x2dd98f0d9c\x2de2b6\x2d3a7b\x2d3e03\x2d720a26281218.mount: Deactivated successfully. 
Jan 17 00:02:44.824090 containerd[1933]: time="2026-01-17T00:02:44.822137046Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-4ztbm,Uid:a293579e-6316-406d-98a2-565c24e14f2c,Namespace:default,Attempt:1,}" Jan 17 00:02:45.024554 systemd-networkd[1823]: cali7e056047c84: Link UP Jan 17 00:02:45.028041 systemd-networkd[1823]: cali7e056047c84: Gained carrier Jan 17 00:02:45.056760 containerd[1933]: 2026-01-17 00:02:44.918 [INFO][3403] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.23.180-k8s-nginx--deployment--7fcdb87857--4ztbm-eth0 nginx-deployment-7fcdb87857- default a293579e-6316-406d-98a2-565c24e14f2c 1302 0 2026-01-17 00:02:28 +0000 UTC map[app:nginx pod-template-hash:7fcdb87857 projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 172.31.23.180 nginx-deployment-7fcdb87857-4ztbm eth0 default [] [] [kns.default ksa.default.default] cali7e056047c84 [] [] }} ContainerID="57a6d9fccfa335a40c5adeea350a196d02d60985178e6d808744f12e2215c9aa" Namespace="default" Pod="nginx-deployment-7fcdb87857-4ztbm" WorkloadEndpoint="172.31.23.180-k8s-nginx--deployment--7fcdb87857--4ztbm-" Jan 17 00:02:45.056760 containerd[1933]: 2026-01-17 00:02:44.918 [INFO][3403] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="57a6d9fccfa335a40c5adeea350a196d02d60985178e6d808744f12e2215c9aa" Namespace="default" Pod="nginx-deployment-7fcdb87857-4ztbm" WorkloadEndpoint="172.31.23.180-k8s-nginx--deployment--7fcdb87857--4ztbm-eth0" Jan 17 00:02:45.056760 containerd[1933]: 2026-01-17 00:02:44.961 [INFO][3414] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="57a6d9fccfa335a40c5adeea350a196d02d60985178e6d808744f12e2215c9aa" HandleID="k8s-pod-network.57a6d9fccfa335a40c5adeea350a196d02d60985178e6d808744f12e2215c9aa" 
Workload="172.31.23.180-k8s-nginx--deployment--7fcdb87857--4ztbm-eth0" Jan 17 00:02:45.056760 containerd[1933]: 2026-01-17 00:02:44.961 [INFO][3414] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="57a6d9fccfa335a40c5adeea350a196d02d60985178e6d808744f12e2215c9aa" HandleID="k8s-pod-network.57a6d9fccfa335a40c5adeea350a196d02d60985178e6d808744f12e2215c9aa" Workload="172.31.23.180-k8s-nginx--deployment--7fcdb87857--4ztbm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002aa240), Attrs:map[string]string{"namespace":"default", "node":"172.31.23.180", "pod":"nginx-deployment-7fcdb87857-4ztbm", "timestamp":"2026-01-17 00:02:44.961235311 +0000 UTC"}, Hostname:"172.31.23.180", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:02:45.056760 containerd[1933]: 2026-01-17 00:02:44.961 [INFO][3414] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:02:45.056760 containerd[1933]: 2026-01-17 00:02:44.961 [INFO][3414] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 00:02:45.056760 containerd[1933]: 2026-01-17 00:02:44.961 [INFO][3414] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.23.180' Jan 17 00:02:45.056760 containerd[1933]: 2026-01-17 00:02:44.977 [INFO][3414] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.57a6d9fccfa335a40c5adeea350a196d02d60985178e6d808744f12e2215c9aa" host="172.31.23.180" Jan 17 00:02:45.056760 containerd[1933]: 2026-01-17 00:02:44.983 [INFO][3414] ipam/ipam.go 394: Looking up existing affinities for host host="172.31.23.180" Jan 17 00:02:45.056760 containerd[1933]: 2026-01-17 00:02:44.990 [INFO][3414] ipam/ipam.go 511: Trying affinity for 192.168.19.0/26 host="172.31.23.180" Jan 17 00:02:45.056760 containerd[1933]: 2026-01-17 00:02:44.993 [INFO][3414] ipam/ipam.go 158: Attempting to load block cidr=192.168.19.0/26 host="172.31.23.180" Jan 17 00:02:45.056760 containerd[1933]: 2026-01-17 00:02:44.996 [INFO][3414] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.19.0/26 host="172.31.23.180" Jan 17 00:02:45.056760 containerd[1933]: 2026-01-17 00:02:44.996 [INFO][3414] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.19.0/26 handle="k8s-pod-network.57a6d9fccfa335a40c5adeea350a196d02d60985178e6d808744f12e2215c9aa" host="172.31.23.180" Jan 17 00:02:45.056760 containerd[1933]: 2026-01-17 00:02:44.998 [INFO][3414] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.57a6d9fccfa335a40c5adeea350a196d02d60985178e6d808744f12e2215c9aa Jan 17 00:02:45.056760 containerd[1933]: 2026-01-17 00:02:45.003 [INFO][3414] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.19.0/26 handle="k8s-pod-network.57a6d9fccfa335a40c5adeea350a196d02d60985178e6d808744f12e2215c9aa" host="172.31.23.180" Jan 17 00:02:45.056760 containerd[1933]: 2026-01-17 00:02:45.015 [INFO][3414] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.19.2/26] block=192.168.19.0/26 
handle="k8s-pod-network.57a6d9fccfa335a40c5adeea350a196d02d60985178e6d808744f12e2215c9aa" host="172.31.23.180" Jan 17 00:02:45.056760 containerd[1933]: 2026-01-17 00:02:45.015 [INFO][3414] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.19.2/26] handle="k8s-pod-network.57a6d9fccfa335a40c5adeea350a196d02d60985178e6d808744f12e2215c9aa" host="172.31.23.180" Jan 17 00:02:45.056760 containerd[1933]: 2026-01-17 00:02:45.015 [INFO][3414] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:02:45.056760 containerd[1933]: 2026-01-17 00:02:45.015 [INFO][3414] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.19.2/26] IPv6=[] ContainerID="57a6d9fccfa335a40c5adeea350a196d02d60985178e6d808744f12e2215c9aa" HandleID="k8s-pod-network.57a6d9fccfa335a40c5adeea350a196d02d60985178e6d808744f12e2215c9aa" Workload="172.31.23.180-k8s-nginx--deployment--7fcdb87857--4ztbm-eth0" Jan 17 00:02:45.059512 containerd[1933]: 2026-01-17 00:02:45.018 [INFO][3403] cni-plugin/k8s.go 418: Populated endpoint ContainerID="57a6d9fccfa335a40c5adeea350a196d02d60985178e6d808744f12e2215c9aa" Namespace="default" Pod="nginx-deployment-7fcdb87857-4ztbm" WorkloadEndpoint="172.31.23.180-k8s-nginx--deployment--7fcdb87857--4ztbm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.23.180-k8s-nginx--deployment--7fcdb87857--4ztbm-eth0", GenerateName:"nginx-deployment-7fcdb87857-", Namespace:"default", SelfLink:"", UID:"a293579e-6316-406d-98a2-565c24e14f2c", ResourceVersion:"1302", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 2, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"7fcdb87857", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.23.180", ContainerID:"", Pod:"nginx-deployment-7fcdb87857-4ztbm", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.19.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali7e056047c84", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:02:45.059512 containerd[1933]: 2026-01-17 00:02:45.018 [INFO][3403] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.19.2/32] ContainerID="57a6d9fccfa335a40c5adeea350a196d02d60985178e6d808744f12e2215c9aa" Namespace="default" Pod="nginx-deployment-7fcdb87857-4ztbm" WorkloadEndpoint="172.31.23.180-k8s-nginx--deployment--7fcdb87857--4ztbm-eth0" Jan 17 00:02:45.059512 containerd[1933]: 2026-01-17 00:02:45.018 [INFO][3403] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7e056047c84 ContainerID="57a6d9fccfa335a40c5adeea350a196d02d60985178e6d808744f12e2215c9aa" Namespace="default" Pod="nginx-deployment-7fcdb87857-4ztbm" WorkloadEndpoint="172.31.23.180-k8s-nginx--deployment--7fcdb87857--4ztbm-eth0" Jan 17 00:02:45.059512 containerd[1933]: 2026-01-17 00:02:45.027 [INFO][3403] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="57a6d9fccfa335a40c5adeea350a196d02d60985178e6d808744f12e2215c9aa" Namespace="default" Pod="nginx-deployment-7fcdb87857-4ztbm" WorkloadEndpoint="172.31.23.180-k8s-nginx--deployment--7fcdb87857--4ztbm-eth0" Jan 17 00:02:45.059512 containerd[1933]: 2026-01-17 00:02:45.033 [INFO][3403] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="57a6d9fccfa335a40c5adeea350a196d02d60985178e6d808744f12e2215c9aa" Namespace="default" 
Pod="nginx-deployment-7fcdb87857-4ztbm" WorkloadEndpoint="172.31.23.180-k8s-nginx--deployment--7fcdb87857--4ztbm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.23.180-k8s-nginx--deployment--7fcdb87857--4ztbm-eth0", GenerateName:"nginx-deployment-7fcdb87857-", Namespace:"default", SelfLink:"", UID:"a293579e-6316-406d-98a2-565c24e14f2c", ResourceVersion:"1302", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 2, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"7fcdb87857", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.23.180", ContainerID:"57a6d9fccfa335a40c5adeea350a196d02d60985178e6d808744f12e2215c9aa", Pod:"nginx-deployment-7fcdb87857-4ztbm", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.19.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali7e056047c84", MAC:"5a:fc:4e:ff:ad:0e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:02:45.059512 containerd[1933]: 2026-01-17 00:02:45.051 [INFO][3403] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="57a6d9fccfa335a40c5adeea350a196d02d60985178e6d808744f12e2215c9aa" Namespace="default" Pod="nginx-deployment-7fcdb87857-4ztbm" WorkloadEndpoint="172.31.23.180-k8s-nginx--deployment--7fcdb87857--4ztbm-eth0" Jan 17 00:02:45.097602 containerd[1933]: time="2026-01-17T00:02:45.095190232Z" 
level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:02:45.097602 containerd[1933]: time="2026-01-17T00:02:45.095278132Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:02:45.097602 containerd[1933]: time="2026-01-17T00:02:45.095314924Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:02:45.097602 containerd[1933]: time="2026-01-17T00:02:45.095476120Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:02:45.143850 systemd[1]: Started cri-containerd-57a6d9fccfa335a40c5adeea350a196d02d60985178e6d808744f12e2215c9aa.scope - libcontainer container 57a6d9fccfa335a40c5adeea350a196d02d60985178e6d808744f12e2215c9aa. Jan 17 00:02:45.205661 containerd[1933]: time="2026-01-17T00:02:45.205603468Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-4ztbm,Uid:a293579e-6316-406d-98a2-565c24e14f2c,Namespace:default,Attempt:1,} returns sandbox id \"57a6d9fccfa335a40c5adeea350a196d02d60985178e6d808744f12e2215c9aa\"" Jan 17 00:02:45.207710 containerd[1933]: time="2026-01-17T00:02:45.207593704Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 17 00:02:45.488621 kubelet[2386]: E0117 00:02:45.488447 2386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:02:46.019824 update_engine[1911]: I20260117 00:02:46.019734 1911 update_attempter.cc:509] Updating boot flags... 
Jan 17 00:02:46.091678 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (3487) Jan 17 00:02:46.403606 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (3488) Jan 17 00:02:46.490570 kubelet[2386]: E0117 00:02:46.488932 2386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:02:46.978410 systemd-networkd[1823]: cali7e056047c84: Gained IPv6LL Jan 17 00:02:47.490146 kubelet[2386]: E0117 00:02:47.490089 2386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:02:48.446301 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3921633182.mount: Deactivated successfully. Jan 17 00:02:48.490715 kubelet[2386]: E0117 00:02:48.490622 2386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:02:49.036827 ntpd[1906]: Listen normally on 10 calie4668e9419e [fe80::ecee:eeff:feee:eeee%6]:123 Jan 17 00:02:49.037527 ntpd[1906]: 17 Jan 00:02:49 ntpd[1906]: Listen normally on 10 calie4668e9419e [fe80::ecee:eeff:feee:eeee%6]:123 Jan 17 00:02:49.037527 ntpd[1906]: 17 Jan 00:02:49 ntpd[1906]: Listen normally on 11 cali7e056047c84 [fe80::ecee:eeff:feee:eeee%7]:123 Jan 17 00:02:49.036933 ntpd[1906]: Listen normally on 11 cali7e056047c84 [fe80::ecee:eeff:feee:eeee%7]:123 Jan 17 00:02:49.491286 kubelet[2386]: E0117 00:02:49.491224 2386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:02:49.765344 containerd[1933]: time="2026-01-17T00:02:49.764306639Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:02:49.766790 containerd[1933]: time="2026-01-17T00:02:49.766727771Z" level=info msg="stop pulling 
image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=62401393" Jan 17 00:02:49.768687 containerd[1933]: time="2026-01-17T00:02:49.768626903Z" level=info msg="ImageCreate event name:\"sha256:d8ce8e982176f4e6830314cee19497d3297547f34d69b16a7d7e767c19c79049\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:02:49.778249 containerd[1933]: time="2026-01-17T00:02:49.776643743Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:892d1d54ab079b8cffa2317ccb45829886a0c3c3edbdf92bb286904b09797767\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:02:49.778914 containerd[1933]: time="2026-01-17T00:02:49.778864139Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:d8ce8e982176f4e6830314cee19497d3297547f34d69b16a7d7e767c19c79049\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:892d1d54ab079b8cffa2317ccb45829886a0c3c3edbdf92bb286904b09797767\", size \"62401271\" in 4.571208395s" Jan 17 00:02:49.779064 containerd[1933]: time="2026-01-17T00:02:49.779034395Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:d8ce8e982176f4e6830314cee19497d3297547f34d69b16a7d7e767c19c79049\"" Jan 17 00:02:49.786877 containerd[1933]: time="2026-01-17T00:02:49.786815783Z" level=info msg="CreateContainer within sandbox \"57a6d9fccfa335a40c5adeea350a196d02d60985178e6d808744f12e2215c9aa\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Jan 17 00:02:49.818016 containerd[1933]: time="2026-01-17T00:02:49.817939319Z" level=info msg="CreateContainer within sandbox \"57a6d9fccfa335a40c5adeea350a196d02d60985178e6d808744f12e2215c9aa\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"a38421d5cf9f1e20d0a136f712639b928fd0810b3f8f82c9d8197617a4d1d455\"" Jan 17 00:02:49.819112 containerd[1933]: time="2026-01-17T00:02:49.818995451Z" level=info msg="StartContainer for 
\"a38421d5cf9f1e20d0a136f712639b928fd0810b3f8f82c9d8197617a4d1d455\"" Jan 17 00:02:49.881853 systemd[1]: Started cri-containerd-a38421d5cf9f1e20d0a136f712639b928fd0810b3f8f82c9d8197617a4d1d455.scope - libcontainer container a38421d5cf9f1e20d0a136f712639b928fd0810b3f8f82c9d8197617a4d1d455. Jan 17 00:02:49.929919 containerd[1933]: time="2026-01-17T00:02:49.929726916Z" level=info msg="StartContainer for \"a38421d5cf9f1e20d0a136f712639b928fd0810b3f8f82c9d8197617a4d1d455\" returns successfully" Jan 17 00:02:50.492306 kubelet[2386]: E0117 00:02:50.492252 2386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:02:51.493208 kubelet[2386]: E0117 00:02:51.493149 2386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:02:52.493394 kubelet[2386]: E0117 00:02:52.493347 2386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:02:53.462227 kubelet[2386]: E0117 00:02:53.462144 2386 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:02:53.494804 kubelet[2386]: E0117 00:02:53.494741 2386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:02:54.495177 kubelet[2386]: E0117 00:02:54.495118 2386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:02:55.495615 kubelet[2386]: E0117 00:02:55.495564 2386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:02:56.495957 kubelet[2386]: E0117 00:02:56.495892 2386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:02:57.496481 kubelet[2386]: E0117 00:02:57.496410 
2386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:02:57.690588 containerd[1933]: time="2026-01-17T00:02:57.690162438Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 17 00:02:57.716583 kubelet[2386]: I0117 00:02:57.715710 2386 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-7fcdb87857-4ztbm" podStartSLOduration=25.141553319 podStartE2EDuration="29.715690626s" podCreationTimestamp="2026-01-17 00:02:28 +0000 UTC" firstStartedPulling="2026-01-17 00:02:45.207120616 +0000 UTC m=+32.785596188" lastFinishedPulling="2026-01-17 00:02:49.781257923 +0000 UTC m=+37.359733495" observedRunningTime="2026-01-17 00:02:50.865796472 +0000 UTC m=+38.444272032" watchObservedRunningTime="2026-01-17 00:02:57.715690626 +0000 UTC m=+45.294166198" Jan 17 00:02:57.893208 systemd[1]: Created slice kubepods-besteffort-podb1e35ba8_8440_42a2_87b9_411538f9db08.slice - libcontainer container kubepods-besteffort-podb1e35ba8_8440_42a2_87b9_411538f9db08.slice. 
Jan 17 00:02:57.913089 kubelet[2386]: I0117 00:02:57.913024 2386 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/b1e35ba8-8440-42a2-87b9-411538f9db08-data\") pod \"nfs-server-provisioner-0\" (UID: \"b1e35ba8-8440-42a2-87b9-411538f9db08\") " pod="default/nfs-server-provisioner-0" Jan 17 00:02:57.913089 kubelet[2386]: I0117 00:02:57.913093 2386 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hm9kh\" (UniqueName: \"kubernetes.io/projected/b1e35ba8-8440-42a2-87b9-411538f9db08-kube-api-access-hm9kh\") pod \"nfs-server-provisioner-0\" (UID: \"b1e35ba8-8440-42a2-87b9-411538f9db08\") " pod="default/nfs-server-provisioner-0" Jan 17 00:02:57.968977 containerd[1933]: time="2026-01-17T00:02:57.968860820Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:02:57.971018 containerd[1933]: time="2026-01-17T00:02:57.970880612Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 17 00:02:57.971018 containerd[1933]: time="2026-01-17T00:02:57.970958900Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 17 00:02:57.971214 kubelet[2386]: E0117 00:02:57.971141 2386 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:02:57.971214 kubelet[2386]: E0117 00:02:57.971202 2386 kuberuntime_image.go:42] "Failed to pull 
image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:02:57.971928 kubelet[2386]: E0117 00:02:57.971370 2386 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rm7rz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorPr
ofile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-n79dp_calico-system(9c8a3b18-b170-4670-a5f2-08284d1de243): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 17 00:02:57.973714 containerd[1933]: time="2026-01-17T00:02:57.973647176Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 17 00:02:58.199449 containerd[1933]: time="2026-01-17T00:02:58.199040321Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:b1e35ba8-8440-42a2-87b9-411538f9db08,Namespace:default,Attempt:0,}" Jan 17 00:02:58.257065 containerd[1933]: time="2026-01-17T00:02:58.256998893Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:02:58.260163 containerd[1933]: time="2026-01-17T00:02:58.260054093Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 17 00:02:58.260354 containerd[1933]: time="2026-01-17T00:02:58.260234801Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 17 00:02:58.261578 kubelet[2386]: E0117 00:02:58.260760 2386 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:02:58.261578 kubelet[2386]: E0117 00:02:58.260826 2386 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:02:58.261578 kubelet[2386]: E0117 00:02:58.260993 2386 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rm7rz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:
,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-n79dp_calico-system(9c8a3b18-b170-4670-a5f2-08284d1de243): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 17 00:02:58.263568 kubelet[2386]: E0117 00:02:58.262655 2386 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-n79dp" podUID="9c8a3b18-b170-4670-a5f2-08284d1de243" Jan 17 
00:02:58.406576 systemd-networkd[1823]: cali60e51b789ff: Link UP Jan 17 00:02:58.408788 systemd-networkd[1823]: cali60e51b789ff: Gained carrier Jan 17 00:02:58.413924 (udev-worker)[3778]: Network interface NamePolicy= disabled on kernel command line. Jan 17 00:02:58.435214 containerd[1933]: 2026-01-17 00:02:58.290 [INFO][3759] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.23.180-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default b1e35ba8-8440-42a2-87b9-411538f9db08 1384 0 2026-01-17 00:02:57 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 172.31.23.180 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] [] }} ContainerID="e4496d886ecdf5f86626726103a28091963228c357c188a67bcc136058617b53" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.23.180-k8s-nfs--server--provisioner--0-" Jan 17 00:02:58.435214 containerd[1933]: 2026-01-17 00:02:58.290 [INFO][3759] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e4496d886ecdf5f86626726103a28091963228c357c188a67bcc136058617b53" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.23.180-k8s-nfs--server--provisioner--0-eth0" Jan 17 00:02:58.435214 containerd[1933]: 
2026-01-17 00:02:58.332 [INFO][3771] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e4496d886ecdf5f86626726103a28091963228c357c188a67bcc136058617b53" HandleID="k8s-pod-network.e4496d886ecdf5f86626726103a28091963228c357c188a67bcc136058617b53" Workload="172.31.23.180-k8s-nfs--server--provisioner--0-eth0" Jan 17 00:02:58.435214 containerd[1933]: 2026-01-17 00:02:58.333 [INFO][3771] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e4496d886ecdf5f86626726103a28091963228c357c188a67bcc136058617b53" HandleID="k8s-pod-network.e4496d886ecdf5f86626726103a28091963228c357c188a67bcc136058617b53" Workload="172.31.23.180-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024af70), Attrs:map[string]string{"namespace":"default", "node":"172.31.23.180", "pod":"nfs-server-provisioner-0", "timestamp":"2026-01-17 00:02:58.332769713 +0000 UTC"}, Hostname:"172.31.23.180", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:02:58.435214 containerd[1933]: 2026-01-17 00:02:58.333 [INFO][3771] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:02:58.435214 containerd[1933]: 2026-01-17 00:02:58.333 [INFO][3771] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 00:02:58.435214 containerd[1933]: 2026-01-17 00:02:58.333 [INFO][3771] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.23.180' Jan 17 00:02:58.435214 containerd[1933]: 2026-01-17 00:02:58.357 [INFO][3771] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e4496d886ecdf5f86626726103a28091963228c357c188a67bcc136058617b53" host="172.31.23.180" Jan 17 00:02:58.435214 containerd[1933]: 2026-01-17 00:02:58.364 [INFO][3771] ipam/ipam.go 394: Looking up existing affinities for host host="172.31.23.180" Jan 17 00:02:58.435214 containerd[1933]: 2026-01-17 00:02:58.370 [INFO][3771] ipam/ipam.go 511: Trying affinity for 192.168.19.0/26 host="172.31.23.180" Jan 17 00:02:58.435214 containerd[1933]: 2026-01-17 00:02:58.373 [INFO][3771] ipam/ipam.go 158: Attempting to load block cidr=192.168.19.0/26 host="172.31.23.180" Jan 17 00:02:58.435214 containerd[1933]: 2026-01-17 00:02:58.377 [INFO][3771] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.19.0/26 host="172.31.23.180" Jan 17 00:02:58.435214 containerd[1933]: 2026-01-17 00:02:58.377 [INFO][3771] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.19.0/26 handle="k8s-pod-network.e4496d886ecdf5f86626726103a28091963228c357c188a67bcc136058617b53" host="172.31.23.180" Jan 17 00:02:58.435214 containerd[1933]: 2026-01-17 00:02:58.380 [INFO][3771] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e4496d886ecdf5f86626726103a28091963228c357c188a67bcc136058617b53 Jan 17 00:02:58.435214 containerd[1933]: 2026-01-17 00:02:58.386 [INFO][3771] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.19.0/26 handle="k8s-pod-network.e4496d886ecdf5f86626726103a28091963228c357c188a67bcc136058617b53" host="172.31.23.180" Jan 17 00:02:58.435214 containerd[1933]: 2026-01-17 00:02:58.398 [INFO][3771] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.19.3/26] block=192.168.19.0/26 
handle="k8s-pod-network.e4496d886ecdf5f86626726103a28091963228c357c188a67bcc136058617b53" host="172.31.23.180" Jan 17 00:02:58.435214 containerd[1933]: 2026-01-17 00:02:58.398 [INFO][3771] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.19.3/26] handle="k8s-pod-network.e4496d886ecdf5f86626726103a28091963228c357c188a67bcc136058617b53" host="172.31.23.180" Jan 17 00:02:58.435214 containerd[1933]: 2026-01-17 00:02:58.398 [INFO][3771] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:02:58.435214 containerd[1933]: 2026-01-17 00:02:58.398 [INFO][3771] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.19.3/26] IPv6=[] ContainerID="e4496d886ecdf5f86626726103a28091963228c357c188a67bcc136058617b53" HandleID="k8s-pod-network.e4496d886ecdf5f86626726103a28091963228c357c188a67bcc136058617b53" Workload="172.31.23.180-k8s-nfs--server--provisioner--0-eth0" Jan 17 00:02:58.438941 containerd[1933]: 2026-01-17 00:02:58.401 [INFO][3759] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e4496d886ecdf5f86626726103a28091963228c357c188a67bcc136058617b53" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.23.180-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.23.180-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"b1e35ba8-8440-42a2-87b9-411538f9db08", ResourceVersion:"1384", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 2, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.23.180", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.19.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:02:58.438941 containerd[1933]: 2026-01-17 00:02:58.401 [INFO][3759] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.19.3/32] ContainerID="e4496d886ecdf5f86626726103a28091963228c357c188a67bcc136058617b53" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.23.180-k8s-nfs--server--provisioner--0-eth0" Jan 17 00:02:58.438941 containerd[1933]: 2026-01-17 00:02:58.401 [INFO][3759] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="e4496d886ecdf5f86626726103a28091963228c357c188a67bcc136058617b53" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.23.180-k8s-nfs--server--provisioner--0-eth0" Jan 17 00:02:58.438941 containerd[1933]: 2026-01-17 00:02:58.409 [INFO][3759] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e4496d886ecdf5f86626726103a28091963228c357c188a67bcc136058617b53" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.23.180-k8s-nfs--server--provisioner--0-eth0" Jan 17 00:02:58.439277 containerd[1933]: 2026-01-17 00:02:58.410 [INFO][3759] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e4496d886ecdf5f86626726103a28091963228c357c188a67bcc136058617b53" Namespace="default" Pod="nfs-server-provisioner-0" 
WorkloadEndpoint="172.31.23.180-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.23.180-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"b1e35ba8-8440-42a2-87b9-411538f9db08", ResourceVersion:"1384", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 2, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.23.180", ContainerID:"e4496d886ecdf5f86626726103a28091963228c357c188a67bcc136058617b53", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.19.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"9a:dc:db:a9:2b:e9", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:02:58.439277 containerd[1933]: 2026-01-17 00:02:58.429 [INFO][3759] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e4496d886ecdf5f86626726103a28091963228c357c188a67bcc136058617b53" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.23.180-k8s-nfs--server--provisioner--0-eth0" Jan 17 00:02:58.473401 containerd[1933]: time="2026-01-17T00:02:58.472411878Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:02:58.473401 containerd[1933]: time="2026-01-17T00:02:58.472676550Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:02:58.473401 containerd[1933]: time="2026-01-17T00:02:58.472777338Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:02:58.475717 containerd[1933]: time="2026-01-17T00:02:58.473637198Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:02:58.496975 kubelet[2386]: E0117 00:02:58.496884 2386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:02:58.516521 systemd[1]: run-containerd-runc-k8s.io-e4496d886ecdf5f86626726103a28091963228c357c188a67bcc136058617b53-runc.KyuFLs.mount: Deactivated successfully. Jan 17 00:02:58.526873 systemd[1]: Started cri-containerd-e4496d886ecdf5f86626726103a28091963228c357c188a67bcc136058617b53.scope - libcontainer container e4496d886ecdf5f86626726103a28091963228c357c188a67bcc136058617b53. 
Jan 17 00:02:58.590172 containerd[1933]: time="2026-01-17T00:02:58.590011987Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:b1e35ba8-8440-42a2-87b9-411538f9db08,Namespace:default,Attempt:0,} returns sandbox id \"e4496d886ecdf5f86626726103a28091963228c357c188a67bcc136058617b53\"" Jan 17 00:02:58.593630 containerd[1933]: time="2026-01-17T00:02:58.593390023Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Jan 17 00:02:59.498133 kubelet[2386]: E0117 00:02:59.498046 2386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:02:59.587872 systemd-networkd[1823]: cali60e51b789ff: Gained IPv6LL Jan 17 00:03:00.498862 kubelet[2386]: E0117 00:03:00.498819 2386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:03:01.110766 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3228890469.mount: Deactivated successfully. 
Jan 17 00:03:01.501295 kubelet[2386]: E0117 00:03:01.499931 2386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:03:02.036884 ntpd[1906]: Listen normally on 12 cali60e51b789ff [fe80::ecee:eeff:feee:eeee%8]:123 Jan 17 00:03:02.037361 ntpd[1906]: 17 Jan 00:03:02 ntpd[1906]: Listen normally on 12 cali60e51b789ff [fe80::ecee:eeff:feee:eeee%8]:123 Jan 17 00:03:02.500858 kubelet[2386]: E0117 00:03:02.500797 2386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:03:03.501346 kubelet[2386]: E0117 00:03:03.501273 2386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:03:04.230582 containerd[1933]: time="2026-01-17T00:03:04.230503691Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:03:04.236088 containerd[1933]: time="2026-01-17T00:03:04.236026031Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=87373623" Jan 17 00:03:04.236598 containerd[1933]: time="2026-01-17T00:03:04.236486999Z" level=info msg="ImageCreate event name:\"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:03:04.247863 containerd[1933]: time="2026-01-17T00:03:04.247744679Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:03:04.249957 containerd[1933]: time="2026-01-17T00:03:04.249882851Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id 
\"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"87371201\" in 5.656420096s" Jan 17 00:03:04.250087 containerd[1933]: time="2026-01-17T00:03:04.249956459Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\"" Jan 17 00:03:04.258243 containerd[1933]: time="2026-01-17T00:03:04.258192275Z" level=info msg="CreateContainer within sandbox \"e4496d886ecdf5f86626726103a28091963228c357c188a67bcc136058617b53\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Jan 17 00:03:04.290132 containerd[1933]: time="2026-01-17T00:03:04.290036531Z" level=info msg="CreateContainer within sandbox \"e4496d886ecdf5f86626726103a28091963228c357c188a67bcc136058617b53\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"9209f93e146ef6aaed378f04e840c2bbd332e31eae690ab3a112366a1d50b176\"" Jan 17 00:03:04.291596 containerd[1933]: time="2026-01-17T00:03:04.290978375Z" level=info msg="StartContainer for \"9209f93e146ef6aaed378f04e840c2bbd332e31eae690ab3a112366a1d50b176\"" Jan 17 00:03:04.346836 systemd[1]: Started cri-containerd-9209f93e146ef6aaed378f04e840c2bbd332e31eae690ab3a112366a1d50b176.scope - libcontainer container 9209f93e146ef6aaed378f04e840c2bbd332e31eae690ab3a112366a1d50b176. 
Jan 17 00:03:04.399093 containerd[1933]: time="2026-01-17T00:03:04.399010967Z" level=info msg="StartContainer for \"9209f93e146ef6aaed378f04e840c2bbd332e31eae690ab3a112366a1d50b176\" returns successfully" Jan 17 00:03:04.501923 kubelet[2386]: E0117 00:03:04.501726 2386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:03:05.502435 kubelet[2386]: E0117 00:03:05.502373 2386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:03:06.503410 kubelet[2386]: E0117 00:03:06.503340 2386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:03:07.504096 kubelet[2386]: E0117 00:03:07.504034 2386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:03:08.504820 kubelet[2386]: E0117 00:03:08.504754 2386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:03:09.505505 kubelet[2386]: E0117 00:03:09.505444 2386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:03:10.506059 kubelet[2386]: E0117 00:03:10.505990 2386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:03:11.507150 kubelet[2386]: E0117 00:03:11.507094 2386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:03:11.690230 kubelet[2386]: E0117 00:03:11.690085 2386 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-n79dp" podUID="9c8a3b18-b170-4670-a5f2-08284d1de243" Jan 17 00:03:11.717555 kubelet[2386]: I0117 00:03:11.717435 2386 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=9.058302144 podStartE2EDuration="14.717416576s" podCreationTimestamp="2026-01-17 00:02:57 +0000 UTC" firstStartedPulling="2026-01-17 00:02:58.592900147 +0000 UTC m=+46.171375707" lastFinishedPulling="2026-01-17 00:03:04.252014579 +0000 UTC m=+51.830490139" observedRunningTime="2026-01-17 00:03:04.89768519 +0000 UTC m=+52.476160786" watchObservedRunningTime="2026-01-17 00:03:11.717416576 +0000 UTC m=+59.295892148" Jan 17 00:03:12.507950 kubelet[2386]: E0117 00:03:12.507854 2386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:03:13.461973 kubelet[2386]: E0117 00:03:13.461905 2386 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:03:13.492859 containerd[1933]: time="2026-01-17T00:03:13.491469909Z" level=info msg="StopPodSandbox for \"3c3405eb2945bb7c85de31343d5746d077d6578dc3cb8d57d39a57f8acb862cc\"" Jan 17 00:03:13.508889 kubelet[2386]: E0117 00:03:13.508818 2386 file_linux.go:61] "Unable to read config path" err="path 
does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:03:13.635062 containerd[1933]: 2026-01-17 00:03:13.573 [WARNING][3950] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="3c3405eb2945bb7c85de31343d5746d077d6578dc3cb8d57d39a57f8acb862cc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.23.180-k8s-csi--node--driver--n79dp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9c8a3b18-b170-4670-a5f2-08284d1de243", ResourceVersion:"1479", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 2, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.23.180", ContainerID:"996ea61e9b481a9794de6c239ba69be99a4d53ec169b2a2913f6be8586a3d5d8", Pod:"csi-node-driver-n79dp", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.19.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie4668e9419e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:03:13.635062 containerd[1933]: 2026-01-17 00:03:13.574 [INFO][3950] cni-plugin/k8s.go 640: Cleaning up 
netns ContainerID="3c3405eb2945bb7c85de31343d5746d077d6578dc3cb8d57d39a57f8acb862cc" Jan 17 00:03:13.635062 containerd[1933]: 2026-01-17 00:03:13.574 [INFO][3950] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3c3405eb2945bb7c85de31343d5746d077d6578dc3cb8d57d39a57f8acb862cc" iface="eth0" netns="" Jan 17 00:03:13.635062 containerd[1933]: 2026-01-17 00:03:13.574 [INFO][3950] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3c3405eb2945bb7c85de31343d5746d077d6578dc3cb8d57d39a57f8acb862cc" Jan 17 00:03:13.635062 containerd[1933]: 2026-01-17 00:03:13.574 [INFO][3950] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3c3405eb2945bb7c85de31343d5746d077d6578dc3cb8d57d39a57f8acb862cc" Jan 17 00:03:13.635062 containerd[1933]: 2026-01-17 00:03:13.610 [INFO][3957] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="3c3405eb2945bb7c85de31343d5746d077d6578dc3cb8d57d39a57f8acb862cc" HandleID="k8s-pod-network.3c3405eb2945bb7c85de31343d5746d077d6578dc3cb8d57d39a57f8acb862cc" Workload="172.31.23.180-k8s-csi--node--driver--n79dp-eth0" Jan 17 00:03:13.635062 containerd[1933]: 2026-01-17 00:03:13.611 [INFO][3957] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:03:13.635062 containerd[1933]: 2026-01-17 00:03:13.611 [INFO][3957] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:03:13.635062 containerd[1933]: 2026-01-17 00:03:13.627 [WARNING][3957] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3c3405eb2945bb7c85de31343d5746d077d6578dc3cb8d57d39a57f8acb862cc" HandleID="k8s-pod-network.3c3405eb2945bb7c85de31343d5746d077d6578dc3cb8d57d39a57f8acb862cc" Workload="172.31.23.180-k8s-csi--node--driver--n79dp-eth0" Jan 17 00:03:13.635062 containerd[1933]: 2026-01-17 00:03:13.627 [INFO][3957] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="3c3405eb2945bb7c85de31343d5746d077d6578dc3cb8d57d39a57f8acb862cc" HandleID="k8s-pod-network.3c3405eb2945bb7c85de31343d5746d077d6578dc3cb8d57d39a57f8acb862cc" Workload="172.31.23.180-k8s-csi--node--driver--n79dp-eth0" Jan 17 00:03:13.635062 containerd[1933]: 2026-01-17 00:03:13.629 [INFO][3957] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:03:13.635062 containerd[1933]: 2026-01-17 00:03:13.632 [INFO][3950] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3c3405eb2945bb7c85de31343d5746d077d6578dc3cb8d57d39a57f8acb862cc" Jan 17 00:03:13.635062 containerd[1933]: time="2026-01-17T00:03:13.634944069Z" level=info msg="TearDown network for sandbox \"3c3405eb2945bb7c85de31343d5746d077d6578dc3cb8d57d39a57f8acb862cc\" successfully" Jan 17 00:03:13.635062 containerd[1933]: time="2026-01-17T00:03:13.634981221Z" level=info msg="StopPodSandbox for \"3c3405eb2945bb7c85de31343d5746d077d6578dc3cb8d57d39a57f8acb862cc\" returns successfully" Jan 17 00:03:13.637554 containerd[1933]: time="2026-01-17T00:03:13.636922509Z" level=info msg="RemovePodSandbox for \"3c3405eb2945bb7c85de31343d5746d077d6578dc3cb8d57d39a57f8acb862cc\"" Jan 17 00:03:13.637554 containerd[1933]: time="2026-01-17T00:03:13.636974265Z" level=info msg="Forcibly stopping sandbox \"3c3405eb2945bb7c85de31343d5746d077d6578dc3cb8d57d39a57f8acb862cc\"" Jan 17 00:03:13.769701 containerd[1933]: 2026-01-17 00:03:13.693 [WARNING][3971] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3c3405eb2945bb7c85de31343d5746d077d6578dc3cb8d57d39a57f8acb862cc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.23.180-k8s-csi--node--driver--n79dp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9c8a3b18-b170-4670-a5f2-08284d1de243", ResourceVersion:"1479", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 2, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.23.180", ContainerID:"996ea61e9b481a9794de6c239ba69be99a4d53ec169b2a2913f6be8586a3d5d8", Pod:"csi-node-driver-n79dp", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.19.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie4668e9419e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:03:13.769701 containerd[1933]: 2026-01-17 00:03:13.694 [INFO][3971] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3c3405eb2945bb7c85de31343d5746d077d6578dc3cb8d57d39a57f8acb862cc" Jan 17 00:03:13.769701 containerd[1933]: 2026-01-17 00:03:13.694 [INFO][3971] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="3c3405eb2945bb7c85de31343d5746d077d6578dc3cb8d57d39a57f8acb862cc" iface="eth0" netns="" Jan 17 00:03:13.769701 containerd[1933]: 2026-01-17 00:03:13.694 [INFO][3971] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3c3405eb2945bb7c85de31343d5746d077d6578dc3cb8d57d39a57f8acb862cc" Jan 17 00:03:13.769701 containerd[1933]: 2026-01-17 00:03:13.694 [INFO][3971] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3c3405eb2945bb7c85de31343d5746d077d6578dc3cb8d57d39a57f8acb862cc" Jan 17 00:03:13.769701 containerd[1933]: 2026-01-17 00:03:13.730 [INFO][3979] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="3c3405eb2945bb7c85de31343d5746d077d6578dc3cb8d57d39a57f8acb862cc" HandleID="k8s-pod-network.3c3405eb2945bb7c85de31343d5746d077d6578dc3cb8d57d39a57f8acb862cc" Workload="172.31.23.180-k8s-csi--node--driver--n79dp-eth0" Jan 17 00:03:13.769701 containerd[1933]: 2026-01-17 00:03:13.730 [INFO][3979] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:03:13.769701 containerd[1933]: 2026-01-17 00:03:13.730 [INFO][3979] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:03:13.769701 containerd[1933]: 2026-01-17 00:03:13.756 [WARNING][3979] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3c3405eb2945bb7c85de31343d5746d077d6578dc3cb8d57d39a57f8acb862cc" HandleID="k8s-pod-network.3c3405eb2945bb7c85de31343d5746d077d6578dc3cb8d57d39a57f8acb862cc" Workload="172.31.23.180-k8s-csi--node--driver--n79dp-eth0" Jan 17 00:03:13.769701 containerd[1933]: 2026-01-17 00:03:13.757 [INFO][3979] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="3c3405eb2945bb7c85de31343d5746d077d6578dc3cb8d57d39a57f8acb862cc" HandleID="k8s-pod-network.3c3405eb2945bb7c85de31343d5746d077d6578dc3cb8d57d39a57f8acb862cc" Workload="172.31.23.180-k8s-csi--node--driver--n79dp-eth0" Jan 17 00:03:13.769701 containerd[1933]: 2026-01-17 00:03:13.759 [INFO][3979] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:03:13.769701 containerd[1933]: 2026-01-17 00:03:13.764 [INFO][3971] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3c3405eb2945bb7c85de31343d5746d077d6578dc3cb8d57d39a57f8acb862cc" Jan 17 00:03:13.769701 containerd[1933]: time="2026-01-17T00:03:13.769648462Z" level=info msg="TearDown network for sandbox \"3c3405eb2945bb7c85de31343d5746d077d6578dc3cb8d57d39a57f8acb862cc\" successfully" Jan 17 00:03:13.778605 containerd[1933]: time="2026-01-17T00:03:13.778113190Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3c3405eb2945bb7c85de31343d5746d077d6578dc3cb8d57d39a57f8acb862cc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 17 00:03:13.778605 containerd[1933]: time="2026-01-17T00:03:13.778215454Z" level=info msg="RemovePodSandbox \"3c3405eb2945bb7c85de31343d5746d077d6578dc3cb8d57d39a57f8acb862cc\" returns successfully" Jan 17 00:03:13.779478 containerd[1933]: time="2026-01-17T00:03:13.779034010Z" level=info msg="StopPodSandbox for \"a88be691a081553393ebfc54c446d1e2760eb70b173ab76beb59ca7763a9c171\"" Jan 17 00:03:13.895591 containerd[1933]: 2026-01-17 00:03:13.839 [WARNING][3995] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="a88be691a081553393ebfc54c446d1e2760eb70b173ab76beb59ca7763a9c171" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.23.180-k8s-nginx--deployment--7fcdb87857--4ztbm-eth0", GenerateName:"nginx-deployment-7fcdb87857-", Namespace:"default", SelfLink:"", UID:"a293579e-6316-406d-98a2-565c24e14f2c", ResourceVersion:"1329", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 2, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"7fcdb87857", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.23.180", ContainerID:"57a6d9fccfa335a40c5adeea350a196d02d60985178e6d808744f12e2215c9aa", Pod:"nginx-deployment-7fcdb87857-4ztbm", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.19.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali7e056047c84", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), 
AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:03:13.895591 containerd[1933]: 2026-01-17 00:03:13.839 [INFO][3995] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a88be691a081553393ebfc54c446d1e2760eb70b173ab76beb59ca7763a9c171" Jan 17 00:03:13.895591 containerd[1933]: 2026-01-17 00:03:13.839 [INFO][3995] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a88be691a081553393ebfc54c446d1e2760eb70b173ab76beb59ca7763a9c171" iface="eth0" netns="" Jan 17 00:03:13.895591 containerd[1933]: 2026-01-17 00:03:13.839 [INFO][3995] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a88be691a081553393ebfc54c446d1e2760eb70b173ab76beb59ca7763a9c171" Jan 17 00:03:13.895591 containerd[1933]: 2026-01-17 00:03:13.839 [INFO][3995] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a88be691a081553393ebfc54c446d1e2760eb70b173ab76beb59ca7763a9c171" Jan 17 00:03:13.895591 containerd[1933]: 2026-01-17 00:03:13.875 [INFO][4003] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a88be691a081553393ebfc54c446d1e2760eb70b173ab76beb59ca7763a9c171" HandleID="k8s-pod-network.a88be691a081553393ebfc54c446d1e2760eb70b173ab76beb59ca7763a9c171" Workload="172.31.23.180-k8s-nginx--deployment--7fcdb87857--4ztbm-eth0" Jan 17 00:03:13.895591 containerd[1933]: 2026-01-17 00:03:13.875 [INFO][4003] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:03:13.895591 containerd[1933]: 2026-01-17 00:03:13.875 [INFO][4003] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:03:13.895591 containerd[1933]: 2026-01-17 00:03:13.888 [WARNING][4003] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a88be691a081553393ebfc54c446d1e2760eb70b173ab76beb59ca7763a9c171" HandleID="k8s-pod-network.a88be691a081553393ebfc54c446d1e2760eb70b173ab76beb59ca7763a9c171" Workload="172.31.23.180-k8s-nginx--deployment--7fcdb87857--4ztbm-eth0" Jan 17 00:03:13.895591 containerd[1933]: 2026-01-17 00:03:13.888 [INFO][4003] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a88be691a081553393ebfc54c446d1e2760eb70b173ab76beb59ca7763a9c171" HandleID="k8s-pod-network.a88be691a081553393ebfc54c446d1e2760eb70b173ab76beb59ca7763a9c171" Workload="172.31.23.180-k8s-nginx--deployment--7fcdb87857--4ztbm-eth0" Jan 17 00:03:13.895591 containerd[1933]: 2026-01-17 00:03:13.890 [INFO][4003] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:03:13.895591 containerd[1933]: 2026-01-17 00:03:13.892 [INFO][3995] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a88be691a081553393ebfc54c446d1e2760eb70b173ab76beb59ca7763a9c171" Jan 17 00:03:13.895591 containerd[1933]: time="2026-01-17T00:03:13.895495751Z" level=info msg="TearDown network for sandbox \"a88be691a081553393ebfc54c446d1e2760eb70b173ab76beb59ca7763a9c171\" successfully" Jan 17 00:03:13.895591 containerd[1933]: time="2026-01-17T00:03:13.895564391Z" level=info msg="StopPodSandbox for \"a88be691a081553393ebfc54c446d1e2760eb70b173ab76beb59ca7763a9c171\" returns successfully" Jan 17 00:03:13.896500 containerd[1933]: time="2026-01-17T00:03:13.896422223Z" level=info msg="RemovePodSandbox for \"a88be691a081553393ebfc54c446d1e2760eb70b173ab76beb59ca7763a9c171\"" Jan 17 00:03:13.896500 containerd[1933]: time="2026-01-17T00:03:13.896467655Z" level=info msg="Forcibly stopping sandbox \"a88be691a081553393ebfc54c446d1e2760eb70b173ab76beb59ca7763a9c171\"" Jan 17 00:03:14.016081 containerd[1933]: 2026-01-17 00:03:13.956 [WARNING][4017] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a88be691a081553393ebfc54c446d1e2760eb70b173ab76beb59ca7763a9c171" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.23.180-k8s-nginx--deployment--7fcdb87857--4ztbm-eth0", GenerateName:"nginx-deployment-7fcdb87857-", Namespace:"default", SelfLink:"", UID:"a293579e-6316-406d-98a2-565c24e14f2c", ResourceVersion:"1329", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 2, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"7fcdb87857", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.23.180", ContainerID:"57a6d9fccfa335a40c5adeea350a196d02d60985178e6d808744f12e2215c9aa", Pod:"nginx-deployment-7fcdb87857-4ztbm", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.19.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali7e056047c84", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:03:14.016081 containerd[1933]: 2026-01-17 00:03:13.957 [INFO][4017] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a88be691a081553393ebfc54c446d1e2760eb70b173ab76beb59ca7763a9c171" Jan 17 00:03:14.016081 containerd[1933]: 2026-01-17 00:03:13.957 [INFO][4017] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="a88be691a081553393ebfc54c446d1e2760eb70b173ab76beb59ca7763a9c171" iface="eth0" netns="" Jan 17 00:03:14.016081 containerd[1933]: 2026-01-17 00:03:13.957 [INFO][4017] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a88be691a081553393ebfc54c446d1e2760eb70b173ab76beb59ca7763a9c171" Jan 17 00:03:14.016081 containerd[1933]: 2026-01-17 00:03:13.957 [INFO][4017] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a88be691a081553393ebfc54c446d1e2760eb70b173ab76beb59ca7763a9c171" Jan 17 00:03:14.016081 containerd[1933]: 2026-01-17 00:03:13.996 [INFO][4024] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a88be691a081553393ebfc54c446d1e2760eb70b173ab76beb59ca7763a9c171" HandleID="k8s-pod-network.a88be691a081553393ebfc54c446d1e2760eb70b173ab76beb59ca7763a9c171" Workload="172.31.23.180-k8s-nginx--deployment--7fcdb87857--4ztbm-eth0" Jan 17 00:03:14.016081 containerd[1933]: 2026-01-17 00:03:13.996 [INFO][4024] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:03:14.016081 containerd[1933]: 2026-01-17 00:03:13.996 [INFO][4024] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:03:14.016081 containerd[1933]: 2026-01-17 00:03:14.009 [WARNING][4024] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a88be691a081553393ebfc54c446d1e2760eb70b173ab76beb59ca7763a9c171" HandleID="k8s-pod-network.a88be691a081553393ebfc54c446d1e2760eb70b173ab76beb59ca7763a9c171" Workload="172.31.23.180-k8s-nginx--deployment--7fcdb87857--4ztbm-eth0" Jan 17 00:03:14.016081 containerd[1933]: 2026-01-17 00:03:14.009 [INFO][4024] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a88be691a081553393ebfc54c446d1e2760eb70b173ab76beb59ca7763a9c171" HandleID="k8s-pod-network.a88be691a081553393ebfc54c446d1e2760eb70b173ab76beb59ca7763a9c171" Workload="172.31.23.180-k8s-nginx--deployment--7fcdb87857--4ztbm-eth0" Jan 17 00:03:14.016081 containerd[1933]: 2026-01-17 00:03:14.011 [INFO][4024] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:03:14.016081 containerd[1933]: 2026-01-17 00:03:14.013 [INFO][4017] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a88be691a081553393ebfc54c446d1e2760eb70b173ab76beb59ca7763a9c171" Jan 17 00:03:14.016947 containerd[1933]: time="2026-01-17T00:03:14.016153207Z" level=info msg="TearDown network for sandbox \"a88be691a081553393ebfc54c446d1e2760eb70b173ab76beb59ca7763a9c171\" successfully" Jan 17 00:03:14.021991 containerd[1933]: time="2026-01-17T00:03:14.021837151Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a88be691a081553393ebfc54c446d1e2760eb70b173ab76beb59ca7763a9c171\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 17 00:03:14.021991 containerd[1933]: time="2026-01-17T00:03:14.021925663Z" level=info msg="RemovePodSandbox \"a88be691a081553393ebfc54c446d1e2760eb70b173ab76beb59ca7763a9c171\" returns successfully" Jan 17 00:03:14.509865 kubelet[2386]: E0117 00:03:14.509811 2386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:03:15.510882 kubelet[2386]: E0117 00:03:15.510803 2386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:03:16.511643 kubelet[2386]: E0117 00:03:16.511588 2386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:03:17.512122 kubelet[2386]: E0117 00:03:17.512061 2386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:03:18.512462 kubelet[2386]: E0117 00:03:18.512392 2386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:03:19.512933 kubelet[2386]: E0117 00:03:19.512875 2386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:03:20.513900 kubelet[2386]: E0117 00:03:20.513841 2386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:03:21.514445 kubelet[2386]: E0117 00:03:21.514382 2386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:03:22.514633 kubelet[2386]: E0117 00:03:22.514570 2386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:03:23.515037 kubelet[2386]: E0117 00:03:23.514972 2386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Jan 17 00:03:23.694224 containerd[1933]: time="2026-01-17T00:03:23.694109659Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 17 00:03:24.004733 containerd[1933]: time="2026-01-17T00:03:24.004655477Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:03:24.006922 containerd[1933]: time="2026-01-17T00:03:24.006845477Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 17 00:03:24.007056 containerd[1933]: time="2026-01-17T00:03:24.006975137Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 17 00:03:24.008107 kubelet[2386]: E0117 00:03:24.007254 2386 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:03:24.008107 kubelet[2386]: E0117 00:03:24.007314 2386 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:03:24.008107 kubelet[2386]: E0117 00:03:24.007492 2386 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rm7rz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-n79dp_calico-system(9c8a3b18-b170-4670-a5f2-08284d1de243): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 17 00:03:24.010210 containerd[1933]: time="2026-01-17T00:03:24.009883829Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 17 00:03:24.448516 containerd[1933]: time="2026-01-17T00:03:24.448440535Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:03:24.450649 containerd[1933]: time="2026-01-17T00:03:24.450583771Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 17 00:03:24.450727 containerd[1933]: time="2026-01-17T00:03:24.450707035Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 17 00:03:24.450959 kubelet[2386]: E0117 00:03:24.450899 2386 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:03:24.451074 kubelet[2386]: E0117 00:03:24.450967 2386 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:03:24.451220 kubelet[2386]: 
E0117 00:03:24.451146 2386 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rm7rz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-n79dp_calico-system(9c8a3b18-b170-4670-a5f2-08284d1de243): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 17 00:03:24.452819 kubelet[2386]: E0117 00:03:24.452747 2386 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-n79dp" podUID="9c8a3b18-b170-4670-a5f2-08284d1de243" Jan 17 00:03:24.515942 kubelet[2386]: E0117 00:03:24.515885 2386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:03:24.881923 systemd[1]: Created slice kubepods-besteffort-podf5890ac1_55af_461d_b2f8_2129cc563ed6.slice - libcontainer container kubepods-besteffort-podf5890ac1_55af_461d_b2f8_2129cc563ed6.slice. 
Jan 17 00:03:24.978586 kubelet[2386]: I0117 00:03:24.978484 2386 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-98814f84-9fef-4223-927e-68483e76e656\" (UniqueName: \"kubernetes.io/nfs/f5890ac1-55af-461d-b2f8-2129cc563ed6-pvc-98814f84-9fef-4223-927e-68483e76e656\") pod \"test-pod-1\" (UID: \"f5890ac1-55af-461d-b2f8-2129cc563ed6\") " pod="default/test-pod-1" Jan 17 00:03:24.978586 kubelet[2386]: I0117 00:03:24.978577 2386 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dhws\" (UniqueName: \"kubernetes.io/projected/f5890ac1-55af-461d-b2f8-2129cc563ed6-kube-api-access-5dhws\") pod \"test-pod-1\" (UID: \"f5890ac1-55af-461d-b2f8-2129cc563ed6\") " pod="default/test-pod-1" Jan 17 00:03:25.115596 kernel: FS-Cache: Loaded Jan 17 00:03:25.161813 kernel: RPC: Registered named UNIX socket transport module. Jan 17 00:03:25.161912 kernel: RPC: Registered udp transport module. Jan 17 00:03:25.161990 kernel: RPC: Registered tcp transport module. Jan 17 00:03:25.164153 kernel: RPC: Registered tcp-with-tls transport module. Jan 17 00:03:25.164249 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. 
Jan 17 00:03:25.489226 kernel: NFS: Registering the id_resolver key type Jan 17 00:03:25.489353 kernel: Key type id_resolver registered Jan 17 00:03:25.489393 kernel: Key type id_legacy registered Jan 17 00:03:25.516495 kubelet[2386]: E0117 00:03:25.516433 2386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:03:25.526015 nfsidmap[4071]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal' Jan 17 00:03:25.532505 nfsidmap[4073]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal' Jan 17 00:03:25.787700 containerd[1933]: time="2026-01-17T00:03:25.787351990Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:f5890ac1-55af-461d-b2f8-2129cc563ed6,Namespace:default,Attempt:0,}" Jan 17 00:03:25.983330 (udev-worker)[4057]: Network interface NamePolicy= disabled on kernel command line. 
Jan 17 00:03:25.988406 systemd-networkd[1823]: cali5ec59c6bf6e: Link UP Jan 17 00:03:25.991516 systemd-networkd[1823]: cali5ec59c6bf6e: Gained carrier Jan 17 00:03:26.016528 containerd[1933]: 2026-01-17 00:03:25.873 [INFO][4075] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.23.180-k8s-test--pod--1-eth0 default f5890ac1-55af-461d-b2f8-2129cc563ed6 1547 0 2026-01-17 00:02:59 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 172.31.23.180 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] [] }} ContainerID="28d3b9104ef3989fc55f900089383328ee22143fa7ddf31c23de187365122336" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.23.180-k8s-test--pod--1-" Jan 17 00:03:26.016528 containerd[1933]: 2026-01-17 00:03:25.873 [INFO][4075] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="28d3b9104ef3989fc55f900089383328ee22143fa7ddf31c23de187365122336" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.23.180-k8s-test--pod--1-eth0" Jan 17 00:03:26.016528 containerd[1933]: 2026-01-17 00:03:25.914 [INFO][4086] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="28d3b9104ef3989fc55f900089383328ee22143fa7ddf31c23de187365122336" HandleID="k8s-pod-network.28d3b9104ef3989fc55f900089383328ee22143fa7ddf31c23de187365122336" Workload="172.31.23.180-k8s-test--pod--1-eth0" Jan 17 00:03:26.016528 containerd[1933]: 2026-01-17 00:03:25.914 [INFO][4086] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="28d3b9104ef3989fc55f900089383328ee22143fa7ddf31c23de187365122336" HandleID="k8s-pod-network.28d3b9104ef3989fc55f900089383328ee22143fa7ddf31c23de187365122336" Workload="172.31.23.180-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b180), Attrs:map[string]string{"namespace":"default", 
"node":"172.31.23.180", "pod":"test-pod-1", "timestamp":"2026-01-17 00:03:25.914018254 +0000 UTC"}, Hostname:"172.31.23.180", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:03:26.016528 containerd[1933]: 2026-01-17 00:03:25.914 [INFO][4086] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:03:26.016528 containerd[1933]: 2026-01-17 00:03:25.914 [INFO][4086] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:03:26.016528 containerd[1933]: 2026-01-17 00:03:25.914 [INFO][4086] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.23.180' Jan 17 00:03:26.016528 containerd[1933]: 2026-01-17 00:03:25.928 [INFO][4086] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.28d3b9104ef3989fc55f900089383328ee22143fa7ddf31c23de187365122336" host="172.31.23.180" Jan 17 00:03:26.016528 containerd[1933]: 2026-01-17 00:03:25.935 [INFO][4086] ipam/ipam.go 394: Looking up existing affinities for host host="172.31.23.180" Jan 17 00:03:26.016528 containerd[1933]: 2026-01-17 00:03:25.941 [INFO][4086] ipam/ipam.go 511: Trying affinity for 192.168.19.0/26 host="172.31.23.180" Jan 17 00:03:26.016528 containerd[1933]: 2026-01-17 00:03:25.943 [INFO][4086] ipam/ipam.go 158: Attempting to load block cidr=192.168.19.0/26 host="172.31.23.180" Jan 17 00:03:26.016528 containerd[1933]: 2026-01-17 00:03:25.949 [INFO][4086] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.19.0/26 host="172.31.23.180" Jan 17 00:03:26.016528 containerd[1933]: 2026-01-17 00:03:25.949 [INFO][4086] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.19.0/26 handle="k8s-pod-network.28d3b9104ef3989fc55f900089383328ee22143fa7ddf31c23de187365122336" host="172.31.23.180" Jan 17 00:03:26.016528 containerd[1933]: 2026-01-17 00:03:25.952 
[INFO][4086] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.28d3b9104ef3989fc55f900089383328ee22143fa7ddf31c23de187365122336 Jan 17 00:03:26.016528 containerd[1933]: 2026-01-17 00:03:25.959 [INFO][4086] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.19.0/26 handle="k8s-pod-network.28d3b9104ef3989fc55f900089383328ee22143fa7ddf31c23de187365122336" host="172.31.23.180" Jan 17 00:03:26.016528 containerd[1933]: 2026-01-17 00:03:25.972 [INFO][4086] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.19.4/26] block=192.168.19.0/26 handle="k8s-pod-network.28d3b9104ef3989fc55f900089383328ee22143fa7ddf31c23de187365122336" host="172.31.23.180" Jan 17 00:03:26.016528 containerd[1933]: 2026-01-17 00:03:25.972 [INFO][4086] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.19.4/26] handle="k8s-pod-network.28d3b9104ef3989fc55f900089383328ee22143fa7ddf31c23de187365122336" host="172.31.23.180" Jan 17 00:03:26.016528 containerd[1933]: 2026-01-17 00:03:25.972 [INFO][4086] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 17 00:03:26.016528 containerd[1933]: 2026-01-17 00:03:25.972 [INFO][4086] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.19.4/26] IPv6=[] ContainerID="28d3b9104ef3989fc55f900089383328ee22143fa7ddf31c23de187365122336" HandleID="k8s-pod-network.28d3b9104ef3989fc55f900089383328ee22143fa7ddf31c23de187365122336" Workload="172.31.23.180-k8s-test--pod--1-eth0" Jan 17 00:03:26.016528 containerd[1933]: 2026-01-17 00:03:25.975 [INFO][4075] cni-plugin/k8s.go 418: Populated endpoint ContainerID="28d3b9104ef3989fc55f900089383328ee22143fa7ddf31c23de187365122336" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.23.180-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.23.180-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"f5890ac1-55af-461d-b2f8-2129cc563ed6", ResourceVersion:"1547", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 2, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.23.180", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.19.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:03:26.018431 containerd[1933]: 2026-01-17 00:03:25.975 [INFO][4075] 
cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.19.4/32] ContainerID="28d3b9104ef3989fc55f900089383328ee22143fa7ddf31c23de187365122336" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.23.180-k8s-test--pod--1-eth0" Jan 17 00:03:26.018431 containerd[1933]: 2026-01-17 00:03:25.976 [INFO][4075] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="28d3b9104ef3989fc55f900089383328ee22143fa7ddf31c23de187365122336" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.23.180-k8s-test--pod--1-eth0" Jan 17 00:03:26.018431 containerd[1933]: 2026-01-17 00:03:25.993 [INFO][4075] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="28d3b9104ef3989fc55f900089383328ee22143fa7ddf31c23de187365122336" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.23.180-k8s-test--pod--1-eth0" Jan 17 00:03:26.018431 containerd[1933]: 2026-01-17 00:03:25.994 [INFO][4075] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="28d3b9104ef3989fc55f900089383328ee22143fa7ddf31c23de187365122336" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.23.180-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.23.180-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"f5890ac1-55af-461d-b2f8-2129cc563ed6", ResourceVersion:"1547", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 2, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, 
Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.23.180", ContainerID:"28d3b9104ef3989fc55f900089383328ee22143fa7ddf31c23de187365122336", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.19.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"8e:4a:b4:fd:93:31", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:03:26.018431 containerd[1933]: 2026-01-17 00:03:26.013 [INFO][4075] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="28d3b9104ef3989fc55f900089383328ee22143fa7ddf31c23de187365122336" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.23.180-k8s-test--pod--1-eth0" Jan 17 00:03:26.058570 containerd[1933]: time="2026-01-17T00:03:26.057371683Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:03:26.058570 containerd[1933]: time="2026-01-17T00:03:26.057673543Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:03:26.058570 containerd[1933]: time="2026-01-17T00:03:26.057805351Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:03:26.058570 containerd[1933]: time="2026-01-17T00:03:26.058160731Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:03:26.090066 systemd[1]: Started cri-containerd-28d3b9104ef3989fc55f900089383328ee22143fa7ddf31c23de187365122336.scope - libcontainer container 28d3b9104ef3989fc55f900089383328ee22143fa7ddf31c23de187365122336. 
Jan 17 00:03:26.159482 containerd[1933]: time="2026-01-17T00:03:26.159420164Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:f5890ac1-55af-461d-b2f8-2129cc563ed6,Namespace:default,Attempt:0,} returns sandbox id \"28d3b9104ef3989fc55f900089383328ee22143fa7ddf31c23de187365122336\"" Jan 17 00:03:26.162518 containerd[1933]: time="2026-01-17T00:03:26.162441872Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 17 00:03:26.489849 containerd[1933]: time="2026-01-17T00:03:26.488265417Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:03:26.490768 containerd[1933]: time="2026-01-17T00:03:26.490710681Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Jan 17 00:03:26.496514 containerd[1933]: time="2026-01-17T00:03:26.496460733Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:d8ce8e982176f4e6830314cee19497d3297547f34d69b16a7d7e767c19c79049\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:892d1d54ab079b8cffa2317ccb45829886a0c3c3edbdf92bb286904b09797767\", size \"62401271\" in 333.934969ms" Jan 17 00:03:26.496717 containerd[1933]: time="2026-01-17T00:03:26.496687005Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:d8ce8e982176f4e6830314cee19497d3297547f34d69b16a7d7e767c19c79049\"" Jan 17 00:03:26.505474 containerd[1933]: time="2026-01-17T00:03:26.505416621Z" level=info msg="CreateContainer within sandbox \"28d3b9104ef3989fc55f900089383328ee22143fa7ddf31c23de187365122336\" for container &ContainerMetadata{Name:test,Attempt:0,}" Jan 17 00:03:26.517488 kubelet[2386]: E0117 00:03:26.517446 2386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:03:26.534228 containerd[1933]: 
time="2026-01-17T00:03:26.534158373Z" level=info msg="CreateContainer within sandbox \"28d3b9104ef3989fc55f900089383328ee22143fa7ddf31c23de187365122336\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"73e2dda88d6d398e9c9bb02897623437dc2efb33ed1d475be932739e68250696\"" Jan 17 00:03:26.535768 containerd[1933]: time="2026-01-17T00:03:26.535620045Z" level=info msg="StartContainer for \"73e2dda88d6d398e9c9bb02897623437dc2efb33ed1d475be932739e68250696\"" Jan 17 00:03:26.598869 systemd[1]: Started cri-containerd-73e2dda88d6d398e9c9bb02897623437dc2efb33ed1d475be932739e68250696.scope - libcontainer container 73e2dda88d6d398e9c9bb02897623437dc2efb33ed1d475be932739e68250696. Jan 17 00:03:26.648299 containerd[1933]: time="2026-01-17T00:03:26.648227494Z" level=info msg="StartContainer for \"73e2dda88d6d398e9c9bb02897623437dc2efb33ed1d475be932739e68250696\" returns successfully" Jan 17 00:03:27.115701 systemd[1]: run-containerd-runc-k8s.io-73e2dda88d6d398e9c9bb02897623437dc2efb33ed1d475be932739e68250696-runc.mMdoIX.mount: Deactivated successfully. 
Jan 17 00:03:27.518607 kubelet[2386]: E0117 00:03:27.518398 2386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:03:27.681738 systemd-networkd[1823]: cali5ec59c6bf6e: Gained IPv6LL Jan 17 00:03:28.519205 kubelet[2386]: E0117 00:03:28.519144 2386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:03:29.520180 kubelet[2386]: E0117 00:03:29.520109 2386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:03:30.036870 ntpd[1906]: Listen normally on 13 cali5ec59c6bf6e [fe80::ecee:eeff:feee:eeee%9]:123 Jan 17 00:03:30.037751 ntpd[1906]: 17 Jan 00:03:30 ntpd[1906]: Listen normally on 13 cali5ec59c6bf6e [fe80::ecee:eeff:feee:eeee%9]:123 Jan 17 00:03:30.520723 kubelet[2386]: E0117 00:03:30.520662 2386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:03:31.521180 kubelet[2386]: E0117 00:03:31.521121 2386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:03:32.522257 kubelet[2386]: E0117 00:03:32.522189 2386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:03:33.462061 kubelet[2386]: E0117 00:03:33.462002 2386 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:03:33.522971 kubelet[2386]: E0117 00:03:33.522905 2386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:03:34.523112 kubelet[2386]: E0117 00:03:34.523053 2386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:03:35.524048 kubelet[2386]: E0117 
00:03:35.523984 2386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:03:36.524178 kubelet[2386]: E0117 00:03:36.524118 2386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:03:36.690274 kubelet[2386]: E0117 00:03:36.690174 2386 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-n79dp" podUID="9c8a3b18-b170-4670-a5f2-08284d1de243" Jan 17 00:03:37.525131 kubelet[2386]: E0117 00:03:37.525056 2386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:03:38.526098 kubelet[2386]: E0117 00:03:38.526040 2386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:03:39.526870 kubelet[2386]: E0117 00:03:39.526801 2386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:03:40.527207 kubelet[2386]: E0117 00:03:40.527141 2386 
file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:03:41.528132 kubelet[2386]: E0117 00:03:41.528057 2386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:03:42.528559 kubelet[2386]: E0117 00:03:42.528480 2386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:03:43.529410 kubelet[2386]: E0117 00:03:43.529346 2386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:03:44.530315 kubelet[2386]: E0117 00:03:44.530242 2386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:03:45.530717 kubelet[2386]: E0117 00:03:45.530646 2386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:03:45.566245 kubelet[2386]: E0117 00:03:45.565756 2386 controller.go:195] "Failed to update lease" err="Put \"https://172.31.25.177:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.23.180?timeout=10s\": context deadline exceeded" Jan 17 00:03:46.530832 kubelet[2386]: E0117 00:03:46.530765 2386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:03:47.531298 kubelet[2386]: E0117 00:03:47.531239 2386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:03:48.531742 kubelet[2386]: E0117 00:03:48.531675 2386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:03:49.532631 kubelet[2386]: E0117 00:03:49.532563 2386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Jan 17 00:03:50.533266 kubelet[2386]: E0117 00:03:50.533200 2386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:03:51.533445 kubelet[2386]: E0117 00:03:51.533370 2386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:03:51.690717 kubelet[2386]: E0117 00:03:51.690410 2386 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-n79dp" podUID="9c8a3b18-b170-4670-a5f2-08284d1de243" Jan 17 00:03:52.533597 kubelet[2386]: E0117 00:03:52.533512 2386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:03:53.462563 kubelet[2386]: E0117 00:03:53.462476 2386 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:03:53.534678 kubelet[2386]: E0117 00:03:53.534632 2386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Jan 17 00:03:54.535157 kubelet[2386]: E0117 00:03:54.535095 2386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:03:55.535315 kubelet[2386]: E0117 00:03:55.535255 2386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:03:55.566799 kubelet[2386]: E0117 00:03:55.566572 2386 controller.go:195] "Failed to update lease" err="Put \"https://172.31.25.177:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.23.180?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 17 00:03:56.535687 kubelet[2386]: E0117 00:03:56.535623 2386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:03:57.536854 kubelet[2386]: E0117 00:03:57.536787 2386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:03:58.537008 kubelet[2386]: E0117 00:03:58.536945 2386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:03:59.537646 kubelet[2386]: E0117 00:03:59.537587 2386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:04:00.538608 kubelet[2386]: E0117 00:04:00.538520 2386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:04:01.538766 kubelet[2386]: E0117 00:04:01.538707 2386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:04:02.539642 kubelet[2386]: E0117 00:04:02.539584 2386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:04:03.540616 
kubelet[2386]: E0117 00:04:03.540563 2386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:04:04.541564 kubelet[2386]: E0117 00:04:04.541499 2386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:04:05.541915 kubelet[2386]: E0117 00:04:05.541849 2386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 00:04:05.567213 kubelet[2386]: E0117 00:04:05.567088 2386 controller.go:195] "Failed to update lease" err="Put \"https://172.31.25.177:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.23.180?timeout=10s\": context deadline exceeded" Jan 17 00:04:05.689838 containerd[1933]: time="2026-01-17T00:04:05.689714160Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 17 00:04:05.992498 containerd[1933]: time="2026-01-17T00:04:05.992421061Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:04:05.993633 containerd[1933]: time="2026-01-17T00:04:05.993554917Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 17 00:04:05.993729 containerd[1933]: time="2026-01-17T00:04:05.993589825Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 17 00:04:05.994211 kubelet[2386]: E0117 00:04:05.993869 2386 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:04:05.994211 kubelet[2386]: E0117 00:04:05.993930 2386 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:04:05.994211 kubelet[2386]: E0117 00:04:05.994113 2386 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rm7rz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivileg
eEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-n79dp_calico-system(9c8a3b18-b170-4670-a5f2-08284d1de243): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 17 00:04:05.996647 containerd[1933]: time="2026-01-17T00:04:05.996592489Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 17 00:04:06.267727 containerd[1933]: time="2026-01-17T00:04:06.267416195Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:04:06.268856 containerd[1933]: time="2026-01-17T00:04:06.268754195Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 17 00:04:06.269000 containerd[1933]: time="2026-01-17T00:04:06.268920803Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 17 00:04:06.269570 kubelet[2386]: E0117 00:04:06.269203 2386 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:04:06.269570 kubelet[2386]: E0117 00:04:06.269272 2386 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:04:06.269570 kubelet[2386]: E0117 00:04:06.269445 2386 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rm7rz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,Terminat
ionMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-n79dp_calico-system(9c8a3b18-b170-4670-a5f2-08284d1de243): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 17 00:04:06.270794 kubelet[2386]: E0117 00:04:06.270715 2386 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-n79dp" podUID="9c8a3b18-b170-4670-a5f2-08284d1de243" Jan 17 00:04:06.542971 kubelet[2386]: E0117 00:04:06.542823 2386 file_linux.go:61] "Unable to 
read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:04:07.543900 kubelet[2386]: E0117 00:04:07.543827 2386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:04:08.544186 kubelet[2386]: E0117 00:04:08.544126 2386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:04:09.545118 kubelet[2386]: E0117 00:04:09.544891 2386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:04:10.545974 kubelet[2386]: E0117 00:04:10.545912 2386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:04:10.691645 kubelet[2386]: E0117 00:04:10.691454 2386 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{csi-node-driver-n79dp.188b5bc126b21371 calico-system 1478 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:calico-system,Name:csi-node-driver-n79dp,UID:9c8a3b18-b170-4670-a5f2-08284d1de243,APIVersion:v1,ResourceVersion:947,FieldPath:spec.containers{calico-csi},},Reason:BackOff,Message:Back-off pulling image \"ghcr.io/flatcar/calico/csi:v3.30.4\",Source:EventSource{Component:kubelet,Host:172.31.23.180,},FirstTimestamp:2026-01-17 00:02:43 +0000 UTC,LastTimestamp:2026-01-17 00:03:36.68895056 +0000 UTC m=+84.267426132,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.23.180,}"
Jan 17 00:04:11.546523 kubelet[2386]: E0117 00:04:11.546457 2386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:04:12.547378 kubelet[2386]: E0117 00:04:12.547309 2386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:04:13.461903 kubelet[2386]: E0117 00:04:13.461846 2386 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:04:13.548444 kubelet[2386]: E0117 00:04:13.548324 2386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:04:14.548566 kubelet[2386]: E0117 00:04:14.548458 2386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:04:15.548883 kubelet[2386]: E0117 00:04:15.548805 2386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:04:15.568526 kubelet[2386]: E0117 00:04:15.568281 2386 controller.go:195] "Failed to update lease" err="Put \"https://172.31.25.177:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.23.180?timeout=10s\": context deadline exceeded"
Jan 17 00:04:16.549048 kubelet[2386]: E0117 00:04:16.548984 2386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:04:17.549548 kubelet[2386]: E0117 00:04:17.549478 2386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:04:17.849077 kubelet[2386]: E0117 00:04:17.847341 2386 controller.go:195] "Failed to update lease" err="Put \"https://172.31.25.177:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.23.180?timeout=10s\": unexpected EOF"
Jan 17 00:04:17.849077 kubelet[2386]: I0117 00:04:17.847423 2386 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
Jan 17 00:04:17.849077 kubelet[2386]: E0117 00:04:17.848025 2386 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://172.31.25.177:6443/api/v1/namespaces/calico-system/events/csi-node-driver-n79dp.188b5bc126b294e9\": unexpected EOF" event="&Event{ObjectMeta:{csi-node-driver-n79dp.188b5bc126b294e9 calico-system 1480 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:calico-system,Name:csi-node-driver-n79dp,UID:9c8a3b18-b170-4670-a5f2-08284d1de243,APIVersion:v1,ResourceVersion:947,FieldPath:spec.containers{calico-csi},},Reason:Failed,Message:Error: ImagePullBackOff,Source:EventSource{Component:kubelet,Host:172.31.23.180,},FirstTimestamp:2026-01-17 00:02:43 +0000 UTC,LastTimestamp:2026-01-17 00:03:36.688980008 +0000 UTC m=+84.267455616,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.23.180,}"
Jan 17 00:04:18.550100 kubelet[2386]: E0117 00:04:18.550045 2386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:04:18.848348 kubelet[2386]: I0117 00:04:18.848186 2386 status_manager.go:895] "Failed to get status for pod" podUID="9c8a3b18-b170-4670-a5f2-08284d1de243" pod="calico-system/csi-node-driver-n79dp" err="Get \"https://172.31.25.177:6443/api/v1/namespaces/calico-system/pods/csi-node-driver-n79dp\": dial tcp 172.31.25.177:6443: connect: connection refused - error from a previous attempt: unexpected EOF"
Jan 17 00:04:18.849453 kubelet[2386]: I0117 00:04:18.849185 2386 status_manager.go:895] "Failed to get status for pod" podUID="9c8a3b18-b170-4670-a5f2-08284d1de243" pod="calico-system/csi-node-driver-n79dp" err="Get \"https://172.31.25.177:6443/api/v1/namespaces/calico-system/pods/csi-node-driver-n79dp\": dial tcp 172.31.25.177:6443: connect: connection refused"
Jan 17 00:04:18.850523 kubelet[2386]: I0117 00:04:18.850351 2386 status_manager.go:895] "Failed to get status for pod" podUID="9c8a3b18-b170-4670-a5f2-08284d1de243" pod="calico-system/csi-node-driver-n79dp" err="Get \"https://172.31.25.177:6443/api/v1/namespaces/calico-system/pods/csi-node-driver-n79dp\": dial tcp 172.31.25.177:6443: connect: connection refused"
Jan 17 00:04:18.860259 kubelet[2386]: E0117 00:04:18.859055 2386 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.25.177:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.23.180?timeout=10s\": dial tcp 172.31.25.177:6443: connect: connection refused - error from a previous attempt: read tcp 172.31.23.180:48782->172.31.25.177:6443: read: connection reset by peer" interval="200ms"
Jan 17 00:04:19.550607 kubelet[2386]: E0117 00:04:19.550548 2386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:04:20.551553 kubelet[2386]: E0117 00:04:20.551468 2386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:04:20.689935 kubelet[2386]: E0117 00:04:20.689868 2386 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-n79dp" podUID="9c8a3b18-b170-4670-a5f2-08284d1de243"
Jan 17 00:04:21.552345 kubelet[2386]: E0117 00:04:21.552287 2386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:04:22.552659 kubelet[2386]: E0117 00:04:22.552599 2386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:04:23.552827 kubelet[2386]: E0117 00:04:23.552736 2386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:04:24.553975 kubelet[2386]: E0117 00:04:24.553916 2386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:04:25.554375 kubelet[2386]: E0117 00:04:25.554308 2386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:04:26.555018 kubelet[2386]: E0117 00:04:26.554962 2386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:04:27.555400 kubelet[2386]: E0117 00:04:27.555343 2386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:04:28.556433 kubelet[2386]: E0117 00:04:28.556367 2386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:04:29.059693 kubelet[2386]: E0117 00:04:29.059623 2386 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.25.177:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.23.180?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="400ms"
Jan 17 00:04:29.556730 kubelet[2386]: E0117 00:04:29.556673 2386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 00:04:30.557460 kubelet[2386]: E0117 00:04:30.557393 2386 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"