Jan 23 23:56:43.272501 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Jan 23 23:56:43.272547 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Jan 23 22:26:47 -00 2026
Jan 23 23:56:43.272572 kernel: KASLR disabled due to lack of seed
Jan 23 23:56:43.272589 kernel: efi: EFI v2.7 by EDK II
Jan 23 23:56:43.272605 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7b001a98 MEMRESERVE=0x7852ee18
Jan 23 23:56:43.272621 kernel: ACPI: Early table checksum verification disabled
Jan 23 23:56:43.272638 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Jan 23 23:56:43.272654 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Jan 23 23:56:43.272684 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Jan 23 23:56:43.272721 kernel: ACPI: DSDT 0x0000000078640000 0013D2 (v02 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Jan 23 23:56:43.272774 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Jan 23 23:56:43.272795 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Jan 23 23:56:43.272812 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Jan 23 23:56:43.272830 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Jan 23 23:56:43.272849 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Jan 23 23:56:43.272871 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Jan 23 23:56:43.272889 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Jan 23 23:56:43.272906 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Jan 23 23:56:43.272923 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Jan 23 23:56:43.272939 kernel: printk: bootconsole [uart0] enabled
Jan 23 23:56:43.272956 kernel: NUMA: Failed to initialise from firmware
Jan 23 23:56:43.272973 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Jan 23 23:56:43.272989 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Jan 23 23:56:43.273006 kernel: Zone ranges:
Jan 23 23:56:43.273022 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Jan 23 23:56:43.273039 kernel: DMA32 empty
Jan 23 23:56:43.273059 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Jan 23 23:56:43.273076 kernel: Movable zone start for each node
Jan 23 23:56:43.273092 kernel: Early memory node ranges
Jan 23 23:56:43.273109 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Jan 23 23:56:43.273125 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Jan 23 23:56:43.273141 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Jan 23 23:56:43.273158 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Jan 23 23:56:43.273174 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Jan 23 23:56:43.273190 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Jan 23 23:56:43.273344 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Jan 23 23:56:43.273364 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Jan 23 23:56:43.273381 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Jan 23 23:56:43.273404 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Jan 23 23:56:43.273421 kernel: psci: probing for conduit method from ACPI.
Jan 23 23:56:43.273445 kernel: psci: PSCIv1.0 detected in firmware.
Jan 23 23:56:43.273463 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 23 23:56:43.273480 kernel: psci: Trusted OS migration not required
Jan 23 23:56:43.273501 kernel: psci: SMC Calling Convention v1.1
Jan 23 23:56:43.273519 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001)
Jan 23 23:56:43.273537 kernel: percpu: Embedded 30 pages/cpu s85672 r8192 d29016 u122880
Jan 23 23:56:43.273554 kernel: pcpu-alloc: s85672 r8192 d29016 u122880 alloc=30*4096
Jan 23 23:56:43.273572 kernel: pcpu-alloc: [0] 0 [0] 1
Jan 23 23:56:43.273589 kernel: Detected PIPT I-cache on CPU0
Jan 23 23:56:43.273607 kernel: CPU features: detected: GIC system register CPU interface
Jan 23 23:56:43.273624 kernel: CPU features: detected: Spectre-v2
Jan 23 23:56:43.273641 kernel: CPU features: detected: Spectre-v3a
Jan 23 23:56:43.273659 kernel: CPU features: detected: Spectre-BHB
Jan 23 23:56:43.273676 kernel: CPU features: detected: ARM erratum 1742098
Jan 23 23:56:43.273697 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Jan 23 23:56:43.273715 kernel: alternatives: applying boot alternatives
Jan 23 23:56:43.273735 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=a01f25d0714a86cf8b897276230b4ac71c04b1d69bd03a1f6d2ef96f59ef0f09
Jan 23 23:56:43.273753 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 23 23:56:43.273771 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 23 23:56:43.273788 kernel: Fallback order for Node 0: 0
Jan 23 23:56:43.273806 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Jan 23 23:56:43.273823 kernel: Policy zone: Normal
Jan 23 23:56:43.273841 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 23 23:56:43.273859 kernel: software IO TLB: area num 2.
Jan 23 23:56:43.273877 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Jan 23 23:56:43.273901 kernel: Memory: 3820096K/4030464K available (10304K kernel code, 2180K rwdata, 8112K rodata, 39424K init, 897K bss, 210368K reserved, 0K cma-reserved)
Jan 23 23:56:43.273919 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 23 23:56:43.273937 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 23 23:56:43.273956 kernel: rcu: RCU event tracing is enabled.
Jan 23 23:56:43.273974 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 23 23:56:43.273991 kernel: Trampoline variant of Tasks RCU enabled.
Jan 23 23:56:43.274009 kernel: Tracing variant of Tasks RCU enabled.
Jan 23 23:56:43.274027 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 23 23:56:43.274044 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 23 23:56:43.274062 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 23 23:56:43.274079 kernel: GICv3: 96 SPIs implemented
Jan 23 23:56:43.274101 kernel: GICv3: 0 Extended SPIs implemented
Jan 23 23:56:43.274118 kernel: Root IRQ handler: gic_handle_irq
Jan 23 23:56:43.274136 kernel: GICv3: GICv3 features: 16 PPIs
Jan 23 23:56:43.274153 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Jan 23 23:56:43.274170 kernel: ITS [mem 0x10080000-0x1009ffff]
Jan 23 23:56:43.274188 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1)
Jan 23 23:56:43.274227 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1)
Jan 23 23:56:43.274248 kernel: GICv3: using LPI property table @0x00000004000d0000
Jan 23 23:56:43.274266 kernel: ITS: Using hypervisor restricted LPI range [128]
Jan 23 23:56:43.274283 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000
Jan 23 23:56:43.274301 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 23 23:56:43.274319 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Jan 23 23:56:43.274343 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Jan 23 23:56:43.274361 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Jan 23 23:56:43.274380 kernel: Console: colour dummy device 80x25
Jan 23 23:56:43.274399 kernel: printk: console [tty1] enabled
Jan 23 23:56:43.274417 kernel: ACPI: Core revision 20230628
Jan 23 23:56:43.274435 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Jan 23 23:56:43.274453 kernel: pid_max: default: 32768 minimum: 301
Jan 23 23:56:43.274471 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 23 23:56:43.274489 kernel: landlock: Up and running.
Jan 23 23:56:43.274511 kernel: SELinux: Initializing.
Jan 23 23:56:43.274530 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 23 23:56:43.274548 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 23 23:56:43.274566 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 23 23:56:43.274585 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 23 23:56:43.274603 kernel: rcu: Hierarchical SRCU implementation.
Jan 23 23:56:43.274621 kernel: rcu: Max phase no-delay instances is 400.
Jan 23 23:56:43.274639 kernel: Platform MSI: ITS@0x10080000 domain created
Jan 23 23:56:43.274657 kernel: PCI/MSI: ITS@0x10080000 domain created
Jan 23 23:56:43.274679 kernel: Remapping and enabling EFI services.
Jan 23 23:56:43.274697 kernel: smp: Bringing up secondary CPUs ...
Jan 23 23:56:43.274715 kernel: Detected PIPT I-cache on CPU1
Jan 23 23:56:43.274733 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Jan 23 23:56:43.274751 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000
Jan 23 23:56:43.274769 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Jan 23 23:56:43.274788 kernel: smp: Brought up 1 node, 2 CPUs
Jan 23 23:56:43.274805 kernel: SMP: Total of 2 processors activated.
Jan 23 23:56:43.274823 kernel: CPU features: detected: 32-bit EL0 Support
Jan 23 23:56:43.274845 kernel: CPU features: detected: 32-bit EL1 Support
Jan 23 23:56:43.274864 kernel: CPU features: detected: CRC32 instructions
Jan 23 23:56:43.274882 kernel: CPU: All CPU(s) started at EL1
Jan 23 23:56:43.274912 kernel: alternatives: applying system-wide alternatives
Jan 23 23:56:43.274936 kernel: devtmpfs: initialized
Jan 23 23:56:43.274955 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 23 23:56:43.274974 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 23 23:56:43.274992 kernel: pinctrl core: initialized pinctrl subsystem
Jan 23 23:56:43.275011 kernel: SMBIOS 3.0.0 present.
Jan 23 23:56:43.275034 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Jan 23 23:56:43.275053 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 23 23:56:43.275072 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 23 23:56:43.275090 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 23 23:56:43.275109 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 23 23:56:43.275127 kernel: audit: initializing netlink subsys (disabled)
Jan 23 23:56:43.275146 kernel: audit: type=2000 audit(0.285:1): state=initialized audit_enabled=0 res=1
Jan 23 23:56:43.275165 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 23 23:56:43.275189 kernel: cpuidle: using governor menu
Jan 23 23:56:43.275241 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 23 23:56:43.275264 kernel: ASID allocator initialised with 65536 entries
Jan 23 23:56:43.275283 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 23 23:56:43.275302 kernel: Serial: AMBA PL011 UART driver
Jan 23 23:56:43.275320 kernel: Modules: 17488 pages in range for non-PLT usage
Jan 23 23:56:43.275339 kernel: Modules: 509008 pages in range for PLT usage
Jan 23 23:56:43.275358 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 23 23:56:43.275377 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 23 23:56:43.275405 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 23 23:56:43.275423 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 23 23:56:43.275442 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 23 23:56:43.275461 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 23 23:56:43.275479 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 23 23:56:43.275498 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 23 23:56:43.275516 kernel: ACPI: Added _OSI(Module Device)
Jan 23 23:56:43.275534 kernel: ACPI: Added _OSI(Processor Device)
Jan 23 23:56:43.275553 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 23 23:56:43.275576 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 23 23:56:43.275595 kernel: ACPI: Interpreter enabled
Jan 23 23:56:43.275614 kernel: ACPI: Using GIC for interrupt routing
Jan 23 23:56:43.275633 kernel: ACPI: MCFG table detected, 1 entries
Jan 23 23:56:43.275651 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00])
Jan 23 23:56:43.275969 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 23 23:56:43.276267 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 23 23:56:43.276506 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 23 23:56:43.276732 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x200fffff] reserved by PNP0C02:00
Jan 23 23:56:43.276940 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x200fffff] for [bus 00]
Jan 23 23:56:43.276967 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Jan 23 23:56:43.276987 kernel: acpiphp: Slot [1] registered
Jan 23 23:56:43.277007 kernel: acpiphp: Slot [2] registered
Jan 23 23:56:43.277026 kernel: acpiphp: Slot [3] registered
Jan 23 23:56:43.277047 kernel: acpiphp: Slot [4] registered
Jan 23 23:56:43.277066 kernel: acpiphp: Slot [5] registered
Jan 23 23:56:43.277093 kernel: acpiphp: Slot [6] registered
Jan 23 23:56:43.277113 kernel: acpiphp: Slot [7] registered
Jan 23 23:56:43.277131 kernel: acpiphp: Slot [8] registered
Jan 23 23:56:43.277150 kernel: acpiphp: Slot [9] registered
Jan 23 23:56:43.277168 kernel: acpiphp: Slot [10] registered
Jan 23 23:56:43.277186 kernel: acpiphp: Slot [11] registered
Jan 23 23:56:43.277246 kernel: acpiphp: Slot [12] registered
Jan 23 23:56:43.277268 kernel: acpiphp: Slot [13] registered
Jan 23 23:56:43.277287 kernel: acpiphp: Slot [14] registered
Jan 23 23:56:43.277305 kernel: acpiphp: Slot [15] registered
Jan 23 23:56:43.277331 kernel: acpiphp: Slot [16] registered
Jan 23 23:56:43.277350 kernel: acpiphp: Slot [17] registered
Jan 23 23:56:43.277368 kernel: acpiphp: Slot [18] registered
Jan 23 23:56:43.277387 kernel: acpiphp: Slot [19] registered
Jan 23 23:56:43.277405 kernel: acpiphp: Slot [20] registered
Jan 23 23:56:43.277423 kernel: acpiphp: Slot [21] registered
Jan 23 23:56:43.277442 kernel: acpiphp: Slot [22] registered
Jan 23 23:56:43.277460 kernel: acpiphp: Slot [23] registered
Jan 23 23:56:43.277479 kernel: acpiphp: Slot [24] registered
Jan 23 23:56:43.277502 kernel: acpiphp: Slot [25] registered
Jan 23 23:56:43.277521 kernel: acpiphp: Slot [26] registered
Jan 23 23:56:43.277539 kernel: acpiphp: Slot [27] registered
Jan 23 23:56:43.277558 kernel: acpiphp: Slot [28] registered
Jan 23 23:56:43.277576 kernel: acpiphp: Slot [29] registered
Jan 23 23:56:43.277594 kernel: acpiphp: Slot [30] registered
Jan 23 23:56:43.277613 kernel: acpiphp: Slot [31] registered
Jan 23 23:56:43.277631 kernel: PCI host bridge to bus 0000:00
Jan 23 23:56:43.277871 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Jan 23 23:56:43.278073 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jan 23 23:56:43.280857 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Jan 23 23:56:43.281078 kernel: pci_bus 0000:00: root bus resource [bus 00]
Jan 23 23:56:43.281369 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Jan 23 23:56:43.281602 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Jan 23 23:56:43.281814 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Jan 23 23:56:43.282046 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Jan 23 23:56:43.282317 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Jan 23 23:56:43.282526 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Jan 23 23:56:43.282743 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Jan 23 23:56:43.282945 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Jan 23 23:56:43.283146 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Jan 23 23:56:43.285065 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Jan 23 23:56:43.286014 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Jan 23 23:56:43.286272 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Jan 23 23:56:43.286465 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jan 23 23:56:43.286647 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Jan 23 23:56:43.286674 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jan 23 23:56:43.286695 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jan 23 23:56:43.286714 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jan 23 23:56:43.286733 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jan 23 23:56:43.286762 kernel: iommu: Default domain type: Translated
Jan 23 23:56:43.286781 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 23 23:56:43.286800 kernel: efivars: Registered efivars operations
Jan 23 23:56:43.286818 kernel: vgaarb: loaded
Jan 23 23:56:43.286837 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 23 23:56:43.286855 kernel: VFS: Disk quotas dquot_6.6.0
Jan 23 23:56:43.286874 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 23 23:56:43.286892 kernel: pnp: PnP ACPI init
Jan 23 23:56:43.287101 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Jan 23 23:56:43.287134 kernel: pnp: PnP ACPI: found 1 devices
Jan 23 23:56:43.287153 kernel: NET: Registered PF_INET protocol family
Jan 23 23:56:43.287172 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 23 23:56:43.287191 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 23 23:56:43.287255 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 23 23:56:43.287276 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 23 23:56:43.287295 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 23 23:56:43.287314 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 23 23:56:43.287339 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 23 23:56:43.287358 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 23 23:56:43.287377 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 23 23:56:43.287395 kernel: PCI: CLS 0 bytes, default 64
Jan 23 23:56:43.287414 kernel: kvm [1]: HYP mode not available
Jan 23 23:56:43.287432 kernel: Initialise system trusted keyrings
Jan 23 23:56:43.287450 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 23 23:56:43.287469 kernel: Key type asymmetric registered
Jan 23 23:56:43.287488 kernel: Asymmetric key parser 'x509' registered
Jan 23 23:56:43.287511 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 23 23:56:43.287530 kernel: io scheduler mq-deadline registered
Jan 23 23:56:43.287549 kernel: io scheduler kyber registered
Jan 23 23:56:43.287567 kernel: io scheduler bfq registered
Jan 23 23:56:43.287794 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Jan 23 23:56:43.287822 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jan 23 23:56:43.287842 kernel: ACPI: button: Power Button [PWRB]
Jan 23 23:56:43.287860 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Jan 23 23:56:43.287885 kernel: ACPI: button: Sleep Button [SLPB]
Jan 23 23:56:43.287904 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 23 23:56:43.287923 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Jan 23 23:56:43.290480 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Jan 23 23:56:43.290512 kernel: printk: console [ttyS0] disabled
Jan 23 23:56:43.290532 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Jan 23 23:56:43.290551 kernel: printk: console [ttyS0] enabled
Jan 23 23:56:43.290570 kernel: printk: bootconsole [uart0] disabled
Jan 23 23:56:43.290589 kernel: thunder_xcv, ver 1.0
Jan 23 23:56:43.290607 kernel: thunder_bgx, ver 1.0
Jan 23 23:56:43.290634 kernel: nicpf, ver 1.0
Jan 23 23:56:43.290653 kernel: nicvf, ver 1.0
Jan 23 23:56:43.290882 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jan 23 23:56:43.291084 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-01-23T23:56:42 UTC (1769212602)
Jan 23 23:56:43.291111 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 23 23:56:43.291131 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Jan 23 23:56:43.291150 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jan 23 23:56:43.291176 kernel: watchdog: Hard watchdog permanently disabled
Jan 23 23:56:43.291195 kernel: NET: Registered PF_INET6 protocol family
Jan 23 23:56:43.292307 kernel: Segment Routing with IPv6
Jan 23 23:56:43.292328 kernel: In-situ OAM (IOAM) with IPv6
Jan 23 23:56:43.292347 kernel: NET: Registered PF_PACKET protocol family
Jan 23 23:56:43.292365 kernel: Key type dns_resolver registered
Jan 23 23:56:43.292384 kernel: registered taskstats version 1
Jan 23 23:56:43.292403 kernel: Loading compiled-in X.509 certificates
Jan 23 23:56:43.292421 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: e1080b1efd8e2d5332b6814128fba42796535445'
Jan 23 23:56:43.292440 kernel: Key type .fscrypt registered
Jan 23 23:56:43.292466 kernel: Key type fscrypt-provisioning registered
Jan 23 23:56:43.292484 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 23 23:56:43.292503 kernel: ima: Allocated hash algorithm: sha1
Jan 23 23:56:43.292522 kernel: ima: No architecture policies found
Jan 23 23:56:43.292540 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jan 23 23:56:43.292559 kernel: clk: Disabling unused clocks
Jan 23 23:56:43.292577 kernel: Freeing unused kernel memory: 39424K
Jan 23 23:56:43.292595 kernel: Run /init as init process
Jan 23 23:56:43.292613 kernel: with arguments:
Jan 23 23:56:43.292636 kernel: /init
Jan 23 23:56:43.292654 kernel: with environment:
Jan 23 23:56:43.292672 kernel: HOME=/
Jan 23 23:56:43.292690 kernel: TERM=linux
Jan 23 23:56:43.292714 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 23 23:56:43.292738 systemd[1]: Detected virtualization amazon.
Jan 23 23:56:43.292759 systemd[1]: Detected architecture arm64.
Jan 23 23:56:43.292784 systemd[1]: Running in initrd.
Jan 23 23:56:43.292804 systemd[1]: No hostname configured, using default hostname.
Jan 23 23:56:43.292823 systemd[1]: Hostname set to .
Jan 23 23:56:43.292844 systemd[1]: Initializing machine ID from VM UUID.
Jan 23 23:56:43.292864 systemd[1]: Queued start job for default target initrd.target.
Jan 23 23:56:43.292884 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 23 23:56:43.292905 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 23 23:56:43.292926 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 23 23:56:43.292951 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 23 23:56:43.292972 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 23 23:56:43.292993 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 23 23:56:43.293017 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 23 23:56:43.293038 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 23 23:56:43.293058 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 23 23:56:43.293079 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 23 23:56:43.293103 systemd[1]: Reached target paths.target - Path Units.
Jan 23 23:56:43.293124 systemd[1]: Reached target slices.target - Slice Units.
Jan 23 23:56:43.293144 systemd[1]: Reached target swap.target - Swaps.
Jan 23 23:56:43.293164 systemd[1]: Reached target timers.target - Timer Units.
Jan 23 23:56:43.293185 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 23 23:56:43.293225 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 23 23:56:43.293248 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 23 23:56:43.293269 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 23 23:56:43.293289 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 23 23:56:43.293316 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 23 23:56:43.293337 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 23 23:56:43.293358 systemd[1]: Reached target sockets.target - Socket Units.
Jan 23 23:56:43.293378 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 23 23:56:43.293398 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 23 23:56:43.293419 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 23 23:56:43.293439 systemd[1]: Starting systemd-fsck-usr.service...
Jan 23 23:56:43.293459 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 23 23:56:43.293484 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 23 23:56:43.293505 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 23:56:43.293525 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 23 23:56:43.293545 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 23 23:56:43.293566 systemd[1]: Finished systemd-fsck-usr.service.
Jan 23 23:56:43.293587 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 23 23:56:43.293657 systemd-journald[250]: Collecting audit messages is disabled.
Jan 23 23:56:43.293702 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 23 23:56:43.293723 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 23:56:43.293839 kernel: Bridge firewalling registered
Jan 23 23:56:43.297271 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 23 23:56:43.297298 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 23 23:56:43.297320 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 23 23:56:43.297343 systemd-journald[250]: Journal started
Jan 23 23:56:43.297382 systemd-journald[250]: Runtime Journal (/run/log/journal/ec22a001872b47c8ecfc5066e9c1aa9b) is 8.0M, max 75.3M, 67.3M free.
Jan 23 23:56:43.211862 systemd-modules-load[252]: Inserted module 'overlay'
Jan 23 23:56:43.274369 systemd-modules-load[252]: Inserted module 'br_netfilter'
Jan 23 23:56:43.302050 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 23 23:56:43.319562 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 23 23:56:43.328673 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 23 23:56:43.340439 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 23 23:56:43.350476 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 23 23:56:43.362498 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 23 23:56:43.387598 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 23 23:56:43.396964 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 23 23:56:43.400047 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 23 23:56:43.416393 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 23 23:56:43.428256 dracut-cmdline[284]: dracut-dracut-053
Jan 23 23:56:43.435055 dracut-cmdline[284]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=a01f25d0714a86cf8b897276230b4ac71c04b1d69bd03a1f6d2ef96f59ef0f09
Jan 23 23:56:43.499761 systemd-resolved[293]: Positive Trust Anchors:
Jan 23 23:56:43.499797 systemd-resolved[293]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 23 23:56:43.499860 systemd-resolved[293]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 23 23:56:43.587231 kernel: SCSI subsystem initialized
Jan 23 23:56:43.593237 kernel: Loading iSCSI transport class v2.0-870.
Jan 23 23:56:43.606251 kernel: iscsi: registered transport (tcp)
Jan 23 23:56:43.628899 kernel: iscsi: registered transport (qla4xxx)
Jan 23 23:56:43.628988 kernel: QLogic iSCSI HBA Driver
Jan 23 23:56:43.732257 kernel: random: crng init done
Jan 23 23:56:43.732839 systemd-resolved[293]: Defaulting to hostname 'linux'.
Jan 23 23:56:43.737147 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 23 23:56:43.744466 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 23 23:56:43.764807 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 23 23:56:43.781500 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 23 23:56:43.815650 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 23 23:56:43.815736 kernel: device-mapper: uevent: version 1.0.3
Jan 23 23:56:43.817994 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 23 23:56:43.884263 kernel: raid6: neonx8 gen() 6726 MB/s
Jan 23 23:56:43.901236 kernel: raid6: neonx4 gen() 6539 MB/s
Jan 23 23:56:43.918234 kernel: raid6: neonx2 gen() 5455 MB/s
Jan 23 23:56:43.935248 kernel: raid6: neonx1 gen() 3934 MB/s
Jan 23 23:56:43.952234 kernel: raid6: int64x8 gen() 3804 MB/s
Jan 23 23:56:43.969238 kernel: raid6: int64x4 gen() 3716 MB/s
Jan 23 23:56:43.986235 kernel: raid6: int64x2 gen() 3604 MB/s
Jan 23 23:56:44.004327 kernel: raid6: int64x1 gen() 2748 MB/s
Jan 23 23:56:44.004363 kernel: raid6: using algorithm neonx8 gen() 6726 MB/s
Jan 23 23:56:44.023241 kernel: raid6: .... xor() 4784 MB/s, rmw enabled
Jan 23 23:56:44.023284 kernel: raid6: using neon recovery algorithm
Jan 23 23:56:44.031240 kernel: xor: measuring software checksum speed
Jan 23 23:56:44.033524 kernel: 8regs : 10072 MB/sec
Jan 23 23:56:44.033557 kernel: 32regs : 11910 MB/sec
Jan 23 23:56:44.034845 kernel: arm64_neon : 9497 MB/sec
Jan 23 23:56:44.034879 kernel: xor: using function: 32regs (11910 MB/sec)
Jan 23 23:56:44.120260 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 23 23:56:44.139234 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 23 23:56:44.149522 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 23 23:56:44.193511 systemd-udevd[473]: Using default interface naming scheme 'v255'.
Jan 23 23:56:44.203615 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 23 23:56:44.216690 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 23 23:56:44.252248 dracut-pre-trigger[477]: rd.md=0: removing MD RAID activation
Jan 23 23:56:44.310742 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 23 23:56:44.326588 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 23 23:56:44.434344 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 23 23:56:44.452765 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 23 23:56:44.499371 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 23 23:56:44.504845 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 23 23:56:44.510349 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 23 23:56:44.513080 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 23 23:56:44.525743 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 23 23:56:44.575580 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 23 23:56:44.635230 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jan 23 23:56:44.635301 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Jan 23 23:56:44.638612 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 23 23:56:44.649093 kernel: ena 0000:00:05.0: ENA device version: 0.10
Jan 23 23:56:44.650264 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Jan 23 23:56:44.638841 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 23 23:56:44.649473 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 23 23:56:44.652089 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 23 23:56:44.652397 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 23:56:44.655354 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 23:56:44.674716 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 23:56:44.683253 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80110000, mac addr 06:34:ce:4b:4c:27
Jan 23 23:56:44.685996 (udev-worker)[519]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 23:56:44.699668 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Jan 23 23:56:44.699738 kernel: nvme nvme0: pci function 0000:00:04.0
Jan 23 23:56:44.714251 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Jan 23 23:56:44.722433 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 23 23:56:44.722497 kernel: GPT:9289727 != 33554431
Jan 23 23:56:44.722523 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 23 23:56:44.726431 kernel: GPT:9289727 != 33554431
Jan 23 23:56:44.727592 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 23 23:56:44.725086 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 23:56:44.735097 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 23 23:56:44.745451 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 23 23:56:44.776683 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 23 23:56:44.817593 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/nvme0n1p6 scanned by (udev-worker) (534)
Jan 23 23:56:44.847251 kernel: BTRFS: device fsid 6d31cc5b-4da2-4320-9991-d4bd2fc0f7fe devid 1 transid 34 /dev/nvme0n1p3 scanned by (udev-worker) (525)
Jan 23 23:56:44.937069 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Jan 23 23:56:44.958413 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Jan 23 23:56:44.987887 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Jan 23 23:56:45.006161 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Jan 23 23:56:45.007136 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Jan 23 23:56:45.024495 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 23 23:56:45.039496 disk-uuid[662]: Primary Header is updated.
Jan 23 23:56:45.039496 disk-uuid[662]: Secondary Entries is updated.
Jan 23 23:56:45.039496 disk-uuid[662]: Secondary Header is updated.
Jan 23 23:56:45.049358 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 23 23:56:46.075229 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 23 23:56:46.076608 disk-uuid[663]: The operation has completed successfully.
Jan 23 23:56:46.271342 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 23 23:56:46.272654 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 23 23:56:46.308545 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 23 23:56:46.318849 sh[1008]: Success
Jan 23 23:56:46.343231 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jan 23 23:56:46.458979 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 23 23:56:46.479424 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 23 23:56:46.482348 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 23 23:56:46.536611 kernel: BTRFS info (device dm-0): first mount of filesystem 6d31cc5b-4da2-4320-9991-d4bd2fc0f7fe
Jan 23 23:56:46.536675 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jan 23 23:56:46.536713 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 23 23:56:46.540023 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 23 23:56:46.540069 kernel: BTRFS info (device dm-0): using free space tree
Jan 23 23:56:46.625242 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jan 23 23:56:46.640076 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 23 23:56:46.644851 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 23 23:56:46.658458 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 23 23:56:46.670811 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 23 23:56:46.688127 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 821118c6-9a33-48e6-be55-657b51b768c7
Jan 23 23:56:46.688240 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jan 23 23:56:46.689759 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 23 23:56:46.696269 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 23 23:56:46.715038 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 23 23:56:46.720006 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 821118c6-9a33-48e6-be55-657b51b768c7
Jan 23 23:56:46.730532 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 23 23:56:46.742788 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 23 23:56:46.855516 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 23 23:56:46.879459 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 23 23:56:46.929351 systemd-networkd[1209]: lo: Link UP
Jan 23 23:56:46.929371 systemd-networkd[1209]: lo: Gained carrier
Jan 23 23:56:46.934818 systemd-networkd[1209]: Enumeration completed
Jan 23 23:56:46.935112 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 23 23:56:46.939871 systemd-networkd[1209]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 23 23:56:46.939878 systemd-networkd[1209]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 23 23:56:46.943397 systemd[1]: Reached target network.target - Network.
Jan 23 23:56:46.956713 systemd-networkd[1209]: eth0: Link UP
Jan 23 23:56:46.956733 systemd-networkd[1209]: eth0: Gained carrier
Jan 23 23:56:46.956751 systemd-networkd[1209]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 23 23:56:46.981357 systemd-networkd[1209]: eth0: DHCPv4 address 172.31.22.24/20, gateway 172.31.16.1 acquired from 172.31.16.1
Jan 23 23:56:47.134837 ignition[1109]: Ignition 2.19.0
Jan 23 23:56:47.134864 ignition[1109]: Stage: fetch-offline
Jan 23 23:56:47.139183 ignition[1109]: no configs at "/usr/lib/ignition/base.d"
Jan 23 23:56:47.139248 ignition[1109]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 23 23:56:47.141722 ignition[1109]: Ignition finished successfully
Jan 23 23:56:47.148176 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 23 23:56:47.158482 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 23 23:56:47.179790 ignition[1217]: Ignition 2.19.0
Jan 23 23:56:47.179817 ignition[1217]: Stage: fetch
Jan 23 23:56:47.180766 ignition[1217]: no configs at "/usr/lib/ignition/base.d"
Jan 23 23:56:47.180794 ignition[1217]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 23 23:56:47.180985 ignition[1217]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 23 23:56:47.193955 ignition[1217]: PUT result: OK
Jan 23 23:56:47.196899 ignition[1217]: parsed url from cmdline: ""
Jan 23 23:56:47.196921 ignition[1217]: no config URL provided
Jan 23 23:56:47.196939 ignition[1217]: reading system config file "/usr/lib/ignition/user.ign"
Jan 23 23:56:47.196965 ignition[1217]: no config at "/usr/lib/ignition/user.ign"
Jan 23 23:56:47.196997 ignition[1217]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 23 23:56:47.198856 ignition[1217]: PUT result: OK
Jan 23 23:56:47.210358 unknown[1217]: fetched base config from "system"
Jan 23 23:56:47.198928 ignition[1217]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Jan 23 23:56:47.210375 unknown[1217]: fetched base config from "system"
Jan 23 23:56:47.203408 ignition[1217]: GET result: OK
Jan 23 23:56:47.210389 unknown[1217]: fetched user config from "aws"
Jan 23 23:56:47.203515 ignition[1217]: parsing config with SHA512: 992a76c0a54732e673fe6418dce16cbe2a338ff23cf0eb7695f3dbfa35fc734ec68637f31f7815a8d49ac205586910ec99f61b43899d424e69d759d3a44ed1b2
Jan 23 23:56:47.211954 ignition[1217]: fetch: fetch complete
Jan 23 23:56:47.211966 ignition[1217]: fetch: fetch passed
Jan 23 23:56:47.223465 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 23 23:56:47.212093 ignition[1217]: Ignition finished successfully
Jan 23 23:56:47.237574 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 23 23:56:47.267908 ignition[1223]: Ignition 2.19.0
Jan 23 23:56:47.268449 ignition[1223]: Stage: kargs
Jan 23 23:56:47.269132 ignition[1223]: no configs at "/usr/lib/ignition/base.d"
Jan 23 23:56:47.269156 ignition[1223]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 23 23:56:47.269370 ignition[1223]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 23 23:56:47.279073 ignition[1223]: PUT result: OK
Jan 23 23:56:47.287009 ignition[1223]: kargs: kargs passed
Jan 23 23:56:47.287106 ignition[1223]: Ignition finished successfully
Jan 23 23:56:47.293173 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 23 23:56:47.304775 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 23 23:56:47.334191 ignition[1230]: Ignition 2.19.0
Jan 23 23:56:47.334231 ignition[1230]: Stage: disks
Jan 23 23:56:47.335663 ignition[1230]: no configs at "/usr/lib/ignition/base.d"
Jan 23 23:56:47.335689 ignition[1230]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 23 23:56:47.335847 ignition[1230]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 23 23:56:47.338157 ignition[1230]: PUT result: OK
Jan 23 23:56:47.349003 ignition[1230]: disks: disks passed
Jan 23 23:56:47.349098 ignition[1230]: Ignition finished successfully
Jan 23 23:56:47.351339 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 23 23:56:47.356002 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 23 23:56:47.362292 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 23 23:56:47.365229 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 23 23:56:47.369765 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 23 23:56:47.374563 systemd[1]: Reached target basic.target - Basic System.
Jan 23 23:56:47.390499 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 23 23:56:47.432485 systemd-fsck[1238]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 23 23:56:47.437502 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 23 23:56:47.451505 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 23 23:56:47.532276 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 4f5f6971-6639-4171-835a-63d34aadb0e5 r/w with ordered data mode. Quota mode: none.
Jan 23 23:56:47.532337 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 23 23:56:47.538704 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 23 23:56:47.552405 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 23 23:56:47.562468 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 23 23:56:47.572552 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 23 23:56:47.572657 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 23 23:56:47.572710 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 23 23:56:47.596253 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/nvme0n1p6 scanned by mount (1257)
Jan 23 23:56:47.601921 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 23 23:56:47.607706 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 821118c6-9a33-48e6-be55-657b51b768c7
Jan 23 23:56:47.607835 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jan 23 23:56:47.607957 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 23 23:56:47.615261 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 23 23:56:47.626540 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 23 23:56:47.633750 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 23 23:56:47.855502 initrd-setup-root[1281]: cut: /sysroot/etc/passwd: No such file or directory
Jan 23 23:56:47.877723 initrd-setup-root[1288]: cut: /sysroot/etc/group: No such file or directory
Jan 23 23:56:47.887545 initrd-setup-root[1295]: cut: /sysroot/etc/shadow: No such file or directory
Jan 23 23:56:47.895389 initrd-setup-root[1302]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 23 23:56:48.195612 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 23 23:56:48.209795 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 23 23:56:48.213890 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 23 23:56:48.233861 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 23 23:56:48.239254 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 821118c6-9a33-48e6-be55-657b51b768c7
Jan 23 23:56:48.279021 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 23 23:56:48.288994 ignition[1369]: INFO : Ignition 2.19.0
Jan 23 23:56:48.288994 ignition[1369]: INFO : Stage: mount
Jan 23 23:56:48.292686 ignition[1369]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 23 23:56:48.295058 ignition[1369]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 23 23:56:48.297784 ignition[1369]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 23 23:56:48.301259 ignition[1369]: INFO : PUT result: OK
Jan 23 23:56:48.305990 ignition[1369]: INFO : mount: mount passed
Jan 23 23:56:48.307795 ignition[1369]: INFO : Ignition finished successfully
Jan 23 23:56:48.314609 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 23 23:56:48.324369 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 23 23:56:48.544843 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 23 23:56:48.567803 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by mount (1381)
Jan 23 23:56:48.567865 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 821118c6-9a33-48e6-be55-657b51b768c7
Jan 23 23:56:48.567893 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jan 23 23:56:48.569555 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 23 23:56:48.577227 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 23 23:56:48.578898 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 23 23:56:48.620671 ignition[1398]: INFO : Ignition 2.19.0
Jan 23 23:56:48.622743 ignition[1398]: INFO : Stage: files
Jan 23 23:56:48.624355 ignition[1398]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 23 23:56:48.624355 ignition[1398]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 23 23:56:48.624355 ignition[1398]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 23 23:56:48.632979 ignition[1398]: INFO : PUT result: OK
Jan 23 23:56:48.637771 ignition[1398]: DEBUG : files: compiled without relabeling support, skipping
Jan 23 23:56:48.644185 ignition[1398]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 23 23:56:48.644185 ignition[1398]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 23 23:56:48.663378 systemd-networkd[1209]: eth0: Gained IPv6LL
Jan 23 23:56:48.700361 ignition[1398]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 23 23:56:48.703421 ignition[1398]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 23 23:56:48.706327 ignition[1398]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 23 23:56:48.703993 unknown[1398]: wrote ssh authorized keys file for user: core
Jan 23 23:56:48.712083 ignition[1398]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh"
Jan 23 23:56:48.717389 ignition[1398]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh"
Jan 23 23:56:48.717389 ignition[1398]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 23 23:56:48.717389 ignition[1398]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 23 23:56:48.717389 ignition[1398]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jan 23 23:56:48.717389 ignition[1398]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jan 23 23:56:48.717389 ignition[1398]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jan 23 23:56:48.717389 ignition[1398]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1
Jan 23 23:56:49.175820 ignition[1398]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Jan 23 23:56:49.579341 ignition[1398]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jan 23 23:56:49.584190 ignition[1398]: INFO : files: createResultFile: createFiles: op(7): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 23 23:56:49.584190 ignition[1398]: INFO : files: createResultFile: createFiles: op(7): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 23 23:56:49.584190 ignition[1398]: INFO : files: files passed
Jan 23 23:56:49.584190 ignition[1398]: INFO : Ignition finished successfully
Jan 23 23:56:49.590700 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 23 23:56:49.613543 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 23 23:56:49.617863 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 23 23:56:49.637590 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 23 23:56:49.642410 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 23 23:56:49.654582 initrd-setup-root-after-ignition[1426]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 23 23:56:49.654582 initrd-setup-root-after-ignition[1426]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 23 23:56:49.663481 initrd-setup-root-after-ignition[1430]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 23 23:56:49.670283 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 23 23:56:49.674421 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 23 23:56:49.686462 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 23 23:56:49.753162 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 23 23:56:49.753689 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 23 23:56:49.762130 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 23 23:56:49.764877 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 23 23:56:49.769434 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 23 23:56:49.782554 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 23 23:56:49.811292 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 23 23:56:49.823671 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 23 23:56:49.855441 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 23 23:56:49.855811 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 23 23:56:49.864133 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 23 23:56:49.866948 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 23 23:56:49.874482 systemd[1]: Stopped target timers.target - Timer Units.
Jan 23 23:56:49.876635 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 23 23:56:49.876762 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 23 23:56:49.879690 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 23 23:56:49.882528 systemd[1]: Stopped target basic.target - Basic System.
Jan 23 23:56:49.884609 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 23 23:56:49.888935 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 23 23:56:49.891584 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 23 23:56:49.894138 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 23 23:56:49.898600 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 23 23:56:49.899061 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 23 23:56:49.899815 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 23 23:56:49.900167 systemd[1]: Stopped target swap.target - Swaps.
Jan 23 23:56:49.900902 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 23 23:56:49.901017 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 23 23:56:49.902078 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 23 23:56:49.902741 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 23 23:56:49.903104 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 23 23:56:49.916319 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 23 23:56:49.919197 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 23 23:56:49.919478 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 23 23:56:49.923865 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 23 23:56:49.923953 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 23 23:56:49.924151 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 23 23:56:49.924248 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 23 23:56:49.952433 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 23 23:56:49.979328 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 23 23:56:50.020180 ignition[1451]: INFO : Ignition 2.19.0
Jan 23 23:56:50.020180 ignition[1451]: INFO : Stage: umount
Jan 23 23:56:49.979456 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 23 23:56:50.033858 ignition[1451]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 23 23:56:50.033858 ignition[1451]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 23 23:56:50.033858 ignition[1451]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 23 23:56:50.009613 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 23 23:56:50.044935 ignition[1451]: INFO : PUT result: OK
Jan 23 23:56:50.014629 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 23 23:56:50.014755 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 23 23:56:50.018554 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 23 23:56:50.018666 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 23 23:56:50.057481 ignition[1451]: INFO : umount: umount passed
Jan 23 23:56:50.059392 ignition[1451]: INFO : Ignition finished successfully
Jan 23 23:56:50.065842 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 23 23:56:50.068039 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 23 23:56:50.075726 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 23 23:56:50.075923 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 23 23:56:50.083844 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 23 23:56:50.083965 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 23 23:56:50.092283 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 23 23:56:50.092374 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 23 23:56:50.096801 systemd[1]: Stopped target network.target - Network.
Jan 23 23:56:50.098921 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 23 23:56:50.101050 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 23 23:56:50.111754 systemd[1]: Stopped target paths.target - Path Units.
Jan 23 23:56:50.117983 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 23 23:56:50.124113 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 23 23:56:50.126888 systemd[1]: Stopped target slices.target - Slice Units.
Jan 23 23:56:50.128944 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 23 23:56:50.134224 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 23 23:56:50.134311 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 23 23:56:50.136641 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 23 23:56:50.136713 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 23 23:56:50.139032 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 23 23:56:50.139119 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 23 23:56:50.141451 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 23 23:56:50.141530 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 23 23:56:50.144217 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 23 23:56:50.147493 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 23 23:56:50.150279 systemd-networkd[1209]: eth0: DHCPv6 lease lost
Jan 23 23:56:50.160421 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 23 23:56:50.161896 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 23 23:56:50.162113 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 23 23:56:50.184111 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 23 23:56:50.187899 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 23 23:56:50.198270 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 23 23:56:50.200378 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 23 23:56:50.206975 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 23 23:56:50.207094 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 23 23:56:50.209623 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 23 23:56:50.209726 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 23 23:56:50.221085 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 23 23:56:50.228693 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 23 23:56:50.229080 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 23 23:56:50.239719 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 23 23:56:50.239835 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 23 23:56:50.242748 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 23 23:56:50.242833 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 23 23:56:50.245415 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 23 23:56:50.245495 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 23:56:50.248554 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 23:56:50.287076 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 23 23:56:50.287408 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 23:56:50.291009 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 23 23:56:50.291182 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 23 23:56:50.303428 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 23 23:56:50.303535 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 23 23:56:50.306091 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 23 23:56:50.306156 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 23:56:50.308957 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 23 23:56:50.309045 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 23 23:56:50.311674 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 23 23:56:50.311762 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. 
Jan 23 23:56:50.323082 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 23 23:56:50.323187 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 23 23:56:50.341771 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 23 23:56:50.351308 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 23 23:56:50.351542 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 23:56:50.360653 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 23:56:50.360752 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 23:56:50.384477 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 23 23:56:50.384687 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 23 23:56:50.387682 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 23 23:56:50.401469 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 23 23:56:50.432303 systemd[1]: Switching root. Jan 23 23:56:50.471304 systemd-journald[250]: Journal stopped Jan 23 23:56:52.861192 systemd-journald[250]: Received SIGTERM from PID 1 (systemd). 
Jan 23 23:56:52.861346 kernel: SELinux: policy capability network_peer_controls=1 Jan 23 23:56:52.861391 kernel: SELinux: policy capability open_perms=1 Jan 23 23:56:52.861422 kernel: SELinux: policy capability extended_socket_class=1 Jan 23 23:56:52.861468 kernel: SELinux: policy capability always_check_network=0 Jan 23 23:56:52.861499 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 23 23:56:52.861529 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 23 23:56:52.861559 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 23 23:56:52.861589 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 23 23:56:52.861619 kernel: audit: type=1403 audit(1769212611.056:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 23 23:56:52.861655 systemd[1]: Successfully loaded SELinux policy in 62.702ms. Jan 23 23:56:52.861707 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 23.503ms. Jan 23 23:56:52.861753 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 23 23:56:52.861786 systemd[1]: Detected virtualization amazon. Jan 23 23:56:52.861817 systemd[1]: Detected architecture arm64. Jan 23 23:56:52.861848 systemd[1]: Detected first boot. Jan 23 23:56:52.861879 systemd[1]: Initializing machine ID from VM UUID. Jan 23 23:56:52.861912 zram_generator::config[1492]: No configuration found. Jan 23 23:56:52.861947 systemd[1]: Populated /etc with preset unit settings. Jan 23 23:56:52.861980 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 23 23:56:52.862016 systemd[1]: Stopped initrd-switch-root.service - Switch Root. 
Jan 23 23:56:52.862048 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 23 23:56:52.862081 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 23 23:56:52.862116 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 23 23:56:52.862148 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 23 23:56:52.862181 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 23 23:56:52.862272 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 23 23:56:52.862310 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 23 23:56:52.862345 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 23 23:56:52.862378 systemd[1]: Created slice user.slice - User and Session Slice. Jan 23 23:56:52.862411 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 23:56:52.862441 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 23:56:52.862471 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 23 23:56:52.862500 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 23 23:56:52.862533 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 23 23:56:52.862569 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 23 23:56:52.862601 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 23 23:56:52.862646 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 23:56:52.862681 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. 
Jan 23 23:56:52.862715 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 23 23:56:52.862749 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 23 23:56:52.862780 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 23 23:56:52.862823 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 23:56:52.862855 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 23 23:56:52.862887 systemd[1]: Reached target slices.target - Slice Units. Jan 23 23:56:52.862918 systemd[1]: Reached target swap.target - Swaps. Jan 23 23:56:52.862949 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 23 23:56:52.862980 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 23 23:56:52.863015 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 23 23:56:52.863047 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 23 23:56:52.863078 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 23:56:52.863108 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 23 23:56:52.863143 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 23 23:56:52.863174 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 23 23:56:52.863248 systemd[1]: Mounting media.mount - External Media Directory... Jan 23 23:56:52.863294 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 23 23:56:52.863328 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 23 23:56:52.863362 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
Jan 23 23:56:52.863395 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 23 23:56:52.863425 systemd[1]: Reached target machines.target - Containers. Jan 23 23:56:52.863460 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 23 23:56:52.863492 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 23:56:52.863522 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 23 23:56:52.863552 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 23 23:56:52.863586 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 23 23:56:52.863619 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 23 23:56:52.863653 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 23 23:56:52.863684 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 23 23:56:52.863715 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 23 23:56:52.863754 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 23 23:56:52.863785 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 23 23:56:52.863815 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 23 23:56:52.863847 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 23 23:56:52.863878 systemd[1]: Stopped systemd-fsck-usr.service. Jan 23 23:56:52.863908 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 23 23:56:52.863937 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
Jan 23 23:56:52.863966 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 23 23:56:52.864021 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 23 23:56:52.864060 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 23 23:56:52.864092 systemd[1]: verity-setup.service: Deactivated successfully. Jan 23 23:56:52.864123 systemd[1]: Stopped verity-setup.service. Jan 23 23:56:52.864153 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 23 23:56:52.864185 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 23 23:56:52.864257 systemd[1]: Mounted media.mount - External Media Directory. Jan 23 23:56:52.864297 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 23 23:56:52.864364 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 23 23:56:52.864396 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 23 23:56:52.864428 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 23:56:52.864457 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 23 23:56:52.864486 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 23 23:56:52.864516 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 23 23:56:52.864550 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 23 23:56:52.864633 systemd-journald[1577]: Collecting audit messages is disabled. Jan 23 23:56:52.864683 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 23 23:56:52.864719 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Jan 23 23:56:52.864748 systemd-journald[1577]: Journal started Jan 23 23:56:52.864797 systemd-journald[1577]: Runtime Journal (/run/log/journal/ec22a001872b47c8ecfc5066e9c1aa9b) is 8.0M, max 75.3M, 67.3M free. Jan 23 23:56:52.272915 systemd[1]: Queued start job for default target multi-user.target. Jan 23 23:56:52.318647 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jan 23 23:56:52.319459 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 23 23:56:52.871254 systemd[1]: Started systemd-journald.service - Journal Service. Jan 23 23:56:52.875817 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 23 23:56:52.879267 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 23 23:56:52.905239 kernel: fuse: init (API version 7.39) Jan 23 23:56:52.900645 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 23 23:56:52.916293 kernel: loop: module loaded Jan 23 23:56:52.915035 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 23 23:56:52.916316 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 23 23:56:52.919759 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 23 23:56:52.920133 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 23 23:56:52.938029 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 23 23:56:52.950556 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 23 23:56:52.954250 kernel: ACPI: bus type drm_connector registered Jan 23 23:56:52.960469 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 23 23:56:52.965441 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). 
Jan 23 23:56:52.965507 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 23 23:56:52.971103 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 23 23:56:52.983563 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 23 23:56:52.990535 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 23 23:56:52.994550 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 23:56:52.999100 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 23 23:56:53.015610 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 23 23:56:53.018376 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 23 23:56:53.021418 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 23 23:56:53.024481 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 23 23:56:53.030553 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 23 23:56:53.043612 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 23 23:56:53.052430 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 23 23:56:53.055768 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 23 23:56:53.056425 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 23 23:56:53.059344 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 23 23:56:53.066822 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. 
Jan 23 23:56:53.070070 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 23 23:56:53.092733 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 23 23:56:53.122807 systemd-journald[1577]: Time spent on flushing to /var/log/journal/ec22a001872b47c8ecfc5066e9c1aa9b is 138.454ms for 881 entries. Jan 23 23:56:53.122807 systemd-journald[1577]: System Journal (/var/log/journal/ec22a001872b47c8ecfc5066e9c1aa9b) is 8.0M, max 195.6M, 187.6M free. Jan 23 23:56:53.290554 systemd-journald[1577]: Received client request to flush runtime journal. Jan 23 23:56:53.290659 kernel: loop0: detected capacity change from 0 to 114328 Jan 23 23:56:53.290712 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 23 23:56:53.156387 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 23 23:56:53.159323 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 23 23:56:53.180725 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 23 23:56:53.207077 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 23 23:56:53.263919 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 23 23:56:53.275818 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 23 23:56:53.304705 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 23 23:56:53.309437 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 23 23:56:53.317371 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 23 23:56:53.342232 kernel: loop1: detected capacity change from 0 to 211168 Jan 23 23:56:53.357282 systemd-tmpfiles[1635]: ACLs are not supported, ignoring. Jan 23 23:56:53.357320 systemd-tmpfiles[1635]: ACLs are not supported, ignoring. 
Jan 23 23:56:53.378865 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 23:56:53.391342 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 23:56:53.403468 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 23 23:56:53.441970 udevadm[1642]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 23 23:56:53.481378 kernel: loop2: detected capacity change from 0 to 114432 Jan 23 23:56:53.579122 kernel: loop3: detected capacity change from 0 to 52536 Jan 23 23:56:53.622493 kernel: loop4: detected capacity change from 0 to 114328 Jan 23 23:56:53.638650 kernel: loop5: detected capacity change from 0 to 211168 Jan 23 23:56:53.669254 kernel: loop6: detected capacity change from 0 to 114432 Jan 23 23:56:53.686248 kernel: loop7: detected capacity change from 0 to 52536 Jan 23 23:56:53.699907 (sd-merge)[1646]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Jan 23 23:56:53.702000 (sd-merge)[1646]: Merged extensions into '/usr'. Jan 23 23:56:53.711729 systemd[1]: Reloading requested from client PID 1619 ('systemd-sysext') (unit systemd-sysext.service)... Jan 23 23:56:53.711917 systemd[1]: Reloading... Jan 23 23:56:53.854719 zram_generator::config[1672]: No configuration found. Jan 23 23:56:54.238408 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 23 23:56:54.363454 systemd[1]: Reloading finished in 650 ms. Jan 23 23:56:54.414247 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 23 23:56:54.419309 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. 
Jan 23 23:56:54.434504 systemd[1]: Starting ensure-sysext.service... Jan 23 23:56:54.439497 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 23 23:56:54.453740 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 23:56:54.492850 systemd[1]: Reloading requested from client PID 1724 ('systemctl') (unit ensure-sysext.service)... Jan 23 23:56:54.492877 systemd[1]: Reloading... Jan 23 23:56:54.499265 ldconfig[1614]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 23 23:56:54.516783 systemd-tmpfiles[1725]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 23 23:56:54.520186 systemd-tmpfiles[1725]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 23 23:56:54.524576 systemd-tmpfiles[1725]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 23 23:56:54.525150 systemd-tmpfiles[1725]: ACLs are not supported, ignoring. Jan 23 23:56:54.527475 systemd-tmpfiles[1725]: ACLs are not supported, ignoring. Jan 23 23:56:54.541865 systemd-tmpfiles[1725]: Detected autofs mount point /boot during canonicalization of boot. Jan 23 23:56:54.541890 systemd-tmpfiles[1725]: Skipping /boot Jan 23 23:56:54.543375 systemd-udevd[1726]: Using default interface naming scheme 'v255'. Jan 23 23:56:54.576703 systemd-tmpfiles[1725]: Detected autofs mount point /boot during canonicalization of boot. Jan 23 23:56:54.576735 systemd-tmpfiles[1725]: Skipping /boot Jan 23 23:56:54.744233 zram_generator::config[1774]: No configuration found. Jan 23 23:56:54.874406 (udev-worker)[1768]: Network interface NamePolicy= disabled on kernel command line. 
Jan 23 23:56:55.131551 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 23 23:56:55.148304 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 34 scanned by (udev-worker) (1768) Jan 23 23:56:55.292527 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 23 23:56:55.293460 systemd[1]: Reloading finished in 799 ms. Jan 23 23:56:55.327616 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 23:56:55.331339 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 23 23:56:55.336329 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 23:56:55.382567 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 23 23:56:55.400904 systemd[1]: Finished ensure-sysext.service. Jan 23 23:56:55.440083 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jan 23 23:56:55.451521 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 23 23:56:55.460542 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 23 23:56:55.475558 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 23:56:55.483186 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 23 23:56:55.490897 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 23 23:56:55.500535 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 23 23:56:55.511510 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Jan 23 23:56:55.517702 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 23 23:56:55.520326 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 23:56:55.523514 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 23 23:56:55.533441 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 23 23:56:55.544865 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 23 23:56:55.556481 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 23 23:56:55.559396 systemd[1]: Reached target time-set.target - System Time Set. Jan 23 23:56:55.567118 lvm[1932]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 23 23:56:55.567650 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 23 23:56:55.577011 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 23:56:55.582580 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 23 23:56:55.582906 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 23 23:56:55.627598 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 23 23:56:55.627927 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 23 23:56:55.631034 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 23 23:56:55.638848 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 23 23:56:55.642112 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 23 23:56:55.645786 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Jan 23 23:56:55.650903 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 23 23:56:55.651217 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 23 23:56:55.682738 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 23 23:56:55.719007 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 23 23:56:55.724664 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 23 23:56:55.729610 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 23 23:56:55.738479 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 23 23:56:55.749497 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 23 23:56:55.761562 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 23 23:56:55.766030 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 23 23:56:55.786020 augenrules[1963]: No rules Jan 23 23:56:55.794842 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 23 23:56:55.806959 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 23 23:56:55.810269 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 23 23:56:55.822256 lvm[1961]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 23 23:56:55.835018 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 23 23:56:55.862458 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 23 23:56:55.871328 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. 
Jan 23 23:56:55.953089 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 23:56:55.994767 systemd-networkd[1943]: lo: Link UP Jan 23 23:56:55.995282 systemd-networkd[1943]: lo: Gained carrier Jan 23 23:56:55.998390 systemd-networkd[1943]: Enumeration completed Jan 23 23:56:55.998777 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 23 23:56:56.004892 systemd-networkd[1943]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 23:56:56.005121 systemd-networkd[1943]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 23 23:56:56.006421 systemd-resolved[1944]: Positive Trust Anchors: Jan 23 23:56:56.006476 systemd-resolved[1944]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 23 23:56:56.006543 systemd-resolved[1944]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 23 23:56:56.007728 systemd-networkd[1943]: eth0: Link UP Jan 23 23:56:56.008041 systemd-networkd[1943]: eth0: Gained carrier Jan 23 23:56:56.008076 systemd-networkd[1943]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 23:56:56.008501 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
Jan 23 23:56:56.024942 systemd-networkd[1943]: eth0: DHCPv4 address 172.31.22.24/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jan 23 23:56:56.033227 systemd-resolved[1944]: Defaulting to hostname 'linux'. Jan 23 23:56:56.037785 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 23 23:56:56.040582 systemd[1]: Reached target network.target - Network. Jan 23 23:56:56.042921 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 23 23:56:56.045688 systemd[1]: Reached target sysinit.target - System Initialization. Jan 23 23:56:56.048352 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 23 23:56:56.051308 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 23 23:56:56.054530 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 23 23:56:56.057436 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 23 23:56:56.060658 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 23 23:56:56.063552 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 23 23:56:56.063607 systemd[1]: Reached target paths.target - Path Units. Jan 23 23:56:56.065724 systemd[1]: Reached target timers.target - Timer Units. Jan 23 23:56:56.069400 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 23 23:56:56.076858 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 23 23:56:56.088459 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 23 23:56:56.091786 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 23 23:56:56.095417 systemd[1]: Reached target sockets.target - Socket Units. 
Jan 23 23:56:56.097792 systemd[1]: Reached target basic.target - Basic System. Jan 23 23:56:56.100059 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 23 23:56:56.100119 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 23 23:56:56.108471 systemd[1]: Starting containerd.service - containerd container runtime... Jan 23 23:56:56.118515 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 23 23:56:56.127370 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 23 23:56:56.134500 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 23 23:56:56.153812 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 23 23:56:56.157752 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 23 23:56:56.168071 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 23 23:56:56.170566 jq[1990]: false Jan 23 23:56:56.176096 systemd[1]: Started ntpd.service - Network Time Service. Jan 23 23:56:56.199685 systemd[1]: Starting setup-oem.service - Setup OEM... Jan 23 23:56:56.206538 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 23 23:56:56.216165 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Jan 23 23:56:56.236848 extend-filesystems[1991]: Found loop4
Jan 23 23:56:56.236848 extend-filesystems[1991]: Found loop5
Jan 23 23:56:56.236848 extend-filesystems[1991]: Found loop6
Jan 23 23:56:56.236848 extend-filesystems[1991]: Found loop7
Jan 23 23:56:56.236848 extend-filesystems[1991]: Found nvme0n1
Jan 23 23:56:56.236848 extend-filesystems[1991]: Found nvme0n1p1
Jan 23 23:56:56.236848 extend-filesystems[1991]: Found nvme0n1p2
Jan 23 23:56:56.236848 extend-filesystems[1991]: Found nvme0n1p3
Jan 23 23:56:56.236848 extend-filesystems[1991]: Found usr
Jan 23 23:56:56.236848 extend-filesystems[1991]: Found nvme0n1p4
Jan 23 23:56:56.236848 extend-filesystems[1991]: Found nvme0n1p6
Jan 23 23:56:56.236848 extend-filesystems[1991]: Found nvme0n1p7
Jan 23 23:56:56.236848 extend-filesystems[1991]: Found nvme0n1p9
Jan 23 23:56:56.236848 extend-filesystems[1991]: Checking size of /dev/nvme0n1p9
Jan 23 23:56:56.229803 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 23 23:56:56.310575 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 3587067 blocks
Jan 23 23:56:56.310681 extend-filesystems[1991]: Resized partition /dev/nvme0n1p9
Jan 23 23:56:56.233900 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 23 23:56:56.317587 extend-filesystems[2008]: resize2fs 1.47.1 (20-May-2024)
Jan 23 23:56:56.235810 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 23 23:56:56.333596 jq[2004]: true
Jan 23 23:56:56.243537 systemd[1]: Starting update-engine.service - Update Engine...
Jan 23 23:56:56.252363 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 23 23:56:56.273158 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 23 23:56:56.273560 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 23 23:56:56.346926 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 23 23:56:56.347479 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 23 23:56:56.377376 ntpd[1993]: ntpd 4.2.8p17@1.4004-o Fri Jan 23 21:53:23 UTC 2026 (1): Starting
Jan 23 23:56:56.381059 ntpd[1993]: 23 Jan 23:56:56 ntpd[1993]: ntpd 4.2.8p17@1.4004-o Fri Jan 23 21:53:23 UTC 2026 (1): Starting
Jan 23 23:56:56.381059 ntpd[1993]: 23 Jan 23:56:56 ntpd[1993]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Jan 23 23:56:56.381059 ntpd[1993]: 23 Jan 23:56:56 ntpd[1993]: ----------------------------------------------------
Jan 23 23:56:56.381059 ntpd[1993]: 23 Jan 23:56:56 ntpd[1993]: ntp-4 is maintained by Network Time Foundation,
Jan 23 23:56:56.381059 ntpd[1993]: 23 Jan 23:56:56 ntpd[1993]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Jan 23 23:56:56.381059 ntpd[1993]: 23 Jan 23:56:56 ntpd[1993]: corporation. Support and training for ntp-4 are
Jan 23 23:56:56.381059 ntpd[1993]: 23 Jan 23:56:56 ntpd[1993]: available at https://www.nwtime.org/support
Jan 23 23:56:56.381059 ntpd[1993]: 23 Jan 23:56:56 ntpd[1993]: ----------------------------------------------------
Jan 23 23:56:56.377470 ntpd[1993]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Jan 23 23:56:56.377494 ntpd[1993]: ----------------------------------------------------
Jan 23 23:56:56.377514 ntpd[1993]: ntp-4 is maintained by Network Time Foundation,
Jan 23 23:56:56.377535 ntpd[1993]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Jan 23 23:56:56.377553 ntpd[1993]: corporation. Support and training for ntp-4 are
Jan 23 23:56:56.377572 ntpd[1993]: available at https://www.nwtime.org/support
Jan 23 23:56:56.377592 ntpd[1993]: ----------------------------------------------------
Jan 23 23:56:56.397468 ntpd[1993]: 23 Jan 23:56:56 ntpd[1993]: proto: precision = 0.096 usec (-23)
Jan 23 23:56:56.397468 ntpd[1993]: 23 Jan 23:56:56 ntpd[1993]: basedate set to 2026-01-11
Jan 23 23:56:56.397468 ntpd[1993]: 23 Jan 23:56:56 ntpd[1993]: gps base set to 2026-01-11 (week 2401)
Jan 23 23:56:56.384339 ntpd[1993]: proto: precision = 0.096 usec (-23)
Jan 23 23:56:56.388722 ntpd[1993]: basedate set to 2026-01-11
Jan 23 23:56:56.388755 ntpd[1993]: gps base set to 2026-01-11 (week 2401)
Jan 23 23:56:56.413008 ntpd[1993]: 23 Jan 23:56:56 ntpd[1993]: Listen and drop on 0 v6wildcard [::]:123
Jan 23 23:56:56.413008 ntpd[1993]: 23 Jan 23:56:56 ntpd[1993]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Jan 23 23:56:56.413008 ntpd[1993]: 23 Jan 23:56:56 ntpd[1993]: Listen normally on 2 lo 127.0.0.1:123
Jan 23 23:56:56.413008 ntpd[1993]: 23 Jan 23:56:56 ntpd[1993]: Listen normally on 3 eth0 172.31.22.24:123
Jan 23 23:56:56.413008 ntpd[1993]: 23 Jan 23:56:56 ntpd[1993]: Listen normally on 4 lo [::1]:123
Jan 23 23:56:56.413008 ntpd[1993]: 23 Jan 23:56:56 ntpd[1993]: bind(21) AF_INET6 fe80::434:ceff:fe4b:4c27%2#123 flags 0x11 failed: Cannot assign requested address
Jan 23 23:56:56.413008 ntpd[1993]: 23 Jan 23:56:56 ntpd[1993]: unable to create socket on eth0 (5) for fe80::434:ceff:fe4b:4c27%2#123
Jan 23 23:56:56.413008 ntpd[1993]: 23 Jan 23:56:56 ntpd[1993]: failed to init interface for address fe80::434:ceff:fe4b:4c27%2
Jan 23 23:56:56.413008 ntpd[1993]: 23 Jan 23:56:56 ntpd[1993]: Listening on routing socket on fd #21 for interface updates
Jan 23 23:56:56.405053 ntpd[1993]: Listen and drop on 0 v6wildcard [::]:123
Jan 23 23:56:56.405136 ntpd[1993]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Jan 23 23:56:56.412491 ntpd[1993]: Listen normally on 2 lo 127.0.0.1:123
Jan 23 23:56:56.412568 ntpd[1993]: Listen normally on 3 eth0 172.31.22.24:123
Jan 23 23:56:56.412641 ntpd[1993]: Listen normally on 4 lo [::1]:123
Jan 23 23:56:56.412720 ntpd[1993]: bind(21) AF_INET6 fe80::434:ceff:fe4b:4c27%2#123 flags 0x11 failed: Cannot assign requested address
Jan 23 23:56:56.412761 ntpd[1993]: unable to create socket on eth0 (5) for fe80::434:ceff:fe4b:4c27%2#123
Jan 23 23:56:56.412789 ntpd[1993]: failed to init interface for address fe80::434:ceff:fe4b:4c27%2
Jan 23 23:56:56.412851 ntpd[1993]: Listening on routing socket on fd #21 for interface updates
Jan 23 23:56:56.430923 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 3587067
Jan 23 23:56:56.445117 jq[2014]: true
Jan 23 23:56:56.454791 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 23 23:56:56.466145 extend-filesystems[2008]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Jan 23 23:56:56.466145 extend-filesystems[2008]: old_desc_blocks = 1, new_desc_blocks = 2
Jan 23 23:56:56.466145 extend-filesystems[2008]: The filesystem on /dev/nvme0n1p9 is now 3587067 (4k) blocks long.
Jan 23 23:56:56.458364 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 23 23:56:56.489547 update_engine[2002]: I20260123 23:56:56.487349 2002 main.cc:92] Flatcar Update Engine starting
Jan 23 23:56:56.489929 extend-filesystems[1991]: Resized filesystem in /dev/nvme0n1p9
Jan 23 23:56:56.492149 ntpd[1993]: 23 Jan 23:56:56 ntpd[1993]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Jan 23 23:56:56.492149 ntpd[1993]: 23 Jan 23:56:56 ntpd[1993]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Jan 23 23:56:56.476730 ntpd[1993]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Jan 23 23:56:56.487397 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 23 23:56:56.476776 ntpd[1993]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Jan 23 23:56:56.486770 dbus-daemon[1989]: [system] SELinux support is enabled
Jan 23 23:56:56.502821 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 23 23:56:56.502937 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 23 23:56:56.510504 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 23 23:56:56.510565 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 23 23:56:56.515272 systemd[1]: motdgen.service: Deactivated successfully.
Jan 23 23:56:56.518297 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 23 23:56:56.530670 dbus-daemon[1989]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1943 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Jan 23 23:56:56.537749 dbus-daemon[1989]: [system] Successfully activated service 'org.freedesktop.systemd1'
Jan 23 23:56:56.543726 (ntainerd)[2025]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 23 23:56:56.550517 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Jan 23 23:56:56.562852 systemd[1]: Started update-engine.service - Update Engine.
Jan 23 23:56:56.567268 update_engine[2002]: I20260123 23:56:56.566982 2002 update_check_scheduler.cc:74] Next update check in 10m55s
Jan 23 23:56:56.568923 systemd[1]: Finished setup-oem.service - Setup OEM.
Jan 23 23:56:56.582555 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 23 23:56:56.637217 coreos-metadata[1988]: Jan 23 23:56:56.637 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Jan 23 23:56:56.641829 coreos-metadata[1988]: Jan 23 23:56:56.641 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
Jan 23 23:56:56.646393 coreos-metadata[1988]: Jan 23 23:56:56.646 INFO Fetch successful
Jan 23 23:56:56.646393 coreos-metadata[1988]: Jan 23 23:56:56.646 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
Jan 23 23:56:56.648060 coreos-metadata[1988]: Jan 23 23:56:56.647 INFO Fetch successful
Jan 23 23:56:56.648060 coreos-metadata[1988]: Jan 23 23:56:56.648 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
Jan 23 23:56:56.648859 coreos-metadata[1988]: Jan 23 23:56:56.648 INFO Fetch successful
Jan 23 23:56:56.648859 coreos-metadata[1988]: Jan 23 23:56:56.648 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
Jan 23 23:56:56.659491 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 34 scanned by (udev-worker) (1768)
Jan 23 23:56:56.659573 coreos-metadata[1988]: Jan 23 23:56:56.653 INFO Fetch successful
Jan 23 23:56:56.659573 coreos-metadata[1988]: Jan 23 23:56:56.653 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
Jan 23 23:56:56.663266 coreos-metadata[1988]: Jan 23 23:56:56.660 INFO Fetch failed with 404: resource not found
Jan 23 23:56:56.663266 coreos-metadata[1988]: Jan 23 23:56:56.660 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
Jan 23 23:56:56.664084 coreos-metadata[1988]: Jan 23 23:56:56.664 INFO Fetch successful
Jan 23 23:56:56.664218 coreos-metadata[1988]: Jan 23 23:56:56.664 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
Jan 23 23:56:56.665568 coreos-metadata[1988]: Jan 23 23:56:56.665 INFO Fetch successful
Jan 23 23:56:56.665568 coreos-metadata[1988]: Jan 23 23:56:56.665 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
Jan 23 23:56:56.666901 coreos-metadata[1988]: Jan 23 23:56:56.666 INFO Fetch successful
Jan 23 23:56:56.666901 coreos-metadata[1988]: Jan 23 23:56:56.666 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
Jan 23 23:56:56.672499 coreos-metadata[1988]: Jan 23 23:56:56.670 INFO Fetch successful
Jan 23 23:56:56.672499 coreos-metadata[1988]: Jan 23 23:56:56.670 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
Jan 23 23:56:56.673233 coreos-metadata[1988]: Jan 23 23:56:56.673 INFO Fetch successful
Jan 23 23:56:56.713241 bash[2073]: Updated "/home/core/.ssh/authorized_keys"
Jan 23 23:56:56.739996 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 23 23:56:56.777946 systemd[1]: Starting sshkeys.service...
Jan 23 23:56:56.814518 systemd-logind[2000]: Watching system buttons on /dev/input/event0 (Power Button)
Jan 23 23:56:56.814569 systemd-logind[2000]: Watching system buttons on /dev/input/event1 (Sleep Button)
Jan 23 23:56:56.814917 systemd-logind[2000]: New seat seat0.
Jan 23 23:56:56.821459 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 23 23:56:56.849707 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jan 23 23:56:56.859064 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jan 23 23:56:56.870175 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Jan 23 23:56:56.907309 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Jan 23 23:56:57.050052 dbus-daemon[1989]: [system] Successfully activated service 'org.freedesktop.hostname1'
Jan 23 23:56:57.050308 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Jan 23 23:56:57.065733 dbus-daemon[1989]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=2045 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Jan 23 23:56:57.084693 systemd[1]: Starting polkit.service - Authorization Manager...
Jan 23 23:56:57.143306 polkitd[2142]: Started polkitd version 121
Jan 23 23:56:57.168552 polkitd[2142]: Loading rules from directory /etc/polkit-1/rules.d
Jan 23 23:56:57.168675 polkitd[2142]: Loading rules from directory /usr/share/polkit-1/rules.d
Jan 23 23:56:57.174145 polkitd[2142]: Finished loading, compiling and executing 2 rules
Jan 23 23:56:57.175059 dbus-daemon[1989]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Jan 23 23:56:57.175368 systemd[1]: Started polkit.service - Authorization Manager.
Jan 23 23:56:57.180139 polkitd[2142]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Jan 23 23:56:57.217304 coreos-metadata[2102]: Jan 23 23:56:57.217 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Jan 23 23:56:57.221470 coreos-metadata[2102]: Jan 23 23:56:57.221 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1
Jan 23 23:56:57.241226 coreos-metadata[2102]: Jan 23 23:56:57.234 INFO Fetch successful
Jan 23 23:56:57.241226 coreos-metadata[2102]: Jan 23 23:56:57.234 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1
Jan 23 23:56:57.241226 coreos-metadata[2102]: Jan 23 23:56:57.235 INFO Fetch successful
Jan 23 23:56:57.248666 unknown[2102]: wrote ssh authorized keys file for user: core
Jan 23 23:56:57.287117 systemd-hostnamed[2045]: Hostname set to (transient)
Jan 23 23:56:57.287318 systemd-resolved[1944]: System hostname changed to 'ip-172-31-22-24'.
Jan 23 23:56:57.302555 locksmithd[2048]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 23 23:56:57.342623 update-ssh-keys[2176]: Updated "/home/core/.ssh/authorized_keys"
Jan 23 23:56:57.344385 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Jan 23 23:56:57.357275 systemd[1]: Finished sshkeys.service.
Jan 23 23:56:57.378576 ntpd[1993]: bind(24) AF_INET6 fe80::434:ceff:fe4b:4c27%2#123 flags 0x11 failed: Cannot assign requested address
Jan 23 23:56:57.378646 ntpd[1993]: unable to create socket on eth0 (6) for fe80::434:ceff:fe4b:4c27%2#123
Jan 23 23:56:57.380342 ntpd[1993]: 23 Jan 23:56:57 ntpd[1993]: bind(24) AF_INET6 fe80::434:ceff:fe4b:4c27%2#123 flags 0x11 failed: Cannot assign requested address
Jan 23 23:56:57.380342 ntpd[1993]: 23 Jan 23:56:57 ntpd[1993]: unable to create socket on eth0 (6) for fe80::434:ceff:fe4b:4c27%2#123
Jan 23 23:56:57.380342 ntpd[1993]: 23 Jan 23:56:57 ntpd[1993]: failed to init interface for address fe80::434:ceff:fe4b:4c27%2
Jan 23 23:56:57.380481 containerd[2025]: time="2026-01-23T23:56:57.379873341Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Jan 23 23:56:57.378677 ntpd[1993]: failed to init interface for address fe80::434:ceff:fe4b:4c27%2
Jan 23 23:56:57.436361 containerd[2025]: time="2026-01-23T23:56:57.436243294Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jan 23 23:56:57.440229 containerd[2025]: time="2026-01-23T23:56:57.438913426Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.119-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jan 23 23:56:57.440229 containerd[2025]: time="2026-01-23T23:56:57.438983038Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jan 23 23:56:57.440229 containerd[2025]: time="2026-01-23T23:56:57.439030330Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jan 23 23:56:57.440229 containerd[2025]: time="2026-01-23T23:56:57.439368142Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jan 23 23:56:57.440229 containerd[2025]: time="2026-01-23T23:56:57.439401622Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jan 23 23:56:57.440229 containerd[2025]: time="2026-01-23T23:56:57.439532314Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jan 23 23:56:57.440229 containerd[2025]: time="2026-01-23T23:56:57.439561738Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jan 23 23:56:57.440229 containerd[2025]: time="2026-01-23T23:56:57.439857190Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 23 23:56:57.440229 containerd[2025]: time="2026-01-23T23:56:57.439891306Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jan 23 23:56:57.440229 containerd[2025]: time="2026-01-23T23:56:57.439920838Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jan 23 23:56:57.440229 containerd[2025]: time="2026-01-23T23:56:57.439966198Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jan 23 23:56:57.440770 containerd[2025]: time="2026-01-23T23:56:57.440125954Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jan 23 23:56:57.440770 containerd[2025]: time="2026-01-23T23:56:57.440564494Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jan 23 23:56:57.440913 containerd[2025]: time="2026-01-23T23:56:57.440781202Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 23 23:56:57.440913 containerd[2025]: time="2026-01-23T23:56:57.440814886Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jan 23 23:56:57.441135 containerd[2025]: time="2026-01-23T23:56:57.440995738Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jan 23 23:56:57.441222 containerd[2025]: time="2026-01-23T23:56:57.441179098Z" level=info msg="metadata content store policy set" policy=shared
Jan 23 23:56:57.452153 containerd[2025]: time="2026-01-23T23:56:57.451247722Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jan 23 23:56:57.452153 containerd[2025]: time="2026-01-23T23:56:57.451342054Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jan 23 23:56:57.452153 containerd[2025]: time="2026-01-23T23:56:57.451380274Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jan 23 23:56:57.452153 containerd[2025]: time="2026-01-23T23:56:57.451414606Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jan 23 23:56:57.452153 containerd[2025]: time="2026-01-23T23:56:57.451447786Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jan 23 23:56:57.452153 containerd[2025]: time="2026-01-23T23:56:57.451697386Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jan 23 23:56:57.452153 containerd[2025]: time="2026-01-23T23:56:57.452119954Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jan 23 23:56:57.452553 containerd[2025]: time="2026-01-23T23:56:57.452344714Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jan 23 23:56:57.452553 containerd[2025]: time="2026-01-23T23:56:57.452378914Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jan 23 23:56:57.452553 containerd[2025]: time="2026-01-23T23:56:57.452413006Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jan 23 23:56:57.452553 containerd[2025]: time="2026-01-23T23:56:57.452446066Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jan 23 23:56:57.452553 containerd[2025]: time="2026-01-23T23:56:57.452484094Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jan 23 23:56:57.452553 containerd[2025]: time="2026-01-23T23:56:57.452515618Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jan 23 23:56:57.452553 containerd[2025]: time="2026-01-23T23:56:57.452546698Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jan 23 23:56:57.452838 containerd[2025]: time="2026-01-23T23:56:57.452578378Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jan 23 23:56:57.452838 containerd[2025]: time="2026-01-23T23:56:57.452612350Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jan 23 23:56:57.452838 containerd[2025]: time="2026-01-23T23:56:57.452641702Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jan 23 23:56:57.452838 containerd[2025]: time="2026-01-23T23:56:57.452668438Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jan 23 23:56:57.452838 containerd[2025]: time="2026-01-23T23:56:57.452706886Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jan 23 23:56:57.452838 containerd[2025]: time="2026-01-23T23:56:57.452739058Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jan 23 23:56:57.452838 containerd[2025]: time="2026-01-23T23:56:57.452774494Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jan 23 23:56:57.452838 containerd[2025]: time="2026-01-23T23:56:57.452804818Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jan 23 23:56:57.452838 containerd[2025]: time="2026-01-23T23:56:57.452834110Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jan 23 23:56:57.453226 containerd[2025]: time="2026-01-23T23:56:57.452865478Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jan 23 23:56:57.453226 containerd[2025]: time="2026-01-23T23:56:57.452913226Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jan 23 23:56:57.453226 containerd[2025]: time="2026-01-23T23:56:57.452947618Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jan 23 23:56:57.453226 containerd[2025]: time="2026-01-23T23:56:57.452977954Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jan 23 23:56:57.453226 containerd[2025]: time="2026-01-23T23:56:57.453011110Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jan 23 23:56:57.453226 containerd[2025]: time="2026-01-23T23:56:57.453041926Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jan 23 23:56:57.453226 containerd[2025]: time="2026-01-23T23:56:57.453070834Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jan 23 23:56:57.453226 containerd[2025]: time="2026-01-23T23:56:57.453099082Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jan 23 23:56:57.453226 containerd[2025]: time="2026-01-23T23:56:57.453133186Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jan 23 23:56:57.453226 containerd[2025]: time="2026-01-23T23:56:57.453174802Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jan 23 23:56:57.453713 containerd[2025]: time="2026-01-23T23:56:57.453306694Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jan 23 23:56:57.453713 containerd[2025]: time="2026-01-23T23:56:57.453338446Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jan 23 23:56:57.453713 containerd[2025]: time="2026-01-23T23:56:57.453596434Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jan 23 23:56:57.456149 containerd[2025]: time="2026-01-23T23:56:57.453911446Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jan 23 23:56:57.456149 containerd[2025]: time="2026-01-23T23:56:57.453954190Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jan 23 23:56:57.456149 containerd[2025]: time="2026-01-23T23:56:57.453987514Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jan 23 23:56:57.456149 containerd[2025]: time="2026-01-23T23:56:57.454012006Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jan 23 23:56:57.456149 containerd[2025]: time="2026-01-23T23:56:57.454044826Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jan 23 23:56:57.456149 containerd[2025]: time="2026-01-23T23:56:57.454068070Z" level=info msg="NRI interface is disabled by configuration."
Jan 23 23:56:57.456149 containerd[2025]: time="2026-01-23T23:56:57.454095706Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jan 23 23:56:57.456593 containerd[2025]: time="2026-01-23T23:56:57.454738918Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jan 23 23:56:57.456593 containerd[2025]: time="2026-01-23T23:56:57.454853674Z" level=info msg="Connect containerd service"
Jan 23 23:56:57.456593 containerd[2025]: time="2026-01-23T23:56:57.454900270Z" level=info msg="using legacy CRI server"
Jan 23 23:56:57.456593 containerd[2025]: time="2026-01-23T23:56:57.454918030Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jan 23 23:56:57.456593 containerd[2025]: time="2026-01-23T23:56:57.455079970Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jan 23 23:56:57.456593 containerd[2025]: time="2026-01-23T23:56:57.456404158Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 23 23:56:57.457023 containerd[2025]: time="2026-01-23T23:56:57.456708646Z" level=info msg="Start subscribing containerd event"
Jan 23 23:56:57.457023 containerd[2025]: time="2026-01-23T23:56:57.456784594Z" level=info msg="Start recovering state"
Jan 23 23:56:57.457023 containerd[2025]: time="2026-01-23T23:56:57.456897850Z" level=info msg="Start event monitor"
Jan 23 23:56:57.457023 containerd[2025]: time="2026-01-23T23:56:57.456921730Z" level=info msg="Start snapshots syncer"
Jan 23 23:56:57.457023 containerd[2025]: time="2026-01-23T23:56:57.456941818Z" level=info msg="Start cni network conf syncer for default"
Jan 23 23:56:57.457023 containerd[2025]: time="2026-01-23T23:56:57.456960022Z" level=info msg="Start streaming server"
Jan 23 23:56:57.460746 containerd[2025]: time="2026-01-23T23:56:57.458235970Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jan 23 23:56:57.460746 containerd[2025]: time="2026-01-23T23:56:57.458344978Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jan 23 23:56:57.460746 containerd[2025]: time="2026-01-23T23:56:57.458462086Z" level=info msg="containerd successfully booted in 0.080971s"
Jan 23 23:56:57.458580 systemd[1]: Started containerd.service - containerd container runtime.
Jan 23 23:56:57.751370 systemd-networkd[1943]: eth0: Gained IPv6LL
Jan 23 23:56:57.757833 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 23 23:56:57.763124 systemd[1]: Reached target network-online.target - Network is Online.
Jan 23 23:56:57.779976 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent.
Jan 23 23:56:57.786814 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 23 23:56:57.802789 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 23 23:56:57.856189 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jan 23 23:56:57.919413 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 23 23:56:57.926388 amazon-ssm-agent[2190]: Initializing new seelog logger
Jan 23 23:56:57.926927 amazon-ssm-agent[2190]: New Seelog Logger Creation Complete
Jan 23 23:56:57.926927 amazon-ssm-agent[2190]: 2026/01/23 23:56:57 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Jan 23 23:56:57.926927 amazon-ssm-agent[2190]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Jan 23 23:56:57.929364 amazon-ssm-agent[2190]: 2026/01/23 23:56:57 processing appconfig overrides Jan 23 23:56:57.929364 amazon-ssm-agent[2190]: 2026/01/23 23:56:57 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 23:56:57.929364 amazon-ssm-agent[2190]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 23:56:57.929364 amazon-ssm-agent[2190]: 2026/01/23 23:56:57 processing appconfig overrides Jan 23 23:56:57.929364 amazon-ssm-agent[2190]: 2026/01/23 23:56:57 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 23:56:57.929364 amazon-ssm-agent[2190]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 23:56:57.929364 amazon-ssm-agent[2190]: 2026/01/23 23:56:57 processing appconfig overrides Jan 23 23:56:57.929706 amazon-ssm-agent[2190]: 2026-01-23 23:56:57 INFO Proxy environment variables: Jan 23 23:56:57.935628 amazon-ssm-agent[2190]: 2026/01/23 23:56:57 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 23:56:57.935628 amazon-ssm-agent[2190]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 23:56:57.935903 amazon-ssm-agent[2190]: 2026/01/23 23:56:57 processing appconfig overrides Jan 23 23:56:58.029070 amazon-ssm-agent[2190]: 2026-01-23 23:56:57 INFO https_proxy: Jan 23 23:56:58.109715 sshd_keygen[2046]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 23 23:56:58.130411 amazon-ssm-agent[2190]: 2026-01-23 23:56:57 INFO http_proxy: Jan 23 23:56:58.183324 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 23 23:56:58.199674 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 23 23:56:58.216808 systemd[1]: Started sshd@0-172.31.22.24:22-4.153.228.146:53454.service - OpenSSH per-connection server daemon (4.153.228.146:53454). Jan 23 23:56:58.229244 amazon-ssm-agent[2190]: 2026-01-23 23:56:57 INFO no_proxy: Jan 23 23:56:58.240777 systemd[1]: issuegen.service: Deactivated successfully. 
Jan 23 23:56:58.241782 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 23 23:56:58.257765 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 23 23:56:58.304325 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 23 23:56:58.318745 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 23 23:56:58.330942 amazon-ssm-agent[2190]: 2026-01-23 23:56:57 INFO Checking if agent identity type OnPrem can be assumed Jan 23 23:56:58.328811 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 23 23:56:58.331769 systemd[1]: Reached target getty.target - Login Prompts. Jan 23 23:56:58.425876 amazon-ssm-agent[2190]: 2026-01-23 23:56:57 INFO Checking if agent identity type EC2 can be assumed Jan 23 23:56:58.525378 amazon-ssm-agent[2190]: 2026-01-23 23:56:58 INFO Agent will take identity from EC2 Jan 23 23:56:58.626155 amazon-ssm-agent[2190]: 2026-01-23 23:56:58 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 23 23:56:58.653068 amazon-ssm-agent[2190]: 2026-01-23 23:56:58 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 23 23:56:58.653068 amazon-ssm-agent[2190]: 2026-01-23 23:56:58 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 23 23:56:58.654172 amazon-ssm-agent[2190]: 2026-01-23 23:56:58 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Jan 23 23:56:58.654172 amazon-ssm-agent[2190]: 2026-01-23 23:56:58 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Jan 23 23:56:58.654172 amazon-ssm-agent[2190]: 2026-01-23 23:56:58 INFO [amazon-ssm-agent] Starting Core Agent Jan 23 23:56:58.654172 amazon-ssm-agent[2190]: 2026-01-23 23:56:58 INFO [amazon-ssm-agent] registrar detected. 
Attempting registration Jan 23 23:56:58.654172 amazon-ssm-agent[2190]: 2026-01-23 23:56:58 INFO [Registrar] Starting registrar module Jan 23 23:56:58.654172 amazon-ssm-agent[2190]: 2026-01-23 23:56:58 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Jan 23 23:56:58.654172 amazon-ssm-agent[2190]: 2026-01-23 23:56:58 INFO [EC2Identity] EC2 registration was successful. Jan 23 23:56:58.654172 amazon-ssm-agent[2190]: 2026-01-23 23:56:58 INFO [CredentialRefresher] credentialRefresher has started Jan 23 23:56:58.654172 amazon-ssm-agent[2190]: 2026-01-23 23:56:58 INFO [CredentialRefresher] Starting credentials refresher loop Jan 23 23:56:58.654172 amazon-ssm-agent[2190]: 2026-01-23 23:56:58 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jan 23 23:56:58.724700 amazon-ssm-agent[2190]: 2026-01-23 23:56:58 INFO [CredentialRefresher] Next credential rotation will be in 30.666655543166666 minutes Jan 23 23:56:58.765327 sshd[2216]: Accepted publickey for core from 4.153.228.146 port 53454 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:56:58.767750 sshd[2216]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:56:58.786647 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 23 23:56:58.804640 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 23 23:56:58.814399 systemd-logind[2000]: New session 1 of user core. Jan 23 23:56:58.837641 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 23 23:56:58.849702 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 23 23:56:58.868042 (systemd)[2229]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 23 23:56:59.115856 systemd[2229]: Queued start job for default target default.target. 
Jan 23 23:56:59.123481 systemd[2229]: Created slice app.slice - User Application Slice. Jan 23 23:56:59.123716 systemd[2229]: Reached target paths.target - Paths. Jan 23 23:56:59.123753 systemd[2229]: Reached target timers.target - Timers. Jan 23 23:56:59.128454 systemd[2229]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 23 23:56:59.155531 systemd[2229]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 23 23:56:59.155814 systemd[2229]: Reached target sockets.target - Sockets. Jan 23 23:56:59.155861 systemd[2229]: Reached target basic.target - Basic System. Jan 23 23:56:59.155969 systemd[2229]: Reached target default.target - Main User Target. Jan 23 23:56:59.156036 systemd[2229]: Startup finished in 274ms. Jan 23 23:56:59.156187 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 23 23:56:59.166505 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 23 23:56:59.552738 systemd[1]: Started sshd@1-172.31.22.24:22-4.153.228.146:53462.service - OpenSSH per-connection server daemon (4.153.228.146:53462). Jan 23 23:56:59.687519 amazon-ssm-agent[2190]: 2026-01-23 23:56:59 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jan 23 23:56:59.788184 amazon-ssm-agent[2190]: 2026-01-23 23:56:59 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2243) started Jan 23 23:56:59.889004 amazon-ssm-agent[2190]: 2026-01-23 23:56:59 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jan 23 23:57:00.101725 sshd[2240]: Accepted publickey for core from 4.153.228.146 port 53462 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:57:00.104541 sshd[2240]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:57:00.119619 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 23 23:57:00.120471 (kubelet)[2257]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 23:57:00.124455 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 23 23:57:00.130734 systemd[1]: Startup finished in 1.173s (kernel) + 8.228s (initrd) + 9.136s (userspace) = 18.538s. Jan 23 23:57:00.139330 systemd-logind[2000]: New session 2 of user core. Jan 23 23:57:00.142885 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 23 23:57:00.378553 ntpd[1993]: Listen normally on 7 eth0 [fe80::434:ceff:fe4b:4c27%2]:123 Jan 23 23:57:00.379010 ntpd[1993]: 23 Jan 23:57:00 ntpd[1993]: Listen normally on 7 eth0 [fe80::434:ceff:fe4b:4c27%2]:123 Jan 23 23:57:00.490034 sshd[2240]: pam_unix(sshd:session): session closed for user core Jan 23 23:57:00.498170 systemd[1]: sshd@1-172.31.22.24:22-4.153.228.146:53462.service: Deactivated successfully. Jan 23 23:57:00.502345 systemd[1]: session-2.scope: Deactivated successfully. Jan 23 23:57:00.504649 systemd-logind[2000]: Session 2 logged out. Waiting for processes to exit. Jan 23 23:57:00.506809 systemd-logind[2000]: Removed session 2. Jan 23 23:57:00.584733 systemd[1]: Started sshd@2-172.31.22.24:22-4.153.228.146:53468.service - OpenSSH per-connection server daemon (4.153.228.146:53468). Jan 23 23:57:01.089570 sshd[2271]: Accepted publickey for core from 4.153.228.146 port 53468 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:57:01.092956 sshd[2271]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:57:01.103319 systemd-logind[2000]: New session 3 of user core. Jan 23 23:57:01.109488 systemd[1]: Started session-3.scope - Session 3 of User core. 
Jan 23 23:57:01.289551 kubelet[2257]: E0123 23:57:01.289472 2257 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 23:57:01.294426 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 23:57:01.294775 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 23:57:01.297274 systemd[1]: kubelet.service: Consumed 1.392s CPU time. Jan 23 23:57:01.441527 sshd[2271]: pam_unix(sshd:session): session closed for user core Jan 23 23:57:01.448429 systemd[1]: sshd@2-172.31.22.24:22-4.153.228.146:53468.service: Deactivated successfully. Jan 23 23:57:01.452009 systemd[1]: session-3.scope: Deactivated successfully. Jan 23 23:57:01.454168 systemd-logind[2000]: Session 3 logged out. Waiting for processes to exit. Jan 23 23:57:01.456732 systemd-logind[2000]: Removed session 3. Jan 23 23:57:01.549745 systemd[1]: Started sshd@3-172.31.22.24:22-4.153.228.146:53484.service - OpenSSH per-connection server daemon (4.153.228.146:53484). Jan 23 23:57:02.086055 sshd[2280]: Accepted publickey for core from 4.153.228.146 port 53484 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:57:02.088725 sshd[2280]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:57:02.097165 systemd-logind[2000]: New session 4 of user core. Jan 23 23:57:02.103485 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 23 23:57:02.465888 sshd[2280]: pam_unix(sshd:session): session closed for user core Jan 23 23:57:02.472146 systemd[1]: sshd@3-172.31.22.24:22-4.153.228.146:53484.service: Deactivated successfully. Jan 23 23:57:02.475078 systemd[1]: session-4.scope: Deactivated successfully. 
Jan 23 23:57:02.479622 systemd-logind[2000]: Session 4 logged out. Waiting for processes to exit. Jan 23 23:57:02.481720 systemd-logind[2000]: Removed session 4. Jan 23 23:57:02.571696 systemd[1]: Started sshd@4-172.31.22.24:22-4.153.228.146:53492.service - OpenSSH per-connection server daemon (4.153.228.146:53492). Jan 23 23:57:03.106043 sshd[2288]: Accepted publickey for core from 4.153.228.146 port 53492 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:57:03.108847 sshd[2288]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:57:03.117916 systemd-logind[2000]: New session 5 of user core. Jan 23 23:57:03.123507 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 23 23:57:03.044996 systemd-resolved[1944]: Clock change detected. Flushing caches. Jan 23 23:57:03.055652 systemd-journald[1577]: Time jumped backwards, rotating. Jan 23 23:57:03.088592 sudo[2292]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 23 23:57:03.089235 sudo[2292]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 23:57:03.105017 sudo[2292]: pam_unix(sudo:session): session closed for user root Jan 23 23:57:03.190137 sshd[2288]: pam_unix(sshd:session): session closed for user core Jan 23 23:57:03.197650 systemd[1]: sshd@4-172.31.22.24:22-4.153.228.146:53492.service: Deactivated successfully. Jan 23 23:57:03.201534 systemd[1]: session-5.scope: Deactivated successfully. Jan 23 23:57:03.203140 systemd-logind[2000]: Session 5 logged out. Waiting for processes to exit. Jan 23 23:57:03.205034 systemd-logind[2000]: Removed session 5. Jan 23 23:57:03.278931 systemd[1]: Started sshd@5-172.31.22.24:22-4.153.228.146:53498.service - OpenSSH per-connection server daemon (4.153.228.146:53498). 
Jan 23 23:57:03.769010 sshd[2297]: Accepted publickey for core from 4.153.228.146 port 53498 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:57:03.771763 sshd[2297]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:57:03.780389 systemd-logind[2000]: New session 6 of user core. Jan 23 23:57:03.786715 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 23 23:57:04.047163 sudo[2301]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 23 23:57:04.048367 sudo[2301]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 23:57:04.054480 sudo[2301]: pam_unix(sudo:session): session closed for user root Jan 23 23:57:04.064938 sudo[2300]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 23 23:57:04.066122 sudo[2300]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 23:57:04.087556 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 23 23:57:04.099545 auditctl[2304]: No rules Jan 23 23:57:04.101906 systemd[1]: audit-rules.service: Deactivated successfully. Jan 23 23:57:04.103512 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 23 23:57:04.111649 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 23 23:57:04.163458 augenrules[2322]: No rules Jan 23 23:57:04.165924 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 23 23:57:04.168324 sudo[2300]: pam_unix(sudo:session): session closed for user root Jan 23 23:57:04.245255 sshd[2297]: pam_unix(sshd:session): session closed for user core Jan 23 23:57:04.252401 systemd-logind[2000]: Session 6 logged out. Waiting for processes to exit. Jan 23 23:57:04.254605 systemd[1]: sshd@5-172.31.22.24:22-4.153.228.146:53498.service: Deactivated successfully. 
Jan 23 23:57:04.257612 systemd[1]: session-6.scope: Deactivated successfully. Jan 23 23:57:04.259143 systemd-logind[2000]: Removed session 6. Jan 23 23:57:04.342904 systemd[1]: Started sshd@6-172.31.22.24:22-4.153.228.146:53506.service - OpenSSH per-connection server daemon (4.153.228.146:53506). Jan 23 23:57:04.830757 sshd[2330]: Accepted publickey for core from 4.153.228.146 port 53506 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:57:04.833477 sshd[2330]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:57:04.840283 systemd-logind[2000]: New session 7 of user core. Jan 23 23:57:04.849668 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 23 23:57:05.108389 sudo[2333]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 23 23:57:05.109766 sudo[2333]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 23:57:06.301185 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 23:57:06.302180 systemd[1]: kubelet.service: Consumed 1.392s CPU time. Jan 23 23:57:06.309984 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 23:57:06.362527 systemd[1]: Reloading requested from client PID 2369 ('systemctl') (unit session-7.scope)... Jan 23 23:57:06.362553 systemd[1]: Reloading... Jan 23 23:57:06.602474 zram_generator::config[2410]: No configuration found. Jan 23 23:57:06.847087 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 23 23:57:07.022768 systemd[1]: Reloading finished in 659 ms. Jan 23 23:57:07.109453 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 23 23:57:07.109766 systemd[1]: kubelet.service: Failed with result 'signal'. 
Jan 23 23:57:07.110266 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 23:57:07.122047 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 23:57:07.432276 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 23:57:07.452864 (kubelet)[2471]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 23 23:57:07.521046 kubelet[2471]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 23:57:07.521046 kubelet[2471]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 23 23:57:07.521046 kubelet[2471]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 23 23:57:07.521609 kubelet[2471]: I0123 23:57:07.521117 2471 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 23:57:09.037361 kubelet[2471]: I0123 23:57:09.037312 2471 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 23 23:57:09.039412 kubelet[2471]: I0123 23:57:09.037949 2471 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 23:57:09.039412 kubelet[2471]: I0123 23:57:09.038359 2471 server.go:956] "Client rotation is on, will bootstrap in background" Jan 23 23:57:09.080250 kubelet[2471]: I0123 23:57:09.080191 2471 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 23 23:57:09.097800 kubelet[2471]: E0123 23:57:09.097741 2471 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 23 23:57:09.097964 kubelet[2471]: I0123 23:57:09.097942 2471 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 23 23:57:09.102381 kubelet[2471]: I0123 23:57:09.102341 2471 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 23 23:57:09.103098 kubelet[2471]: I0123 23:57:09.103057 2471 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 23 23:57:09.103439 kubelet[2471]: I0123 23:57:09.103181 2471 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172.31.22.24","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 23 23:57:09.103831 kubelet[2471]: I0123 23:57:09.103807 2471 topology_manager.go:138] "Creating topology manager with none policy" Jan 23 
23:57:09.103933 kubelet[2471]: I0123 23:57:09.103914 2471 container_manager_linux.go:303] "Creating device plugin manager" Jan 23 23:57:09.104367 kubelet[2471]: I0123 23:57:09.104342 2471 state_mem.go:36] "Initialized new in-memory state store" Jan 23 23:57:09.110497 kubelet[2471]: I0123 23:57:09.110460 2471 kubelet.go:480] "Attempting to sync node with API server" Jan 23 23:57:09.110684 kubelet[2471]: I0123 23:57:09.110664 2471 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 23 23:57:09.113672 kubelet[2471]: I0123 23:57:09.113641 2471 kubelet.go:386] "Adding apiserver pod source" Jan 23 23:57:09.116098 kubelet[2471]: I0123 23:57:09.116070 2471 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 23 23:57:09.117522 kubelet[2471]: E0123 23:57:09.117488 2471 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:57:09.118310 kubelet[2471]: E0123 23:57:09.118253 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:57:09.118488 kubelet[2471]: I0123 23:57:09.118465 2471 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 23 23:57:09.119696 kubelet[2471]: I0123 23:57:09.119662 2471 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 23 23:57:09.119922 kubelet[2471]: W0123 23:57:09.119896 2471 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jan 23 23:57:09.126228 kubelet[2471]: I0123 23:57:09.126175 2471 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 23 23:57:09.126341 kubelet[2471]: I0123 23:57:09.126264 2471 server.go:1289] "Started kubelet" Jan 23 23:57:09.126531 kubelet[2471]: I0123 23:57:09.126480 2471 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 23 23:57:09.127314 kubelet[2471]: I0123 23:57:09.126897 2471 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 23 23:57:09.131476 kubelet[2471]: I0123 23:57:09.131405 2471 server.go:317] "Adding debug handlers to kubelet server" Jan 23 23:57:09.137955 kubelet[2471]: I0123 23:57:09.137915 2471 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 23 23:57:09.142203 kubelet[2471]: I0123 23:57:09.142116 2471 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 23 23:57:09.147525 kubelet[2471]: I0123 23:57:09.146141 2471 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 23 23:57:09.149326 kubelet[2471]: I0123 23:57:09.149289 2471 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 23 23:57:09.149804 kubelet[2471]: E0123 23:57:09.149777 2471 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172.31.22.24\" not found" Jan 23 23:57:09.151888 kubelet[2471]: E0123 23:57:09.149477 2471 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.31.22.24.188d81833333d33e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.31.22.24,UID:172.31.22.24,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:172.31.22.24,},FirstTimestamp:2026-01-23 23:57:09.12620627 +0000 UTC m=+1.664821930,LastTimestamp:2026-01-23 23:57:09.12620627 +0000 UTC m=+1.664821930,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.22.24,}" Jan 23 23:57:09.153780 kubelet[2471]: E0123 23:57:09.153724 2471 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 23 23:57:09.154057 kubelet[2471]: E0123 23:57:09.154004 2471 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"172.31.22.24\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 23 23:57:09.156462 kubelet[2471]: I0123 23:57:09.154601 2471 factory.go:223] Registration of the systemd container factory successfully Jan 23 23:57:09.156462 kubelet[2471]: I0123 23:57:09.154776 2471 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 23 23:57:09.169716 kubelet[2471]: I0123 23:57:09.169518 2471 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 23 23:57:09.169716 kubelet[2471]: I0123 23:57:09.169665 2471 reconciler.go:26] "Reconciler: start to sync state" Jan 23 23:57:09.171100 kubelet[2471]: I0123 23:57:09.171055 2471 factory.go:223] Registration of the containerd container factory successfully Jan 23 23:57:09.175990 kubelet[2471]: E0123 23:57:09.175932 2471 nodelease.go:49] "Failed to get node when trying to set owner ref 
to the node lease" err="nodes \"172.31.22.24\" not found" node="172.31.22.24"
Jan 23 23:57:09.176837 kubelet[2471]: E0123 23:57:09.176780 2471 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 23 23:57:09.209035 kubelet[2471]: I0123 23:57:09.208914 2471 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jan 23 23:57:09.209035 kubelet[2471]: I0123 23:57:09.208965 2471 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jan 23 23:57:09.209035 kubelet[2471]: I0123 23:57:09.208997 2471 state_mem.go:36] "Initialized new in-memory state store"
Jan 23 23:57:09.215035 kubelet[2471]: I0123 23:57:09.214625 2471 policy_none.go:49] "None policy: Start"
Jan 23 23:57:09.215035 kubelet[2471]: I0123 23:57:09.214663 2471 memory_manager.go:186] "Starting memorymanager" policy="None"
Jan 23 23:57:09.215035 kubelet[2471]: I0123 23:57:09.214686 2471 state_mem.go:35] "Initializing new in-memory state store"
Jan 23 23:57:09.231497 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jan 23 23:57:09.252468 kubelet[2471]: E0123 23:57:09.251522 2471 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172.31.22.24\" not found"
Jan 23 23:57:09.252323 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jan 23 23:57:09.261726 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jan 23 23:57:09.272048 kubelet[2471]: I0123 23:57:09.271988 2471 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Jan 23 23:57:09.275373 kubelet[2471]: E0123 23:57:09.275217 2471 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Jan 23 23:57:09.276318 kubelet[2471]: I0123 23:57:09.276282 2471 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 23 23:57:09.276546 kubelet[2471]: I0123 23:57:09.276318 2471 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 23 23:57:09.277021 kubelet[2471]: I0123 23:57:09.276988 2471 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 23 23:57:09.279501 kubelet[2471]: E0123 23:57:09.279381 2471 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jan 23 23:57:09.279706 kubelet[2471]: E0123 23:57:09.279646 2471 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.31.22.24\" not found"
Jan 23 23:57:09.282978 kubelet[2471]: I0123 23:57:09.282849 2471 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Jan 23 23:57:09.282978 kubelet[2471]: I0123 23:57:09.282909 2471 status_manager.go:230] "Starting to sync pod status with apiserver"
Jan 23 23:57:09.282978 kubelet[2471]: I0123 23:57:09.282939 2471 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jan 23 23:57:09.283489 kubelet[2471]: I0123 23:57:09.283323 2471 kubelet.go:2436] "Starting kubelet main sync loop"
Jan 23 23:57:09.283764 kubelet[2471]: E0123 23:57:09.283706 2471 kubelet.go:2460] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Jan 23 23:57:09.378625 kubelet[2471]: I0123 23:57:09.377789 2471 kubelet_node_status.go:75] "Attempting to register node" node="172.31.22.24"
Jan 23 23:57:09.386612 kubelet[2471]: I0123 23:57:09.386565 2471 kubelet_node_status.go:78] "Successfully registered node" node="172.31.22.24"
Jan 23 23:57:09.386786 kubelet[2471]: E0123 23:57:09.386619 2471 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"172.31.22.24\": node \"172.31.22.24\" not found"
Jan 23 23:57:09.410259 kubelet[2471]: E0123 23:57:09.410211 2471 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172.31.22.24\" not found"
Jan 23 23:57:09.493631 sudo[2333]: pam_unix(sudo:session): session closed for user root
Jan 23 23:57:09.511409 kubelet[2471]: E0123 23:57:09.511358 2471 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172.31.22.24\" not found"
Jan 23 23:57:09.570583 sshd[2330]: pam_unix(sshd:session): session closed for user core
Jan 23 23:57:09.575730 systemd[1]: sshd@6-172.31.22.24:22-4.153.228.146:53506.service: Deactivated successfully.
Jan 23 23:57:09.580043 systemd[1]: session-7.scope: Deactivated successfully.
Jan 23 23:57:09.582064 systemd-logind[2000]: Session 7 logged out. Waiting for processes to exit.
Jan 23 23:57:09.585600 systemd-logind[2000]: Removed session 7.
Jan 23 23:57:09.612059 kubelet[2471]: E0123 23:57:09.612004 2471 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172.31.22.24\" not found"
Jan 23 23:57:09.712958 kubelet[2471]: E0123 23:57:09.712887 2471 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172.31.22.24\" not found"
Jan 23 23:57:09.813829 kubelet[2471]: E0123 23:57:09.813771 2471 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172.31.22.24\" not found"
Jan 23 23:57:09.914610 kubelet[2471]: E0123 23:57:09.914569 2471 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172.31.22.24\" not found"
Jan 23 23:57:10.015528 kubelet[2471]: E0123 23:57:10.015359 2471 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172.31.22.24\" not found"
Jan 23 23:57:10.041694 kubelet[2471]: I0123 23:57:10.041581 2471 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Jan 23 23:57:10.042350 kubelet[2471]: I0123 23:57:10.041795 2471 reflector.go:556] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" err="very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received"
Jan 23 23:57:10.042350 kubelet[2471]: I0123 23:57:10.041860 2471 reflector.go:556] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" err="very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received"
Jan 23 23:57:10.116608 kubelet[2471]: E0123 23:57:10.116521 2471 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172.31.22.24\" not found"
Jan 23 23:57:10.118721 kubelet[2471]: E0123 23:57:10.118681 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 23:57:10.217500 kubelet[2471]: E0123 23:57:10.217410 2471 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172.31.22.24\" not found"
Jan 23 23:57:10.318613 kubelet[2471]: E0123 23:57:10.318455 2471 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172.31.22.24\" not found"
Jan 23 23:57:10.419334 kubelet[2471]: E0123 23:57:10.419259 2471 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172.31.22.24\" not found"
Jan 23 23:57:10.520537 kubelet[2471]: I0123 23:57:10.520484 2471 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24"
Jan 23 23:57:10.521062 containerd[2025]: time="2026-01-23T23:57:10.520982981Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jan 23 23:57:10.522572 kubelet[2471]: I0123 23:57:10.521918 2471 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24"
Jan 23 23:57:11.119115 kubelet[2471]: I0123 23:57:11.119051 2471 apiserver.go:52] "Watching apiserver"
Jan 23 23:57:11.119721 kubelet[2471]: E0123 23:57:11.119467 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 23 23:57:11.157330 kubelet[2471]: E0123 23:57:11.156581 2471 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-49pfd" podUID="77b991bb-bce8-4211-845f-aa451168631a"
Jan 23 23:57:11.165873 systemd[1]: Created slice kubepods-besteffort-pod98e487ed_fde8_4ea5_91db_de634dcb203c.slice - libcontainer container kubepods-besteffort-pod98e487ed_fde8_4ea5_91db_de634dcb203c.slice.
Jan 23 23:57:11.173117 kubelet[2471]: I0123 23:57:11.173060 2471 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Jan 23 23:57:11.181377 kubelet[2471]: I0123 23:57:11.180560 2471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/77b991bb-bce8-4211-845f-aa451168631a-socket-dir\") pod \"csi-node-driver-49pfd\" (UID: \"77b991bb-bce8-4211-845f-aa451168631a\") " pod="calico-system/csi-node-driver-49pfd"
Jan 23 23:57:11.181377 kubelet[2471]: I0123 23:57:11.180678 2471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/015daaca-0be5-4d67-b5ae-78bc786c2472-lib-modules\") pod \"kube-proxy-cqkwc\" (UID: \"015daaca-0be5-4d67-b5ae-78bc786c2472\") " pod="kube-system/kube-proxy-cqkwc"
Jan 23 23:57:11.181377 kubelet[2471]: I0123 23:57:11.180724 2471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmt27\" (UniqueName: \"kubernetes.io/projected/015daaca-0be5-4d67-b5ae-78bc786c2472-kube-api-access-hmt27\") pod \"kube-proxy-cqkwc\" (UID: \"015daaca-0be5-4d67-b5ae-78bc786c2472\") " pod="kube-system/kube-proxy-cqkwc"
Jan 23 23:57:11.181377 kubelet[2471]: I0123 23:57:11.180765 2471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/98e487ed-fde8-4ea5-91db-de634dcb203c-cni-log-dir\") pod \"calico-node-xkzlb\" (UID: \"98e487ed-fde8-4ea5-91db-de634dcb203c\") " pod="calico-system/calico-node-xkzlb"
Jan 23 23:57:11.181377 kubelet[2471]: I0123 23:57:11.180818 2471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/98e487ed-fde8-4ea5-91db-de634dcb203c-cni-net-dir\") pod \"calico-node-xkzlb\" (UID: \"98e487ed-fde8-4ea5-91db-de634dcb203c\") " pod="calico-system/calico-node-xkzlb"
Jan 23 23:57:11.181733 kubelet[2471]: I0123 23:57:11.180858 2471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/98e487ed-fde8-4ea5-91db-de634dcb203c-lib-modules\") pod \"calico-node-xkzlb\" (UID: \"98e487ed-fde8-4ea5-91db-de634dcb203c\") " pod="calico-system/calico-node-xkzlb"
Jan 23 23:57:11.181733 kubelet[2471]: I0123 23:57:11.180892 2471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/98e487ed-fde8-4ea5-91db-de634dcb203c-policysync\") pod \"calico-node-xkzlb\" (UID: \"98e487ed-fde8-4ea5-91db-de634dcb203c\") " pod="calico-system/calico-node-xkzlb"
Jan 23 23:57:11.181733 kubelet[2471]: I0123 23:57:11.180943 2471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/98e487ed-fde8-4ea5-91db-de634dcb203c-var-run-calico\") pod \"calico-node-xkzlb\" (UID: \"98e487ed-fde8-4ea5-91db-de634dcb203c\") " pod="calico-system/calico-node-xkzlb"
Jan 23 23:57:11.181733 kubelet[2471]: I0123 23:57:11.181000 2471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x28vl\" (UniqueName: \"kubernetes.io/projected/98e487ed-fde8-4ea5-91db-de634dcb203c-kube-api-access-x28vl\") pod \"calico-node-xkzlb\" (UID: \"98e487ed-fde8-4ea5-91db-de634dcb203c\") " pod="calico-system/calico-node-xkzlb"
Jan 23 23:57:11.181733 kubelet[2471]: I0123 23:57:11.181054 2471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/77b991bb-bce8-4211-845f-aa451168631a-kubelet-dir\") pod \"csi-node-driver-49pfd\" (UID: \"77b991bb-bce8-4211-845f-aa451168631a\") " pod="calico-system/csi-node-driver-49pfd"
Jan 23 23:57:11.182052 kubelet[2471]: I0123 23:57:11.181093 2471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/015daaca-0be5-4d67-b5ae-78bc786c2472-kube-proxy\") pod \"kube-proxy-cqkwc\" (UID: \"015daaca-0be5-4d67-b5ae-78bc786c2472\") " pod="kube-system/kube-proxy-cqkwc"
Jan 23 23:57:11.182052 kubelet[2471]: I0123 23:57:11.181127 2471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/98e487ed-fde8-4ea5-91db-de634dcb203c-node-certs\") pod \"calico-node-xkzlb\" (UID: \"98e487ed-fde8-4ea5-91db-de634dcb203c\") " pod="calico-system/calico-node-xkzlb"
Jan 23 23:57:11.182052 kubelet[2471]: I0123 23:57:11.181166 2471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/77b991bb-bce8-4211-845f-aa451168631a-registration-dir\") pod \"csi-node-driver-49pfd\" (UID: \"77b991bb-bce8-4211-845f-aa451168631a\") " pod="calico-system/csi-node-driver-49pfd"
Jan 23 23:57:11.182052 kubelet[2471]: I0123 23:57:11.181218 2471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/77b991bb-bce8-4211-845f-aa451168631a-varrun\") pod \"csi-node-driver-49pfd\" (UID: \"77b991bb-bce8-4211-845f-aa451168631a\") " pod="calico-system/csi-node-driver-49pfd"
Jan 23 23:57:11.182052 kubelet[2471]: I0123 23:57:11.181254 2471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnx8x\" (UniqueName: \"kubernetes.io/projected/77b991bb-bce8-4211-845f-aa451168631a-kube-api-access-dnx8x\") pod \"csi-node-driver-49pfd\" (UID: \"77b991bb-bce8-4211-845f-aa451168631a\") " pod="calico-system/csi-node-driver-49pfd"
Jan 23 23:57:11.182276 kubelet[2471]: I0123 23:57:11.181290 2471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/98e487ed-fde8-4ea5-91db-de634dcb203c-cni-bin-dir\") pod \"calico-node-xkzlb\" (UID: \"98e487ed-fde8-4ea5-91db-de634dcb203c\") " pod="calico-system/calico-node-xkzlb"
Jan 23 23:57:11.182276 kubelet[2471]: I0123 23:57:11.181356 2471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/98e487ed-fde8-4ea5-91db-de634dcb203c-flexvol-driver-host\") pod \"calico-node-xkzlb\" (UID: \"98e487ed-fde8-4ea5-91db-de634dcb203c\") " pod="calico-system/calico-node-xkzlb"
Jan 23 23:57:11.182276 kubelet[2471]: I0123 23:57:11.181397 2471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/98e487ed-fde8-4ea5-91db-de634dcb203c-var-lib-calico\") pod \"calico-node-xkzlb\" (UID: \"98e487ed-fde8-4ea5-91db-de634dcb203c\") " pod="calico-system/calico-node-xkzlb"
Jan 23 23:57:11.182276 kubelet[2471]: I0123 23:57:11.181539 2471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/015daaca-0be5-4d67-b5ae-78bc786c2472-xtables-lock\") pod \"kube-proxy-cqkwc\" (UID: \"015daaca-0be5-4d67-b5ae-78bc786c2472\") " pod="kube-system/kube-proxy-cqkwc"
Jan 23 23:57:11.182276 kubelet[2471]: I0123 23:57:11.181582 2471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/98e487ed-fde8-4ea5-91db-de634dcb203c-tigera-ca-bundle\") pod \"calico-node-xkzlb\" (UID: \"98e487ed-fde8-4ea5-91db-de634dcb203c\") " pod="calico-system/calico-node-xkzlb"
Jan 23 23:57:11.183595 kubelet[2471]: I0123 23:57:11.181646 2471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/98e487ed-fde8-4ea5-91db-de634dcb203c-xtables-lock\") pod \"calico-node-xkzlb\" (UID: \"98e487ed-fde8-4ea5-91db-de634dcb203c\") " pod="calico-system/calico-node-xkzlb"
Jan 23 23:57:11.189242 systemd[1]: Created slice kubepods-besteffort-pod015daaca_0be5_4d67_b5ae_78bc786c2472.slice - libcontainer container kubepods-besteffort-pod015daaca_0be5_4d67_b5ae_78bc786c2472.slice.
Jan 23 23:57:11.285173 kubelet[2471]: E0123 23:57:11.285118 2471 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 23:57:11.285288 kubelet[2471]: W0123 23:57:11.285176 2471 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 23:57:11.285288 kubelet[2471]: E0123 23:57:11.285220 2471 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 23:57:11.287306 kubelet[2471]: E0123 23:57:11.287252 2471 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 23:57:11.287306 kubelet[2471]: W0123 23:57:11.287287 2471 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 23:57:11.287652 kubelet[2471]: E0123 23:57:11.287503 2471 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 23:57:11.289241 kubelet[2471]: E0123 23:57:11.289189 2471 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 23:57:11.289384 kubelet[2471]: W0123 23:57:11.289222 2471 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 23:57:11.289516 kubelet[2471]: E0123 23:57:11.289382 2471 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 23:57:11.290915 kubelet[2471]: E0123 23:57:11.290798 2471 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 23:57:11.290915 kubelet[2471]: W0123 23:57:11.290831 2471 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 23:57:11.291072 kubelet[2471]: E0123 23:57:11.290882 2471 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 23:57:11.292334 kubelet[2471]: E0123 23:57:11.292298 2471 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 23:57:11.292540 kubelet[2471]: W0123 23:57:11.292330 2471 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 23:57:11.292540 kubelet[2471]: E0123 23:57:11.292381 2471 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 23:57:11.292901 kubelet[2471]: E0123 23:57:11.292846 2471 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 23:57:11.292975 kubelet[2471]: W0123 23:57:11.292900 2471 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 23:57:11.292975 kubelet[2471]: E0123 23:57:11.292922 2471 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 23:57:11.293358 kubelet[2471]: E0123 23:57:11.293328 2471 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 23:57:11.293358 kubelet[2471]: W0123 23:57:11.293356 2471 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 23:57:11.293549 kubelet[2471]: E0123 23:57:11.293380 2471 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 23:57:11.293874 kubelet[2471]: E0123 23:57:11.293827 2471 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 23:57:11.293934 kubelet[2471]: W0123 23:57:11.293874 2471 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 23:57:11.293934 kubelet[2471]: E0123 23:57:11.293897 2471 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 23:57:11.294353 kubelet[2471]: E0123 23:57:11.294322 2471 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 23:57:11.294545 kubelet[2471]: W0123 23:57:11.294352 2471 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 23:57:11.294545 kubelet[2471]: E0123 23:57:11.294387 2471 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 23:57:11.294900 kubelet[2471]: E0123 23:57:11.294869 2471 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 23:57:11.294957 kubelet[2471]: W0123 23:57:11.294899 2471 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 23:57:11.295023 kubelet[2471]: E0123 23:57:11.294923 2471 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 23:57:11.295662 kubelet[2471]: E0123 23:57:11.295397 2471 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 23:57:11.295662 kubelet[2471]: W0123 23:57:11.295488 2471 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 23:57:11.295662 kubelet[2471]: E0123 23:57:11.295512 2471 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 23:57:11.296557 kubelet[2471]: E0123 23:57:11.296531 2471 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 23:57:11.296838 kubelet[2471]: W0123 23:57:11.296650 2471 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 23:57:11.296838 kubelet[2471]: E0123 23:57:11.296682 2471 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 23:57:11.297069 kubelet[2471]: E0123 23:57:11.297048 2471 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 23:57:11.297362 kubelet[2471]: W0123 23:57:11.297185 2471 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 23:57:11.297362 kubelet[2471]: E0123 23:57:11.297213 2471 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 23:57:11.297965 kubelet[2471]: E0123 23:57:11.297943 2471 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 23:57:11.298091 kubelet[2471]: W0123 23:57:11.298070 2471 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 23:57:11.298337 kubelet[2471]: E0123 23:57:11.298174 2471 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 23:57:11.298645 kubelet[2471]: E0123 23:57:11.298620 2471 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 23:57:11.298777 kubelet[2471]: W0123 23:57:11.298754 2471 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 23:57:11.298949 kubelet[2471]: E0123 23:57:11.298857 2471 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 23:57:11.299479 kubelet[2471]: E0123 23:57:11.299296 2471 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 23:57:11.299479 kubelet[2471]: W0123 23:57:11.299319 2471 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 23:57:11.299479 kubelet[2471]: E0123 23:57:11.299339 2471 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 23:57:11.299899 kubelet[2471]: E0123 23:57:11.299879 2471 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 23:57:11.300106 kubelet[2471]: W0123 23:57:11.299995 2471 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 23:57:11.300106 kubelet[2471]: E0123 23:57:11.300023 2471 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 23:57:11.300539 kubelet[2471]: E0123 23:57:11.300519 2471 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 23:57:11.300730 kubelet[2471]: W0123 23:57:11.300622 2471 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 23:57:11.300730 kubelet[2471]: E0123 23:57:11.300648 2471 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 23:57:11.301270 kubelet[2471]: E0123 23:57:11.301101 2471 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 23:57:11.301270 kubelet[2471]: W0123 23:57:11.301119 2471 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 23:57:11.301270 kubelet[2471]: E0123 23:57:11.301141 2471 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 23:57:11.301952 kubelet[2471]: E0123 23:57:11.301930 2471 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 23:57:11.302223 kubelet[2471]: W0123 23:57:11.302041 2471 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 23:57:11.302223 kubelet[2471]: E0123 23:57:11.302071 2471 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 23:57:11.302475 kubelet[2471]: E0123 23:57:11.302453 2471 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 23:57:11.302688 kubelet[2471]: W0123 23:57:11.302575 2471 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 23:57:11.302688 kubelet[2471]: E0123 23:57:11.302604 2471 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 23:57:11.303180 kubelet[2471]: E0123 23:57:11.303061 2471 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 23:57:11.303180 kubelet[2471]: W0123 23:57:11.303081 2471 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 23:57:11.303180 kubelet[2471]: E0123 23:57:11.303100 2471 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 23:57:11.303763 kubelet[2471]: E0123 23:57:11.303583 2471 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 23:57:11.303763 kubelet[2471]: W0123 23:57:11.303602 2471 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 23:57:11.303763 kubelet[2471]: E0123 23:57:11.303622 2471 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 23:57:11.304048 kubelet[2471]: E0123 23:57:11.304029 2471 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 23:57:11.305239 kubelet[2471]: W0123 23:57:11.305194 2471 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 23:57:11.305383 kubelet[2471]: E0123 23:57:11.305359 2471 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 23:57:11.305863 kubelet[2471]: E0123 23:57:11.305838 2471 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 23:57:11.306033 kubelet[2471]: W0123 23:57:11.305988 2471 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 23:57:11.306142 kubelet[2471]: E0123 23:57:11.306119 2471 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 23:57:11.306726 kubelet[2471]: E0123 23:57:11.306699 2471 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 23:57:11.306866 kubelet[2471]: W0123 23:57:11.306839 2471 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 23:57:11.306984 kubelet[2471]: E0123 23:57:11.306959 2471 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 23:57:11.307692 kubelet[2471]: E0123 23:57:11.307657 2471 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 23:57:11.307918 kubelet[2471]: W0123 23:57:11.307889 2471 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 23:57:11.308171 kubelet[2471]: E0123 23:57:11.308124 2471 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 23:57:11.312884 kubelet[2471]: E0123 23:57:11.312838 2471 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 23:57:11.314453 kubelet[2471]: W0123 23:57:11.313019 2471 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 23:57:11.314453 kubelet[2471]: E0123 23:57:11.313054 2471 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 23:57:11.316740 kubelet[2471]: E0123 23:57:11.316689 2471 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 23:57:11.316740 kubelet[2471]: W0123 23:57:11.316726 2471 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 23:57:11.316923 kubelet[2471]: E0123 23:57:11.316760 2471 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 23:57:11.320208 kubelet[2471]: E0123 23:57:11.318585 2471 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 23:57:11.320208 kubelet[2471]: W0123 23:57:11.318620 2471 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 23:57:11.320208 kubelet[2471]: E0123 23:57:11.318655 2471 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 23:57:11.320208 kubelet[2471]: E0123 23:57:11.319382 2471 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 23:57:11.320208 kubelet[2471]: W0123 23:57:11.319400 2471 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 23:57:11.320208 kubelet[2471]: E0123 23:57:11.319439 2471 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Jan 23 23:57:11.320208 kubelet[2471]: E0123 23:57:11.320387 2471 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:57:11.320208 kubelet[2471]: W0123 23:57:11.320410 2471 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:57:11.320208 kubelet[2471]: E0123 23:57:11.320462 2471 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:57:11.322594 kubelet[2471]: E0123 23:57:11.322549 2471 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:57:11.322789 kubelet[2471]: W0123 23:57:11.322762 2471 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:57:11.322902 kubelet[2471]: E0123 23:57:11.322878 2471 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 23:57:11.333738 kubelet[2471]: E0123 23:57:11.333398 2471 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:57:11.333738 kubelet[2471]: W0123 23:57:11.333470 2471 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:57:11.333738 kubelet[2471]: E0123 23:57:11.333502 2471 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:57:11.334313 kubelet[2471]: E0123 23:57:11.334283 2471 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:57:11.334561 kubelet[2471]: W0123 23:57:11.334533 2471 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:57:11.334675 kubelet[2471]: E0123 23:57:11.334652 2471 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 23:57:11.346445 kubelet[2471]: E0123 23:57:11.344627 2471 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:57:11.346445 kubelet[2471]: W0123 23:57:11.344661 2471 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:57:11.346445 kubelet[2471]: E0123 23:57:11.344691 2471 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:57:11.350884 kubelet[2471]: E0123 23:57:11.350820 2471 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:57:11.351315 kubelet[2471]: W0123 23:57:11.350861 2471 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:57:11.351540 kubelet[2471]: E0123 23:57:11.351505 2471 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:57:11.485138 containerd[2025]: time="2026-01-23T23:57:11.485072741Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-xkzlb,Uid:98e487ed-fde8-4ea5-91db-de634dcb203c,Namespace:calico-system,Attempt:0,}" Jan 23 23:57:11.496127 containerd[2025]: time="2026-01-23T23:57:11.495638921Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cqkwc,Uid:015daaca-0be5-4d67-b5ae-78bc786c2472,Namespace:kube-system,Attempt:0,}" Jan 23 23:57:12.081109 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3589608738.mount: Deactivated successfully. 
Jan 23 23:57:12.098001 containerd[2025]: time="2026-01-23T23:57:12.097925320Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 23:57:12.106542 containerd[2025]: time="2026-01-23T23:57:12.106457057Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 23:57:12.109723 containerd[2025]: time="2026-01-23T23:57:12.109659533Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Jan 23 23:57:12.111635 containerd[2025]: time="2026-01-23T23:57:12.111580049Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 23 23:57:12.113954 containerd[2025]: time="2026-01-23T23:57:12.113884481Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 23:57:12.120612 kubelet[2471]: E0123 23:57:12.120542 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:57:12.127218 containerd[2025]: time="2026-01-23T23:57:12.127138217Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 23:57:12.129845 containerd[2025]: time="2026-01-23T23:57:12.129775061Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 644.541064ms" Jan 23 23:57:12.132708 containerd[2025]: time="2026-01-23T23:57:12.132590501Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 636.794044ms" Jan 23 23:57:12.284122 kubelet[2471]: E0123 23:57:12.284036 2471 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-49pfd" podUID="77b991bb-bce8-4211-845f-aa451168631a" Jan 23 23:57:12.349572 containerd[2025]: time="2026-01-23T23:57:12.348464766Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:57:12.349572 containerd[2025]: time="2026-01-23T23:57:12.348621306Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:57:12.349572 containerd[2025]: time="2026-01-23T23:57:12.348697086Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:57:12.349572 containerd[2025]: time="2026-01-23T23:57:12.348897006Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:57:12.369407 containerd[2025]: time="2026-01-23T23:57:12.369163506Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:57:12.369698 containerd[2025]: time="2026-01-23T23:57:12.369363966Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:57:12.369822 containerd[2025]: time="2026-01-23T23:57:12.369672510Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:57:12.370271 containerd[2025]: time="2026-01-23T23:57:12.370194630Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:57:12.475809 systemd[1]: Started cri-containerd-6f45ff13127071abc727b3ef233a7e0690a11b465983b02e08c7dfa643d70809.scope - libcontainer container 6f45ff13127071abc727b3ef233a7e0690a11b465983b02e08c7dfa643d70809. Jan 23 23:57:12.480735 systemd[1]: Started cri-containerd-df08796b1a7c4fb51c0e7ec43fd29db4ccd8913d89d570f474d12b8fd8c2f171.scope - libcontainer container df08796b1a7c4fb51c0e7ec43fd29db4ccd8913d89d570f474d12b8fd8c2f171. 
Jan 23 23:57:12.548995 containerd[2025]: time="2026-01-23T23:57:12.548243023Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-xkzlb,Uid:98e487ed-fde8-4ea5-91db-de634dcb203c,Namespace:calico-system,Attempt:0,} returns sandbox id \"6f45ff13127071abc727b3ef233a7e0690a11b465983b02e08c7dfa643d70809\"" Jan 23 23:57:12.554185 containerd[2025]: time="2026-01-23T23:57:12.553908703Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 23 23:57:12.555172 containerd[2025]: time="2026-01-23T23:57:12.555115075Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cqkwc,Uid:015daaca-0be5-4d67-b5ae-78bc786c2472,Namespace:kube-system,Attempt:0,} returns sandbox id \"df08796b1a7c4fb51c0e7ec43fd29db4ccd8913d89d570f474d12b8fd8c2f171\"" Jan 23 23:57:13.121042 kubelet[2471]: E0123 23:57:13.120975 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:57:13.616780 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3114196108.mount: Deactivated successfully. 
Jan 23 23:57:13.738584 containerd[2025]: time="2026-01-23T23:57:13.738504897Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:57:13.740386 containerd[2025]: time="2026-01-23T23:57:13.740317401Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=5636570" Jan 23 23:57:13.742891 containerd[2025]: time="2026-01-23T23:57:13.742817997Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:57:13.747583 containerd[2025]: time="2026-01-23T23:57:13.747516741Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:57:13.749305 containerd[2025]: time="2026-01-23T23:57:13.749042517Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.19441727s" Jan 23 23:57:13.749305 containerd[2025]: time="2026-01-23T23:57:13.749111301Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\"" Jan 23 23:57:13.752493 containerd[2025]: time="2026-01-23T23:57:13.752405025Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\"" Jan 23 23:57:13.758290 containerd[2025]: time="2026-01-23T23:57:13.758219745Z" level=info msg="CreateContainer within sandbox 
\"6f45ff13127071abc727b3ef233a7e0690a11b465983b02e08c7dfa643d70809\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 23 23:57:13.786056 containerd[2025]: time="2026-01-23T23:57:13.785961057Z" level=info msg="CreateContainer within sandbox \"6f45ff13127071abc727b3ef233a7e0690a11b465983b02e08c7dfa643d70809\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"8e68e273066ced80e8a5f2084166ba6f88524b51216d3421f8801d1d76dfd6a8\"" Jan 23 23:57:13.787382 containerd[2025]: time="2026-01-23T23:57:13.787316709Z" level=info msg="StartContainer for \"8e68e273066ced80e8a5f2084166ba6f88524b51216d3421f8801d1d76dfd6a8\"" Jan 23 23:57:13.838133 systemd[1]: Started cri-containerd-8e68e273066ced80e8a5f2084166ba6f88524b51216d3421f8801d1d76dfd6a8.scope - libcontainer container 8e68e273066ced80e8a5f2084166ba6f88524b51216d3421f8801d1d76dfd6a8. Jan 23 23:57:13.901812 containerd[2025]: time="2026-01-23T23:57:13.899473101Z" level=info msg="StartContainer for \"8e68e273066ced80e8a5f2084166ba6f88524b51216d3421f8801d1d76dfd6a8\" returns successfully" Jan 23 23:57:13.928207 systemd[1]: cri-containerd-8e68e273066ced80e8a5f2084166ba6f88524b51216d3421f8801d1d76dfd6a8.scope: Deactivated successfully. 
Jan 23 23:57:14.007135 containerd[2025]: time="2026-01-23T23:57:14.006846582Z" level=info msg="shim disconnected" id=8e68e273066ced80e8a5f2084166ba6f88524b51216d3421f8801d1d76dfd6a8 namespace=k8s.io Jan 23 23:57:14.007135 containerd[2025]: time="2026-01-23T23:57:14.006993786Z" level=warning msg="cleaning up after shim disconnected" id=8e68e273066ced80e8a5f2084166ba6f88524b51216d3421f8801d1d76dfd6a8 namespace=k8s.io Jan 23 23:57:14.007135 containerd[2025]: time="2026-01-23T23:57:14.007015590Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 23 23:57:14.122018 kubelet[2471]: E0123 23:57:14.121934 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:57:14.284711 kubelet[2471]: E0123 23:57:14.284251 2471 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-49pfd" podUID="77b991bb-bce8-4211-845f-aa451168631a" Jan 23 23:57:14.574969 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8e68e273066ced80e8a5f2084166ba6f88524b51216d3421f8801d1d76dfd6a8-rootfs.mount: Deactivated successfully. Jan 23 23:57:15.114031 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount768326585.mount: Deactivated successfully. 
Jan 23 23:57:15.122649 kubelet[2471]: E0123 23:57:15.122602 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:57:15.717002 containerd[2025]: time="2026-01-23T23:57:15.716921542Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:57:15.719120 containerd[2025]: time="2026-01-23T23:57:15.719045806Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=28258673" Jan 23 23:57:15.720410 containerd[2025]: time="2026-01-23T23:57:15.720340654Z" level=info msg="ImageCreate event name:\"sha256:78ccb937011a53894db229033fd54e237d478ec85315f8b08e5dcaa0f737111b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:57:15.725389 containerd[2025]: time="2026-01-23T23:57:15.725307742Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:78ccb937011a53894db229033fd54e237d478ec85315f8b08e5dcaa0f737111b\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"28257692\" in 1.972525401s" Jan 23 23:57:15.725389 containerd[2025]: time="2026-01-23T23:57:15.725373202Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:78ccb937011a53894db229033fd54e237d478ec85315f8b08e5dcaa0f737111b\"" Jan 23 23:57:15.725604 containerd[2025]: time="2026-01-23T23:57:15.723847354Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:57:15.728540 containerd[2025]: time="2026-01-23T23:57:15.728233210Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 23 23:57:15.732544 containerd[2025]: 
time="2026-01-23T23:57:15.732333527Z" level=info msg="CreateContainer within sandbox \"df08796b1a7c4fb51c0e7ec43fd29db4ccd8913d89d570f474d12b8fd8c2f171\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 23 23:57:15.754622 containerd[2025]: time="2026-01-23T23:57:15.754567103Z" level=info msg="CreateContainer within sandbox \"df08796b1a7c4fb51c0e7ec43fd29db4ccd8913d89d570f474d12b8fd8c2f171\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c2069cf7a531c0623634162cf1803546c4e2abd9f3e46f3d1e7846a7aa3cd0cd\"" Jan 23 23:57:15.755923 containerd[2025]: time="2026-01-23T23:57:15.755825435Z" level=info msg="StartContainer for \"c2069cf7a531c0623634162cf1803546c4e2abd9f3e46f3d1e7846a7aa3cd0cd\"" Jan 23 23:57:15.806742 systemd[1]: run-containerd-runc-k8s.io-c2069cf7a531c0623634162cf1803546c4e2abd9f3e46f3d1e7846a7aa3cd0cd-runc.s7JOOX.mount: Deactivated successfully. Jan 23 23:57:15.816710 systemd[1]: Started cri-containerd-c2069cf7a531c0623634162cf1803546c4e2abd9f3e46f3d1e7846a7aa3cd0cd.scope - libcontainer container c2069cf7a531c0623634162cf1803546c4e2abd9f3e46f3d1e7846a7aa3cd0cd. 
Jan 23 23:57:15.868015 containerd[2025]: time="2026-01-23T23:57:15.867917603Z" level=info msg="StartContainer for \"c2069cf7a531c0623634162cf1803546c4e2abd9f3e46f3d1e7846a7aa3cd0cd\" returns successfully" Jan 23 23:57:16.124589 kubelet[2471]: E0123 23:57:16.123934 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:57:16.284989 kubelet[2471]: E0123 23:57:16.284508 2471 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-49pfd" podUID="77b991bb-bce8-4211-845f-aa451168631a" Jan 23 23:57:17.124504 kubelet[2471]: E0123 23:57:17.124114 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:57:18.125055 kubelet[2471]: E0123 23:57:18.124988 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:57:18.284723 kubelet[2471]: E0123 23:57:18.284588 2471 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-49pfd" podUID="77b991bb-bce8-4211-845f-aa451168631a" Jan 23 23:57:18.448806 containerd[2025]: time="2026-01-23T23:57:18.448652760Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:57:18.450899 containerd[2025]: time="2026-01-23T23:57:18.450830196Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65925816" Jan 23 23:57:18.453520 containerd[2025]: time="2026-01-23T23:57:18.453459924Z" 
level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:57:18.462796 containerd[2025]: time="2026-01-23T23:57:18.462695868Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:57:18.464402 containerd[2025]: time="2026-01-23T23:57:18.464331120Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 2.736035678s" Jan 23 23:57:18.464726 containerd[2025]: time="2026-01-23T23:57:18.464589036Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\"" Jan 23 23:57:18.475279 containerd[2025]: time="2026-01-23T23:57:18.475202712Z" level=info msg="CreateContainer within sandbox \"6f45ff13127071abc727b3ef233a7e0690a11b465983b02e08c7dfa643d70809\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 23 23:57:18.504262 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3347246085.mount: Deactivated successfully. 
Jan 23 23:57:18.510351 containerd[2025]: time="2026-01-23T23:57:18.510273240Z" level=info msg="CreateContainer within sandbox \"6f45ff13127071abc727b3ef233a7e0690a11b465983b02e08c7dfa643d70809\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"0540612bd9d8bad76fd202f1ea72b85102e277df59b8ac473211dc09480f5f30\"" Jan 23 23:57:18.511336 containerd[2025]: time="2026-01-23T23:57:18.511292832Z" level=info msg="StartContainer for \"0540612bd9d8bad76fd202f1ea72b85102e277df59b8ac473211dc09480f5f30\"" Jan 23 23:57:18.573751 systemd[1]: Started cri-containerd-0540612bd9d8bad76fd202f1ea72b85102e277df59b8ac473211dc09480f5f30.scope - libcontainer container 0540612bd9d8bad76fd202f1ea72b85102e277df59b8ac473211dc09480f5f30. Jan 23 23:57:18.632986 containerd[2025]: time="2026-01-23T23:57:18.632817853Z" level=info msg="StartContainer for \"0540612bd9d8bad76fd202f1ea72b85102e277df59b8ac473211dc09480f5f30\" returns successfully" Jan 23 23:57:19.125997 kubelet[2471]: E0123 23:57:19.125930 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:57:19.364908 kubelet[2471]: I0123 23:57:19.364795 2471 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-cqkwc" podStartSLOduration=7.195238918 podStartE2EDuration="10.364774393s" podCreationTimestamp="2026-01-23 23:57:09 +0000 UTC" firstStartedPulling="2026-01-23 23:57:12.557608759 +0000 UTC m=+5.096224395" lastFinishedPulling="2026-01-23 23:57:15.727144246 +0000 UTC m=+8.265759870" observedRunningTime="2026-01-23 23:57:16.399023782 +0000 UTC m=+8.937639430" watchObservedRunningTime="2026-01-23 23:57:19.364774393 +0000 UTC m=+11.903390041" Jan 23 23:57:20.020011 containerd[2025]: time="2026-01-23T23:57:20.019946292Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in 
/etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 23 23:57:20.024560 systemd[1]: cri-containerd-0540612bd9d8bad76fd202f1ea72b85102e277df59b8ac473211dc09480f5f30.scope: Deactivated successfully. Jan 23 23:57:20.035174 kubelet[2471]: I0123 23:57:20.034510 2471 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 23 23:57:20.067778 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0540612bd9d8bad76fd202f1ea72b85102e277df59b8ac473211dc09480f5f30-rootfs.mount: Deactivated successfully. Jan 23 23:57:20.126164 kubelet[2471]: E0123 23:57:20.126107 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:57:20.294205 systemd[1]: Created slice kubepods-besteffort-pod77b991bb_bce8_4211_845f_aa451168631a.slice - libcontainer container kubepods-besteffort-pod77b991bb_bce8_4211_845f_aa451168631a.slice. Jan 23 23:57:20.300283 containerd[2025]: time="2026-01-23T23:57:20.299817541Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-49pfd,Uid:77b991bb-bce8-4211-845f-aa451168631a,Namespace:calico-system,Attempt:0,}" Jan 23 23:57:20.996447 containerd[2025]: time="2026-01-23T23:57:20.994134149Z" level=error msg="Failed to destroy network for sandbox \"973022a9fab101c0b5999f29b1d124d970edb713ada58774d3929aeb8a2f0350\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:57:20.997414 containerd[2025]: time="2026-01-23T23:57:20.996995141Z" level=error msg="encountered an error cleaning up failed sandbox \"973022a9fab101c0b5999f29b1d124d970edb713ada58774d3929aeb8a2f0350\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Jan 23 23:57:20.997414 containerd[2025]: time="2026-01-23T23:57:20.997085069Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-49pfd,Uid:77b991bb-bce8-4211-845f-aa451168631a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"973022a9fab101c0b5999f29b1d124d970edb713ada58774d3929aeb8a2f0350\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:57:20.997944 kubelet[2471]: E0123 23:57:20.997652 2471 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"973022a9fab101c0b5999f29b1d124d970edb713ada58774d3929aeb8a2f0350\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:57:20.997944 kubelet[2471]: E0123 23:57:20.997817 2471 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"973022a9fab101c0b5999f29b1d124d970edb713ada58774d3929aeb8a2f0350\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-49pfd" Jan 23 23:57:20.997944 kubelet[2471]: E0123 23:57:20.997880 2471 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"973022a9fab101c0b5999f29b1d124d970edb713ada58774d3929aeb8a2f0350\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-49pfd" Jan 23 
23:57:20.998835 kubelet[2471]: E0123 23:57:20.998678 2471 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-49pfd_calico-system(77b991bb-bce8-4211-845f-aa451168631a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-49pfd_calico-system(77b991bb-bce8-4211-845f-aa451168631a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"973022a9fab101c0b5999f29b1d124d970edb713ada58774d3929aeb8a2f0350\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-49pfd" podUID="77b991bb-bce8-4211-845f-aa451168631a" Jan 23 23:57:20.998857 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-973022a9fab101c0b5999f29b1d124d970edb713ada58774d3929aeb8a2f0350-shm.mount: Deactivated successfully. Jan 23 23:57:21.037990 containerd[2025]: time="2026-01-23T23:57:21.037858981Z" level=info msg="shim disconnected" id=0540612bd9d8bad76fd202f1ea72b85102e277df59b8ac473211dc09480f5f30 namespace=k8s.io Jan 23 23:57:21.037990 containerd[2025]: time="2026-01-23T23:57:21.037971301Z" level=warning msg="cleaning up after shim disconnected" id=0540612bd9d8bad76fd202f1ea72b85102e277df59b8ac473211dc09480f5f30 namespace=k8s.io Jan 23 23:57:21.037990 containerd[2025]: time="2026-01-23T23:57:21.037992901Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 23 23:57:21.126749 kubelet[2471]: E0123 23:57:21.126691 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:57:21.337547 containerd[2025]: time="2026-01-23T23:57:21.336777470Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 23 23:57:21.341064 kubelet[2471]: I0123 23:57:21.338281 2471 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="973022a9fab101c0b5999f29b1d124d970edb713ada58774d3929aeb8a2f0350" Jan 23 23:57:21.341235 containerd[2025]: time="2026-01-23T23:57:21.339664490Z" level=info msg="StopPodSandbox for \"973022a9fab101c0b5999f29b1d124d970edb713ada58774d3929aeb8a2f0350\"" Jan 23 23:57:21.341235 containerd[2025]: time="2026-01-23T23:57:21.339939998Z" level=info msg="Ensure that sandbox 973022a9fab101c0b5999f29b1d124d970edb713ada58774d3929aeb8a2f0350 in task-service has been cleanup successfully" Jan 23 23:57:21.394516 containerd[2025]: time="2026-01-23T23:57:21.394387311Z" level=error msg="StopPodSandbox for \"973022a9fab101c0b5999f29b1d124d970edb713ada58774d3929aeb8a2f0350\" failed" error="failed to destroy network for sandbox \"973022a9fab101c0b5999f29b1d124d970edb713ada58774d3929aeb8a2f0350\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:57:21.395019 kubelet[2471]: E0123 23:57:21.394762 2471 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"973022a9fab101c0b5999f29b1d124d970edb713ada58774d3929aeb8a2f0350\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="973022a9fab101c0b5999f29b1d124d970edb713ada58774d3929aeb8a2f0350" Jan 23 23:57:21.395019 kubelet[2471]: E0123 23:57:21.394848 2471 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"973022a9fab101c0b5999f29b1d124d970edb713ada58774d3929aeb8a2f0350"} Jan 23 23:57:21.395019 kubelet[2471]: E0123 23:57:21.394931 2471 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"77b991bb-bce8-4211-845f-aa451168631a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = 
failed to destroy network for sandbox \\\"973022a9fab101c0b5999f29b1d124d970edb713ada58774d3929aeb8a2f0350\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 23 23:57:21.395019 kubelet[2471]: E0123 23:57:21.394970 2471 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"77b991bb-bce8-4211-845f-aa451168631a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"973022a9fab101c0b5999f29b1d124d970edb713ada58774d3929aeb8a2f0350\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-49pfd" podUID="77b991bb-bce8-4211-845f-aa451168631a" Jan 23 23:57:22.127121 kubelet[2471]: E0123 23:57:22.127053 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:57:23.127264 kubelet[2471]: E0123 23:57:23.127183 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:57:24.128157 kubelet[2471]: E0123 23:57:24.128093 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:57:24.957279 systemd[1]: Created slice kubepods-besteffort-podb3804be6_e56c_4cfe_b5d4_83624c123948.slice - libcontainer container kubepods-besteffort-podb3804be6_e56c_4cfe_b5d4_83624c123948.slice. 
Jan 23 23:57:25.079862 kubelet[2471]: I0123 23:57:25.079662 2471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bghbc\" (UniqueName: \"kubernetes.io/projected/b3804be6-e56c-4cfe-b5d4-83624c123948-kube-api-access-bghbc\") pod \"nginx-deployment-7fcdb87857-c8fnj\" (UID: \"b3804be6-e56c-4cfe-b5d4-83624c123948\") " pod="default/nginx-deployment-7fcdb87857-c8fnj" Jan 23 23:57:25.129123 kubelet[2471]: E0123 23:57:25.128918 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:57:25.265755 containerd[2025]: time="2026-01-23T23:57:25.265363974Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-c8fnj,Uid:b3804be6-e56c-4cfe-b5d4-83624c123948,Namespace:default,Attempt:0,}" Jan 23 23:57:25.433897 containerd[2025]: time="2026-01-23T23:57:25.433549795Z" level=error msg="Failed to destroy network for sandbox \"c26e84db635a726897c80e9618b0735ba22fcbcd56120f5aa84ebb9e73e36153\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:57:25.437566 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c26e84db635a726897c80e9618b0735ba22fcbcd56120f5aa84ebb9e73e36153-shm.mount: Deactivated successfully. 
Jan 23 23:57:25.438615 containerd[2025]: time="2026-01-23T23:57:25.435386359Z" level=error msg="encountered an error cleaning up failed sandbox \"c26e84db635a726897c80e9618b0735ba22fcbcd56120f5aa84ebb9e73e36153\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:57:25.440082 containerd[2025]: time="2026-01-23T23:57:25.439068379Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-c8fnj,Uid:b3804be6-e56c-4cfe-b5d4-83624c123948,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c26e84db635a726897c80e9618b0735ba22fcbcd56120f5aa84ebb9e73e36153\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:57:25.441372 kubelet[2471]: E0123 23:57:25.441256 2471 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c26e84db635a726897c80e9618b0735ba22fcbcd56120f5aa84ebb9e73e36153\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:57:25.441372 kubelet[2471]: E0123 23:57:25.441344 2471 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c26e84db635a726897c80e9618b0735ba22fcbcd56120f5aa84ebb9e73e36153\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-c8fnj" Jan 23 23:57:25.441592 kubelet[2471]: E0123 23:57:25.441380 2471 
kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c26e84db635a726897c80e9618b0735ba22fcbcd56120f5aa84ebb9e73e36153\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-c8fnj" Jan 23 23:57:25.441592 kubelet[2471]: E0123 23:57:25.441480 2471 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-7fcdb87857-c8fnj_default(b3804be6-e56c-4cfe-b5d4-83624c123948)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-7fcdb87857-c8fnj_default(b3804be6-e56c-4cfe-b5d4-83624c123948)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c26e84db635a726897c80e9618b0735ba22fcbcd56120f5aa84ebb9e73e36153\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-7fcdb87857-c8fnj" podUID="b3804be6-e56c-4cfe-b5d4-83624c123948" Jan 23 23:57:26.130117 kubelet[2471]: E0123 23:57:26.129934 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:57:26.358433 kubelet[2471]: I0123 23:57:26.356529 2471 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c26e84db635a726897c80e9618b0735ba22fcbcd56120f5aa84ebb9e73e36153" Jan 23 23:57:26.358595 containerd[2025]: time="2026-01-23T23:57:26.357641791Z" level=info msg="StopPodSandbox for \"c26e84db635a726897c80e9618b0735ba22fcbcd56120f5aa84ebb9e73e36153\"" Jan 23 23:57:26.358595 containerd[2025]: time="2026-01-23T23:57:26.357949567Z" level=info msg="Ensure that sandbox 
c26e84db635a726897c80e9618b0735ba22fcbcd56120f5aa84ebb9e73e36153 in task-service has been cleanup successfully" Jan 23 23:57:26.430578 containerd[2025]: time="2026-01-23T23:57:26.430366364Z" level=error msg="StopPodSandbox for \"c26e84db635a726897c80e9618b0735ba22fcbcd56120f5aa84ebb9e73e36153\" failed" error="failed to destroy network for sandbox \"c26e84db635a726897c80e9618b0735ba22fcbcd56120f5aa84ebb9e73e36153\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:57:26.431019 kubelet[2471]: E0123 23:57:26.430694 2471 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c26e84db635a726897c80e9618b0735ba22fcbcd56120f5aa84ebb9e73e36153\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c26e84db635a726897c80e9618b0735ba22fcbcd56120f5aa84ebb9e73e36153" Jan 23 23:57:26.431019 kubelet[2471]: E0123 23:57:26.430767 2471 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c26e84db635a726897c80e9618b0735ba22fcbcd56120f5aa84ebb9e73e36153"} Jan 23 23:57:26.431019 kubelet[2471]: E0123 23:57:26.430828 2471 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b3804be6-e56c-4cfe-b5d4-83624c123948\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c26e84db635a726897c80e9618b0735ba22fcbcd56120f5aa84ebb9e73e36153\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 23 23:57:26.431019 kubelet[2471]: E0123 23:57:26.430872 2471 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b3804be6-e56c-4cfe-b5d4-83624c123948\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c26e84db635a726897c80e9618b0735ba22fcbcd56120f5aa84ebb9e73e36153\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-7fcdb87857-c8fnj" podUID="b3804be6-e56c-4cfe-b5d4-83624c123948" Jan 23 23:57:26.992540 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jan 23 23:57:27.131192 kubelet[2471]: E0123 23:57:27.131041 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:57:27.239241 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4088415835.mount: Deactivated successfully. Jan 23 23:57:27.294489 containerd[2025]: time="2026-01-23T23:57:27.292770236Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:57:27.294867 containerd[2025]: time="2026-01-23T23:57:27.294814028Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150934562" Jan 23 23:57:27.297458 containerd[2025]: time="2026-01-23T23:57:27.297323204Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:57:27.304456 containerd[2025]: time="2026-01-23T23:57:27.303142520Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:57:27.304456 containerd[2025]: 
time="2026-01-23T23:57:27.304368044Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 5.96752443s" Jan 23 23:57:27.304702 containerd[2025]: time="2026-01-23T23:57:27.304410104Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Jan 23 23:57:27.333010 containerd[2025]: time="2026-01-23T23:57:27.332957348Z" level=info msg="CreateContainer within sandbox \"6f45ff13127071abc727b3ef233a7e0690a11b465983b02e08c7dfa643d70809\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 23 23:57:27.364558 containerd[2025]: time="2026-01-23T23:57:27.364482368Z" level=info msg="CreateContainer within sandbox \"6f45ff13127071abc727b3ef233a7e0690a11b465983b02e08c7dfa643d70809\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"b173c65939a387ba8b7a4eaf461e6c806e6aedc46850ce07921aa86644b78305\"" Jan 23 23:57:27.365635 containerd[2025]: time="2026-01-23T23:57:27.365575532Z" level=info msg="StartContainer for \"b173c65939a387ba8b7a4eaf461e6c806e6aedc46850ce07921aa86644b78305\"" Jan 23 23:57:27.417946 systemd[1]: Started cri-containerd-b173c65939a387ba8b7a4eaf461e6c806e6aedc46850ce07921aa86644b78305.scope - libcontainer container b173c65939a387ba8b7a4eaf461e6c806e6aedc46850ce07921aa86644b78305. Jan 23 23:57:27.486495 containerd[2025]: time="2026-01-23T23:57:27.485625021Z" level=info msg="StartContainer for \"b173c65939a387ba8b7a4eaf461e6c806e6aedc46850ce07921aa86644b78305\" returns successfully" Jan 23 23:57:27.722855 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. 
Jan 23 23:57:27.723005 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jan 23 23:57:28.132105 kubelet[2471]: E0123 23:57:28.131918 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:57:29.113990 kubelet[2471]: E0123 23:57:29.113930 2471 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:57:29.133842 kubelet[2471]: E0123 23:57:29.133773 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:57:29.802474 kernel: bpftool[3297]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 23 23:57:30.093148 systemd-networkd[1943]: vxlan.calico: Link UP Jan 23 23:57:30.093173 systemd-networkd[1943]: vxlan.calico: Gained carrier Jan 23 23:57:30.099218 (udev-worker)[3092]: Network interface NamePolicy= disabled on kernel command line. Jan 23 23:57:30.134735 kubelet[2471]: E0123 23:57:30.134668 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:57:30.147582 (udev-worker)[3319]: Network interface NamePolicy= disabled on kernel command line. 
Jan 23 23:57:31.135998 kubelet[2471]: E0123 23:57:31.135940 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:57:31.850219 systemd-networkd[1943]: vxlan.calico: Gained IPv6LL Jan 23 23:57:32.136894 kubelet[2471]: E0123 23:57:32.136748 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:57:33.137565 kubelet[2471]: E0123 23:57:33.137495 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:57:34.044797 ntpd[1993]: Listen normally on 8 vxlan.calico 192.168.127.128:123 Jan 23 23:57:34.045586 ntpd[1993]: 23 Jan 23:57:34 ntpd[1993]: Listen normally on 8 vxlan.calico 192.168.127.128:123 Jan 23 23:57:34.045586 ntpd[1993]: 23 Jan 23:57:34 ntpd[1993]: Listen normally on 9 vxlan.calico [fe80::6422:42ff:fe0b:c1f%3]:123 Jan 23 23:57:34.044922 ntpd[1993]: Listen normally on 9 vxlan.calico [fe80::6422:42ff:fe0b:c1f%3]:123 Jan 23 23:57:34.138659 kubelet[2471]: E0123 23:57:34.138593 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:57:35.139134 kubelet[2471]: E0123 23:57:35.139069 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:57:35.285409 containerd[2025]: time="2026-01-23T23:57:35.285210916Z" level=info msg="StopPodSandbox for \"973022a9fab101c0b5999f29b1d124d970edb713ada58774d3929aeb8a2f0350\"" Jan 23 23:57:35.385921 kubelet[2471]: I0123 23:57:35.385452 2471 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-xkzlb" podStartSLOduration=11.632475231 podStartE2EDuration="26.385405252s" podCreationTimestamp="2026-01-23 23:57:09 +0000 UTC" firstStartedPulling="2026-01-23 23:57:12.552831391 +0000 UTC m=+5.091447027" 
lastFinishedPulling="2026-01-23 23:57:27.305761424 +0000 UTC m=+19.844377048" observedRunningTime="2026-01-23 23:57:28.393255009 +0000 UTC m=+20.931870657" watchObservedRunningTime="2026-01-23 23:57:35.385405252 +0000 UTC m=+27.924020888" Jan 23 23:57:35.491300 containerd[2025]: 2026-01-23 23:57:35.385 [INFO][3378] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="973022a9fab101c0b5999f29b1d124d970edb713ada58774d3929aeb8a2f0350" Jan 23 23:57:35.491300 containerd[2025]: 2026-01-23 23:57:35.385 [INFO][3378] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="973022a9fab101c0b5999f29b1d124d970edb713ada58774d3929aeb8a2f0350" iface="eth0" netns="/var/run/netns/cni-5b8942b3-50f4-7f8f-92e7-c5cf9345e124" Jan 23 23:57:35.491300 containerd[2025]: 2026-01-23 23:57:35.386 [INFO][3378] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="973022a9fab101c0b5999f29b1d124d970edb713ada58774d3929aeb8a2f0350" iface="eth0" netns="/var/run/netns/cni-5b8942b3-50f4-7f8f-92e7-c5cf9345e124" Jan 23 23:57:35.491300 containerd[2025]: 2026-01-23 23:57:35.386 [INFO][3378] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="973022a9fab101c0b5999f29b1d124d970edb713ada58774d3929aeb8a2f0350" iface="eth0" netns="/var/run/netns/cni-5b8942b3-50f4-7f8f-92e7-c5cf9345e124" Jan 23 23:57:35.491300 containerd[2025]: 2026-01-23 23:57:35.386 [INFO][3378] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="973022a9fab101c0b5999f29b1d124d970edb713ada58774d3929aeb8a2f0350" Jan 23 23:57:35.491300 containerd[2025]: 2026-01-23 23:57:35.386 [INFO][3378] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="973022a9fab101c0b5999f29b1d124d970edb713ada58774d3929aeb8a2f0350" Jan 23 23:57:35.491300 containerd[2025]: 2026-01-23 23:57:35.460 [INFO][3385] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="973022a9fab101c0b5999f29b1d124d970edb713ada58774d3929aeb8a2f0350" HandleID="k8s-pod-network.973022a9fab101c0b5999f29b1d124d970edb713ada58774d3929aeb8a2f0350" Workload="172.31.22.24-k8s-csi--node--driver--49pfd-eth0" Jan 23 23:57:35.491300 containerd[2025]: 2026-01-23 23:57:35.461 [INFO][3385] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:57:35.491300 containerd[2025]: 2026-01-23 23:57:35.461 [INFO][3385] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:57:35.491300 containerd[2025]: 2026-01-23 23:57:35.479 [WARNING][3385] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="973022a9fab101c0b5999f29b1d124d970edb713ada58774d3929aeb8a2f0350" HandleID="k8s-pod-network.973022a9fab101c0b5999f29b1d124d970edb713ada58774d3929aeb8a2f0350" Workload="172.31.22.24-k8s-csi--node--driver--49pfd-eth0" Jan 23 23:57:35.491300 containerd[2025]: 2026-01-23 23:57:35.479 [INFO][3385] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="973022a9fab101c0b5999f29b1d124d970edb713ada58774d3929aeb8a2f0350" HandleID="k8s-pod-network.973022a9fab101c0b5999f29b1d124d970edb713ada58774d3929aeb8a2f0350" Workload="172.31.22.24-k8s-csi--node--driver--49pfd-eth0" Jan 23 23:57:35.491300 containerd[2025]: 2026-01-23 23:57:35.482 [INFO][3385] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:57:35.491300 containerd[2025]: 2026-01-23 23:57:35.488 [INFO][3378] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="973022a9fab101c0b5999f29b1d124d970edb713ada58774d3929aeb8a2f0350" Jan 23 23:57:35.492569 containerd[2025]: time="2026-01-23T23:57:35.492154013Z" level=info msg="TearDown network for sandbox \"973022a9fab101c0b5999f29b1d124d970edb713ada58774d3929aeb8a2f0350\" successfully" Jan 23 23:57:35.492569 containerd[2025]: time="2026-01-23T23:57:35.492202433Z" level=info msg="StopPodSandbox for \"973022a9fab101c0b5999f29b1d124d970edb713ada58774d3929aeb8a2f0350\" returns successfully" Jan 23 23:57:35.496073 containerd[2025]: time="2026-01-23T23:57:35.495528137Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-49pfd,Uid:77b991bb-bce8-4211-845f-aa451168631a,Namespace:calico-system,Attempt:1,}" Jan 23 23:57:35.498055 systemd[1]: run-netns-cni\x2d5b8942b3\x2d50f4\x2d7f8f\x2d92e7\x2dc5cf9345e124.mount: Deactivated successfully. Jan 23 23:57:35.767835 systemd-networkd[1943]: calic6d640b9b2c: Link UP Jan 23 23:57:35.771197 systemd-networkd[1943]: calic6d640b9b2c: Gained carrier Jan 23 23:57:35.776842 (udev-worker)[3420]: Network interface NamePolicy= disabled on kernel command line. 
Jan 23 23:57:35.813152 containerd[2025]: 2026-01-23 23:57:35.625 [INFO][3397] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.22.24-k8s-csi--node--driver--49pfd-eth0 csi-node-driver- calico-system 77b991bb-bce8-4211-845f-aa451168631a 1249 0 2026-01-23 23:57:09 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 172.31.22.24 csi-node-driver-49pfd eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calic6d640b9b2c [] [] }} ContainerID="6826c58fd32ffb2ee2cbc8d5002412f4d62e07fcd94656d83dffc6284f3cebae" Namespace="calico-system" Pod="csi-node-driver-49pfd" WorkloadEndpoint="172.31.22.24-k8s-csi--node--driver--49pfd-" Jan 23 23:57:35.813152 containerd[2025]: 2026-01-23 23:57:35.625 [INFO][3397] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6826c58fd32ffb2ee2cbc8d5002412f4d62e07fcd94656d83dffc6284f3cebae" Namespace="calico-system" Pod="csi-node-driver-49pfd" WorkloadEndpoint="172.31.22.24-k8s-csi--node--driver--49pfd-eth0" Jan 23 23:57:35.813152 containerd[2025]: 2026-01-23 23:57:35.671 [INFO][3412] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6826c58fd32ffb2ee2cbc8d5002412f4d62e07fcd94656d83dffc6284f3cebae" HandleID="k8s-pod-network.6826c58fd32ffb2ee2cbc8d5002412f4d62e07fcd94656d83dffc6284f3cebae" Workload="172.31.22.24-k8s-csi--node--driver--49pfd-eth0" Jan 23 23:57:35.813152 containerd[2025]: 2026-01-23 23:57:35.672 [INFO][3412] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="6826c58fd32ffb2ee2cbc8d5002412f4d62e07fcd94656d83dffc6284f3cebae" HandleID="k8s-pod-network.6826c58fd32ffb2ee2cbc8d5002412f4d62e07fcd94656d83dffc6284f3cebae" 
Workload="172.31.22.24-k8s-csi--node--driver--49pfd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002cb0f0), Attrs:map[string]string{"namespace":"calico-system", "node":"172.31.22.24", "pod":"csi-node-driver-49pfd", "timestamp":"2026-01-23 23:57:35.671896878 +0000 UTC"}, Hostname:"172.31.22.24", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 23:57:35.813152 containerd[2025]: 2026-01-23 23:57:35.672 [INFO][3412] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:57:35.813152 containerd[2025]: 2026-01-23 23:57:35.672 [INFO][3412] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:57:35.813152 containerd[2025]: 2026-01-23 23:57:35.672 [INFO][3412] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.22.24' Jan 23 23:57:35.813152 containerd[2025]: 2026-01-23 23:57:35.689 [INFO][3412] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6826c58fd32ffb2ee2cbc8d5002412f4d62e07fcd94656d83dffc6284f3cebae" host="172.31.22.24" Jan 23 23:57:35.813152 containerd[2025]: 2026-01-23 23:57:35.701 [INFO][3412] ipam/ipam.go 394: Looking up existing affinities for host host="172.31.22.24" Jan 23 23:57:35.813152 containerd[2025]: 2026-01-23 23:57:35.711 [INFO][3412] ipam/ipam.go 511: Trying affinity for 192.168.127.128/26 host="172.31.22.24" Jan 23 23:57:35.813152 containerd[2025]: 2026-01-23 23:57:35.715 [INFO][3412] ipam/ipam.go 158: Attempting to load block cidr=192.168.127.128/26 host="172.31.22.24" Jan 23 23:57:35.813152 containerd[2025]: 2026-01-23 23:57:35.719 [INFO][3412] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.127.128/26 host="172.31.22.24" Jan 23 23:57:35.813152 containerd[2025]: 2026-01-23 23:57:35.719 [INFO][3412] ipam/ipam.go 1219: Attempting to assign 1 addresses from 
block block=192.168.127.128/26 handle="k8s-pod-network.6826c58fd32ffb2ee2cbc8d5002412f4d62e07fcd94656d83dffc6284f3cebae" host="172.31.22.24" Jan 23 23:57:35.813152 containerd[2025]: 2026-01-23 23:57:35.725 [INFO][3412] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.6826c58fd32ffb2ee2cbc8d5002412f4d62e07fcd94656d83dffc6284f3cebae Jan 23 23:57:35.813152 containerd[2025]: 2026-01-23 23:57:35.731 [INFO][3412] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.127.128/26 handle="k8s-pod-network.6826c58fd32ffb2ee2cbc8d5002412f4d62e07fcd94656d83dffc6284f3cebae" host="172.31.22.24" Jan 23 23:57:35.813152 containerd[2025]: 2026-01-23 23:57:35.759 [INFO][3412] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.127.129/26] block=192.168.127.128/26 handle="k8s-pod-network.6826c58fd32ffb2ee2cbc8d5002412f4d62e07fcd94656d83dffc6284f3cebae" host="172.31.22.24" Jan 23 23:57:35.813152 containerd[2025]: 2026-01-23 23:57:35.759 [INFO][3412] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.127.129/26] handle="k8s-pod-network.6826c58fd32ffb2ee2cbc8d5002412f4d62e07fcd94656d83dffc6284f3cebae" host="172.31.22.24" Jan 23 23:57:35.813152 containerd[2025]: 2026-01-23 23:57:35.759 [INFO][3412] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 23 23:57:35.813152 containerd[2025]: 2026-01-23 23:57:35.759 [INFO][3412] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.127.129/26] IPv6=[] ContainerID="6826c58fd32ffb2ee2cbc8d5002412f4d62e07fcd94656d83dffc6284f3cebae" HandleID="k8s-pod-network.6826c58fd32ffb2ee2cbc8d5002412f4d62e07fcd94656d83dffc6284f3cebae" Workload="172.31.22.24-k8s-csi--node--driver--49pfd-eth0" Jan 23 23:57:35.814457 containerd[2025]: 2026-01-23 23:57:35.763 [INFO][3397] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6826c58fd32ffb2ee2cbc8d5002412f4d62e07fcd94656d83dffc6284f3cebae" Namespace="calico-system" Pod="csi-node-driver-49pfd" WorkloadEndpoint="172.31.22.24-k8s-csi--node--driver--49pfd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.22.24-k8s-csi--node--driver--49pfd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"77b991bb-bce8-4211-845f-aa451168631a", ResourceVersion:"1249", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 57, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.22.24", ContainerID:"", Pod:"csi-node-driver-49pfd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.127.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic6d640b9b2c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:57:35.814457 containerd[2025]: 2026-01-23 23:57:35.763 [INFO][3397] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.127.129/32] ContainerID="6826c58fd32ffb2ee2cbc8d5002412f4d62e07fcd94656d83dffc6284f3cebae" Namespace="calico-system" Pod="csi-node-driver-49pfd" WorkloadEndpoint="172.31.22.24-k8s-csi--node--driver--49pfd-eth0" Jan 23 23:57:35.814457 containerd[2025]: 2026-01-23 23:57:35.763 [INFO][3397] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic6d640b9b2c ContainerID="6826c58fd32ffb2ee2cbc8d5002412f4d62e07fcd94656d83dffc6284f3cebae" Namespace="calico-system" Pod="csi-node-driver-49pfd" WorkloadEndpoint="172.31.22.24-k8s-csi--node--driver--49pfd-eth0" Jan 23 23:57:35.814457 containerd[2025]: 2026-01-23 23:57:35.769 [INFO][3397] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6826c58fd32ffb2ee2cbc8d5002412f4d62e07fcd94656d83dffc6284f3cebae" Namespace="calico-system" Pod="csi-node-driver-49pfd" WorkloadEndpoint="172.31.22.24-k8s-csi--node--driver--49pfd-eth0" Jan 23 23:57:35.814457 containerd[2025]: 2026-01-23 23:57:35.771 [INFO][3397] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6826c58fd32ffb2ee2cbc8d5002412f4d62e07fcd94656d83dffc6284f3cebae" Namespace="calico-system" Pod="csi-node-driver-49pfd" WorkloadEndpoint="172.31.22.24-k8s-csi--node--driver--49pfd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.22.24-k8s-csi--node--driver--49pfd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"77b991bb-bce8-4211-845f-aa451168631a", ResourceVersion:"1249", Generation:0, 
CreationTimestamp:time.Date(2026, time.January, 23, 23, 57, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.22.24", ContainerID:"6826c58fd32ffb2ee2cbc8d5002412f4d62e07fcd94656d83dffc6284f3cebae", Pod:"csi-node-driver-49pfd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.127.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic6d640b9b2c", MAC:"7a:50:d2:21:46:01", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:57:35.814457 containerd[2025]: 2026-01-23 23:57:35.809 [INFO][3397] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6826c58fd32ffb2ee2cbc8d5002412f4d62e07fcd94656d83dffc6284f3cebae" Namespace="calico-system" Pod="csi-node-driver-49pfd" WorkloadEndpoint="172.31.22.24-k8s-csi--node--driver--49pfd-eth0" Jan 23 23:57:35.844290 containerd[2025]: time="2026-01-23T23:57:35.844153314Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:57:35.844828 containerd[2025]: time="2026-01-23T23:57:35.844340094Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:57:35.844828 containerd[2025]: time="2026-01-23T23:57:35.844403154Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:57:35.844828 containerd[2025]: time="2026-01-23T23:57:35.844627890Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:57:35.885818 systemd[1]: Started cri-containerd-6826c58fd32ffb2ee2cbc8d5002412f4d62e07fcd94656d83dffc6284f3cebae.scope - libcontainer container 6826c58fd32ffb2ee2cbc8d5002412f4d62e07fcd94656d83dffc6284f3cebae. Jan 23 23:57:35.940466 containerd[2025]: time="2026-01-23T23:57:35.940382383Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-49pfd,Uid:77b991bb-bce8-4211-845f-aa451168631a,Namespace:calico-system,Attempt:1,} returns sandbox id \"6826c58fd32ffb2ee2cbc8d5002412f4d62e07fcd94656d83dffc6284f3cebae\"" Jan 23 23:57:35.945699 containerd[2025]: time="2026-01-23T23:57:35.945633775Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 23:57:36.140125 kubelet[2471]: E0123 23:57:36.139980 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:57:36.226940 containerd[2025]: time="2026-01-23T23:57:36.226661668Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:57:36.229171 containerd[2025]: time="2026-01-23T23:57:36.229029340Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 23:57:36.229171 containerd[2025]: time="2026-01-23T23:57:36.229126780Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 23 23:57:36.229408 kubelet[2471]: E0123 23:57:36.229326 2471 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 23:57:36.229408 kubelet[2471]: E0123 23:57:36.229395 2471 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 23:57:36.229861 kubelet[2471]: E0123 23:57:36.229758 2471 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dnx8x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-49pfd_calico-system(77b991bb-bce8-4211-845f-aa451168631a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 23:57:36.233134 containerd[2025]: time="2026-01-23T23:57:36.232828336Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 23:57:36.493798 containerd[2025]: time="2026-01-23T23:57:36.493584114Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:57:36.495333 systemd[1]: run-containerd-runc-k8s.io-6826c58fd32ffb2ee2cbc8d5002412f4d62e07fcd94656d83dffc6284f3cebae-runc.Athfxp.mount: Deactivated successfully. Jan 23 23:57:36.496037 containerd[2025]: time="2026-01-23T23:57:36.495948822Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 23:57:36.496198 containerd[2025]: time="2026-01-23T23:57:36.496096866Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 23 23:57:36.496660 kubelet[2471]: E0123 23:57:36.496596 2471 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 23:57:36.496769 kubelet[2471]: E0123 23:57:36.496667 2471 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 23:57:36.497525 kubelet[2471]: E0123 23:57:36.496835 2471 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dnx8x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,E
nvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-49pfd_calico-system(77b991bb-bce8-4211-845f-aa451168631a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 23:57:36.498474 kubelet[2471]: E0123 23:57:36.498390 2471 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-49pfd" podUID="77b991bb-bce8-4211-845f-aa451168631a" Jan 23 23:57:37.034114 systemd-networkd[1943]: calic6d640b9b2c: Gained IPv6LL Jan 23 23:57:37.141007 kubelet[2471]: E0123 23:57:37.140928 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:57:37.390745 kubelet[2471]: E0123 23:57:37.390565 2471 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: 
code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-49pfd" podUID="77b991bb-bce8-4211-845f-aa451168631a" Jan 23 23:57:38.141590 kubelet[2471]: E0123 23:57:38.141521 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:57:39.044866 ntpd[1993]: Listen normally on 10 calic6d640b9b2c [fe80::ecee:eeff:feee:eeee%6]:123 Jan 23 23:57:39.045543 ntpd[1993]: 23 Jan 23:57:39 ntpd[1993]: Listen normally on 10 calic6d640b9b2c [fe80::ecee:eeff:feee:eeee%6]:123 Jan 23 23:57:39.142048 kubelet[2471]: E0123 23:57:39.141976 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:57:40.142415 kubelet[2471]: E0123 23:57:40.142366 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:57:40.285344 containerd[2025]: time="2026-01-23T23:57:40.284900720Z" level=info msg="StopPodSandbox for \"c26e84db635a726897c80e9618b0735ba22fcbcd56120f5aa84ebb9e73e36153\"" Jan 23 23:57:40.435321 containerd[2025]: 2026-01-23 23:57:40.359 [INFO][3485] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c26e84db635a726897c80e9618b0735ba22fcbcd56120f5aa84ebb9e73e36153" Jan 23 23:57:40.435321 containerd[2025]: 2026-01-23 
23:57:40.360 [INFO][3485] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c26e84db635a726897c80e9618b0735ba22fcbcd56120f5aa84ebb9e73e36153" iface="eth0" netns="/var/run/netns/cni-343324b8-b1b0-3720-d5da-0585de4e0339" Jan 23 23:57:40.435321 containerd[2025]: 2026-01-23 23:57:40.360 [INFO][3485] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c26e84db635a726897c80e9618b0735ba22fcbcd56120f5aa84ebb9e73e36153" iface="eth0" netns="/var/run/netns/cni-343324b8-b1b0-3720-d5da-0585de4e0339" Jan 23 23:57:40.435321 containerd[2025]: 2026-01-23 23:57:40.360 [INFO][3485] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c26e84db635a726897c80e9618b0735ba22fcbcd56120f5aa84ebb9e73e36153" iface="eth0" netns="/var/run/netns/cni-343324b8-b1b0-3720-d5da-0585de4e0339" Jan 23 23:57:40.435321 containerd[2025]: 2026-01-23 23:57:40.361 [INFO][3485] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c26e84db635a726897c80e9618b0735ba22fcbcd56120f5aa84ebb9e73e36153" Jan 23 23:57:40.435321 containerd[2025]: 2026-01-23 23:57:40.361 [INFO][3485] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c26e84db635a726897c80e9618b0735ba22fcbcd56120f5aa84ebb9e73e36153" Jan 23 23:57:40.435321 containerd[2025]: 2026-01-23 23:57:40.398 [INFO][3492] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="c26e84db635a726897c80e9618b0735ba22fcbcd56120f5aa84ebb9e73e36153" HandleID="k8s-pod-network.c26e84db635a726897c80e9618b0735ba22fcbcd56120f5aa84ebb9e73e36153" Workload="172.31.22.24-k8s-nginx--deployment--7fcdb87857--c8fnj-eth0" Jan 23 23:57:40.435321 containerd[2025]: 2026-01-23 23:57:40.399 [INFO][3492] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:57:40.435321 containerd[2025]: 2026-01-23 23:57:40.399 [INFO][3492] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 23:57:40.435321 containerd[2025]: 2026-01-23 23:57:40.425 [WARNING][3492] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="c26e84db635a726897c80e9618b0735ba22fcbcd56120f5aa84ebb9e73e36153" HandleID="k8s-pod-network.c26e84db635a726897c80e9618b0735ba22fcbcd56120f5aa84ebb9e73e36153" Workload="172.31.22.24-k8s-nginx--deployment--7fcdb87857--c8fnj-eth0" Jan 23 23:57:40.435321 containerd[2025]: 2026-01-23 23:57:40.425 [INFO][3492] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="c26e84db635a726897c80e9618b0735ba22fcbcd56120f5aa84ebb9e73e36153" HandleID="k8s-pod-network.c26e84db635a726897c80e9618b0735ba22fcbcd56120f5aa84ebb9e73e36153" Workload="172.31.22.24-k8s-nginx--deployment--7fcdb87857--c8fnj-eth0" Jan 23 23:57:40.435321 containerd[2025]: 2026-01-23 23:57:40.430 [INFO][3492] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:57:40.435321 containerd[2025]: 2026-01-23 23:57:40.432 [INFO][3485] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c26e84db635a726897c80e9618b0735ba22fcbcd56120f5aa84ebb9e73e36153" Jan 23 23:57:40.438619 containerd[2025]: time="2026-01-23T23:57:40.438546093Z" level=info msg="TearDown network for sandbox \"c26e84db635a726897c80e9618b0735ba22fcbcd56120f5aa84ebb9e73e36153\" successfully" Jan 23 23:57:40.438619 containerd[2025]: time="2026-01-23T23:57:40.438601545Z" level=info msg="StopPodSandbox for \"c26e84db635a726897c80e9618b0735ba22fcbcd56120f5aa84ebb9e73e36153\" returns successfully" Jan 23 23:57:40.441012 containerd[2025]: time="2026-01-23T23:57:40.439443045Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-c8fnj,Uid:b3804be6-e56c-4cfe-b5d4-83624c123948,Namespace:default,Attempt:1,}" Jan 23 23:57:40.443310 systemd[1]: run-netns-cni\x2d343324b8\x2db1b0\x2d3720\x2dd5da\x2d0585de4e0339.mount: Deactivated successfully. 
Jan 23 23:57:40.647480 systemd-networkd[1943]: calie80f6fa7547: Link UP Jan 23 23:57:40.647872 systemd-networkd[1943]: calie80f6fa7547: Gained carrier Jan 23 23:57:40.653894 (udev-worker)[3525]: Network interface NamePolicy= disabled on kernel command line. Jan 23 23:57:40.664992 containerd[2025]: 2026-01-23 23:57:40.537 [INFO][3499] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.22.24-k8s-nginx--deployment--7fcdb87857--c8fnj-eth0 nginx-deployment-7fcdb87857- default b3804be6-e56c-4cfe-b5d4-83624c123948 1282 0 2026-01-23 23:57:24 +0000 UTC map[app:nginx pod-template-hash:7fcdb87857 projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 172.31.22.24 nginx-deployment-7fcdb87857-c8fnj eth0 default [] [] [kns.default ksa.default.default] calie80f6fa7547 [] [] }} ContainerID="2d0917c5fd1b77f4b426302ea0ff8c99c3f731e72215cd672a1f15c52200285b" Namespace="default" Pod="nginx-deployment-7fcdb87857-c8fnj" WorkloadEndpoint="172.31.22.24-k8s-nginx--deployment--7fcdb87857--c8fnj-" Jan 23 23:57:40.664992 containerd[2025]: 2026-01-23 23:57:40.537 [INFO][3499] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2d0917c5fd1b77f4b426302ea0ff8c99c3f731e72215cd672a1f15c52200285b" Namespace="default" Pod="nginx-deployment-7fcdb87857-c8fnj" WorkloadEndpoint="172.31.22.24-k8s-nginx--deployment--7fcdb87857--c8fnj-eth0" Jan 23 23:57:40.664992 containerd[2025]: 2026-01-23 23:57:40.579 [INFO][3518] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2d0917c5fd1b77f4b426302ea0ff8c99c3f731e72215cd672a1f15c52200285b" HandleID="k8s-pod-network.2d0917c5fd1b77f4b426302ea0ff8c99c3f731e72215cd672a1f15c52200285b" Workload="172.31.22.24-k8s-nginx--deployment--7fcdb87857--c8fnj-eth0" Jan 23 23:57:40.664992 containerd[2025]: 2026-01-23 23:57:40.579 [INFO][3518] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="2d0917c5fd1b77f4b426302ea0ff8c99c3f731e72215cd672a1f15c52200285b" HandleID="k8s-pod-network.2d0917c5fd1b77f4b426302ea0ff8c99c3f731e72215cd672a1f15c52200285b" Workload="172.31.22.24-k8s-nginx--deployment--7fcdb87857--c8fnj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b590), Attrs:map[string]string{"namespace":"default", "node":"172.31.22.24", "pod":"nginx-deployment-7fcdb87857-c8fnj", "timestamp":"2026-01-23 23:57:40.57961933 +0000 UTC"}, Hostname:"172.31.22.24", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 23:57:40.664992 containerd[2025]: 2026-01-23 23:57:40.579 [INFO][3518] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:57:40.664992 containerd[2025]: 2026-01-23 23:57:40.580 [INFO][3518] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:57:40.664992 containerd[2025]: 2026-01-23 23:57:40.580 [INFO][3518] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.22.24' Jan 23 23:57:40.664992 containerd[2025]: 2026-01-23 23:57:40.594 [INFO][3518] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2d0917c5fd1b77f4b426302ea0ff8c99c3f731e72215cd672a1f15c52200285b" host="172.31.22.24" Jan 23 23:57:40.664992 containerd[2025]: 2026-01-23 23:57:40.602 [INFO][3518] ipam/ipam.go 394: Looking up existing affinities for host host="172.31.22.24" Jan 23 23:57:40.664992 containerd[2025]: 2026-01-23 23:57:40.609 [INFO][3518] ipam/ipam.go 511: Trying affinity for 192.168.127.128/26 host="172.31.22.24" Jan 23 23:57:40.664992 containerd[2025]: 2026-01-23 23:57:40.613 [INFO][3518] ipam/ipam.go 158: Attempting to load block cidr=192.168.127.128/26 host="172.31.22.24" Jan 23 23:57:40.664992 containerd[2025]: 2026-01-23 23:57:40.616 [INFO][3518] ipam/ipam.go 235: Affinity is confirmed and block has 
been loaded cidr=192.168.127.128/26 host="172.31.22.24" Jan 23 23:57:40.664992 containerd[2025]: 2026-01-23 23:57:40.616 [INFO][3518] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.127.128/26 handle="k8s-pod-network.2d0917c5fd1b77f4b426302ea0ff8c99c3f731e72215cd672a1f15c52200285b" host="172.31.22.24" Jan 23 23:57:40.664992 containerd[2025]: 2026-01-23 23:57:40.619 [INFO][3518] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2d0917c5fd1b77f4b426302ea0ff8c99c3f731e72215cd672a1f15c52200285b Jan 23 23:57:40.664992 containerd[2025]: 2026-01-23 23:57:40.626 [INFO][3518] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.127.128/26 handle="k8s-pod-network.2d0917c5fd1b77f4b426302ea0ff8c99c3f731e72215cd672a1f15c52200285b" host="172.31.22.24" Jan 23 23:57:40.664992 containerd[2025]: 2026-01-23 23:57:40.637 [INFO][3518] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.127.130/26] block=192.168.127.128/26 handle="k8s-pod-network.2d0917c5fd1b77f4b426302ea0ff8c99c3f731e72215cd672a1f15c52200285b" host="172.31.22.24" Jan 23 23:57:40.664992 containerd[2025]: 2026-01-23 23:57:40.638 [INFO][3518] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.127.130/26] handle="k8s-pod-network.2d0917c5fd1b77f4b426302ea0ff8c99c3f731e72215cd672a1f15c52200285b" host="172.31.22.24" Jan 23 23:57:40.664992 containerd[2025]: 2026-01-23 23:57:40.638 [INFO][3518] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 23 23:57:40.664992 containerd[2025]: 2026-01-23 23:57:40.638 [INFO][3518] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.127.130/26] IPv6=[] ContainerID="2d0917c5fd1b77f4b426302ea0ff8c99c3f731e72215cd672a1f15c52200285b" HandleID="k8s-pod-network.2d0917c5fd1b77f4b426302ea0ff8c99c3f731e72215cd672a1f15c52200285b" Workload="172.31.22.24-k8s-nginx--deployment--7fcdb87857--c8fnj-eth0" Jan 23 23:57:40.667760 containerd[2025]: 2026-01-23 23:57:40.641 [INFO][3499] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2d0917c5fd1b77f4b426302ea0ff8c99c3f731e72215cd672a1f15c52200285b" Namespace="default" Pod="nginx-deployment-7fcdb87857-c8fnj" WorkloadEndpoint="172.31.22.24-k8s-nginx--deployment--7fcdb87857--c8fnj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.22.24-k8s-nginx--deployment--7fcdb87857--c8fnj-eth0", GenerateName:"nginx-deployment-7fcdb87857-", Namespace:"default", SelfLink:"", UID:"b3804be6-e56c-4cfe-b5d4-83624c123948", ResourceVersion:"1282", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 57, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"7fcdb87857", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.22.24", ContainerID:"", Pod:"nginx-deployment-7fcdb87857-c8fnj", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.127.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"calie80f6fa7547", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:57:40.667760 containerd[2025]: 2026-01-23 23:57:40.642 [INFO][3499] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.127.130/32] ContainerID="2d0917c5fd1b77f4b426302ea0ff8c99c3f731e72215cd672a1f15c52200285b" Namespace="default" Pod="nginx-deployment-7fcdb87857-c8fnj" WorkloadEndpoint="172.31.22.24-k8s-nginx--deployment--7fcdb87857--c8fnj-eth0" Jan 23 23:57:40.667760 containerd[2025]: 2026-01-23 23:57:40.642 [INFO][3499] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie80f6fa7547 ContainerID="2d0917c5fd1b77f4b426302ea0ff8c99c3f731e72215cd672a1f15c52200285b" Namespace="default" Pod="nginx-deployment-7fcdb87857-c8fnj" WorkloadEndpoint="172.31.22.24-k8s-nginx--deployment--7fcdb87857--c8fnj-eth0" Jan 23 23:57:40.667760 containerd[2025]: 2026-01-23 23:57:40.647 [INFO][3499] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2d0917c5fd1b77f4b426302ea0ff8c99c3f731e72215cd672a1f15c52200285b" Namespace="default" Pod="nginx-deployment-7fcdb87857-c8fnj" WorkloadEndpoint="172.31.22.24-k8s-nginx--deployment--7fcdb87857--c8fnj-eth0" Jan 23 23:57:40.667760 containerd[2025]: 2026-01-23 23:57:40.649 [INFO][3499] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2d0917c5fd1b77f4b426302ea0ff8c99c3f731e72215cd672a1f15c52200285b" Namespace="default" Pod="nginx-deployment-7fcdb87857-c8fnj" WorkloadEndpoint="172.31.22.24-k8s-nginx--deployment--7fcdb87857--c8fnj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.22.24-k8s-nginx--deployment--7fcdb87857--c8fnj-eth0", GenerateName:"nginx-deployment-7fcdb87857-", Namespace:"default", SelfLink:"", UID:"b3804be6-e56c-4cfe-b5d4-83624c123948", ResourceVersion:"1282", Generation:0, CreationTimestamp:time.Date(2026, 
time.January, 23, 23, 57, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"7fcdb87857", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.22.24", ContainerID:"2d0917c5fd1b77f4b426302ea0ff8c99c3f731e72215cd672a1f15c52200285b", Pod:"nginx-deployment-7fcdb87857-c8fnj", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.127.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"calie80f6fa7547", MAC:"2e:34:cd:33:9c:2d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:57:40.667760 containerd[2025]: 2026-01-23 23:57:40.660 [INFO][3499] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2d0917c5fd1b77f4b426302ea0ff8c99c3f731e72215cd672a1f15c52200285b" Namespace="default" Pod="nginx-deployment-7fcdb87857-c8fnj" WorkloadEndpoint="172.31.22.24-k8s-nginx--deployment--7fcdb87857--c8fnj-eth0" Jan 23 23:57:40.714479 containerd[2025]: time="2026-01-23T23:57:40.713099183Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:57:40.714479 containerd[2025]: time="2026-01-23T23:57:40.714023531Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:57:40.714479 containerd[2025]: time="2026-01-23T23:57:40.714052439Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:57:40.714479 containerd[2025]: time="2026-01-23T23:57:40.714211523Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:57:40.761772 systemd[1]: Started cri-containerd-2d0917c5fd1b77f4b426302ea0ff8c99c3f731e72215cd672a1f15c52200285b.scope - libcontainer container 2d0917c5fd1b77f4b426302ea0ff8c99c3f731e72215cd672a1f15c52200285b. Jan 23 23:57:40.822410 containerd[2025]: time="2026-01-23T23:57:40.822348191Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-c8fnj,Uid:b3804be6-e56c-4cfe-b5d4-83624c123948,Namespace:default,Attempt:1,} returns sandbox id \"2d0917c5fd1b77f4b426302ea0ff8c99c3f731e72215cd672a1f15c52200285b\"" Jan 23 23:57:40.825018 containerd[2025]: time="2026-01-23T23:57:40.824308271Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 23 23:57:41.144350 kubelet[2471]: E0123 23:57:41.144198 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:57:41.259502 update_engine[2002]: I20260123 23:57:41.259246 2002 update_attempter.cc:509] Updating boot flags... 
Jan 23 23:57:41.333603 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 34 scanned by (udev-worker) (3588) Jan 23 23:57:41.602694 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 34 scanned by (udev-worker) (3592) Jan 23 23:57:42.144816 kubelet[2471]: E0123 23:57:42.144772 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:57:42.538091 systemd-networkd[1943]: calie80f6fa7547: Gained IPv6LL Jan 23 23:57:43.145923 kubelet[2471]: E0123 23:57:43.145864 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:57:43.771338 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount721690536.mount: Deactivated successfully. Jan 23 23:57:44.146772 kubelet[2471]: E0123 23:57:44.146371 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:57:45.044775 ntpd[1993]: Listen normally on 11 calie80f6fa7547 [fe80::ecee:eeff:feee:eeee%7]:123 Jan 23 23:57:45.045376 ntpd[1993]: 23 Jan 23:57:45 ntpd[1993]: Listen normally on 11 calie80f6fa7547 [fe80::ecee:eeff:feee:eeee%7]:123 Jan 23 23:57:45.051943 containerd[2025]: time="2026-01-23T23:57:45.051864768Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:57:45.054792 containerd[2025]: time="2026-01-23T23:57:45.054384432Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=62404643" Jan 23 23:57:45.057618 containerd[2025]: time="2026-01-23T23:57:45.056839704Z" level=info msg="ImageCreate event name:\"sha256:3e4ccf401ba9f89a59873e31faf9ee80cc24b5cb6b8dc15c4e5393551cdaeb58\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:57:45.062666 containerd[2025]: time="2026-01-23T23:57:45.062612496Z" level=info 
msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:32c5137cb8c7cf61e75836f150e983b9be21fecc642ada89fd936c8cd6c0faa0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:57:45.064608 containerd[2025]: time="2026-01-23T23:57:45.064557936Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:3e4ccf401ba9f89a59873e31faf9ee80cc24b5cb6b8dc15c4e5393551cdaeb58\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:32c5137cb8c7cf61e75836f150e983b9be21fecc642ada89fd936c8cd6c0faa0\", size \"62404521\" in 4.240193217s" Jan 23 23:57:45.064756 containerd[2025]: time="2026-01-23T23:57:45.064723728Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:3e4ccf401ba9f89a59873e31faf9ee80cc24b5cb6b8dc15c4e5393551cdaeb58\"" Jan 23 23:57:45.072690 containerd[2025]: time="2026-01-23T23:57:45.072629316Z" level=info msg="CreateContainer within sandbox \"2d0917c5fd1b77f4b426302ea0ff8c99c3f731e72215cd672a1f15c52200285b\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Jan 23 23:57:45.098409 containerd[2025]: time="2026-01-23T23:57:45.098352624Z" level=info msg="CreateContainer within sandbox \"2d0917c5fd1b77f4b426302ea0ff8c99c3f731e72215cd672a1f15c52200285b\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"29c33049241f084c5382d16efe69e1c1e6f5c330ad9f6a0daaa6fcf67f6c46fc\"" Jan 23 23:57:45.101449 containerd[2025]: time="2026-01-23T23:57:45.099541320Z" level=info msg="StartContainer for \"29c33049241f084c5382d16efe69e1c1e6f5c330ad9f6a0daaa6fcf67f6c46fc\"" Jan 23 23:57:45.150677 kubelet[2471]: E0123 23:57:45.149596 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:57:45.161756 systemd[1]: Started cri-containerd-29c33049241f084c5382d16efe69e1c1e6f5c330ad9f6a0daaa6fcf67f6c46fc.scope - libcontainer container 
29c33049241f084c5382d16efe69e1c1e6f5c330ad9f6a0daaa6fcf67f6c46fc. Jan 23 23:57:45.209260 containerd[2025]: time="2026-01-23T23:57:45.209203981Z" level=info msg="StartContainer for \"29c33049241f084c5382d16efe69e1c1e6f5c330ad9f6a0daaa6fcf67f6c46fc\" returns successfully" Jan 23 23:57:46.151492 kubelet[2471]: E0123 23:57:46.151397 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:57:47.151849 kubelet[2471]: E0123 23:57:47.151784 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:57:48.152878 kubelet[2471]: E0123 23:57:48.152813 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:57:49.114174 kubelet[2471]: E0123 23:57:49.114106 2471 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:57:49.153173 kubelet[2471]: E0123 23:57:49.153130 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:57:50.154229 kubelet[2471]: E0123 23:57:50.154166 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:57:51.154551 kubelet[2471]: E0123 23:57:51.154490 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:57:51.286994 containerd[2025]: time="2026-01-23T23:57:51.286857487Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 23:57:51.310206 kubelet[2471]: I0123 23:57:51.310005 2471 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-7fcdb87857-c8fnj" podStartSLOduration=23.067054346 podStartE2EDuration="27.309985387s" podCreationTimestamp="2026-01-23 23:57:24 +0000 UTC" 
firstStartedPulling="2026-01-23 23:57:40.823859351 +0000 UTC m=+33.362474975" lastFinishedPulling="2026-01-23 23:57:45.066790392 +0000 UTC m=+37.605406016" observedRunningTime="2026-01-23 23:57:45.428096582 +0000 UTC m=+37.966712218" watchObservedRunningTime="2026-01-23 23:57:51.309985387 +0000 UTC m=+43.848601011" Jan 23 23:57:51.583654 containerd[2025]: time="2026-01-23T23:57:51.583582221Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:57:51.585915 containerd[2025]: time="2026-01-23T23:57:51.585758001Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 23:57:51.585915 containerd[2025]: time="2026-01-23T23:57:51.585834261Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 23 23:57:51.586255 kubelet[2471]: E0123 23:57:51.586084 2471 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 23:57:51.586255 kubelet[2471]: E0123 23:57:51.586144 2471 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 23:57:51.586912 kubelet[2471]: E0123 23:57:51.586356 2471 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dnx8x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-49pfd_calico-system(77b991bb-bce8-4211-845f-aa451168631a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 23:57:51.589305 containerd[2025]: time="2026-01-23T23:57:51.589259301Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 23:57:51.856694 containerd[2025]: time="2026-01-23T23:57:51.856477294Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:57:51.858727 containerd[2025]: time="2026-01-23T23:57:51.858652282Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 23:57:51.858873 containerd[2025]: time="2026-01-23T23:57:51.858796402Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 23 23:57:51.859048 kubelet[2471]: E0123 23:57:51.858980 2471 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 23:57:51.859155 kubelet[2471]: E0123 23:57:51.859039 2471 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not 
found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 23:57:51.859314 kubelet[2471]: E0123 23:57:51.859206 2471 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dnx8x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:
[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-49pfd_calico-system(77b991bb-bce8-4211-845f-aa451168631a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 23:57:51.860948 kubelet[2471]: E0123 23:57:51.860875 2471 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-49pfd" podUID="77b991bb-bce8-4211-845f-aa451168631a" Jan 23 23:57:52.155604 kubelet[2471]: E0123 23:57:52.155452 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:57:53.156321 kubelet[2471]: E0123 23:57:53.156262 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:57:54.157312 kubelet[2471]: E0123 23:57:54.157250 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:57:54.321204 systemd[1]: Created slice kubepods-besteffort-podd4ee4903_4d5d_4b65_8c41_f4b877ef5972.slice - 
libcontainer container kubepods-besteffort-podd4ee4903_4d5d_4b65_8c41_f4b877ef5972.slice. Jan 23 23:57:54.365950 kubelet[2471]: I0123 23:57:54.365886 2471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npzzq\" (UniqueName: \"kubernetes.io/projected/d4ee4903-4d5d-4b65-8c41-f4b877ef5972-kube-api-access-npzzq\") pod \"nfs-server-provisioner-0\" (UID: \"d4ee4903-4d5d-4b65-8c41-f4b877ef5972\") " pod="default/nfs-server-provisioner-0" Jan 23 23:57:54.366118 kubelet[2471]: I0123 23:57:54.365987 2471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/d4ee4903-4d5d-4b65-8c41-f4b877ef5972-data\") pod \"nfs-server-provisioner-0\" (UID: \"d4ee4903-4d5d-4b65-8c41-f4b877ef5972\") " pod="default/nfs-server-provisioner-0" Jan 23 23:57:54.628081 containerd[2025]: time="2026-01-23T23:57:54.627570888Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:d4ee4903-4d5d-4b65-8c41-f4b877ef5972,Namespace:default,Attempt:0,}" Jan 23 23:57:54.829765 systemd-networkd[1943]: cali60e51b789ff: Link UP Jan 23 23:57:54.831250 systemd-networkd[1943]: cali60e51b789ff: Gained carrier Jan 23 23:57:54.837222 (udev-worker)[3876]: Network interface NamePolicy= disabled on kernel command line. 
Jan 23 23:57:54.855754 containerd[2025]: 2026-01-23 23:57:54.722 [INFO][3857] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.22.24-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default d4ee4903-4d5d-4b65-8c41-f4b877ef5972 1377 0 2026-01-23 23:57:54 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 172.31.22.24 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] [] }} ContainerID="37cd9070225813201462324fbc5b4a68e0101ec2ea8656de676deb3864fe82ad" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.22.24-k8s-nfs--server--provisioner--0-" Jan 23 23:57:54.855754 containerd[2025]: 2026-01-23 23:57:54.722 [INFO][3857] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="37cd9070225813201462324fbc5b4a68e0101ec2ea8656de676deb3864fe82ad" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.22.24-k8s-nfs--server--provisioner--0-eth0" Jan 23 23:57:54.855754 containerd[2025]: 2026-01-23 23:57:54.765 [INFO][3869] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="37cd9070225813201462324fbc5b4a68e0101ec2ea8656de676deb3864fe82ad" 
HandleID="k8s-pod-network.37cd9070225813201462324fbc5b4a68e0101ec2ea8656de676deb3864fe82ad" Workload="172.31.22.24-k8s-nfs--server--provisioner--0-eth0" Jan 23 23:57:54.855754 containerd[2025]: 2026-01-23 23:57:54.765 [INFO][3869] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="37cd9070225813201462324fbc5b4a68e0101ec2ea8656de676deb3864fe82ad" HandleID="k8s-pod-network.37cd9070225813201462324fbc5b4a68e0101ec2ea8656de676deb3864fe82ad" Workload="172.31.22.24-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b5d0), Attrs:map[string]string{"namespace":"default", "node":"172.31.22.24", "pod":"nfs-server-provisioner-0", "timestamp":"2026-01-23 23:57:54.765461304 +0000 UTC"}, Hostname:"172.31.22.24", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 23:57:54.855754 containerd[2025]: 2026-01-23 23:57:54.765 [INFO][3869] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:57:54.855754 containerd[2025]: 2026-01-23 23:57:54.765 [INFO][3869] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 23:57:54.855754 containerd[2025]: 2026-01-23 23:57:54.765 [INFO][3869] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.22.24' Jan 23 23:57:54.855754 containerd[2025]: 2026-01-23 23:57:54.780 [INFO][3869] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.37cd9070225813201462324fbc5b4a68e0101ec2ea8656de676deb3864fe82ad" host="172.31.22.24" Jan 23 23:57:54.855754 containerd[2025]: 2026-01-23 23:57:54.787 [INFO][3869] ipam/ipam.go 394: Looking up existing affinities for host host="172.31.22.24" Jan 23 23:57:54.855754 containerd[2025]: 2026-01-23 23:57:54.794 [INFO][3869] ipam/ipam.go 511: Trying affinity for 192.168.127.128/26 host="172.31.22.24" Jan 23 23:57:54.855754 containerd[2025]: 2026-01-23 23:57:54.798 [INFO][3869] ipam/ipam.go 158: Attempting to load block cidr=192.168.127.128/26 host="172.31.22.24" Jan 23 23:57:54.855754 containerd[2025]: 2026-01-23 23:57:54.801 [INFO][3869] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.127.128/26 host="172.31.22.24" Jan 23 23:57:54.855754 containerd[2025]: 2026-01-23 23:57:54.801 [INFO][3869] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.127.128/26 handle="k8s-pod-network.37cd9070225813201462324fbc5b4a68e0101ec2ea8656de676deb3864fe82ad" host="172.31.22.24" Jan 23 23:57:54.855754 containerd[2025]: 2026-01-23 23:57:54.804 [INFO][3869] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.37cd9070225813201462324fbc5b4a68e0101ec2ea8656de676deb3864fe82ad Jan 23 23:57:54.855754 containerd[2025]: 2026-01-23 23:57:54.812 [INFO][3869] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.127.128/26 handle="k8s-pod-network.37cd9070225813201462324fbc5b4a68e0101ec2ea8656de676deb3864fe82ad" host="172.31.22.24" Jan 23 23:57:54.855754 containerd[2025]: 2026-01-23 23:57:54.821 [INFO][3869] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.127.131/26] block=192.168.127.128/26 
handle="k8s-pod-network.37cd9070225813201462324fbc5b4a68e0101ec2ea8656de676deb3864fe82ad" host="172.31.22.24" Jan 23 23:57:54.855754 containerd[2025]: 2026-01-23 23:57:54.821 [INFO][3869] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.127.131/26] handle="k8s-pod-network.37cd9070225813201462324fbc5b4a68e0101ec2ea8656de676deb3864fe82ad" host="172.31.22.24" Jan 23 23:57:54.855754 containerd[2025]: 2026-01-23 23:57:54.821 [INFO][3869] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:57:54.855754 containerd[2025]: 2026-01-23 23:57:54.821 [INFO][3869] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.127.131/26] IPv6=[] ContainerID="37cd9070225813201462324fbc5b4a68e0101ec2ea8656de676deb3864fe82ad" HandleID="k8s-pod-network.37cd9070225813201462324fbc5b4a68e0101ec2ea8656de676deb3864fe82ad" Workload="172.31.22.24-k8s-nfs--server--provisioner--0-eth0" Jan 23 23:57:54.856904 containerd[2025]: 2026-01-23 23:57:54.825 [INFO][3857] cni-plugin/k8s.go 418: Populated endpoint ContainerID="37cd9070225813201462324fbc5b4a68e0101ec2ea8656de676deb3864fe82ad" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.22.24-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.22.24-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"d4ee4903-4d5d-4b65-8c41-f4b877ef5972", ResourceVersion:"1377", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 57, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.22.24", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.127.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:57:54.856904 containerd[2025]: 2026-01-23 23:57:54.825 [INFO][3857] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.127.131/32] ContainerID="37cd9070225813201462324fbc5b4a68e0101ec2ea8656de676deb3864fe82ad" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.22.24-k8s-nfs--server--provisioner--0-eth0" Jan 23 23:57:54.856904 containerd[2025]: 2026-01-23 23:57:54.825 [INFO][3857] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="37cd9070225813201462324fbc5b4a68e0101ec2ea8656de676deb3864fe82ad" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.22.24-k8s-nfs--server--provisioner--0-eth0" Jan 23 23:57:54.856904 containerd[2025]: 2026-01-23 23:57:54.831 [INFO][3857] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="37cd9070225813201462324fbc5b4a68e0101ec2ea8656de676deb3864fe82ad" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.22.24-k8s-nfs--server--provisioner--0-eth0" Jan 23 23:57:54.857225 containerd[2025]: 2026-01-23 23:57:54.832 [INFO][3857] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="37cd9070225813201462324fbc5b4a68e0101ec2ea8656de676deb3864fe82ad" Namespace="default" Pod="nfs-server-provisioner-0" 
WorkloadEndpoint="172.31.22.24-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.22.24-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"d4ee4903-4d5d-4b65-8c41-f4b877ef5972", ResourceVersion:"1377", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 57, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.22.24", ContainerID:"37cd9070225813201462324fbc5b4a68e0101ec2ea8656de676deb3864fe82ad", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.127.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"be:5b:65:78:45:6f", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:57:54.857225 containerd[2025]: 2026-01-23 23:57:54.847 [INFO][3857] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="37cd9070225813201462324fbc5b4a68e0101ec2ea8656de676deb3864fe82ad" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.22.24-k8s-nfs--server--provisioner--0-eth0" Jan 23 23:57:54.892131 containerd[2025]: time="2026-01-23T23:57:54.891656005Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:57:54.892131 containerd[2025]: time="2026-01-23T23:57:54.891760273Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:57:54.892131 containerd[2025]: time="2026-01-23T23:57:54.891797869Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:57:54.894458 containerd[2025]: time="2026-01-23T23:57:54.894347413Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:57:54.940747 systemd[1]: Started cri-containerd-37cd9070225813201462324fbc5b4a68e0101ec2ea8656de676deb3864fe82ad.scope - libcontainer container 37cd9070225813201462324fbc5b4a68e0101ec2ea8656de676deb3864fe82ad. Jan 23 23:57:55.001243 containerd[2025]: time="2026-01-23T23:57:55.001173082Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:d4ee4903-4d5d-4b65-8c41-f4b877ef5972,Namespace:default,Attempt:0,} returns sandbox id \"37cd9070225813201462324fbc5b4a68e0101ec2ea8656de676deb3864fe82ad\"" Jan 23 23:57:55.004165 containerd[2025]: time="2026-01-23T23:57:55.003988906Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Jan 23 23:57:55.158258 kubelet[2471]: E0123 23:57:55.158061 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:57:56.107086 systemd-networkd[1943]: cali60e51b789ff: Gained IPv6LL Jan 23 23:57:56.159582 kubelet[2471]: E0123 23:57:56.159538 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:57:57.162720 kubelet[2471]: E0123 23:57:57.162629 2471 file_linux.go:61] "Unable to read config path" err="path does not 
exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:57:57.565446 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2419655624.mount: Deactivated successfully. Jan 23 23:57:58.163688 kubelet[2471]: E0123 23:57:58.163633 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:57:59.044927 ntpd[1993]: Listen normally on 12 cali60e51b789ff [fe80::ecee:eeff:feee:eeee%8]:123 Jan 23 23:57:59.045791 ntpd[1993]: 23 Jan 23:57:59 ntpd[1993]: Listen normally on 12 cali60e51b789ff [fe80::ecee:eeff:feee:eeee%8]:123 Jan 23 23:57:59.165557 kubelet[2471]: E0123 23:57:59.165503 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:58:00.166358 kubelet[2471]: E0123 23:58:00.166257 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:58:00.701293 containerd[2025]: time="2026-01-23T23:58:00.699405186Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:58:00.703576 containerd[2025]: time="2026-01-23T23:58:00.703493898Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=87373623" Jan 23 23:58:00.705700 containerd[2025]: time="2026-01-23T23:58:00.705654822Z" level=info msg="ImageCreate event name:\"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:58:00.714590 containerd[2025]: time="2026-01-23T23:58:00.714536574Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:58:00.718678 containerd[2025]: 
time="2026-01-23T23:58:00.718601238Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"87371201\" in 5.714543836s" Jan 23 23:58:00.718678 containerd[2025]: time="2026-01-23T23:58:00.718668570Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\"" Jan 23 23:58:00.728027 containerd[2025]: time="2026-01-23T23:58:00.727847610Z" level=info msg="CreateContainer within sandbox \"37cd9070225813201462324fbc5b4a68e0101ec2ea8656de676deb3864fe82ad\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Jan 23 23:58:00.751096 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1997692676.mount: Deactivated successfully. Jan 23 23:58:00.760910 containerd[2025]: time="2026-01-23T23:58:00.760847022Z" level=info msg="CreateContainer within sandbox \"37cd9070225813201462324fbc5b4a68e0101ec2ea8656de676deb3864fe82ad\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"155e7ef4c8ba0d8551dadf3953b3d3046c5367492ef7a94962c0b5931dd18830\"" Jan 23 23:58:00.761924 containerd[2025]: time="2026-01-23T23:58:00.761879970Z" level=info msg="StartContainer for \"155e7ef4c8ba0d8551dadf3953b3d3046c5367492ef7a94962c0b5931dd18830\"" Jan 23 23:58:00.825770 systemd[1]: Started cri-containerd-155e7ef4c8ba0d8551dadf3953b3d3046c5367492ef7a94962c0b5931dd18830.scope - libcontainer container 155e7ef4c8ba0d8551dadf3953b3d3046c5367492ef7a94962c0b5931dd18830. 
Jan 23 23:58:00.872412 containerd[2025]: time="2026-01-23T23:58:00.871992151Z" level=info msg="StartContainer for \"155e7ef4c8ba0d8551dadf3953b3d3046c5367492ef7a94962c0b5931dd18830\" returns successfully" Jan 23 23:58:01.166693 kubelet[2471]: E0123 23:58:01.166631 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:58:02.167382 kubelet[2471]: E0123 23:58:02.167320 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:58:03.168286 kubelet[2471]: E0123 23:58:03.168223 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:58:03.287201 kubelet[2471]: E0123 23:58:03.287109 2471 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-49pfd" podUID="77b991bb-bce8-4211-845f-aa451168631a" Jan 23 23:58:03.312136 kubelet[2471]: I0123 23:58:03.311931 2471 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" 
podStartSLOduration=3.595740255 podStartE2EDuration="9.311911231s" podCreationTimestamp="2026-01-23 23:57:54 +0000 UTC" firstStartedPulling="2026-01-23 23:57:55.003636862 +0000 UTC m=+47.542252486" lastFinishedPulling="2026-01-23 23:58:00.719807826 +0000 UTC m=+53.258423462" observedRunningTime="2026-01-23 23:58:01.475264026 +0000 UTC m=+54.013879674" watchObservedRunningTime="2026-01-23 23:58:03.311911231 +0000 UTC m=+55.850526867" Jan 23 23:58:04.169467 kubelet[2471]: E0123 23:58:04.169371 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:58:05.170096 kubelet[2471]: E0123 23:58:05.170019 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:58:06.171187 kubelet[2471]: E0123 23:58:06.171124 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:58:07.172148 kubelet[2471]: E0123 23:58:07.172079 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:58:08.173290 kubelet[2471]: E0123 23:58:08.173228 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:58:09.114549 kubelet[2471]: E0123 23:58:09.114488 2471 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:58:09.171924 containerd[2025]: time="2026-01-23T23:58:09.171412884Z" level=info msg="StopPodSandbox for \"c26e84db635a726897c80e9618b0735ba22fcbcd56120f5aa84ebb9e73e36153\"" Jan 23 23:58:09.173921 kubelet[2471]: E0123 23:58:09.173861 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:58:09.332532 containerd[2025]: 2026-01-23 23:58:09.260 [WARNING][4050] cni-plugin/k8s.go 604: 
CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="c26e84db635a726897c80e9618b0735ba22fcbcd56120f5aa84ebb9e73e36153" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.22.24-k8s-nginx--deployment--7fcdb87857--c8fnj-eth0", GenerateName:"nginx-deployment-7fcdb87857-", Namespace:"default", SelfLink:"", UID:"b3804be6-e56c-4cfe-b5d4-83624c123948", ResourceVersion:"1312", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 57, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"7fcdb87857", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.22.24", ContainerID:"2d0917c5fd1b77f4b426302ea0ff8c99c3f731e72215cd672a1f15c52200285b", Pod:"nginx-deployment-7fcdb87857-c8fnj", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.127.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"calie80f6fa7547", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:58:09.332532 containerd[2025]: 2026-01-23 23:58:09.260 [INFO][4050] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c26e84db635a726897c80e9618b0735ba22fcbcd56120f5aa84ebb9e73e36153" Jan 23 23:58:09.332532 containerd[2025]: 2026-01-23 23:58:09.260 [INFO][4050] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="c26e84db635a726897c80e9618b0735ba22fcbcd56120f5aa84ebb9e73e36153" iface="eth0" netns="" Jan 23 23:58:09.332532 containerd[2025]: 2026-01-23 23:58:09.260 [INFO][4050] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c26e84db635a726897c80e9618b0735ba22fcbcd56120f5aa84ebb9e73e36153" Jan 23 23:58:09.332532 containerd[2025]: 2026-01-23 23:58:09.260 [INFO][4050] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c26e84db635a726897c80e9618b0735ba22fcbcd56120f5aa84ebb9e73e36153" Jan 23 23:58:09.332532 containerd[2025]: 2026-01-23 23:58:09.308 [INFO][4057] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="c26e84db635a726897c80e9618b0735ba22fcbcd56120f5aa84ebb9e73e36153" HandleID="k8s-pod-network.c26e84db635a726897c80e9618b0735ba22fcbcd56120f5aa84ebb9e73e36153" Workload="172.31.22.24-k8s-nginx--deployment--7fcdb87857--c8fnj-eth0" Jan 23 23:58:09.332532 containerd[2025]: 2026-01-23 23:58:09.309 [INFO][4057] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:58:09.332532 containerd[2025]: 2026-01-23 23:58:09.309 [INFO][4057] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:58:09.332532 containerd[2025]: 2026-01-23 23:58:09.324 [WARNING][4057] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c26e84db635a726897c80e9618b0735ba22fcbcd56120f5aa84ebb9e73e36153" HandleID="k8s-pod-network.c26e84db635a726897c80e9618b0735ba22fcbcd56120f5aa84ebb9e73e36153" Workload="172.31.22.24-k8s-nginx--deployment--7fcdb87857--c8fnj-eth0" Jan 23 23:58:09.332532 containerd[2025]: 2026-01-23 23:58:09.324 [INFO][4057] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="c26e84db635a726897c80e9618b0735ba22fcbcd56120f5aa84ebb9e73e36153" HandleID="k8s-pod-network.c26e84db635a726897c80e9618b0735ba22fcbcd56120f5aa84ebb9e73e36153" Workload="172.31.22.24-k8s-nginx--deployment--7fcdb87857--c8fnj-eth0" Jan 23 23:58:09.332532 containerd[2025]: 2026-01-23 23:58:09.327 [INFO][4057] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:58:09.332532 containerd[2025]: 2026-01-23 23:58:09.329 [INFO][4050] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c26e84db635a726897c80e9618b0735ba22fcbcd56120f5aa84ebb9e73e36153" Jan 23 23:58:09.333972 containerd[2025]: time="2026-01-23T23:58:09.332496133Z" level=info msg="TearDown network for sandbox \"c26e84db635a726897c80e9618b0735ba22fcbcd56120f5aa84ebb9e73e36153\" successfully" Jan 23 23:58:09.333972 containerd[2025]: time="2026-01-23T23:58:09.332565709Z" level=info msg="StopPodSandbox for \"c26e84db635a726897c80e9618b0735ba22fcbcd56120f5aa84ebb9e73e36153\" returns successfully" Jan 23 23:58:09.333972 containerd[2025]: time="2026-01-23T23:58:09.333861385Z" level=info msg="RemovePodSandbox for \"c26e84db635a726897c80e9618b0735ba22fcbcd56120f5aa84ebb9e73e36153\"" Jan 23 23:58:09.333972 containerd[2025]: time="2026-01-23T23:58:09.333931513Z" level=info msg="Forcibly stopping sandbox \"c26e84db635a726897c80e9618b0735ba22fcbcd56120f5aa84ebb9e73e36153\"" Jan 23 23:58:09.475466 containerd[2025]: 2026-01-23 23:58:09.393 [WARNING][4074] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c26e84db635a726897c80e9618b0735ba22fcbcd56120f5aa84ebb9e73e36153" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.22.24-k8s-nginx--deployment--7fcdb87857--c8fnj-eth0", GenerateName:"nginx-deployment-7fcdb87857-", Namespace:"default", SelfLink:"", UID:"b3804be6-e56c-4cfe-b5d4-83624c123948", ResourceVersion:"1312", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 57, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"7fcdb87857", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.22.24", ContainerID:"2d0917c5fd1b77f4b426302ea0ff8c99c3f731e72215cd672a1f15c52200285b", Pod:"nginx-deployment-7fcdb87857-c8fnj", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.127.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"calie80f6fa7547", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:58:09.475466 containerd[2025]: 2026-01-23 23:58:09.395 [INFO][4074] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c26e84db635a726897c80e9618b0735ba22fcbcd56120f5aa84ebb9e73e36153" Jan 23 23:58:09.475466 containerd[2025]: 2026-01-23 23:58:09.395 [INFO][4074] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="c26e84db635a726897c80e9618b0735ba22fcbcd56120f5aa84ebb9e73e36153" iface="eth0" netns="" Jan 23 23:58:09.475466 containerd[2025]: 2026-01-23 23:58:09.395 [INFO][4074] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c26e84db635a726897c80e9618b0735ba22fcbcd56120f5aa84ebb9e73e36153" Jan 23 23:58:09.475466 containerd[2025]: 2026-01-23 23:58:09.395 [INFO][4074] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c26e84db635a726897c80e9618b0735ba22fcbcd56120f5aa84ebb9e73e36153" Jan 23 23:58:09.475466 containerd[2025]: 2026-01-23 23:58:09.431 [INFO][4081] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="c26e84db635a726897c80e9618b0735ba22fcbcd56120f5aa84ebb9e73e36153" HandleID="k8s-pod-network.c26e84db635a726897c80e9618b0735ba22fcbcd56120f5aa84ebb9e73e36153" Workload="172.31.22.24-k8s-nginx--deployment--7fcdb87857--c8fnj-eth0" Jan 23 23:58:09.475466 containerd[2025]: 2026-01-23 23:58:09.432 [INFO][4081] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:58:09.475466 containerd[2025]: 2026-01-23 23:58:09.432 [INFO][4081] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:58:09.475466 containerd[2025]: 2026-01-23 23:58:09.456 [WARNING][4081] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c26e84db635a726897c80e9618b0735ba22fcbcd56120f5aa84ebb9e73e36153" HandleID="k8s-pod-network.c26e84db635a726897c80e9618b0735ba22fcbcd56120f5aa84ebb9e73e36153" Workload="172.31.22.24-k8s-nginx--deployment--7fcdb87857--c8fnj-eth0" Jan 23 23:58:09.475466 containerd[2025]: 2026-01-23 23:58:09.456 [INFO][4081] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="c26e84db635a726897c80e9618b0735ba22fcbcd56120f5aa84ebb9e73e36153" HandleID="k8s-pod-network.c26e84db635a726897c80e9618b0735ba22fcbcd56120f5aa84ebb9e73e36153" Workload="172.31.22.24-k8s-nginx--deployment--7fcdb87857--c8fnj-eth0" Jan 23 23:58:09.475466 containerd[2025]: 2026-01-23 23:58:09.462 [INFO][4081] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:58:09.475466 containerd[2025]: 2026-01-23 23:58:09.467 [INFO][4074] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c26e84db635a726897c80e9618b0735ba22fcbcd56120f5aa84ebb9e73e36153" Jan 23 23:58:09.475466 containerd[2025]: time="2026-01-23T23:58:09.474667429Z" level=info msg="TearDown network for sandbox \"c26e84db635a726897c80e9618b0735ba22fcbcd56120f5aa84ebb9e73e36153\" successfully" Jan 23 23:58:09.483487 containerd[2025]: time="2026-01-23T23:58:09.482571806Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c26e84db635a726897c80e9618b0735ba22fcbcd56120f5aa84ebb9e73e36153\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 23 23:58:09.483487 containerd[2025]: time="2026-01-23T23:58:09.482655914Z" level=info msg="RemovePodSandbox \"c26e84db635a726897c80e9618b0735ba22fcbcd56120f5aa84ebb9e73e36153\" returns successfully" Jan 23 23:58:09.483487 containerd[2025]: time="2026-01-23T23:58:09.483317786Z" level=info msg="StopPodSandbox for \"973022a9fab101c0b5999f29b1d124d970edb713ada58774d3929aeb8a2f0350\"" Jan 23 23:58:09.634929 containerd[2025]: 2026-01-23 23:58:09.567 [WARNING][4097] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="973022a9fab101c0b5999f29b1d124d970edb713ada58774d3929aeb8a2f0350" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.22.24-k8s-csi--node--driver--49pfd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"77b991bb-bce8-4211-845f-aa451168631a", ResourceVersion:"1440", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 57, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.22.24", ContainerID:"6826c58fd32ffb2ee2cbc8d5002412f4d62e07fcd94656d83dffc6284f3cebae", Pod:"csi-node-driver-49pfd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.127.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic6d640b9b2c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:58:09.634929 containerd[2025]: 2026-01-23 23:58:09.570 [INFO][4097] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="973022a9fab101c0b5999f29b1d124d970edb713ada58774d3929aeb8a2f0350" Jan 23 23:58:09.634929 containerd[2025]: 2026-01-23 23:58:09.570 [INFO][4097] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="973022a9fab101c0b5999f29b1d124d970edb713ada58774d3929aeb8a2f0350" iface="eth0" netns="" Jan 23 23:58:09.634929 containerd[2025]: 2026-01-23 23:58:09.570 [INFO][4097] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="973022a9fab101c0b5999f29b1d124d970edb713ada58774d3929aeb8a2f0350" Jan 23 23:58:09.634929 containerd[2025]: 2026-01-23 23:58:09.570 [INFO][4097] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="973022a9fab101c0b5999f29b1d124d970edb713ada58774d3929aeb8a2f0350" Jan 23 23:58:09.634929 containerd[2025]: 2026-01-23 23:58:09.605 [INFO][4105] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="973022a9fab101c0b5999f29b1d124d970edb713ada58774d3929aeb8a2f0350" HandleID="k8s-pod-network.973022a9fab101c0b5999f29b1d124d970edb713ada58774d3929aeb8a2f0350" Workload="172.31.22.24-k8s-csi--node--driver--49pfd-eth0" Jan 23 23:58:09.634929 containerd[2025]: 2026-01-23 23:58:09.606 [INFO][4105] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:58:09.634929 containerd[2025]: 2026-01-23 23:58:09.606 [INFO][4105] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:58:09.634929 containerd[2025]: 2026-01-23 23:58:09.626 [WARNING][4105] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="973022a9fab101c0b5999f29b1d124d970edb713ada58774d3929aeb8a2f0350" HandleID="k8s-pod-network.973022a9fab101c0b5999f29b1d124d970edb713ada58774d3929aeb8a2f0350" Workload="172.31.22.24-k8s-csi--node--driver--49pfd-eth0" Jan 23 23:58:09.634929 containerd[2025]: 2026-01-23 23:58:09.626 [INFO][4105] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="973022a9fab101c0b5999f29b1d124d970edb713ada58774d3929aeb8a2f0350" HandleID="k8s-pod-network.973022a9fab101c0b5999f29b1d124d970edb713ada58774d3929aeb8a2f0350" Workload="172.31.22.24-k8s-csi--node--driver--49pfd-eth0" Jan 23 23:58:09.634929 containerd[2025]: 2026-01-23 23:58:09.629 [INFO][4105] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:58:09.634929 containerd[2025]: 2026-01-23 23:58:09.632 [INFO][4097] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="973022a9fab101c0b5999f29b1d124d970edb713ada58774d3929aeb8a2f0350" Jan 23 23:58:09.636284 containerd[2025]: time="2026-01-23T23:58:09.635796578Z" level=info msg="TearDown network for sandbox \"973022a9fab101c0b5999f29b1d124d970edb713ada58774d3929aeb8a2f0350\" successfully" Jan 23 23:58:09.636284 containerd[2025]: time="2026-01-23T23:58:09.635839490Z" level=info msg="StopPodSandbox for \"973022a9fab101c0b5999f29b1d124d970edb713ada58774d3929aeb8a2f0350\" returns successfully" Jan 23 23:58:09.636741 containerd[2025]: time="2026-01-23T23:58:09.636589838Z" level=info msg="RemovePodSandbox for \"973022a9fab101c0b5999f29b1d124d970edb713ada58774d3929aeb8a2f0350\"" Jan 23 23:58:09.636741 containerd[2025]: time="2026-01-23T23:58:09.636635150Z" level=info msg="Forcibly stopping sandbox \"973022a9fab101c0b5999f29b1d124d970edb713ada58774d3929aeb8a2f0350\"" Jan 23 23:58:09.754081 containerd[2025]: 2026-01-23 23:58:09.696 [WARNING][4119] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="973022a9fab101c0b5999f29b1d124d970edb713ada58774d3929aeb8a2f0350" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.22.24-k8s-csi--node--driver--49pfd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"77b991bb-bce8-4211-845f-aa451168631a", ResourceVersion:"1440", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 57, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.22.24", ContainerID:"6826c58fd32ffb2ee2cbc8d5002412f4d62e07fcd94656d83dffc6284f3cebae", Pod:"csi-node-driver-49pfd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.127.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic6d640b9b2c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:58:09.754081 containerd[2025]: 2026-01-23 23:58:09.697 [INFO][4119] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="973022a9fab101c0b5999f29b1d124d970edb713ada58774d3929aeb8a2f0350" Jan 23 23:58:09.754081 containerd[2025]: 2026-01-23 23:58:09.697 [INFO][4119] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="973022a9fab101c0b5999f29b1d124d970edb713ada58774d3929aeb8a2f0350" iface="eth0" netns="" Jan 23 23:58:09.754081 containerd[2025]: 2026-01-23 23:58:09.697 [INFO][4119] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="973022a9fab101c0b5999f29b1d124d970edb713ada58774d3929aeb8a2f0350" Jan 23 23:58:09.754081 containerd[2025]: 2026-01-23 23:58:09.697 [INFO][4119] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="973022a9fab101c0b5999f29b1d124d970edb713ada58774d3929aeb8a2f0350" Jan 23 23:58:09.754081 containerd[2025]: 2026-01-23 23:58:09.732 [INFO][4126] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="973022a9fab101c0b5999f29b1d124d970edb713ada58774d3929aeb8a2f0350" HandleID="k8s-pod-network.973022a9fab101c0b5999f29b1d124d970edb713ada58774d3929aeb8a2f0350" Workload="172.31.22.24-k8s-csi--node--driver--49pfd-eth0" Jan 23 23:58:09.754081 containerd[2025]: 2026-01-23 23:58:09.733 [INFO][4126] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:58:09.754081 containerd[2025]: 2026-01-23 23:58:09.733 [INFO][4126] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:58:09.754081 containerd[2025]: 2026-01-23 23:58:09.746 [WARNING][4126] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="973022a9fab101c0b5999f29b1d124d970edb713ada58774d3929aeb8a2f0350" HandleID="k8s-pod-network.973022a9fab101c0b5999f29b1d124d970edb713ada58774d3929aeb8a2f0350" Workload="172.31.22.24-k8s-csi--node--driver--49pfd-eth0" Jan 23 23:58:09.754081 containerd[2025]: 2026-01-23 23:58:09.746 [INFO][4126] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="973022a9fab101c0b5999f29b1d124d970edb713ada58774d3929aeb8a2f0350" HandleID="k8s-pod-network.973022a9fab101c0b5999f29b1d124d970edb713ada58774d3929aeb8a2f0350" Workload="172.31.22.24-k8s-csi--node--driver--49pfd-eth0" Jan 23 23:58:09.754081 containerd[2025]: 2026-01-23 23:58:09.749 [INFO][4126] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:58:09.754081 containerd[2025]: 2026-01-23 23:58:09.751 [INFO][4119] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="973022a9fab101c0b5999f29b1d124d970edb713ada58774d3929aeb8a2f0350" Jan 23 23:58:09.755773 containerd[2025]: time="2026-01-23T23:58:09.754039431Z" level=info msg="TearDown network for sandbox \"973022a9fab101c0b5999f29b1d124d970edb713ada58774d3929aeb8a2f0350\" successfully" Jan 23 23:58:09.760791 containerd[2025]: time="2026-01-23T23:58:09.760566315Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"973022a9fab101c0b5999f29b1d124d970edb713ada58774d3929aeb8a2f0350\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 23 23:58:09.760791 containerd[2025]: time="2026-01-23T23:58:09.760652499Z" level=info msg="RemovePodSandbox \"973022a9fab101c0b5999f29b1d124d970edb713ada58774d3929aeb8a2f0350\" returns successfully" Jan 23 23:58:10.174924 kubelet[2471]: E0123 23:58:10.174747 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:58:11.175238 kubelet[2471]: E0123 23:58:11.175177 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:58:12.175344 kubelet[2471]: E0123 23:58:12.175285 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:58:13.176241 kubelet[2471]: E0123 23:58:13.176183 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:58:14.176353 kubelet[2471]: E0123 23:58:14.176299 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:58:14.286435 containerd[2025]: time="2026-01-23T23:58:14.286373609Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 23:58:14.589769 containerd[2025]: time="2026-01-23T23:58:14.589567219Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:58:14.591967 containerd[2025]: time="2026-01-23T23:58:14.591771811Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 23:58:14.591967 containerd[2025]: time="2026-01-23T23:58:14.591892519Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 23 
23:58:14.592225 kubelet[2471]: E0123 23:58:14.592137 2471 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 23:58:14.592308 kubelet[2471]: E0123 23:58:14.592222 2471 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 23:58:14.592918 kubelet[2471]: E0123 23:58:14.592406 2471 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dnx8x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,Recurs
iveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-49pfd_calico-system(77b991bb-bce8-4211-845f-aa451168631a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 23:58:14.595785 containerd[2025]: time="2026-01-23T23:58:14.595739779Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 23:58:14.851094 containerd[2025]: time="2026-01-23T23:58:14.850900412Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:58:14.853249 containerd[2025]: time="2026-01-23T23:58:14.853060484Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 23:58:14.853249 containerd[2025]: time="2026-01-23T23:58:14.853160468Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: 
active requests=0, bytes read=93" Jan 23 23:58:14.853483 kubelet[2471]: E0123 23:58:14.853349 2471 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 23:58:14.853483 kubelet[2471]: E0123 23:58:14.853408 2471 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 23:58:14.853688 kubelet[2471]: E0123 23:58:14.853613 2471 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dnx8x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-49pfd_calico-system(77b991bb-bce8-4211-845f-aa451168631a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 23:58:14.855103 kubelet[2471]: E0123 23:58:14.855038 2471 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-49pfd" podUID="77b991bb-bce8-4211-845f-aa451168631a" Jan 23 23:58:15.177414 kubelet[2471]: E0123 23:58:15.177260 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:58:16.178053 kubelet[2471]: E0123 23:58:16.177987 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:58:17.178343 kubelet[2471]: E0123 23:58:17.178279 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:58:18.178785 kubelet[2471]: E0123 23:58:18.178731 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:58:19.179490 kubelet[2471]: E0123 23:58:19.179395 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:58:20.180260 kubelet[2471]: E0123 23:58:20.180207 2471 
file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:58:21.181333 kubelet[2471]: E0123 23:58:21.181273 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:58:21.331485 systemd[1]: Created slice kubepods-besteffort-pod4db3e3c7_3d28_4911_9edb_ff21bc86d762.slice - libcontainer container kubepods-besteffort-pod4db3e3c7_3d28_4911_9edb_ff21bc86d762.slice. Jan 23 23:58:21.429312 kubelet[2471]: I0123 23:58:21.428996 2471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtksd\" (UniqueName: \"kubernetes.io/projected/4db3e3c7-3d28-4911-9edb-ff21bc86d762-kube-api-access-dtksd\") pod \"test-pod-1\" (UID: \"4db3e3c7-3d28-4911-9edb-ff21bc86d762\") " pod="default/test-pod-1" Jan 23 23:58:21.429312 kubelet[2471]: I0123 23:58:21.429073 2471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-7349086d-35eb-431c-bfde-b70d04f48c67\" (UniqueName: \"kubernetes.io/nfs/4db3e3c7-3d28-4911-9edb-ff21bc86d762-pvc-7349086d-35eb-431c-bfde-b70d04f48c67\") pod \"test-pod-1\" (UID: \"4db3e3c7-3d28-4911-9edb-ff21bc86d762\") " pod="default/test-pod-1" Jan 23 23:58:21.571556 kernel: FS-Cache: Loaded Jan 23 23:58:21.615603 kernel: RPC: Registered named UNIX socket transport module. Jan 23 23:58:21.615735 kernel: RPC: Registered udp transport module. Jan 23 23:58:21.615774 kernel: RPC: Registered tcp transport module. Jan 23 23:58:21.616585 kernel: RPC: Registered tcp-with-tls transport module. Jan 23 23:58:21.617691 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. 
Jan 23 23:58:21.944528 kernel: NFS: Registering the id_resolver key type Jan 23 23:58:21.944660 kernel: Key type id_resolver registered Jan 23 23:58:21.944702 kernel: Key type id_legacy registered Jan 23 23:58:21.981905 nfsidmap[4171]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal' Jan 23 23:58:21.987964 nfsidmap[4172]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal' Jan 23 23:58:22.182096 kubelet[2471]: E0123 23:58:22.182035 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:58:22.238134 containerd[2025]: time="2026-01-23T23:58:22.237324085Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:4db3e3c7-3d28-4911-9edb-ff21bc86d762,Namespace:default,Attempt:0,}" Jan 23 23:58:22.527328 (udev-worker)[4160]: Network interface NamePolicy= disabled on kernel command line. 
Jan 23 23:58:22.531009 systemd-networkd[1943]: cali5ec59c6bf6e: Link UP Jan 23 23:58:22.532313 systemd-networkd[1943]: cali5ec59c6bf6e: Gained carrier Jan 23 23:58:22.570758 containerd[2025]: 2026-01-23 23:58:22.335 [INFO][4174] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.22.24-k8s-test--pod--1-eth0 default 4db3e3c7-3d28-4911-9edb-ff21bc86d762 1543 0 2026-01-23 23:57:55 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 172.31.22.24 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] [] }} ContainerID="106de016663568b7d036f70440c06558f25040d2f319a1955e68a4724c5d9bf1" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.22.24-k8s-test--pod--1-" Jan 23 23:58:22.570758 containerd[2025]: 2026-01-23 23:58:22.335 [INFO][4174] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="106de016663568b7d036f70440c06558f25040d2f319a1955e68a4724c5d9bf1" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.22.24-k8s-test--pod--1-eth0" Jan 23 23:58:22.570758 containerd[2025]: 2026-01-23 23:58:22.389 [INFO][4187] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="106de016663568b7d036f70440c06558f25040d2f319a1955e68a4724c5d9bf1" HandleID="k8s-pod-network.106de016663568b7d036f70440c06558f25040d2f319a1955e68a4724c5d9bf1" Workload="172.31.22.24-k8s-test--pod--1-eth0" Jan 23 23:58:22.570758 containerd[2025]: 2026-01-23 23:58:22.389 [INFO][4187] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="106de016663568b7d036f70440c06558f25040d2f319a1955e68a4724c5d9bf1" HandleID="k8s-pod-network.106de016663568b7d036f70440c06558f25040d2f319a1955e68a4724c5d9bf1" Workload="172.31.22.24-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b590), Attrs:map[string]string{"namespace":"default", 
"node":"172.31.22.24", "pod":"test-pod-1", "timestamp":"2026-01-23 23:58:22.38936435 +0000 UTC"}, Hostname:"172.31.22.24", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 23:58:22.570758 containerd[2025]: 2026-01-23 23:58:22.389 [INFO][4187] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:58:22.570758 containerd[2025]: 2026-01-23 23:58:22.389 [INFO][4187] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:58:22.570758 containerd[2025]: 2026-01-23 23:58:22.389 [INFO][4187] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.22.24' Jan 23 23:58:22.570758 containerd[2025]: 2026-01-23 23:58:22.419 [INFO][4187] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.106de016663568b7d036f70440c06558f25040d2f319a1955e68a4724c5d9bf1" host="172.31.22.24" Jan 23 23:58:22.570758 containerd[2025]: 2026-01-23 23:58:22.437 [INFO][4187] ipam/ipam.go 394: Looking up existing affinities for host host="172.31.22.24" Jan 23 23:58:22.570758 containerd[2025]: 2026-01-23 23:58:22.446 [INFO][4187] ipam/ipam.go 511: Trying affinity for 192.168.127.128/26 host="172.31.22.24" Jan 23 23:58:22.570758 containerd[2025]: 2026-01-23 23:58:22.451 [INFO][4187] ipam/ipam.go 158: Attempting to load block cidr=192.168.127.128/26 host="172.31.22.24" Jan 23 23:58:22.570758 containerd[2025]: 2026-01-23 23:58:22.456 [INFO][4187] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.127.128/26 host="172.31.22.24" Jan 23 23:58:22.570758 containerd[2025]: 2026-01-23 23:58:22.456 [INFO][4187] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.127.128/26 handle="k8s-pod-network.106de016663568b7d036f70440c06558f25040d2f319a1955e68a4724c5d9bf1" host="172.31.22.24" Jan 23 23:58:22.570758 containerd[2025]: 2026-01-23 
23:58:22.459 [INFO][4187] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.106de016663568b7d036f70440c06558f25040d2f319a1955e68a4724c5d9bf1 Jan 23 23:58:22.570758 containerd[2025]: 2026-01-23 23:58:22.485 [INFO][4187] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.127.128/26 handle="k8s-pod-network.106de016663568b7d036f70440c06558f25040d2f319a1955e68a4724c5d9bf1" host="172.31.22.24" Jan 23 23:58:22.570758 containerd[2025]: 2026-01-23 23:58:22.520 [INFO][4187] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.127.132/26] block=192.168.127.128/26 handle="k8s-pod-network.106de016663568b7d036f70440c06558f25040d2f319a1955e68a4724c5d9bf1" host="172.31.22.24" Jan 23 23:58:22.570758 containerd[2025]: 2026-01-23 23:58:22.520 [INFO][4187] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.127.132/26] handle="k8s-pod-network.106de016663568b7d036f70440c06558f25040d2f319a1955e68a4724c5d9bf1" host="172.31.22.24" Jan 23 23:58:22.570758 containerd[2025]: 2026-01-23 23:58:22.520 [INFO][4187] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 23 23:58:22.570758 containerd[2025]: 2026-01-23 23:58:22.520 [INFO][4187] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.127.132/26] IPv6=[] ContainerID="106de016663568b7d036f70440c06558f25040d2f319a1955e68a4724c5d9bf1" HandleID="k8s-pod-network.106de016663568b7d036f70440c06558f25040d2f319a1955e68a4724c5d9bf1" Workload="172.31.22.24-k8s-test--pod--1-eth0" Jan 23 23:58:22.570758 containerd[2025]: 2026-01-23 23:58:22.523 [INFO][4174] cni-plugin/k8s.go 418: Populated endpoint ContainerID="106de016663568b7d036f70440c06558f25040d2f319a1955e68a4724c5d9bf1" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.22.24-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.22.24-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"4db3e3c7-3d28-4911-9edb-ff21bc86d762", ResourceVersion:"1543", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 57, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.22.24", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.127.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:58:22.573111 containerd[2025]: 2026-01-23 23:58:22.523 [INFO][4174] 
cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.127.132/32] ContainerID="106de016663568b7d036f70440c06558f25040d2f319a1955e68a4724c5d9bf1" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.22.24-k8s-test--pod--1-eth0" Jan 23 23:58:22.573111 containerd[2025]: 2026-01-23 23:58:22.523 [INFO][4174] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="106de016663568b7d036f70440c06558f25040d2f319a1955e68a4724c5d9bf1" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.22.24-k8s-test--pod--1-eth0" Jan 23 23:58:22.573111 containerd[2025]: 2026-01-23 23:58:22.530 [INFO][4174] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="106de016663568b7d036f70440c06558f25040d2f319a1955e68a4724c5d9bf1" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.22.24-k8s-test--pod--1-eth0" Jan 23 23:58:22.573111 containerd[2025]: 2026-01-23 23:58:22.531 [INFO][4174] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="106de016663568b7d036f70440c06558f25040d2f319a1955e68a4724c5d9bf1" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.22.24-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.22.24-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"4db3e3c7-3d28-4911-9edb-ff21bc86d762", ResourceVersion:"1543", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 57, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, 
Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.22.24", ContainerID:"106de016663568b7d036f70440c06558f25040d2f319a1955e68a4724c5d9bf1", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.127.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"fe:a4:3d:4b:3e:59", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:58:22.573111 containerd[2025]: 2026-01-23 23:58:22.567 [INFO][4174] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="106de016663568b7d036f70440c06558f25040d2f319a1955e68a4724c5d9bf1" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.22.24-k8s-test--pod--1-eth0" Jan 23 23:58:22.611986 containerd[2025]: time="2026-01-23T23:58:22.610769151Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:58:22.611986 containerd[2025]: time="2026-01-23T23:58:22.610901499Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:58:22.611986 containerd[2025]: time="2026-01-23T23:58:22.610940199Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:58:22.611986 containerd[2025]: time="2026-01-23T23:58:22.611720283Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:58:22.662912 systemd[1]: run-containerd-runc-k8s.io-106de016663568b7d036f70440c06558f25040d2f319a1955e68a4724c5d9bf1-runc.s6TAna.mount: Deactivated successfully. 
Jan 23 23:58:22.677761 systemd[1]: Started cri-containerd-106de016663568b7d036f70440c06558f25040d2f319a1955e68a4724c5d9bf1.scope - libcontainer container 106de016663568b7d036f70440c06558f25040d2f319a1955e68a4724c5d9bf1. Jan 23 23:58:22.741569 containerd[2025]: time="2026-01-23T23:58:22.741512595Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:4db3e3c7-3d28-4911-9edb-ff21bc86d762,Namespace:default,Attempt:0,} returns sandbox id \"106de016663568b7d036f70440c06558f25040d2f319a1955e68a4724c5d9bf1\"" Jan 23 23:58:22.744311 containerd[2025]: time="2026-01-23T23:58:22.744243027Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 23 23:58:23.036152 containerd[2025]: time="2026-01-23T23:58:23.036072493Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:58:23.038162 containerd[2025]: time="2026-01-23T23:58:23.038083693Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Jan 23 23:58:23.044034 containerd[2025]: time="2026-01-23T23:58:23.043958233Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:3e4ccf401ba9f89a59873e31faf9ee80cc24b5cb6b8dc15c4e5393551cdaeb58\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:32c5137cb8c7cf61e75836f150e983b9be21fecc642ada89fd936c8cd6c0faa0\", size \"62404521\" in 299.652398ms" Jan 23 23:58:23.044034 containerd[2025]: time="2026-01-23T23:58:23.044024377Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:3e4ccf401ba9f89a59873e31faf9ee80cc24b5cb6b8dc15c4e5393551cdaeb58\"" Jan 23 23:58:23.053462 containerd[2025]: time="2026-01-23T23:58:23.053227069Z" level=info msg="CreateContainer within sandbox \"106de016663568b7d036f70440c06558f25040d2f319a1955e68a4724c5d9bf1\" for container &ContainerMetadata{Name:test,Attempt:0,}" Jan 23 23:58:23.078516 
containerd[2025]: time="2026-01-23T23:58:23.078291325Z" level=info msg="CreateContainer within sandbox \"106de016663568b7d036f70440c06558f25040d2f319a1955e68a4724c5d9bf1\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"def1e948e19df170f4ad1c3dafe7e29c13731271404156ef7108b4f448523f61\"" Jan 23 23:58:23.079245 containerd[2025]: time="2026-01-23T23:58:23.079200109Z" level=info msg="StartContainer for \"def1e948e19df170f4ad1c3dafe7e29c13731271404156ef7108b4f448523f61\"" Jan 23 23:58:23.122794 systemd[1]: Started cri-containerd-def1e948e19df170f4ad1c3dafe7e29c13731271404156ef7108b4f448523f61.scope - libcontainer container def1e948e19df170f4ad1c3dafe7e29c13731271404156ef7108b4f448523f61. Jan 23 23:58:23.169556 containerd[2025]: time="2026-01-23T23:58:23.169487101Z" level=info msg="StartContainer for \"def1e948e19df170f4ad1c3dafe7e29c13731271404156ef7108b4f448523f61\" returns successfully" Jan 23 23:58:23.182980 kubelet[2471]: E0123 23:58:23.182913 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:58:23.881834 systemd-networkd[1943]: cali5ec59c6bf6e: Gained IPv6LL Jan 23 23:58:24.183703 kubelet[2471]: E0123 23:58:24.183560 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:58:25.184793 kubelet[2471]: E0123 23:58:25.184729 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:58:26.044824 ntpd[1993]: Listen normally on 13 cali5ec59c6bf6e [fe80::ecee:eeff:feee:eeee%9]:123 Jan 23 23:58:26.045448 ntpd[1993]: 23 Jan 23:58:26 ntpd[1993]: Listen normally on 13 cali5ec59c6bf6e [fe80::ecee:eeff:feee:eeee%9]:123 Jan 23 23:58:26.185919 kubelet[2471]: E0123 23:58:26.185858 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:58:27.186567 kubelet[2471]: 
E0123 23:58:27.186492 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:58:28.186893 kubelet[2471]: E0123 23:58:28.186830 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:58:28.287359 kubelet[2471]: E0123 23:58:28.287277 2471 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-49pfd" podUID="77b991bb-bce8-4211-845f-aa451168631a" Jan 23 23:58:28.311648 kubelet[2471]: I0123 23:58:28.311379 2471 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=33.009703109 podStartE2EDuration="33.311357299s" podCreationTimestamp="2026-01-23 23:57:55 +0000 UTC" firstStartedPulling="2026-01-23 23:58:22.743713299 +0000 UTC m=+75.282328935" lastFinishedPulling="2026-01-23 23:58:23.045367489 +0000 UTC m=+75.583983125" observedRunningTime="2026-01-23 23:58:23.547027455 +0000 UTC m=+76.085643091" watchObservedRunningTime="2026-01-23 23:58:28.311357299 +0000 UTC m=+80.849972935" Jan 23 
23:58:29.114183 kubelet[2471]: E0123 23:58:29.114118 2471 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:58:29.187199 kubelet[2471]: E0123 23:58:29.187152 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:58:30.187353 kubelet[2471]: E0123 23:58:30.187292 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:58:31.189348 kubelet[2471]: E0123 23:58:31.189266 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:58:32.189935 kubelet[2471]: E0123 23:58:32.189864 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:58:33.190842 kubelet[2471]: E0123 23:58:33.190779 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:58:34.191557 kubelet[2471]: E0123 23:58:34.191502 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:58:35.191880 kubelet[2471]: E0123 23:58:35.191817 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:58:36.193040 kubelet[2471]: E0123 23:58:36.192974 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:58:37.193477 kubelet[2471]: E0123 23:58:37.193375 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:58:38.193556 kubelet[2471]: E0123 23:58:38.193508 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 
23:58:39.194227 kubelet[2471]: E0123 23:58:39.194168 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:58:39.287905 kubelet[2471]: E0123 23:58:39.287528 2471 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-49pfd" podUID="77b991bb-bce8-4211-845f-aa451168631a" Jan 23 23:58:40.195237 kubelet[2471]: E0123 23:58:40.195167 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:58:41.195958 kubelet[2471]: E0123 23:58:41.195894 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:58:42.197076 kubelet[2471]: E0123 23:58:42.197016 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:58:43.198198 kubelet[2471]: E0123 23:58:43.198131 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:58:44.198553 
kubelet[2471]: E0123 23:58:44.198490 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:58:45.198944 kubelet[2471]: E0123 23:58:45.198880 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:58:46.199514 kubelet[2471]: E0123 23:58:46.199455 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:58:47.200649 kubelet[2471]: E0123 23:58:47.200590 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:58:48.201672 kubelet[2471]: E0123 23:58:48.201608 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:58:49.113943 kubelet[2471]: E0123 23:58:49.113878 2471 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:58:49.201960 kubelet[2471]: E0123 23:58:49.201913 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:58:50.202056 kubelet[2471]: E0123 23:58:50.201998 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:58:50.594970 kubelet[2471]: E0123 23:58:50.594721 2471 controller.go:195] "Failed to update lease" err="Put \"https://172.31.28.204:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.22.24?timeout=10s\": context deadline exceeded" Jan 23 23:58:51.203068 kubelet[2471]: E0123 23:58:51.203003 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:58:51.288476 kubelet[2471]: E0123 23:58:51.288346 2471 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to 
\"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-49pfd" podUID="77b991bb-bce8-4211-845f-aa451168631a" Jan 23 23:58:52.203712 kubelet[2471]: E0123 23:58:52.203645 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:58:53.204431 kubelet[2471]: E0123 23:58:53.204362 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:58:54.204849 kubelet[2471]: E0123 23:58:54.204785 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:58:55.205770 kubelet[2471]: E0123 23:58:55.205706 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:58:56.206758 kubelet[2471]: E0123 23:58:56.206692 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:58:57.207192 kubelet[2471]: E0123 23:58:57.207125 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Jan 23 23:58:58.207824 kubelet[2471]: E0123 23:58:58.207744 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:58:59.208927 kubelet[2471]: E0123 23:58:59.208859 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:59:00.209301 kubelet[2471]: E0123 23:59:00.209227 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:59:00.595811 kubelet[2471]: E0123 23:59:00.595653 2471 controller.go:195] "Failed to update lease" err="Put \"https://172.31.28.204:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.22.24?timeout=10s\": context deadline exceeded" Jan 23 23:59:01.209480 kubelet[2471]: E0123 23:59:01.209392 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:59:02.209955 kubelet[2471]: E0123 23:59:02.209882 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:59:03.210508 kubelet[2471]: E0123 23:59:03.210447 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:59:03.289910 containerd[2025]: time="2026-01-23T23:59:03.289853321Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 23:59:03.549736 containerd[2025]: time="2026-01-23T23:59:03.549576594Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:59:03.551796 containerd[2025]: time="2026-01-23T23:59:03.551733594Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 23:59:03.551932 containerd[2025]: time="2026-01-23T23:59:03.551876730Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 23 23:59:03.552144 kubelet[2471]: E0123 23:59:03.552092 2471 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 23:59:03.552226 kubelet[2471]: E0123 23:59:03.552157 2471 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 23:59:03.552434 kubelet[2471]: E0123 23:59:03.552331 2471 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dnx8x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-49pfd_calico-system(77b991bb-bce8-4211-845f-aa451168631a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 23:59:03.555487 containerd[2025]: time="2026-01-23T23:59:03.555144414Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 23:59:03.784781 containerd[2025]: time="2026-01-23T23:59:03.784712803Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:59:03.786910 containerd[2025]: time="2026-01-23T23:59:03.786842863Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 23:59:03.787035 containerd[2025]: time="2026-01-23T23:59:03.786973999Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 23 23:59:03.787214 kubelet[2471]: E0123 23:59:03.787150 2471 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 23:59:03.787342 kubelet[2471]: E0123 23:59:03.787212 2471 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 23:59:03.787505 kubelet[2471]: 
E0123 23:59:03.787399 2471 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dnx8x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-49pfd_calico-system(77b991bb-bce8-4211-845f-aa451168631a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 23:59:03.788993 kubelet[2471]: E0123 23:59:03.788904 2471 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-49pfd" podUID="77b991bb-bce8-4211-845f-aa451168631a" Jan 23 23:59:04.211192 kubelet[2471]: E0123 23:59:04.211123 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:59:05.212053 kubelet[2471]: E0123 23:59:05.211994 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:59:06.212476 kubelet[2471]: E0123 23:59:06.212383 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:59:07.212919 kubelet[2471]: E0123 23:59:07.212843 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 
23:59:08.213072 kubelet[2471]: E0123 23:59:08.212996 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:59:09.114489 kubelet[2471]: E0123 23:59:09.114411 2471 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:59:09.213939 kubelet[2471]: E0123 23:59:09.213879 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:59:10.214496 kubelet[2471]: E0123 23:59:10.214394 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:59:10.596795 kubelet[2471]: E0123 23:59:10.596392 2471 controller.go:195] "Failed to update lease" err="Put \"https://172.31.28.204:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.22.24?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 23:59:11.214970 kubelet[2471]: E0123 23:59:11.214895 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:59:12.215608 kubelet[2471]: E0123 23:59:12.215544 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:59:13.216575 kubelet[2471]: E0123 23:59:13.216505 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:59:13.289471 kubelet[2471]: E0123 23:59:13.289280 2471 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{csi-node-driver-49pfd.188d8189c7ce5d32 calico-system 1574 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:calico-system,Name:csi-node-driver-49pfd,UID:77b991bb-bce8-4211-845f-aa451168631a,APIVersion:v1,ResourceVersion:941,FieldPath:spec.containers{calico-csi},},Reason:BackOff,Message:Back-off pulling image \"ghcr.io/flatcar/calico/csi:v3.30.4\",Source:EventSource{Component:kubelet,Host:172.31.22.24,},FirstTimestamp:2026-01-23 23:57:37 +0000 UTC,LastTimestamp:2026-01-23 23:58:39.286449078 +0000 UTC m=+91.825064714,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.22.24,}" Jan 23 23:59:14.216976 kubelet[2471]: E0123 23:59:14.216914 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:59:15.176850 kubelet[2471]: E0123 23:59:15.175752 2471 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://172.31.28.204:6443/api/v1/namespaces/calico-system/events/csi-node-driver-49pfd.188d8189c7cf4282\": unexpected EOF" event="&Event{ObjectMeta:{csi-node-driver-49pfd.188d8189c7cf4282 calico-system 1575 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:calico-system,Name:csi-node-driver-49pfd,UID:77b991bb-bce8-4211-845f-aa451168631a,APIVersion:v1,ResourceVersion:941,FieldPath:spec.containers{calico-csi},},Reason:Failed,Message:Error: ImagePullBackOff,Source:EventSource{Component:kubelet,Host:172.31.22.24,},FirstTimestamp:2026-01-23 23:57:37 +0000 UTC,LastTimestamp:2026-01-23 23:58:39.286479378 +0000 UTC m=+91.825095026,Count:4,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.22.24,}" Jan 23 23:59:15.180297 kubelet[2471]: E0123 23:59:15.179860 2471 controller.go:195] "Failed to update lease" err="Put 
\"https://172.31.28.204:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.22.24?timeout=10s\": unexpected EOF" Jan 23 23:59:15.192388 kubelet[2471]: E0123 23:59:15.192201 2471 controller.go:195] "Failed to update lease" err="Put \"https://172.31.28.204:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.22.24?timeout=10s\": read tcp 172.31.22.24:37562->172.31.28.204:6443: read: connection reset by peer" Jan 23 23:59:15.192388 kubelet[2471]: I0123 23:59:15.192273 2471 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 23 23:59:15.195514 kubelet[2471]: E0123 23:59:15.194660 2471 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.28.204:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.22.24?timeout=10s\": dial tcp 172.31.28.204:6443: connect: connection refused" interval="200ms" Jan 23 23:59:15.217945 kubelet[2471]: E0123 23:59:15.217878 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:59:15.288006 kubelet[2471]: E0123 23:59:15.287922 2471 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve 
reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-49pfd" podUID="77b991bb-bce8-4211-845f-aa451168631a" Jan 23 23:59:15.395593 kubelet[2471]: E0123 23:59:15.395530 2471 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.28.204:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.22.24?timeout=10s\": dial tcp 172.31.28.204:6443: connect: connection refused" interval="400ms" Jan 23 23:59:15.797288 kubelet[2471]: E0123 23:59:15.797227 2471 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.28.204:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.22.24?timeout=10s\": dial tcp 172.31.28.204:6443: connect: connection refused" interval="800ms" Jan 23 23:59:16.175018 kubelet[2471]: I0123 23:59:16.174803 2471 status_manager.go:895] "Failed to get status for pod" podUID="77b991bb-bce8-4211-845f-aa451168631a" pod="calico-system/csi-node-driver-49pfd" err="Get \"https://172.31.28.204:6443/api/v1/namespaces/calico-system/pods/csi-node-driver-49pfd\": dial tcp 172.31.28.204:6443: connect: connection refused - error from a previous attempt: unexpected EOF" Jan 23 23:59:16.177098 kubelet[2471]: I0123 23:59:16.176630 2471 status_manager.go:895] "Failed to get status for pod" podUID="77b991bb-bce8-4211-845f-aa451168631a" pod="calico-system/csi-node-driver-49pfd" err="Get \"https://172.31.28.204:6443/api/v1/namespaces/calico-system/pods/csi-node-driver-49pfd\": dial tcp 172.31.28.204:6443: connect: connection refused" Jan 23 23:59:16.178062 kubelet[2471]: I0123 23:59:16.177997 2471 status_manager.go:895] "Failed to get status for pod" podUID="77b991bb-bce8-4211-845f-aa451168631a" pod="calico-system/csi-node-driver-49pfd" err="Get \"https://172.31.28.204:6443/api/v1/namespaces/calico-system/pods/csi-node-driver-49pfd\": dial tcp 
172.31.28.204:6443: connect: connection refused" Jan 23 23:59:16.218638 kubelet[2471]: E0123 23:59:16.218530 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:59:17.219829 kubelet[2471]: E0123 23:59:17.219763 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:59:18.220083 kubelet[2471]: E0123 23:59:18.220020 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:59:19.220411 kubelet[2471]: E0123 23:59:19.220345 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:59:20.221058 kubelet[2471]: E0123 23:59:20.220985 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:59:21.222106 kubelet[2471]: E0123 23:59:21.222044 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:59:22.222531 kubelet[2471]: E0123 23:59:22.222474 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:59:23.223546 kubelet[2471]: E0123 23:59:23.223477 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:59:24.223667 kubelet[2471]: E0123 23:59:24.223608 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:59:25.224433 kubelet[2471]: E0123 23:59:25.224367 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:59:26.225362 kubelet[2471]: E0123 23:59:26.225299 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, 
ignoring" path="/etc/kubernetes/manifests" Jan 23 23:59:26.598670 kubelet[2471]: E0123 23:59:26.598520 2471 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.28.204:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.22.24?timeout=10s\": context deadline exceeded" interval="1.6s" Jan 23 23:59:27.225973 kubelet[2471]: E0123 23:59:27.225909 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:59:28.227099 kubelet[2471]: E0123 23:59:28.227031 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:59:29.114287 kubelet[2471]: E0123 23:59:29.114223 2471 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:59:29.227343 kubelet[2471]: E0123 23:59:29.227300 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:59:30.228346 kubelet[2471]: E0123 23:59:30.228287 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:59:30.286888 kubelet[2471]: E0123 23:59:30.286818 2471 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack 
image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-49pfd" podUID="77b991bb-bce8-4211-845f-aa451168631a" Jan 23 23:59:31.228520 kubelet[2471]: E0123 23:59:31.228478 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:59:32.229650 kubelet[2471]: E0123 23:59:32.229595 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:59:33.230727 kubelet[2471]: E0123 23:59:33.230649 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:59:34.230887 kubelet[2471]: E0123 23:59:34.230831 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:59:35.231282 kubelet[2471]: E0123 23:59:35.231226 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:59:36.232013 kubelet[2471]: E0123 23:59:36.231951 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:59:37.232704 kubelet[2471]: E0123 23:59:37.232640 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 23 23:59:38.200918 kubelet[2471]: E0123 23:59:38.200844 2471 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.28.204:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.22.24?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="3.2s" Jan 23 23:59:38.233015 kubelet[2471]: 
E0123 23:59:38.232972 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"