Sep 12 23:54:37.257197 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083] Sep 12 23:54:37.257243 kernel: Linux version 6.6.106-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Sep 12 22:36:20 -00 2025 Sep 12 23:54:37.257268 kernel: KASLR disabled due to lack of seed Sep 12 23:54:37.257285 kernel: efi: EFI v2.7 by EDK II Sep 12 23:54:37.257301 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7affea98 MEMRESERVE=0x7852ee18 Sep 12 23:54:37.257365 kernel: ACPI: Early table checksum verification disabled Sep 12 23:54:37.257384 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON) Sep 12 23:54:37.257401 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013) Sep 12 23:54:37.257417 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001) Sep 12 23:54:37.257434 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527) Sep 12 23:54:37.257458 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001) Sep 12 23:54:37.257474 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001) Sep 12 23:54:37.257490 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001) Sep 12 23:54:37.257505 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001) Sep 12 23:54:37.257524 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Sep 12 23:54:37.257545 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001) Sep 12 23:54:37.257563 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001) Sep 12 23:54:37.257580 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200 Sep 12 23:54:37.257597 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200') Sep 12 23:54:37.257613 kernel: printk: bootconsole [uart0] enabled Sep 12 23:54:37.257630 kernel: NUMA: Failed to initialise from firmware Sep 12 23:54:37.257647 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff] Sep 12 23:54:37.257664 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff] Sep 12 23:54:37.257680 kernel: Zone ranges: Sep 12 23:54:37.257697 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Sep 12 23:54:37.257714 kernel: DMA32 empty Sep 12 23:54:37.257734 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff] Sep 12 23:54:37.257751 kernel: Movable zone start for each node Sep 12 23:54:37.257768 kernel: Early memory node ranges Sep 12 23:54:37.257784 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff] Sep 12 23:54:37.257801 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff] Sep 12 23:54:37.257818 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff] Sep 12 23:54:37.257834 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff] Sep 12 23:54:37.257851 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff] Sep 12 23:54:37.257867 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff] Sep 12 23:54:37.257884 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff] Sep 12 23:54:37.257900 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff] Sep 12 23:54:37.257918 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff] Sep 12 23:54:37.257940 kernel: On 
node 0, zone Normal: 8192 pages in unavailable ranges Sep 12 23:54:37.257957 kernel: psci: probing for conduit method from ACPI. Sep 12 23:54:37.257981 kernel: psci: PSCIv1.0 detected in firmware. Sep 12 23:54:37.257999 kernel: psci: Using standard PSCI v0.2 function IDs Sep 12 23:54:37.258017 kernel: psci: Trusted OS migration not required Sep 12 23:54:37.258038 kernel: psci: SMC Calling Convention v1.1 Sep 12 23:54:37.258056 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001) Sep 12 23:54:37.258074 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976 Sep 12 23:54:37.258091 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096 Sep 12 23:54:37.258109 kernel: pcpu-alloc: [0] 0 [0] 1 Sep 12 23:54:37.258126 kernel: Detected PIPT I-cache on CPU0 Sep 12 23:54:37.258143 kernel: CPU features: detected: GIC system register CPU interface Sep 12 23:54:37.258160 kernel: CPU features: detected: Spectre-v2 Sep 12 23:54:37.258178 kernel: CPU features: detected: Spectre-v3a Sep 12 23:54:37.258196 kernel: CPU features: detected: Spectre-BHB Sep 12 23:54:37.258213 kernel: CPU features: detected: ARM erratum 1742098 Sep 12 23:54:37.258236 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923 Sep 12 23:54:37.258254 kernel: alternatives: applying boot alternatives Sep 12 23:54:37.258273 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=e1b46f3c9e154636c32f6cde6e746a00a6b37ca7432cb4e16d172c05f584a8c9 Sep 12 23:54:37.258292 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 12 23:54:37.260453 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Sep 12 23:54:37.260494 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 12 23:54:37.260513 kernel: Fallback order for Node 0: 0 Sep 12 23:54:37.260537 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872 Sep 12 23:54:37.260555 kernel: Policy zone: Normal Sep 12 23:54:37.260573 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 12 23:54:37.260591 kernel: software IO TLB: area num 2. Sep 12 23:54:37.260621 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB) Sep 12 23:54:37.260642 kernel: Memory: 3820024K/4030464K available (10304K kernel code, 2186K rwdata, 8108K rodata, 39488K init, 897K bss, 210440K reserved, 0K cma-reserved) Sep 12 23:54:37.260661 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Sep 12 23:54:37.260680 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 12 23:54:37.260699 kernel: rcu: RCU event tracing is enabled. Sep 12 23:54:37.260717 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Sep 12 23:54:37.260735 kernel: Trampoline variant of Tasks RCU enabled. Sep 12 23:54:37.260754 kernel: Tracing variant of Tasks RCU enabled. Sep 12 23:54:37.260773 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Sep 12 23:54:37.260790 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Sep 12 23:54:37.260808 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Sep 12 23:54:37.260831 kernel: GICv3: 96 SPIs implemented Sep 12 23:54:37.260849 kernel: GICv3: 0 Extended SPIs implemented Sep 12 23:54:37.260866 kernel: Root IRQ handler: gic_handle_irq Sep 12 23:54:37.260883 kernel: GICv3: GICv3 features: 16 PPIs Sep 12 23:54:37.260900 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000 Sep 12 23:54:37.260917 kernel: ITS [mem 0x10080000-0x1009ffff] Sep 12 23:54:37.260935 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1) Sep 12 23:54:37.260953 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1) Sep 12 23:54:37.260970 kernel: GICv3: using LPI property table @0x00000004000d0000 Sep 12 23:54:37.260988 kernel: ITS: Using hypervisor restricted LPI range [128] Sep 12 23:54:37.261005 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000 Sep 12 23:54:37.261023 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Sep 12 23:54:37.261045 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt). Sep 12 23:54:37.261063 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns Sep 12 23:54:37.261080 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns Sep 12 23:54:37.261098 kernel: Console: colour dummy device 80x25 Sep 12 23:54:37.261116 kernel: printk: console [tty1] enabled Sep 12 23:54:37.261134 kernel: ACPI: Core revision 20230628 Sep 12 23:54:37.261152 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333) Sep 12 23:54:37.261170 kernel: pid_max: default: 32768 minimum: 301 Sep 12 23:54:37.261188 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Sep 12 23:54:37.261210 kernel: landlock: Up and running. Sep 12 23:54:37.261228 kernel: SELinux: Initializing. Sep 12 23:54:37.261246 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 12 23:54:37.261264 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 12 23:54:37.261282 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Sep 12 23:54:37.261300 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Sep 12 23:54:37.262400 kernel: rcu: Hierarchical SRCU implementation. Sep 12 23:54:37.262422 kernel: rcu: Max phase no-delay instances is 400. Sep 12 23:54:37.262440 kernel: Platform MSI: ITS@0x10080000 domain created Sep 12 23:54:37.262467 kernel: PCI/MSI: ITS@0x10080000 domain created Sep 12 23:54:37.262485 kernel: Remapping and enabling EFI services. Sep 12 23:54:37.262503 kernel: smp: Bringing up secondary CPUs ... Sep 12 23:54:37.262520 kernel: Detected PIPT I-cache on CPU1 Sep 12 23:54:37.262539 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000 Sep 12 23:54:37.262556 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000 Sep 12 23:54:37.262574 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083] Sep 12 23:54:37.262592 kernel: smp: Brought up 1 node, 2 CPUs Sep 12 23:54:37.262609 kernel: SMP: Total of 2 processors activated. 
Sep 12 23:54:37.262627 kernel: CPU features: detected: 32-bit EL0 Support Sep 12 23:54:37.262649 kernel: CPU features: detected: 32-bit EL1 Support Sep 12 23:54:37.262667 kernel: CPU features: detected: CRC32 instructions Sep 12 23:54:37.262697 kernel: CPU: All CPU(s) started at EL1 Sep 12 23:54:37.262720 kernel: alternatives: applying system-wide alternatives Sep 12 23:54:37.262738 kernel: devtmpfs: initialized Sep 12 23:54:37.262757 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 12 23:54:37.262776 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Sep 12 23:54:37.262794 kernel: pinctrl core: initialized pinctrl subsystem Sep 12 23:54:37.262813 kernel: SMBIOS 3.0.0 present. Sep 12 23:54:37.262836 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018 Sep 12 23:54:37.262854 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 12 23:54:37.262873 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Sep 12 23:54:37.262892 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Sep 12 23:54:37.262910 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Sep 12 23:54:37.262929 kernel: audit: initializing netlink subsys (disabled) Sep 12 23:54:37.262947 kernel: audit: type=2000 audit(0.291:1): state=initialized audit_enabled=0 res=1 Sep 12 23:54:37.262970 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 12 23:54:37.262989 kernel: cpuidle: using governor menu Sep 12 23:54:37.263008 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Sep 12 23:54:37.263026 kernel: ASID allocator initialised with 65536 entries Sep 12 23:54:37.263045 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 12 23:54:37.263064 kernel: Serial: AMBA PL011 UART driver Sep 12 23:54:37.263083 kernel: Modules: 17472 pages in range for non-PLT usage Sep 12 23:54:37.263102 kernel: Modules: 508992 pages in range for PLT usage Sep 12 23:54:37.263121 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 12 23:54:37.263145 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Sep 12 23:54:37.263165 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Sep 12 23:54:37.263183 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Sep 12 23:54:37.263202 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 12 23:54:37.263221 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Sep 12 23:54:37.263239 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Sep 12 23:54:37.263258 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Sep 12 23:54:37.263277 kernel: ACPI: Added _OSI(Module Device) Sep 12 23:54:37.263295 kernel: ACPI: Added _OSI(Processor Device) Sep 12 23:54:37.264415 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 12 23:54:37.264448 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 12 23:54:37.264469 kernel: ACPI: Interpreter enabled Sep 12 23:54:37.264490 kernel: ACPI: Using GIC for interrupt routing Sep 12 23:54:37.264510 kernel: ACPI: MCFG table detected, 1 entries Sep 12 23:54:37.264532 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f]) Sep 12 23:54:37.264888 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Sep 12 23:54:37.265111 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Sep 12 23:54:37.265364 
kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Sep 12 23:54:37.275651 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00 Sep 12 23:54:37.275878 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f] Sep 12 23:54:37.275905 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window] Sep 12 23:54:37.275925 kernel: acpiphp: Slot [1] registered Sep 12 23:54:37.275944 kernel: acpiphp: Slot [2] registered Sep 12 23:54:37.275963 kernel: acpiphp: Slot [3] registered Sep 12 23:54:37.275981 kernel: acpiphp: Slot [4] registered Sep 12 23:54:37.276012 kernel: acpiphp: Slot [5] registered Sep 12 23:54:37.276032 kernel: acpiphp: Slot [6] registered Sep 12 23:54:37.276050 kernel: acpiphp: Slot [7] registered Sep 12 23:54:37.276069 kernel: acpiphp: Slot [8] registered Sep 12 23:54:37.276087 kernel: acpiphp: Slot [9] registered Sep 12 23:54:37.276105 kernel: acpiphp: Slot [10] registered Sep 12 23:54:37.276124 kernel: acpiphp: Slot [11] registered Sep 12 23:54:37.276142 kernel: acpiphp: Slot [12] registered Sep 12 23:54:37.276161 kernel: acpiphp: Slot [13] registered Sep 12 23:54:37.276179 kernel: acpiphp: Slot [14] registered Sep 12 23:54:37.276203 kernel: acpiphp: Slot [15] registered Sep 12 23:54:37.276221 kernel: acpiphp: Slot [16] registered Sep 12 23:54:37.276239 kernel: acpiphp: Slot [17] registered Sep 12 23:54:37.276257 kernel: acpiphp: Slot [18] registered Sep 12 23:54:37.276276 kernel: acpiphp: Slot [19] registered Sep 12 23:54:37.276294 kernel: acpiphp: Slot [20] registered Sep 12 23:54:37.276336 kernel: acpiphp: Slot [21] registered Sep 12 23:54:37.276357 kernel: acpiphp: Slot [22] registered Sep 12 23:54:37.276376 kernel: acpiphp: Slot [23] registered Sep 12 23:54:37.276401 kernel: acpiphp: Slot [24] registered Sep 12 23:54:37.276420 kernel: acpiphp: Slot [25] registered Sep 12 23:54:37.276438 kernel: acpiphp: Slot [26] registered Sep 12 23:54:37.276456 kernel: acpiphp: Slot [27] registered Sep 12 23:54:37.276474 kernel: acpiphp: Slot [28] registered Sep 12 23:54:37.276493 kernel: acpiphp: Slot [29] registered Sep 12 23:54:37.276511 kernel: acpiphp: Slot [30] registered Sep 12 23:54:37.276529 kernel: acpiphp: Slot [31] registered Sep 12 23:54:37.276547 kernel: PCI host bridge to bus 0000:00 Sep 12 23:54:37.276753 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window] Sep 12 23:54:37.276940 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Sep 12 23:54:37.277123 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window] Sep 12 23:54:37.277329 kernel: pci_bus 0000:00: root bus resource [bus 00-0f] Sep 12 23:54:37.277580 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 Sep 12 23:54:37.277813 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 Sep 12 23:54:37.278031 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff] Sep 12 23:54:37.278258 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 Sep 12 23:54:37.278495 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff] Sep 12 23:54:37.278705 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold Sep 12 23:54:37.278928 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 Sep 12 23:54:37.279136 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff] Sep 12 23:54:37.281389 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref] Sep 12 23:54:37.281612 kernel: pci 0000:00:05.0: reg 0x20: 
[mem 0x80100000-0x8010ffff] Sep 12 23:54:37.281826 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold Sep 12 23:54:37.282034 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref] Sep 12 23:54:37.282242 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff] Sep 12 23:54:37.282497 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff] Sep 12 23:54:37.282714 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff] Sep 12 23:54:37.282941 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff] Sep 12 23:54:37.283150 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window] Sep 12 23:54:37.284866 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Sep 12 23:54:37.285099 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window] Sep 12 23:54:37.285130 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Sep 12 23:54:37.285151 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Sep 12 23:54:37.285172 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Sep 12 23:54:37.285192 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Sep 12 23:54:37.285212 kernel: iommu: Default domain type: Translated Sep 12 23:54:37.285232 kernel: iommu: DMA domain TLB invalidation policy: strict mode Sep 12 23:54:37.285260 kernel: efivars: Registered efivars operations Sep 12 23:54:37.285279 kernel: vgaarb: loaded Sep 12 23:54:37.285298 kernel: clocksource: Switched to clocksource arch_sys_counter Sep 12 23:54:37.285345 kernel: VFS: Disk quotas dquot_6.6.0 Sep 12 23:54:37.285365 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 12 23:54:37.285384 kernel: pnp: PnP ACPI init Sep 12 23:54:37.285621 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved Sep 12 23:54:37.285650 kernel: pnp: PnP ACPI: found 1 devices Sep 12 23:54:37.285676 kernel: NET: Registered PF_INET protocol family Sep 12 23:54:37.285696 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 12 23:54:37.285715 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Sep 12 23:54:37.285734 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 12 23:54:37.285752 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 12 23:54:37.285771 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Sep 12 23:54:37.285790 kernel: TCP: Hash tables configured (established 32768 bind 32768) Sep 12 23:54:37.285809 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 12 23:54:37.285828 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 12 23:54:37.285852 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 12 23:54:37.285870 kernel: PCI: CLS 0 bytes, default 64 Sep 12 23:54:37.285890 kernel: kvm [1]: HYP mode not available Sep 12 23:54:37.285908 kernel: Initialise system trusted keyrings Sep 12 23:54:37.285927 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Sep 12 23:54:37.285946 kernel: Key type asymmetric registered Sep 12 23:54:37.285964 kernel: Asymmetric key parser 'x509' registered Sep 12 23:54:37.285983 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Sep 12 23:54:37.286002 kernel: io scheduler mq-deadline registered Sep 12 23:54:37.286025 kernel: io scheduler kyber registered Sep 12 23:54:37.286044 kernel: io 
scheduler bfq registered Sep 12 23:54:37.286279 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered Sep 12 23:54:37.286707 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Sep 12 23:54:37.286734 kernel: ACPI: button: Power Button [PWRB] Sep 12 23:54:37.286754 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1 Sep 12 23:54:37.286774 kernel: ACPI: button: Sleep Button [SLPB] Sep 12 23:54:37.286792 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 12 23:54:37.286820 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Sep 12 23:54:37.287064 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012) Sep 12 23:54:37.287091 kernel: printk: console [ttyS0] disabled Sep 12 23:54:37.287110 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A Sep 12 23:54:37.287129 kernel: printk: console [ttyS0] enabled Sep 12 23:54:37.287148 kernel: printk: bootconsole [uart0] disabled Sep 12 23:54:37.287167 kernel: thunder_xcv, ver 1.0 Sep 12 23:54:37.287185 kernel: thunder_bgx, ver 1.0 Sep 12 23:54:37.287203 kernel: nicpf, ver 1.0 Sep 12 23:54:37.287228 kernel: nicvf, ver 1.0 Sep 12 23:54:37.288533 kernel: rtc-efi rtc-efi.0: registered as rtc0 Sep 12 23:54:37.288757 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-12T23:54:36 UTC (1757721276) Sep 12 23:54:37.288784 kernel: hid: raw HID events driver (C) Jiri Kosina Sep 12 23:54:37.288804 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available Sep 12 23:54:37.288823 kernel: watchdog: Delayed init of the lockup detector failed: -19 Sep 12 23:54:37.288842 kernel: watchdog: Hard watchdog permanently disabled Sep 12 23:54:37.288861 kernel: NET: Registered PF_INET6 protocol family Sep 12 23:54:37.288889 kernel: Segment Routing with IPv6 Sep 12 23:54:37.288908 kernel: In-situ OAM (IOAM) with IPv6 Sep 12 23:54:37.288927 kernel: NET: Registered PF_PACKET protocol family Sep 12 23:54:37.288945 kernel: Key type dns_resolver registered Sep 12 23:54:37.288964 kernel: registered taskstats version 1 Sep 12 23:54:37.288983 kernel: Loading compiled-in X.509 certificates Sep 12 23:54:37.289002 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.106-flatcar: 036ad4721a31543be5c000f2896b40d1e5515c6e' Sep 12 23:54:37.289021 kernel: Key type .fscrypt registered Sep 12 23:54:37.289039 kernel: Key type fscrypt-provisioning registered Sep 12 23:54:37.289063 kernel: ima: No TPM chip found, activating TPM-bypass! 
Sep 12 23:54:37.289082 kernel: ima: Allocated hash algorithm: sha1 Sep 12 23:54:37.289101 kernel: ima: No architecture policies found Sep 12 23:54:37.289119 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Sep 12 23:54:37.289138 kernel: clk: Disabling unused clocks Sep 12 23:54:37.289157 kernel: Freeing unused kernel memory: 39488K Sep 12 23:54:37.289175 kernel: Run /init as init process Sep 12 23:54:37.289193 kernel: with arguments: Sep 12 23:54:37.289212 kernel: /init Sep 12 23:54:37.289230 kernel: with environment: Sep 12 23:54:37.289253 kernel: HOME=/ Sep 12 23:54:37.289271 kernel: TERM=linux Sep 12 23:54:37.289290 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 12 23:54:37.289411 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 12 23:54:37.289440 systemd[1]: Detected virtualization amazon. Sep 12 23:54:37.289462 systemd[1]: Detected architecture arm64. Sep 12 23:54:37.289481 systemd[1]: Running in initrd. Sep 12 23:54:37.289508 systemd[1]: No hostname configured, using default hostname. Sep 12 23:54:37.289528 systemd[1]: Hostname set to . Sep 12 23:54:37.289550 systemd[1]: Initializing machine ID from VM UUID. Sep 12 23:54:37.289570 systemd[1]: Queued start job for default target initrd.target. Sep 12 23:54:37.289590 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 23:54:37.289611 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 12 23:54:37.289632 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 12 23:54:37.289654 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 12 23:54:37.289680 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 12 23:54:37.289701 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 12 23:54:37.289725 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 12 23:54:37.289746 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 12 23:54:37.289766 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 23:54:37.289787 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 12 23:54:37.289808 systemd[1]: Reached target paths.target - Path Units. Sep 12 23:54:37.289833 systemd[1]: Reached target slices.target - Slice Units. Sep 12 23:54:37.289854 systemd[1]: Reached target swap.target - Swaps. Sep 12 23:54:37.289874 systemd[1]: Reached target timers.target - Timer Units. Sep 12 23:54:37.289895 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 12 23:54:37.289916 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 12 23:54:37.289936 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 12 23:54:37.289957 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Sep 12 23:54:37.289978 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Sep 12 23:54:37.289998 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 12 23:54:37.290024 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 23:54:37.290044 systemd[1]: Reached target sockets.target - Socket Units. Sep 12 23:54:37.290065 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 12 23:54:37.290087 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 12 23:54:37.290107 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 12 23:54:37.290128 systemd[1]: Starting systemd-fsck-usr.service... Sep 12 23:54:37.290149 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 12 23:54:37.290170 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 12 23:54:37.290196 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 23:54:37.290218 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 12 23:54:37.290238 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 23:54:37.290331 systemd-journald[251]: Collecting audit messages is disabled. Sep 12 23:54:37.290389 systemd[1]: Finished systemd-fsck-usr.service. Sep 12 23:54:37.290414 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 12 23:54:37.290434 kernel: Bridge firewalling registered Sep 12 23:54:37.290455 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 12 23:54:37.290483 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 12 23:54:37.290505 systemd-journald[251]: Journal started Sep 12 23:54:37.290545 systemd-journald[251]: Runtime Journal (/run/log/journal/ec29d21cf91bdde223371717b63c7d51) is 8.0M, max 75.3M, 67.3M free. Sep 12 23:54:37.220059 systemd-modules-load[252]: Inserted module 'overlay' Sep 12 23:54:37.257407 systemd-modules-load[252]: Inserted module 'br_netfilter' Sep 12 23:54:37.306226 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 12 23:54:37.311182 systemd[1]: Started systemd-journald.service - Journal Service. Sep 12 23:54:37.322387 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 23:54:37.332296 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 12 23:54:37.334545 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 12 23:54:37.350661 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 12 23:54:37.360198 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 12 23:54:37.370720 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 12 23:54:37.407029 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 23:54:37.421423 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 23:54:37.426033 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 23:54:37.439688 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 12 23:54:37.451631 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Sep 12 23:54:37.469388 dracut-cmdline[286]: dracut-dracut-053 Sep 12 23:54:37.474429 dracut-cmdline[286]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=e1b46f3c9e154636c32f6cde6e746a00a6b37ca7432cb4e16d172c05f584a8c9 Sep 12 23:54:37.559049 systemd-resolved[287]: Positive Trust Anchors: Sep 12 23:54:37.559078 systemd-resolved[287]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 12 23:54:37.559141 systemd-resolved[287]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 12 23:54:37.630143 kernel: SCSI subsystem initialized Sep 12 23:54:37.637432 kernel: Loading iSCSI transport class v2.0-870. Sep 12 23:54:37.650786 kernel: iscsi: registered transport (tcp) Sep 12 23:54:37.673097 kernel: iscsi: registered transport (qla4xxx) Sep 12 23:54:37.673173 kernel: QLogic iSCSI HBA Driver Sep 12 23:54:37.774353 kernel: random: crng init done Sep 12 23:54:37.774829 systemd-resolved[287]: Defaulting to hostname 'linux'. Sep 12 23:54:37.779202 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 12 23:54:37.787872 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 12 23:54:37.809018 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 12 23:54:37.821608 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 12 23:54:37.858368 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 12 23:54:37.858446 kernel: device-mapper: uevent: version 1.0.3 Sep 12 23:54:37.858474 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Sep 12 23:54:37.928372 kernel: raid6: neonx8 gen() 6675 MB/s Sep 12 23:54:37.945360 kernel: raid6: neonx4 gen() 6477 MB/s Sep 12 23:54:37.962360 kernel: raid6: neonx2 gen() 5381 MB/s Sep 12 23:54:37.979364 kernel: raid6: neonx1 gen() 3906 MB/s Sep 12 23:54:37.996361 kernel: raid6: int64x8 gen() 3763 MB/s Sep 12 23:54:38.013380 kernel: raid6: int64x4 gen() 3637 MB/s Sep 12 23:54:38.030354 kernel: raid6: int64x2 gen() 3534 MB/s Sep 12 23:54:38.048277 kernel: raid6: int64x1 gen() 2767 MB/s Sep 12 23:54:38.048335 kernel: raid6: using algorithm neonx8 gen() 6675 MB/s Sep 12 23:54:38.066257 kernel: raid6: .... 
xor() 4842 MB/s, rmw enabled Sep 12 23:54:38.066299 kernel: raid6: using neon recovery algorithm Sep 12 23:54:38.074346 kernel: xor: measuring software checksum speed Sep 12 23:54:38.074418 kernel: 8regs : 10251 MB/sec Sep 12 23:54:38.077669 kernel: 32regs : 10621 MB/sec Sep 12 23:54:38.077706 kernel: arm64_neon : 9564 MB/sec Sep 12 23:54:38.077732 kernel: xor: using function: 32regs (10621 MB/sec) Sep 12 23:54:38.162373 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 12 23:54:38.181027 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 12 23:54:38.193687 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 23:54:38.236459 systemd-udevd[469]: Using default interface naming scheme 'v255'. Sep 12 23:54:38.245907 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 23:54:38.257616 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 12 23:54:38.299610 dracut-pre-trigger[471]: rd.md=0: removing MD RAID activation Sep 12 23:54:38.356372 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 12 23:54:38.367800 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 12 23:54:38.486272 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 23:54:38.504208 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 12 23:54:38.562288 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 12 23:54:38.565136 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 12 23:54:38.573399 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 23:54:38.578287 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 12 23:54:38.597674 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 12 23:54:38.643692 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 12 23:54:38.683660 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Sep 12 23:54:38.683724 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) Sep 12 23:54:38.696539 kernel: ena 0000:00:05.0: ENA device version: 0.10 Sep 12 23:54:38.696878 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Sep 12 23:54:38.705356 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:7f:34:62:84:2d Sep 12 23:54:38.714019 (udev-worker)[513]: Network interface NamePolicy= disabled on kernel command line. Sep 12 23:54:38.726474 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 12 23:54:38.731578 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 23:54:38.739865 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 12 23:54:38.742283 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 23:54:38.747834 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 23:54:38.756499 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 23:54:38.769361 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Sep 12 23:54:38.770781 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Sep 12 23:54:38.781223 kernel: nvme nvme0: pci function 0000:00:04.0 Sep 12 23:54:38.787334 kernel: nvme nvme0: 2/0/0 default/read/poll queues Sep 12 23:54:38.801336 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 12 23:54:38.801402 kernel: GPT:9289727 != 16777215 Sep 12 23:54:38.801428 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 12 23:54:38.804161 kernel: GPT:9289727 != 16777215 Sep 12 23:54:38.804215 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 12 23:54:38.805200 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Sep 12 23:54:38.808758 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 23:54:38.818851 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 12 23:54:38.869121 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 23:54:38.900363 kernel: BTRFS: device fsid 29bc4da8-c689-46a2-a16a-b7bbc722db77 devid 1 transid 37 /dev/nvme0n1p3 scanned by (udev-worker) (526) Sep 12 23:54:38.923749 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/nvme0n1p6 scanned by (udev-worker) (518) Sep 12 23:54:38.961048 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Sep 12 23:54:39.041040 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Sep 12 23:54:39.058649 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Sep 12 23:54:39.073147 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Sep 12 23:54:39.077523 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Sep 12 23:54:39.095055 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 12 23:54:39.108901 disk-uuid[660]: Primary Header is updated. Sep 12 23:54:39.108901 disk-uuid[660]: Secondary Entries is updated. Sep 12 23:54:39.108901 disk-uuid[660]: Secondary Header is updated. Sep 12 23:54:39.120523 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Sep 12 23:54:39.130358 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Sep 12 23:54:39.140396 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Sep 12 23:54:40.143387 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Sep 12 23:54:40.147148 disk-uuid[661]: The operation has completed successfully. Sep 12 23:54:40.330238 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 12 23:54:40.332571 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 12 23:54:40.380602 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 12 23:54:40.398969 sh[1004]: Success Sep 12 23:54:40.425591 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Sep 12 23:54:40.537445 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 12 23:54:40.553522 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 12 23:54:40.562201 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Sep 12 23:54:40.597520 kernel: BTRFS info (device dm-0): first mount of filesystem 29bc4da8-c689-46a2-a16a-b7bbc722db77 Sep 12 23:54:40.597583 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Sep 12 23:54:40.599458 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Sep 12 23:54:40.599494 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 12 23:54:40.600779 kernel: BTRFS info (device dm-0): using free space tree Sep 12 23:54:40.638344 kernel: BTRFS info (device dm-0): enabling ssd optimizations Sep 12 23:54:40.654745 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 12 23:54:40.655575 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 12 23:54:40.667713 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 12 23:54:40.674823 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 12 23:54:40.708987 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem abbcf5a1-cc71-42ce-94f9-860f3aeda368 Sep 12 23:54:40.709070 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Sep 12 23:54:40.710630 kernel: BTRFS info (device nvme0n1p6): using free space tree Sep 12 23:54:40.732015 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Sep 12 23:54:40.747161 systemd[1]: mnt-oem.mount: Deactivated successfully. Sep 12 23:54:40.755364 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem abbcf5a1-cc71-42ce-94f9-860f3aeda368 Sep 12 23:54:40.769788 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 12 23:54:40.781695 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 12 23:54:40.869123 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 12 23:54:40.884608 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 12 23:54:40.956585 systemd-networkd[1197]: lo: Link UP Sep 12 23:54:40.957083 systemd-networkd[1197]: lo: Gained carrier Sep 12 23:54:40.963158 systemd-networkd[1197]: Enumeration completed Sep 12 23:54:40.965474 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 12 23:54:40.968205 systemd[1]: Reached target network.target - Network. Sep 12 23:54:40.968760 systemd-networkd[1197]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 23:54:40.968767 systemd-networkd[1197]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 12 23:54:40.979968 systemd-networkd[1197]: eth0: Link UP Sep 12 23:54:40.979976 systemd-networkd[1197]: eth0: Gained carrier Sep 12 23:54:40.979994 systemd-networkd[1197]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Sep 12 23:54:41.009441 systemd-networkd[1197]: eth0: DHCPv4 address 172.31.25.8/20, gateway 172.31.16.1 acquired from 172.31.16.1 Sep 12 23:54:41.041554 ignition[1125]: Ignition 2.19.0 Sep 12 23:54:41.042701 ignition[1125]: Stage: fetch-offline Sep 12 23:54:41.045163 ignition[1125]: no configs at "/usr/lib/ignition/base.d" Sep 12 23:54:41.045190 ignition[1125]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 12 23:54:41.046887 ignition[1125]: Ignition finished successfully Sep 12 23:54:41.053769 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 12 23:54:41.069619 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Sep 12 23:54:41.104425 ignition[1208]: Ignition 2.19.0 Sep 12 23:54:41.104453 ignition[1208]: Stage: fetch Sep 12 23:54:41.109505 ignition[1208]: no configs at "/usr/lib/ignition/base.d" Sep 12 23:54:41.109531 ignition[1208]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 12 23:54:41.109680 ignition[1208]: PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 12 23:54:41.123701 ignition[1208]: PUT result: OK Sep 12 23:54:41.126524 ignition[1208]: parsed url from cmdline: "" Sep 12 23:54:41.126540 ignition[1208]: no config URL provided Sep 12 23:54:41.126557 ignition[1208]: reading system config file "/usr/lib/ignition/user.ign" Sep 12 23:54:41.127828 ignition[1208]: no config at "/usr/lib/ignition/user.ign" Sep 12 23:54:41.127866 ignition[1208]: PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 12 23:54:41.135047 ignition[1208]: PUT result: OK Sep 12 23:54:41.135184 ignition[1208]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Sep 12 23:54:41.139069 ignition[1208]: GET result: OK Sep 12 23:54:41.139152 ignition[1208]: parsing config with SHA512: f10187337491d2597fdff0d35848fcc192791824910a5362285a2fee07d01a6989c81e14b4563a2556dc00e80a9b71517c1c6076f69b4928921d2d6a6b691832 Sep 12 23:54:41.146697 unknown[1208]: fetched base config from "system" Sep 12 23:54:41.147748 unknown[1208]: fetched base config from "system" Sep 12 23:54:41.148343 ignition[1208]: fetch: fetch complete Sep 12 23:54:41.147763 unknown[1208]: fetched user config from "aws" Sep 12 23:54:41.148356 ignition[1208]: fetch: fetch passed Sep 12 23:54:41.148439 ignition[1208]: Ignition finished successfully Sep 12 23:54:41.160401 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Sep 12 23:54:41.169643 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 12 23:54:41.203130 ignition[1214]: Ignition 2.19.0 Sep 12 23:54:41.204345 ignition[1214]: Stage: kargs Sep 12 23:54:41.205924 ignition[1214]: no configs at "/usr/lib/ignition/base.d" Sep 12 23:54:41.205951 ignition[1214]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 12 23:54:41.206099 ignition[1214]: PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 12 23:54:41.213453 ignition[1214]: PUT result: OK Sep 12 23:54:41.222227 ignition[1214]: kargs: kargs passed Sep 12 23:54:41.224335 ignition[1214]: Ignition finished successfully Sep 12 23:54:41.229220 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 12 23:54:41.240673 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Sep 12 23:54:41.266106 ignition[1220]: Ignition 2.19.0 Sep 12 23:54:41.268150 ignition[1220]: Stage: disks Sep 12 23:54:41.268842 ignition[1220]: no configs at "/usr/lib/ignition/base.d" Sep 12 23:54:41.268868 ignition[1220]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 12 23:54:41.269749 ignition[1220]: PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 12 23:54:41.277034 ignition[1220]: PUT result: OK Sep 12 23:54:41.281118 ignition[1220]: disks: disks passed Sep 12 23:54:41.281221 ignition[1220]: Ignition finished successfully Sep 12 23:54:41.285378 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 12 23:54:41.286515 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 12 23:54:41.289906 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 12 23:54:41.294225 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 12 23:54:41.298947 systemd[1]: Reached target sysinit.target - System Initialization. Sep 12 23:54:41.303406 systemd[1]: Reached target basic.target - Basic System. Sep 12 23:54:41.317691 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 12 23:54:41.363898 systemd-fsck[1228]: ROOT: clean, 14/553520 files, 52654/553472 blocks Sep 12 23:54:41.368190 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 12 23:54:41.380627 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 12 23:54:41.464370 kernel: EXT4-fs (nvme0n1p9): mounted filesystem d35fd879-6758-447b-9fdd-bb21dd7c5b2b r/w with ordered data mode. Quota mode: none. Sep 12 23:54:41.464775 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 12 23:54:41.473660 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 12 23:54:41.491477 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 12 23:54:41.501567 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 12 23:54:41.504890 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Sep 12 23:54:41.504970 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 12 23:54:41.505016 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 12 23:54:41.526846 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 12 23:54:41.531337 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/nvme0n1p6 scanned by mount (1247) Sep 12 23:54:41.538056 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem abbcf5a1-cc71-42ce-94f9-860f3aeda368 Sep 12 23:54:41.538122 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Sep 12 23:54:41.539497 kernel: BTRFS info (device nvme0n1p6): using free space tree Sep 12 23:54:41.541614 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 12 23:54:41.566512 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Sep 12 23:54:41.569620 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Sep 12 23:54:41.646706 initrd-setup-root[1271]: cut: /sysroot/etc/passwd: No such file or directory Sep 12 23:54:41.656792 initrd-setup-root[1278]: cut: /sysroot/etc/group: No such file or directory Sep 12 23:54:41.666127 initrd-setup-root[1285]: cut: /sysroot/etc/shadow: No such file or directory Sep 12 23:54:41.676033 initrd-setup-root[1292]: cut: /sysroot/etc/gshadow: No such file or directory Sep 12 23:54:41.830766 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 12 23:54:41.844604 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 12 23:54:41.855231 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 12 23:54:41.872227 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 12 23:54:41.876343 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem abbcf5a1-cc71-42ce-94f9-860f3aeda368 Sep 12 23:54:41.913357 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 12 23:54:41.918814 ignition[1360]: INFO : Ignition 2.19.0 Sep 12 23:54:41.918814 ignition[1360]: INFO : Stage: mount Sep 12 23:54:41.918814 ignition[1360]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 23:54:41.918814 ignition[1360]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 12 23:54:41.918814 ignition[1360]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 12 23:54:41.931179 ignition[1360]: INFO : PUT result: OK Sep 12 23:54:41.936602 ignition[1360]: INFO : mount: mount passed Sep 12 23:54:41.938244 ignition[1360]: INFO : Ignition finished successfully Sep 12 23:54:41.942555 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 12 23:54:41.954632 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 12 23:54:41.968734 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 12 23:54:42.007417 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by mount (1371) Sep 12 23:54:42.011266 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem abbcf5a1-cc71-42ce-94f9-860f3aeda368 Sep 12 23:54:42.011333 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Sep 12 23:54:42.012683 kernel: BTRFS info (device nvme0n1p6): using free space tree Sep 12 23:54:42.017324 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Sep 12 23:54:42.021044 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Sep 12 23:54:42.059990 ignition[1389]: INFO : Ignition 2.19.0 Sep 12 23:54:42.059990 ignition[1389]: INFO : Stage: files Sep 12 23:54:42.063756 ignition[1389]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 23:54:42.063756 ignition[1389]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 12 23:54:42.063756 ignition[1389]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 12 23:54:42.063756 ignition[1389]: INFO : PUT result: OK Sep 12 23:54:42.074166 ignition[1389]: DEBUG : files: compiled without relabeling support, skipping Sep 12 23:54:42.077627 ignition[1389]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 12 23:54:42.077627 ignition[1389]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 12 23:54:42.084454 ignition[1389]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 12 23:54:42.087433 ignition[1389]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 12 23:54:42.091536 unknown[1389]: wrote ssh authorized keys file for user: core Sep 12 23:54:42.093828 ignition[1389]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 12 23:54:42.098541 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Sep 12 23:54:42.103447 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Sep 12 23:54:42.103447 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 12 23:54:42.103447 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 12 23:54:42.103447 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Sep 12 23:54:42.103447 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Sep 12 23:54:42.103447 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Sep 12 23:54:42.103447 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1 Sep 12 23:54:42.443560 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Sep 12 23:54:42.502655 systemd-networkd[1197]: eth0: Gained IPv6LL Sep 12 23:54:42.814978 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Sep 12 23:54:42.814978 ignition[1389]: INFO : files: createResultFile: createFiles: op(7): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 12 23:54:42.814978 ignition[1389]: INFO : files: createResultFile: createFiles: op(7): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 12 23:54:42.814978 ignition[1389]: INFO : files: files passed Sep 12 23:54:42.814978 ignition[1389]: INFO : Ignition finished successfully Sep 12 23:54:42.818702 systemd[1]: Finished ignition-files.service 
- Ignition (files). Sep 12 23:54:42.844154 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 12 23:54:42.849704 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 12 23:54:42.866917 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 12 23:54:42.870048 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 12 23:54:42.885175 initrd-setup-root-after-ignition[1417]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 12 23:54:42.885175 initrd-setup-root-after-ignition[1417]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 12 23:54:42.892714 initrd-setup-root-after-ignition[1421]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 12 23:54:42.900027 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 12 23:54:42.905777 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 12 23:54:42.919677 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 12 23:54:42.969957 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 12 23:54:42.970871 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 12 23:54:42.978007 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 12 23:54:42.980477 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 12 23:54:42.982961 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 12 23:54:42.993719 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 12 23:54:43.024843 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 12 23:54:43.037605 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 12 23:54:43.064000 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 12 23:54:43.065585 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 23:54:43.066270 systemd[1]: Stopped target timers.target - Timer Units. Sep 12 23:54:43.069360 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 12 23:54:43.069618 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 12 23:54:43.087690 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 12 23:54:43.092546 systemd[1]: Stopped target basic.target - Basic System. Sep 12 23:54:43.093161 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 12 23:54:43.096825 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 12 23:54:43.101345 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 12 23:54:43.106300 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 12 23:54:43.111096 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 12 23:54:43.115697 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 12 23:54:43.121197 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 12 23:54:43.125738 systemd[1]: Stopped target swap.target - Swaps. Sep 12 23:54:43.130119 systemd[1]: dracut-pre-mount.service: Deactivated successfully. 
Sep 12 23:54:43.130419 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 12 23:54:43.138276 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 12 23:54:43.142898 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 23:54:43.149099 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 12 23:54:43.150208 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 23:54:43.154509 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 12 23:54:43.154799 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 12 23:54:43.162843 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 12 23:54:43.163238 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 12 23:54:43.167676 systemd[1]: ignition-files.service: Deactivated successfully. Sep 12 23:54:43.167917 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 12 23:54:43.186781 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 12 23:54:43.196855 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 12 23:54:43.202991 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 12 23:54:43.203386 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 23:54:43.217793 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 12 23:54:43.218096 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 12 23:54:43.239736 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 12 23:54:43.243952 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 12 23:54:43.262263 ignition[1441]: INFO : Ignition 2.19.0 Sep 12 23:54:43.262263 ignition[1441]: INFO : Stage: umount Sep 12 23:54:43.262263 ignition[1441]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 23:54:43.262263 ignition[1441]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 12 23:54:43.262263 ignition[1441]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 12 23:54:43.276668 ignition[1441]: INFO : PUT result: OK Sep 12 23:54:43.280703 ignition[1441]: INFO : umount: umount passed Sep 12 23:54:43.280703 ignition[1441]: INFO : Ignition finished successfully Sep 12 23:54:43.284173 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 12 23:54:43.285849 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 12 23:54:43.294510 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 12 23:54:43.296634 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 12 23:54:43.301357 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 12 23:54:43.301478 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 12 23:54:43.307678 systemd[1]: ignition-fetch.service: Deactivated successfully. Sep 12 23:54:43.307791 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Sep 12 23:54:43.310514 systemd[1]: Stopped target network.target - Network. Sep 12 23:54:43.315539 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 12 23:54:43.315663 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 12 23:54:43.322144 systemd[1]: Stopped target paths.target - Path Units. 
Sep 12 23:54:43.324972 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 12 23:54:43.332058 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 12 23:54:43.332722 systemd[1]: Stopped target slices.target - Slice Units. Sep 12 23:54:43.333058 systemd[1]: Stopped target sockets.target - Socket Units. Sep 12 23:54:43.333869 systemd[1]: iscsid.socket: Deactivated successfully. Sep 12 23:54:43.333959 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 12 23:54:43.334226 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 12 23:54:43.334292 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 12 23:54:43.351300 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 12 23:54:43.351972 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 12 23:54:43.355477 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 12 23:54:43.356762 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 12 23:54:43.375769 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 12 23:54:43.380441 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 12 23:54:43.385714 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 12 23:54:43.388151 systemd-networkd[1197]: eth0: DHCPv6 lease lost Sep 12 23:54:43.389764 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 12 23:54:43.389948 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 12 23:54:43.399673 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 12 23:54:43.399964 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 12 23:54:43.409269 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 12 23:54:43.410485 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 12 23:54:43.419505 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 12 23:54:43.419623 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 12 23:54:43.424208 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 12 23:54:43.425969 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 12 23:54:43.444478 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 12 23:54:43.448019 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 12 23:54:43.448135 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 12 23:54:43.458717 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 12 23:54:43.458829 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 12 23:54:43.461192 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 12 23:54:43.461275 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 12 23:54:43.462857 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 12 23:54:43.462948 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 23:54:43.463351 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 23:54:43.503702 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 12 23:54:43.506097 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Sep 12 23:54:43.513691 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 12 23:54:43.513802 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 12 23:54:43.517740 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 12 23:54:43.517817 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 23:54:43.518272 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 12 23:54:43.522281 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 12 23:54:43.525966 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 12 23:54:43.526072 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 12 23:54:43.546378 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 12 23:54:43.546512 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 23:54:43.559697 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 12 23:54:43.563614 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 12 23:54:43.576566 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 23:54:43.581569 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Sep 12 23:54:43.581682 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 12 23:54:43.587266 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 12 23:54:43.587460 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 23:54:43.596911 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 23:54:43.597030 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 23:54:43.604584 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 12 23:54:43.604769 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 12 23:54:43.607427 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 12 23:54:43.607602 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 12 23:54:43.617169 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 12 23:54:43.634003 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 12 23:54:43.652898 systemd[1]: Switching root. Sep 12 23:54:43.688472 systemd-journald[251]: Journal stopped Sep 12 23:54:45.660164 systemd-journald[251]: Received SIGTERM from PID 1 (systemd). Sep 12 23:54:45.660380 kernel: SELinux: policy capability network_peer_controls=1 Sep 12 23:54:45.660430 kernel: SELinux: policy capability open_perms=1 Sep 12 23:54:45.660463 kernel: SELinux: policy capability extended_socket_class=1 Sep 12 23:54:45.660494 kernel: SELinux: policy capability always_check_network=0 Sep 12 23:54:45.660525 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 12 23:54:45.660564 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 12 23:54:45.660593 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 12 23:54:45.660624 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 12 23:54:45.660654 kernel: audit: type=1403 audit(1757721284.025:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 12 23:54:45.660696 systemd[1]: Successfully loaded SELinux policy in 51.022ms. Sep 12 23:54:45.660744 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 27.061ms. 
Sep 12 23:54:45.660782 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 12 23:54:45.660814 systemd[1]: Detected virtualization amazon. Sep 12 23:54:45.660846 systemd[1]: Detected architecture arm64. Sep 12 23:54:45.660880 systemd[1]: Detected first boot. Sep 12 23:54:45.660916 systemd[1]: Initializing machine ID from VM UUID. Sep 12 23:54:45.660951 zram_generator::config[1483]: No configuration found. Sep 12 23:54:45.660990 systemd[1]: Populated /etc with preset unit settings. Sep 12 23:54:45.661023 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 12 23:54:45.661060 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 12 23:54:45.661092 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 12 23:54:45.661123 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 12 23:54:45.661156 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 12 23:54:45.661188 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 12 23:54:45.661221 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 12 23:54:45.661251 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 12 23:54:45.661284 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 12 23:54:45.661356 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 12 23:54:45.661393 systemd[1]: Created slice user.slice - User and Session Slice. Sep 12 23:54:45.661427 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 23:54:45.661458 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 12 23:54:45.661491 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 12 23:54:45.661563 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 12 23:54:45.661599 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 12 23:54:45.661633 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 12 23:54:45.661665 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Sep 12 23:54:45.661704 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 23:54:45.661737 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 12 23:54:45.661771 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 12 23:54:45.661804 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 12 23:54:45.661836 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 12 23:54:45.661866 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 23:54:45.661898 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 12 23:54:45.661927 systemd[1]: Reached target slices.target - Slice Units. Sep 12 23:54:45.661962 systemd[1]: Reached target swap.target - Swaps. 
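The "Detected first boot" and "Initializing machine ID from VM UUID" messages above mean systemd found no /etc/machine-id and seeded one from the hypervisor-provided UUID. A hypothetical sketch of that derivation on this SMBIOS-equipped instance follows; the path and the lowercase/strip-dashes step are assumptions about the mechanism, and reading product_uuid normally requires root.

```python
# Hypothetical illustration of "Initializing machine ID from VM UUID":
# read the SMBIOS/DMI product UUID and normalise it into the 32-hex-digit
# form used by /etc/machine-id.
from pathlib import Path

product_uuid = Path("/sys/class/dmi/id/product_uuid").read_text().strip()
machine_id = product_uuid.lower().replace("-", "")
print(machine_id)  # compare with /etc/machine-id after the first boot
```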
Sep 12 23:54:45.661992 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 12 23:54:45.662023 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 12 23:54:45.662054 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 12 23:54:45.662084 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 12 23:54:45.662117 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 23:54:45.662149 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 12 23:54:45.662180 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 12 23:54:45.662212 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 12 23:54:45.662248 systemd[1]: Mounting media.mount - External Media Directory... Sep 12 23:54:45.662278 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 12 23:54:45.662343 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 12 23:54:45.662382 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 12 23:54:45.662418 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 12 23:54:45.662449 systemd[1]: Reached target machines.target - Containers. Sep 12 23:54:45.662480 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 12 23:54:45.662515 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 23:54:45.667533 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 12 23:54:45.667578 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 12 23:54:45.667609 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 23:54:45.667641 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 12 23:54:45.667670 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 23:54:45.667702 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 12 23:54:45.667732 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 23:54:45.667766 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 12 23:54:45.667798 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 12 23:54:45.667834 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 12 23:54:45.667865 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 12 23:54:45.667899 systemd[1]: Stopped systemd-fsck-usr.service. Sep 12 23:54:45.667933 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 12 23:54:45.667964 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 12 23:54:45.667993 kernel: ACPI: bus type drm_connector registered Sep 12 23:54:45.668022 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 12 23:54:45.668053 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... 
Sep 12 23:54:45.668083 kernel: fuse: init (API version 7.39) Sep 12 23:54:45.668117 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 12 23:54:45.668150 systemd[1]: verity-setup.service: Deactivated successfully. Sep 12 23:54:45.668180 systemd[1]: Stopped verity-setup.service. Sep 12 23:54:45.668210 kernel: loop: module loaded Sep 12 23:54:45.668238 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 12 23:54:45.668268 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 12 23:54:45.668298 systemd[1]: Mounted media.mount - External Media Directory. Sep 12 23:54:45.668377 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 12 23:54:45.668411 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 12 23:54:45.668449 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 12 23:54:45.668482 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 23:54:45.668512 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 12 23:54:45.668542 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 12 23:54:45.668576 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 23:54:45.668609 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 23:54:45.668639 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 12 23:54:45.668669 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 12 23:54:45.668705 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 23:54:45.668746 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 23:54:45.668776 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 12 23:54:45.668809 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 12 23:54:45.668906 systemd-journald[1568]: Collecting audit messages is disabled. Sep 12 23:54:45.668988 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 23:54:45.669021 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 23:54:45.669053 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 12 23:54:45.669084 systemd-journald[1568]: Journal started Sep 12 23:54:45.669136 systemd-journald[1568]: Runtime Journal (/run/log/journal/ec29d21cf91bdde223371717b63c7d51) is 8.0M, max 75.3M, 67.3M free. Sep 12 23:54:45.026691 systemd[1]: Queued start job for default target multi-user.target. Sep 12 23:54:45.050123 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Sep 12 23:54:45.050936 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 12 23:54:45.679211 systemd[1]: Started systemd-journald.service - Journal Service. Sep 12 23:54:45.682676 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 12 23:54:45.687094 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 12 23:54:45.727424 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 12 23:54:45.735116 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 12 23:54:45.746580 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 12 23:54:45.760486 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... 
Sep 12 23:54:45.764571 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 12 23:54:45.764652 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 12 23:54:45.772135 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Sep 12 23:54:45.787801 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 12 23:54:45.795729 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 12 23:54:45.798886 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 23:54:45.810957 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 12 23:54:45.832755 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 12 23:54:45.835529 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 12 23:54:45.839538 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 12 23:54:45.844929 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 12 23:54:45.850674 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 12 23:54:45.859718 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 12 23:54:45.871677 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 12 23:54:45.878763 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 12 23:54:45.880152 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 12 23:54:45.880722 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 12 23:54:45.895554 systemd-journald[1568]: Time spent on flushing to /var/log/journal/ec29d21cf91bdde223371717b63c7d51 is 68.679ms for 892 entries. Sep 12 23:54:45.895554 systemd-journald[1568]: System Journal (/var/log/journal/ec29d21cf91bdde223371717b63c7d51) is 8.0M, max 195.6M, 187.6M free. Sep 12 23:54:45.979298 systemd-journald[1568]: Received client request to flush runtime journal. Sep 12 23:54:45.959156 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 12 23:54:45.965498 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 12 23:54:45.982694 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Sep 12 23:54:45.993207 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 12 23:54:46.019914 kernel: loop0: detected capacity change from 0 to 52536 Sep 12 23:54:46.036631 systemd-tmpfiles[1613]: ACLs are not supported, ignoring. Sep 12 23:54:46.039429 systemd-tmpfiles[1613]: ACLs are not supported, ignoring. Sep 12 23:54:46.070151 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 12 23:54:46.078234 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Sep 12 23:54:46.083248 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 12 23:54:46.092627 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
Sep 12 23:54:46.110595 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 12 23:54:46.144792 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 12 23:54:46.161830 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 23:54:46.179716 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Sep 12 23:54:46.188363 kernel: loop1: detected capacity change from 0 to 114328 Sep 12 23:54:46.228210 udevadm[1632]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Sep 12 23:54:46.238359 kernel: loop2: detected capacity change from 0 to 203944 Sep 12 23:54:46.250407 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 12 23:54:46.261793 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 12 23:54:46.305466 systemd-tmpfiles[1635]: ACLs are not supported, ignoring. Sep 12 23:54:46.306053 systemd-tmpfiles[1635]: ACLs are not supported, ignoring. Sep 12 23:54:46.320415 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 23:54:46.347178 kernel: loop3: detected capacity change from 0 to 114432 Sep 12 23:54:46.402561 kernel: loop4: detected capacity change from 0 to 52536 Sep 12 23:54:46.424363 kernel: loop5: detected capacity change from 0 to 114328 Sep 12 23:54:46.450448 kernel: loop6: detected capacity change from 0 to 203944 Sep 12 23:54:46.489357 kernel: loop7: detected capacity change from 0 to 114432 Sep 12 23:54:46.513256 (sd-merge)[1640]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Sep 12 23:54:46.514296 (sd-merge)[1640]: Merged extensions into '/usr'. Sep 12 23:54:46.530994 systemd[1]: Reloading requested from client PID 1612 ('systemd-sysext') (unit systemd-sysext.service)... Sep 12 23:54:46.531031 systemd[1]: Reloading... Sep 12 23:54:46.775358 zram_generator::config[1667]: No configuration found. Sep 12 23:54:46.814641 ldconfig[1607]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 12 23:54:47.068072 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 23:54:47.191910 systemd[1]: Reloading finished in 659 ms. Sep 12 23:54:47.233419 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 12 23:54:47.240930 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 12 23:54:47.251275 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 12 23:54:47.262706 systemd[1]: Starting ensure-sysext.service... Sep 12 23:54:47.276759 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 12 23:54:47.286682 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 23:54:47.308642 systemd[1]: Reloading requested from client PID 1720 ('systemctl') (unit ensure-sysext.service)... Sep 12 23:54:47.308674 systemd[1]: Reloading... Sep 12 23:54:47.332430 systemd-tmpfiles[1721]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
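The (sd-merge) lines above show systemd-sysext overlaying the containerd-flatcar, docker-flatcar, kubernetes and oem-ami images onto /usr; the kubernetes image is the one Ignition just placed under /opt/extensions and linked into /etc/extensions. An image is only merged if it carries an extension-release file that matches the host, roughly like the sketch below (the exact values are assumptions, not taken from the log):

```ini
# usr/lib/extension-release.d/extension-release.kubernetes, inside the sysext image
ID=flatcar
SYSEXT_LEVEL=1.0
# ID=_any would instead allow the image to merge on any distribution.
```

After boot, `systemd-sysext status` lists the merged images and `systemd-sysext refresh` re-merges them after an image is added or removed.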
Sep 12 23:54:47.333216 systemd-tmpfiles[1721]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 12 23:54:47.336223 systemd-tmpfiles[1721]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 12 23:54:47.336960 systemd-tmpfiles[1721]: ACLs are not supported, ignoring. Sep 12 23:54:47.337146 systemd-tmpfiles[1721]: ACLs are not supported, ignoring. Sep 12 23:54:47.358123 systemd-tmpfiles[1721]: Detected autofs mount point /boot during canonicalization of boot. Sep 12 23:54:47.358160 systemd-tmpfiles[1721]: Skipping /boot Sep 12 23:54:47.388925 systemd-udevd[1722]: Using default interface naming scheme 'v255'. Sep 12 23:54:47.398851 systemd-tmpfiles[1721]: Detected autofs mount point /boot during canonicalization of boot. Sep 12 23:54:47.398890 systemd-tmpfiles[1721]: Skipping /boot Sep 12 23:54:47.523399 zram_generator::config[1748]: No configuration found. Sep 12 23:54:47.733657 (udev-worker)[1768]: Network interface NamePolicy= disabled on kernel command line. Sep 12 23:54:47.913520 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (1765) Sep 12 23:54:47.975803 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 23:54:48.127683 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Sep 12 23:54:48.128733 systemd[1]: Reloading finished in 819 ms. Sep 12 23:54:48.162605 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 23:54:48.177413 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 23:54:48.302331 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Sep 12 23:54:48.316438 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Sep 12 23:54:48.335902 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 12 23:54:48.349889 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 12 23:54:48.357929 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 23:54:48.364856 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Sep 12 23:54:48.371906 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 23:54:48.382947 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 23:54:48.391845 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 23:54:48.395682 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 23:54:48.406902 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 12 23:54:48.421164 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 12 23:54:48.449796 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 12 23:54:48.470664 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 12 23:54:48.474434 lvm[1919]: WARNING: Failed to connect to lvmetad. 
Falling back to device scanning. Sep 12 23:54:48.483517 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 12 23:54:48.510644 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 23:54:48.526018 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 23:54:48.529658 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 23:54:48.534496 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 23:54:48.535484 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 23:54:48.554563 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 23:54:48.556047 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 23:54:48.573346 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 23:54:48.584520 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 23:54:48.591981 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 23:54:48.600861 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 23:54:48.603454 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 23:54:48.610056 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 12 23:54:48.616635 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Sep 12 23:54:48.624558 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 12 23:54:48.645488 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 12 23:54:48.661981 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Sep 12 23:54:48.668450 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 12 23:54:48.689353 lvm[1951]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 12 23:54:48.696400 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 12 23:54:48.704289 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 12 23:54:48.724140 systemd[1]: Finished ensure-sysext.service. Sep 12 23:54:48.735459 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 23:54:48.737439 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 23:54:48.740836 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Sep 12 23:54:48.747085 augenrules[1960]: No rules Sep 12 23:54:48.751958 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 23:54:48.759778 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 12 23:54:48.764768 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 23:54:48.764898 systemd[1]: Reached target time-set.target - System Time Set. Sep 12 23:54:48.778527 systemd[1]: Starting systemd-update-done.service - Update is Completed... 
Sep 12 23:54:48.780799 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 12 23:54:48.781729 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 12 23:54:48.788240 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 23:54:48.788588 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 23:54:48.791883 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 23:54:48.792211 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 23:54:48.799102 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 12 23:54:48.799279 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 12 23:54:48.805189 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 12 23:54:48.805576 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 12 23:54:48.841027 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 12 23:54:48.865490 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 12 23:54:48.880960 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 23:54:48.982010 systemd-networkd[1930]: lo: Link UP Sep 12 23:54:48.982031 systemd-networkd[1930]: lo: Gained carrier Sep 12 23:54:48.984880 systemd-networkd[1930]: Enumeration completed Sep 12 23:54:48.985110 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 12 23:54:48.988756 systemd-networkd[1930]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 23:54:48.988764 systemd-networkd[1930]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 12 23:54:48.990882 systemd-networkd[1930]: eth0: Link UP Sep 12 23:54:48.991276 systemd-networkd[1930]: eth0: Gained carrier Sep 12 23:54:48.991368 systemd-networkd[1930]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 23:54:48.996632 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 12 23:54:48.998057 systemd-resolved[1931]: Positive Trust Anchors: Sep 12 23:54:48.998099 systemd-resolved[1931]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 12 23:54:48.998166 systemd-resolved[1931]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 12 23:54:49.006436 systemd-networkd[1930]: eth0: DHCPv4 address 172.31.25.8/20, gateway 172.31.16.1 acquired from 172.31.16.1 Sep 12 23:54:49.013893 systemd-resolved[1931]: Defaulting to hostname 'linux'. 
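eth0 is matched by the catch-all /usr/lib/systemd/network/zz-default.network (hence the "potentially unpredictable interface name" note), which on Flatcar boils down to DHCP on any interface not claimed by a more specific file; the DHCPv4 lease 172.31.25.8/20 via 172.31.16.1 above comes from that. An approximate sketch of such a unit follows; the shipped file carries more options, so treat this as illustrative only:

```ini
# /usr/lib/systemd/network/zz-default.network (approximation)
[Match]
Name=*

[Network]
DHCP=yes
```

A more specific file in /etc/systemd/network (say, 10-eth0.network with Name=eth0) would win instead, since .network files are considered in lexical order and the first matching one is applied.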
Sep 12 23:54:49.017783 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 12 23:54:49.020794 systemd[1]: Reached target network.target - Network. Sep 12 23:54:49.023031 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 12 23:54:49.025939 systemd[1]: Reached target sysinit.target - System Initialization. Sep 12 23:54:49.028871 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 12 23:54:49.031705 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 12 23:54:49.034607 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 12 23:54:49.037432 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 12 23:54:49.040089 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 12 23:54:49.042939 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 12 23:54:49.043010 systemd[1]: Reached target paths.target - Path Units. Sep 12 23:54:49.045493 systemd[1]: Reached target timers.target - Timer Units. Sep 12 23:54:49.048981 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 12 23:54:49.053965 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 12 23:54:49.062624 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 12 23:54:49.066096 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 12 23:54:49.068529 systemd[1]: Reached target sockets.target - Socket Units. Sep 12 23:54:49.070657 systemd[1]: Reached target basic.target - Basic System. Sep 12 23:54:49.073075 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 12 23:54:49.073129 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 12 23:54:49.075570 systemd[1]: Starting containerd.service - containerd container runtime... Sep 12 23:54:49.088690 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Sep 12 23:54:49.094600 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 12 23:54:49.106662 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 12 23:54:49.115688 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 12 23:54:49.118380 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 12 23:54:49.123666 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 12 23:54:49.131842 systemd[1]: Started ntpd.service - Network Time Service. Sep 12 23:54:49.142559 systemd[1]: Starting setup-oem.service - Setup OEM... Sep 12 23:54:49.154674 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 12 23:54:49.162599 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 12 23:54:49.182626 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 12 23:54:49.187216 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
Sep 12 23:54:49.190152 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 12 23:54:49.196652 systemd[1]: Starting update-engine.service - Update Engine... Sep 12 23:54:49.203592 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 12 23:54:49.254224 jq[1990]: false Sep 12 23:54:49.254767 jq[2001]: true Sep 12 23:54:49.268688 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 12 23:54:49.271457 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 12 23:54:49.313089 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 12 23:54:49.313708 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 12 23:54:49.333379 ntpd[1993]: ntpd 4.2.8p17@1.4004-o Fri Sep 12 22:00:00 UTC 2025 (1): Starting Sep 12 23:54:49.334400 ntpd[1993]: 12 Sep 23:54:49 ntpd[1993]: ntpd 4.2.8p17@1.4004-o Fri Sep 12 22:00:00 UTC 2025 (1): Starting Sep 12 23:54:49.334400 ntpd[1993]: 12 Sep 23:54:49 ntpd[1993]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Sep 12 23:54:49.334400 ntpd[1993]: 12 Sep 23:54:49 ntpd[1993]: ---------------------------------------------------- Sep 12 23:54:49.334400 ntpd[1993]: 12 Sep 23:54:49 ntpd[1993]: ntp-4 is maintained by Network Time Foundation, Sep 12 23:54:49.334400 ntpd[1993]: 12 Sep 23:54:49 ntpd[1993]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Sep 12 23:54:49.334400 ntpd[1993]: 12 Sep 23:54:49 ntpd[1993]: corporation. Support and training for ntp-4 are Sep 12 23:54:49.334400 ntpd[1993]: 12 Sep 23:54:49 ntpd[1993]: available at https://www.nwtime.org/support Sep 12 23:54:49.334400 ntpd[1993]: 12 Sep 23:54:49 ntpd[1993]: ---------------------------------------------------- Sep 12 23:54:49.333440 ntpd[1993]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Sep 12 23:54:49.333461 ntpd[1993]: ---------------------------------------------------- Sep 12 23:54:49.333480 ntpd[1993]: ntp-4 is maintained by Network Time Foundation, Sep 12 23:54:49.333499 ntpd[1993]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Sep 12 23:54:49.333521 ntpd[1993]: corporation. 
Support and training for ntp-4 are Sep 12 23:54:49.333541 ntpd[1993]: available at https://www.nwtime.org/support Sep 12 23:54:49.333561 ntpd[1993]: ---------------------------------------------------- Sep 12 23:54:49.352142 ntpd[1993]: proto: precision = 0.096 usec (-23) Sep 12 23:54:49.352667 ntpd[1993]: 12 Sep 23:54:49 ntpd[1993]: proto: precision = 0.096 usec (-23) Sep 12 23:54:49.359679 ntpd[1993]: basedate set to 2025-08-31 Sep 12 23:54:49.371343 ntpd[1993]: 12 Sep 23:54:49 ntpd[1993]: basedate set to 2025-08-31 Sep 12 23:54:49.371343 ntpd[1993]: 12 Sep 23:54:49 ntpd[1993]: gps base set to 2025-08-31 (week 2382) Sep 12 23:54:49.359729 ntpd[1993]: gps base set to 2025-08-31 (week 2382) Sep 12 23:54:49.380399 ntpd[1993]: Listen and drop on 0 v6wildcard [::]:123 Sep 12 23:54:49.380503 ntpd[1993]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Sep 12 23:54:49.380660 ntpd[1993]: 12 Sep 23:54:49 ntpd[1993]: Listen and drop on 0 v6wildcard [::]:123 Sep 12 23:54:49.380660 ntpd[1993]: 12 Sep 23:54:49 ntpd[1993]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Sep 12 23:54:49.381189 ntpd[1993]: Listen normally on 2 lo 127.0.0.1:123 Sep 12 23:54:49.400156 jq[2009]: true Sep 12 23:54:49.401380 ntpd[1993]: 12 Sep 23:54:49 ntpd[1993]: Listen normally on 2 lo 127.0.0.1:123 Sep 12 23:54:49.401380 ntpd[1993]: 12 Sep 23:54:49 ntpd[1993]: Listen normally on 3 eth0 172.31.25.8:123 Sep 12 23:54:49.401380 ntpd[1993]: 12 Sep 23:54:49 ntpd[1993]: Listen normally on 4 lo [::1]:123 Sep 12 23:54:49.401380 ntpd[1993]: 12 Sep 23:54:49 ntpd[1993]: bind(21) AF_INET6 fe80::47f:34ff:fe62:842d%2#123 flags 0x11 failed: Cannot assign requested address Sep 12 23:54:49.401380 ntpd[1993]: 12 Sep 23:54:49 ntpd[1993]: unable to create socket on eth0 (5) for fe80::47f:34ff:fe62:842d%2#123 Sep 12 23:54:49.401380 ntpd[1993]: 12 Sep 23:54:49 ntpd[1993]: failed to init interface for address fe80::47f:34ff:fe62:842d%2 Sep 12 23:54:49.401380 ntpd[1993]: 12 Sep 23:54:49 ntpd[1993]: Listening on routing socket on fd #21 for interface updates Sep 12 23:54:49.382029 (ntainerd)[2018]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 12 23:54:49.381275 ntpd[1993]: Listen normally on 3 eth0 172.31.25.8:123 Sep 12 23:54:49.384464 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 12 23:54:49.383932 dbus-daemon[1989]: [system] SELinux support is enabled Sep 12 23:54:49.393783 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 12 23:54:49.389219 ntpd[1993]: Listen normally on 4 lo [::1]:123 Sep 12 23:54:49.393832 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 12 23:54:49.389373 ntpd[1993]: bind(21) AF_INET6 fe80::47f:34ff:fe62:842d%2#123 flags 0x11 failed: Cannot assign requested address Sep 12 23:54:49.398480 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 12 23:54:49.389419 ntpd[1993]: unable to create socket on eth0 (5) for fe80::47f:34ff:fe62:842d%2#123 Sep 12 23:54:49.398517 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Sep 12 23:54:49.389692 ntpd[1993]: failed to init interface for address fe80::47f:34ff:fe62:842d%2 Sep 12 23:54:49.389765 ntpd[1993]: Listening on routing socket on fd #21 for interface updates Sep 12 23:54:49.433341 update_engine[2000]: I20250912 23:54:49.414161 2000 main.cc:92] Flatcar Update Engine starting Sep 12 23:54:49.429086 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Sep 12 23:54:49.419988 dbus-daemon[1989]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1930 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Sep 12 23:54:49.447980 systemd[1]: Started update-engine.service - Update Engine. Sep 12 23:54:49.447234 ntpd[1993]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 12 23:54:49.459742 ntpd[1993]: 12 Sep 23:54:49 ntpd[1993]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 12 23:54:49.459742 ntpd[1993]: 12 Sep 23:54:49 ntpd[1993]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 12 23:54:49.456619 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 12 23:54:49.447328 ntpd[1993]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 12 23:54:49.467855 extend-filesystems[1991]: Found loop4 Sep 12 23:54:49.467855 extend-filesystems[1991]: Found loop5 Sep 12 23:54:49.467855 extend-filesystems[1991]: Found loop6 Sep 12 23:54:49.467855 extend-filesystems[1991]: Found loop7 Sep 12 23:54:49.467855 extend-filesystems[1991]: Found nvme0n1 Sep 12 23:54:49.467855 extend-filesystems[1991]: Found nvme0n1p1 Sep 12 23:54:49.467855 extend-filesystems[1991]: Found nvme0n1p2 Sep 12 23:54:49.467855 extend-filesystems[1991]: Found nvme0n1p3 Sep 12 23:54:49.467855 extend-filesystems[1991]: Found usr Sep 12 23:54:49.467855 extend-filesystems[1991]: Found nvme0n1p4 Sep 12 23:54:49.467855 extend-filesystems[1991]: Found nvme0n1p6 Sep 12 23:54:49.467855 extend-filesystems[1991]: Found nvme0n1p7 Sep 12 23:54:49.557368 update_engine[2000]: I20250912 23:54:49.466648 2000 update_check_scheduler.cc:74] Next update check in 5m56s Sep 12 23:54:49.519631 systemd[1]: motdgen.service: Deactivated successfully. Sep 12 23:54:49.557603 extend-filesystems[1991]: Found nvme0n1p9 Sep 12 23:54:49.557603 extend-filesystems[1991]: Checking size of /dev/nvme0n1p9 Sep 12 23:54:49.522395 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 12 23:54:49.545003 systemd[1]: Finished setup-oem.service - Setup OEM. Sep 12 23:54:49.603410 extend-filesystems[1991]: Resized partition /dev/nvme0n1p9 Sep 12 23:54:49.609337 extend-filesystems[2047]: resize2fs 1.47.1 (20-May-2024) Sep 12 23:54:49.636138 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Sep 12 23:54:49.633248 systemd-logind[1999]: Watching system buttons on /dev/input/event0 (Power Button) Sep 12 23:54:49.640581 systemd-logind[1999]: Watching system buttons on /dev/input/event1 (Sleep Button) Sep 12 23:54:49.643036 systemd-logind[1999]: New seat seat0. Sep 12 23:54:49.647065 systemd[1]: Started systemd-logind.service - User Login Management. 
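For scale, the EXT4 resize logged here grows the root filesystem from 553472 to 1489915 blocks of 4 KiB each (the block size is confirmed by the later "1489915 (4k) blocks" message): 553472 × 4096 B is about 2.11 GiB before, and 1489915 × 4096 B is about 5.68 GiB after, i.e. the on-line growth of /dev/nvme0n1p9, presumably to fill the rest of the EBS volume.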
Sep 12 23:54:49.721404 coreos-metadata[1988]: Sep 12 23:54:49.720 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Sep 12 23:54:49.750988 coreos-metadata[1988]: Sep 12 23:54:49.723 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Sep 12 23:54:49.750988 coreos-metadata[1988]: Sep 12 23:54:49.726 INFO Fetch successful Sep 12 23:54:49.750988 coreos-metadata[1988]: Sep 12 23:54:49.726 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Sep 12 23:54:49.750988 coreos-metadata[1988]: Sep 12 23:54:49.728 INFO Fetch successful Sep 12 23:54:49.750988 coreos-metadata[1988]: Sep 12 23:54:49.728 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Sep 12 23:54:49.750988 coreos-metadata[1988]: Sep 12 23:54:49.732 INFO Fetch successful Sep 12 23:54:49.750988 coreos-metadata[1988]: Sep 12 23:54:49.732 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Sep 12 23:54:49.750988 coreos-metadata[1988]: Sep 12 23:54:49.733 INFO Fetch successful Sep 12 23:54:49.750988 coreos-metadata[1988]: Sep 12 23:54:49.734 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Sep 12 23:54:49.750988 coreos-metadata[1988]: Sep 12 23:54:49.735 INFO Fetch failed with 404: resource not found Sep 12 23:54:49.750988 coreos-metadata[1988]: Sep 12 23:54:49.735 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Sep 12 23:54:49.750988 coreos-metadata[1988]: Sep 12 23:54:49.736 INFO Fetch successful Sep 12 23:54:49.750988 coreos-metadata[1988]: Sep 12 23:54:49.737 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Sep 12 23:54:49.750988 coreos-metadata[1988]: Sep 12 23:54:49.739 INFO Fetch successful Sep 12 23:54:49.750988 coreos-metadata[1988]: Sep 12 23:54:49.739 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Sep 12 23:54:49.750988 coreos-metadata[1988]: Sep 12 23:54:49.741 INFO Fetch successful Sep 12 23:54:49.750988 coreos-metadata[1988]: Sep 12 23:54:49.741 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Sep 12 23:54:49.750988 coreos-metadata[1988]: Sep 12 23:54:49.743 INFO Fetch successful Sep 12 23:54:49.750988 coreos-metadata[1988]: Sep 12 23:54:49.744 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Sep 12 23:54:49.750988 coreos-metadata[1988]: Sep 12 23:54:49.744 INFO Fetch successful Sep 12 23:54:49.772407 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Sep 12 23:54:49.798368 extend-filesystems[2047]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Sep 12 23:54:49.798368 extend-filesystems[2047]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 12 23:54:49.798368 extend-filesystems[2047]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Sep 12 23:54:49.809381 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 12 23:54:49.837027 bash[2054]: Updated "/home/core/.ssh/authorized_keys" Sep 12 23:54:49.837210 extend-filesystems[1991]: Resized filesystem in /dev/nvme0n1p9 Sep 12 23:54:49.814906 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 12 23:54:49.824504 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. 
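The coreos-metadata sequence above is the standard IMDSv2 flow: a PUT to mint a session token, then GETs against the 2021-01-03 metadata tree with the token header (the 404 on /ipv6 is expected for an instance without an IPv6 address). A minimal Python sketch of the same flow, for illustration only and not the agent's actual implementation:

```python
# Sketch of the IMDSv2 flow visible in the coreos-metadata log lines above.
# Endpoint paths are taken from the log; everything else is illustrative.
import urllib.request

IMDS = "http://169.254.169.254"

def imds_token(ttl_seconds: int = 300) -> str:
    # PUT http://169.254.169.254/latest/api/token
    req = urllib.request.Request(
        f"{IMDS}/latest/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": str(ttl_seconds)},
    )
    with urllib.request.urlopen(req, timeout=2) as resp:
        return resp.read().decode()

def imds_get(path: str, token: str) -> str:
    req = urllib.request.Request(
        f"{IMDS}{path}", headers={"X-aws-ec2-metadata-token": token}
    )
    with urllib.request.urlopen(req, timeout=2) as resp:
        return resp.read().decode()

token = imds_token()
print(imds_get("/2021-01-03/meta-data/instance-id", token))
print(imds_get("/2021-01-03/meta-data/local-ipv4", token))
```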
Sep 12 23:54:49.871331 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (1768) Sep 12 23:54:49.874660 systemd[1]: Starting sshkeys.service... Sep 12 23:54:49.881448 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Sep 12 23:54:49.887291 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 12 23:54:49.922921 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Sep 12 23:54:49.941179 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Sep 12 23:54:50.012967 dbus-daemon[1989]: [system] Successfully activated service 'org.freedesktop.hostname1' Sep 12 23:54:50.019777 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Sep 12 23:54:50.031936 dbus-daemon[1989]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=2028 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Sep 12 23:54:50.100216 systemd[1]: Starting polkit.service - Authorization Manager... Sep 12 23:54:50.126960 polkitd[2098]: Started polkitd version 121 Sep 12 23:54:50.145596 polkitd[2098]: Loading rules from directory /etc/polkit-1/rules.d Sep 12 23:54:50.145865 polkitd[2098]: Loading rules from directory /usr/share/polkit-1/rules.d Sep 12 23:54:50.148362 polkitd[2098]: Finished loading, compiling and executing 2 rules Sep 12 23:54:50.154558 dbus-daemon[1989]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Sep 12 23:54:50.158399 polkitd[2098]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Sep 12 23:54:50.154879 systemd[1]: Started polkit.service - Authorization Manager. Sep 12 23:54:50.202700 systemd-hostnamed[2028]: Hostname set to (transient) Sep 12 23:54:50.205404 systemd-resolved[1931]: System hostname changed to 'ip-172-31-25-8'. 
Sep 12 23:54:50.270342 coreos-metadata[2075]: Sep 12 23:54:50.270 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Sep 12 23:54:50.275090 coreos-metadata[2075]: Sep 12 23:54:50.274 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Sep 12 23:54:50.277174 coreos-metadata[2075]: Sep 12 23:54:50.276 INFO Fetch successful Sep 12 23:54:50.277174 coreos-metadata[2075]: Sep 12 23:54:50.276 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Sep 12 23:54:50.283399 coreos-metadata[2075]: Sep 12 23:54:50.282 INFO Fetch successful Sep 12 23:54:50.293977 unknown[2075]: wrote ssh authorized keys file for user: core Sep 12 23:54:50.323985 locksmithd[2029]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 12 23:54:50.337915 ntpd[1993]: bind(24) AF_INET6 fe80::47f:34ff:fe62:842d%2#123 flags 0x11 failed: Cannot assign requested address Sep 12 23:54:50.339433 ntpd[1993]: 12 Sep 23:54:50 ntpd[1993]: bind(24) AF_INET6 fe80::47f:34ff:fe62:842d%2#123 flags 0x11 failed: Cannot assign requested address Sep 12 23:54:50.339433 ntpd[1993]: 12 Sep 23:54:50 ntpd[1993]: unable to create socket on eth0 (6) for fe80::47f:34ff:fe62:842d%2#123 Sep 12 23:54:50.339433 ntpd[1993]: 12 Sep 23:54:50 ntpd[1993]: failed to init interface for address fe80::47f:34ff:fe62:842d%2 Sep 12 23:54:50.337984 ntpd[1993]: unable to create socket on eth0 (6) for fe80::47f:34ff:fe62:842d%2#123 Sep 12 23:54:50.338015 ntpd[1993]: failed to init interface for address fe80::47f:34ff:fe62:842d%2 Sep 12 23:54:50.347718 update-ssh-keys[2169]: Updated "/home/core/.ssh/authorized_keys" Sep 12 23:54:50.354055 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Sep 12 23:54:50.364203 systemd[1]: Finished sshkeys.service. Sep 12 23:54:50.371983 containerd[2018]: time="2025-09-12T23:54:50.371835824Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Sep 12 23:54:50.510452 containerd[2018]: time="2025-09-12T23:54:50.510374637Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 12 23:54:50.515333 containerd[2018]: time="2025-09-12T23:54:50.513081453Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.106-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 12 23:54:50.515333 containerd[2018]: time="2025-09-12T23:54:50.513164301Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 12 23:54:50.515333 containerd[2018]: time="2025-09-12T23:54:50.513201369Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 12 23:54:50.515333 containerd[2018]: time="2025-09-12T23:54:50.513519285Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Sep 12 23:54:50.515333 containerd[2018]: time="2025-09-12T23:54:50.513555321Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 12 23:54:50.515333 containerd[2018]: time="2025-09-12T23:54:50.513683877Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 23:54:50.515333 containerd[2018]: time="2025-09-12T23:54:50.513713529Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 12 23:54:50.515333 containerd[2018]: time="2025-09-12T23:54:50.514015917Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 23:54:50.515333 containerd[2018]: time="2025-09-12T23:54:50.514047465Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 12 23:54:50.515333 containerd[2018]: time="2025-09-12T23:54:50.514085673Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 23:54:50.515333 containerd[2018]: time="2025-09-12T23:54:50.514111245Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 12 23:54:50.515870 containerd[2018]: time="2025-09-12T23:54:50.514270989Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 12 23:54:50.515870 containerd[2018]: time="2025-09-12T23:54:50.514699485Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 12 23:54:50.515870 containerd[2018]: time="2025-09-12T23:54:50.514884105Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 23:54:50.515870 containerd[2018]: time="2025-09-12T23:54:50.514914249Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 12 23:54:50.515870 containerd[2018]: time="2025-09-12T23:54:50.515069361Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 12 23:54:50.515870 containerd[2018]: time="2025-09-12T23:54:50.515177781Z" level=info msg="metadata content store policy set" policy=shared Sep 12 23:54:50.525534 containerd[2018]: time="2025-09-12T23:54:50.525469581Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 12 23:54:50.525647 containerd[2018]: time="2025-09-12T23:54:50.525578589Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 12 23:54:50.525747 containerd[2018]: time="2025-09-12T23:54:50.525701397Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 12 23:54:50.525836 containerd[2018]: time="2025-09-12T23:54:50.525756633Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 12 23:54:50.525836 containerd[2018]: time="2025-09-12T23:54:50.525795417Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 12 23:54:50.526114 containerd[2018]: time="2025-09-12T23:54:50.526066257Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." 
type=io.containerd.monitor.v1 Sep 12 23:54:50.526823 containerd[2018]: time="2025-09-12T23:54:50.526774293Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 12 23:54:50.527066 containerd[2018]: time="2025-09-12T23:54:50.527020965Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 12 23:54:50.527126 containerd[2018]: time="2025-09-12T23:54:50.527068773Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 12 23:54:50.527126 containerd[2018]: time="2025-09-12T23:54:50.527102157Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 12 23:54:50.527239 containerd[2018]: time="2025-09-12T23:54:50.527134905Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 12 23:54:50.527239 containerd[2018]: time="2025-09-12T23:54:50.527168037Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 12 23:54:50.527239 containerd[2018]: time="2025-09-12T23:54:50.527199489Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 12 23:54:50.527409 containerd[2018]: time="2025-09-12T23:54:50.527231697Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 12 23:54:50.527409 containerd[2018]: time="2025-09-12T23:54:50.527271885Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 12 23:54:50.527409 containerd[2018]: time="2025-09-12T23:54:50.527302341Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 12 23:54:50.527409 containerd[2018]: time="2025-09-12T23:54:50.527385657Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 12 23:54:50.527600 containerd[2018]: time="2025-09-12T23:54:50.527417685Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 12 23:54:50.527600 containerd[2018]: time="2025-09-12T23:54:50.527461497Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 12 23:54:50.527600 containerd[2018]: time="2025-09-12T23:54:50.527494029Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 12 23:54:50.527600 containerd[2018]: time="2025-09-12T23:54:50.527524629Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 12 23:54:50.527600 containerd[2018]: time="2025-09-12T23:54:50.527556237Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 12 23:54:50.527600 containerd[2018]: time="2025-09-12T23:54:50.527586489Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 12 23:54:50.527863 containerd[2018]: time="2025-09-12T23:54:50.527621793Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 12 23:54:50.527863 containerd[2018]: time="2025-09-12T23:54:50.527650869Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1 Sep 12 23:54:50.527863 containerd[2018]: time="2025-09-12T23:54:50.527683737Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 12 23:54:50.527863 containerd[2018]: time="2025-09-12T23:54:50.527714085Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 12 23:54:50.527863 containerd[2018]: time="2025-09-12T23:54:50.527749701Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 12 23:54:50.527863 containerd[2018]: time="2025-09-12T23:54:50.527788581Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 12 23:54:50.527863 containerd[2018]: time="2025-09-12T23:54:50.527820405Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 12 23:54:50.533328 containerd[2018]: time="2025-09-12T23:54:50.531874257Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 12 23:54:50.533328 containerd[2018]: time="2025-09-12T23:54:50.532065597Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 12 23:54:50.533328 containerd[2018]: time="2025-09-12T23:54:50.532894749Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 12 23:54:50.533328 containerd[2018]: time="2025-09-12T23:54:50.532930929Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 12 23:54:50.533328 containerd[2018]: time="2025-09-12T23:54:50.532960017Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 12 23:54:50.533328 containerd[2018]: time="2025-09-12T23:54:50.533184345Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 12 23:54:50.533714 containerd[2018]: time="2025-09-12T23:54:50.533476641Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 12 23:54:50.533714 containerd[2018]: time="2025-09-12T23:54:50.533512665Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 12 23:54:50.533714 containerd[2018]: time="2025-09-12T23:54:50.533546169Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 12 23:54:50.533714 containerd[2018]: time="2025-09-12T23:54:50.533571333Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 12 23:54:50.533714 containerd[2018]: time="2025-09-12T23:54:50.533601189Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 12 23:54:50.533714 containerd[2018]: time="2025-09-12T23:54:50.533624481Z" level=info msg="NRI interface is disabled by configuration." Sep 12 23:54:50.533714 containerd[2018]: time="2025-09-12T23:54:50.533649633Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Sep 12 23:54:50.534445 containerd[2018]: time="2025-09-12T23:54:50.534271425Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 12 23:54:50.534445 containerd[2018]: time="2025-09-12T23:54:50.534450381Z" level=info msg="Connect containerd service" Sep 12 23:54:50.534742 containerd[2018]: time="2025-09-12T23:54:50.534507645Z" level=info msg="using legacy CRI server" Sep 12 23:54:50.534742 containerd[2018]: time="2025-09-12T23:54:50.534525309Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 12 23:54:50.534742 containerd[2018]: time="2025-09-12T23:54:50.534683685Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 12 23:54:50.538330 containerd[2018]: time="2025-09-12T23:54:50.535968609Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 12 23:54:50.538330 
containerd[2018]: time="2025-09-12T23:54:50.536702817Z" level=info msg="Start subscribing containerd event" Sep 12 23:54:50.538330 containerd[2018]: time="2025-09-12T23:54:50.536782101Z" level=info msg="Start recovering state" Sep 12 23:54:50.538330 containerd[2018]: time="2025-09-12T23:54:50.536896521Z" level=info msg="Start event monitor" Sep 12 23:54:50.538330 containerd[2018]: time="2025-09-12T23:54:50.536918889Z" level=info msg="Start snapshots syncer" Sep 12 23:54:50.538330 containerd[2018]: time="2025-09-12T23:54:50.536942601Z" level=info msg="Start cni network conf syncer for default" Sep 12 23:54:50.538330 containerd[2018]: time="2025-09-12T23:54:50.536961285Z" level=info msg="Start streaming server" Sep 12 23:54:50.538330 containerd[2018]: time="2025-09-12T23:54:50.537907329Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 12 23:54:50.538330 containerd[2018]: time="2025-09-12T23:54:50.538083789Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 12 23:54:50.541615 containerd[2018]: time="2025-09-12T23:54:50.539441781Z" level=info msg="containerd successfully booted in 0.181268s" Sep 12 23:54:50.539568 systemd[1]: Started containerd.service - containerd container runtime. Sep 12 23:54:51.014651 systemd-networkd[1930]: eth0: Gained IPv6LL Sep 12 23:54:51.022471 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 12 23:54:51.027233 systemd[1]: Reached target network-online.target - Network is Online. Sep 12 23:54:51.041504 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Sep 12 23:54:51.056724 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 23:54:51.066443 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 12 23:54:51.160673 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 12 23:54:51.181353 amazon-ssm-agent[2189]: Initializing new seelog logger Sep 12 23:54:51.181353 amazon-ssm-agent[2189]: New Seelog Logger Creation Complete Sep 12 23:54:51.181353 amazon-ssm-agent[2189]: 2025/09/12 23:54:51 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 12 23:54:51.181353 amazon-ssm-agent[2189]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 12 23:54:51.181353 amazon-ssm-agent[2189]: 2025/09/12 23:54:51 processing appconfig overrides Sep 12 23:54:51.182075 amazon-ssm-agent[2189]: 2025/09/12 23:54:51 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 12 23:54:51.182075 amazon-ssm-agent[2189]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 12 23:54:51.182075 amazon-ssm-agent[2189]: 2025/09/12 23:54:51 processing appconfig overrides Sep 12 23:54:51.182212 amazon-ssm-agent[2189]: 2025/09/12 23:54:51 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 12 23:54:51.182212 amazon-ssm-agent[2189]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 12 23:54:51.182292 amazon-ssm-agent[2189]: 2025/09/12 23:54:51 processing appconfig overrides Sep 12 23:54:51.184245 amazon-ssm-agent[2189]: 2025-09-12 23:54:51 INFO Proxy environment variables: Sep 12 23:54:51.187399 amazon-ssm-agent[2189]: 2025/09/12 23:54:51 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 12 23:54:51.187399 amazon-ssm-agent[2189]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Sep 12 23:54:51.187399 amazon-ssm-agent[2189]: 2025/09/12 23:54:51 processing appconfig overrides Sep 12 23:54:51.284733 amazon-ssm-agent[2189]: 2025-09-12 23:54:51 INFO https_proxy: Sep 12 23:54:51.385392 amazon-ssm-agent[2189]: 2025-09-12 23:54:51 INFO http_proxy: Sep 12 23:54:51.486920 amazon-ssm-agent[2189]: 2025-09-12 23:54:51 INFO no_proxy: Sep 12 23:54:51.583508 amazon-ssm-agent[2189]: 2025-09-12 23:54:51 INFO Checking if agent identity type OnPrem can be assumed Sep 12 23:54:51.642516 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 12 23:54:51.683451 amazon-ssm-agent[2189]: 2025-09-12 23:54:51 INFO Checking if agent identity type EC2 can be assumed Sep 12 23:54:51.784328 amazon-ssm-agent[2189]: 2025-09-12 23:54:51 INFO Agent will take identity from EC2 Sep 12 23:54:51.883087 amazon-ssm-agent[2189]: 2025-09-12 23:54:51 INFO [amazon-ssm-agent] using named pipe channel for IPC Sep 12 23:54:51.982462 amazon-ssm-agent[2189]: 2025-09-12 23:54:51 INFO [amazon-ssm-agent] using named pipe channel for IPC Sep 12 23:54:52.081909 amazon-ssm-agent[2189]: 2025-09-12 23:54:51 INFO [amazon-ssm-agent] using named pipe channel for IPC Sep 12 23:54:52.182241 amazon-ssm-agent[2189]: 2025-09-12 23:54:51 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Sep 12 23:54:52.185369 amazon-ssm-agent[2189]: 2025-09-12 23:54:51 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Sep 12 23:54:52.185369 amazon-ssm-agent[2189]: 2025-09-12 23:54:51 INFO [amazon-ssm-agent] Starting Core Agent Sep 12 23:54:52.185369 amazon-ssm-agent[2189]: 2025-09-12 23:54:51 INFO [amazon-ssm-agent] registrar detected. Attempting registration Sep 12 23:54:52.185369 amazon-ssm-agent[2189]: 2025-09-12 23:54:51 INFO [Registrar] Starting registrar module Sep 12 23:54:52.185369 amazon-ssm-agent[2189]: 2025-09-12 23:54:51 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Sep 12 23:54:52.185369 amazon-ssm-agent[2189]: 2025-09-12 23:54:52 INFO [EC2Identity] EC2 registration was successful. Sep 12 23:54:52.185369 amazon-ssm-agent[2189]: 2025-09-12 23:54:52 INFO [CredentialRefresher] credentialRefresher has started Sep 12 23:54:52.185369 amazon-ssm-agent[2189]: 2025-09-12 23:54:52 INFO [CredentialRefresher] Starting credentials refresher loop Sep 12 23:54:52.185369 amazon-ssm-agent[2189]: 2025-09-12 23:54:52 INFO EC2RoleProvider Successfully connected with instance profile role credentials Sep 12 23:54:52.280982 amazon-ssm-agent[2189]: 2025-09-12 23:54:52 INFO [CredentialRefresher] Next credential rotation will be in 30.308323448333333 minutes Sep 12 23:54:52.494635 sshd_keygen[2006]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 12 23:54:52.535822 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 12 23:54:52.548758 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 12 23:54:52.561686 systemd[1]: Started sshd@0-172.31.25.8:22-147.75.109.163:57968.service - OpenSSH per-connection server daemon (147.75.109.163:57968). Sep 12 23:54:52.582172 systemd[1]: issuegen.service: Deactivated successfully. Sep 12 23:54:52.583923 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 12 23:54:52.599855 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 12 23:54:52.634416 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 12 23:54:52.646861 systemd[1]: Started getty@tty1.service - Getty on tty1. 
Sep 12 23:54:52.659671 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 12 23:54:52.664085 systemd[1]: Reached target getty.target - Login Prompts. Sep 12 23:54:52.757522 sshd[2218]: Accepted publickey for core from 147.75.109.163 port 57968 ssh2: RSA SHA256:hzqoQUQMDNGIX4spfLoTi9cnhX+EaAcejntAjTQoGoc Sep 12 23:54:52.760605 sshd[2218]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:54:52.782659 systemd-logind[1999]: New session 1 of user core. Sep 12 23:54:52.785855 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 12 23:54:52.801791 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 12 23:54:52.827860 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 12 23:54:52.841925 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 12 23:54:52.858732 (systemd)[2229]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 12 23:54:53.096641 systemd[2229]: Queued start job for default target default.target. Sep 12 23:54:53.105682 systemd[2229]: Created slice app.slice - User Application Slice. Sep 12 23:54:53.106101 systemd[2229]: Reached target paths.target - Paths. Sep 12 23:54:53.106157 systemd[2229]: Reached target timers.target - Timers. Sep 12 23:54:53.109301 systemd[2229]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 12 23:54:53.154244 systemd[2229]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 12 23:54:53.154569 systemd[2229]: Reached target sockets.target - Sockets. Sep 12 23:54:53.154627 systemd[2229]: Reached target basic.target - Basic System. Sep 12 23:54:53.154713 systemd[2229]: Reached target default.target - Main User Target. Sep 12 23:54:53.154782 systemd[2229]: Startup finished in 283ms. Sep 12 23:54:53.155126 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 12 23:54:53.166654 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 12 23:54:53.213673 amazon-ssm-agent[2189]: 2025-09-12 23:54:53 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Sep 12 23:54:53.314563 amazon-ssm-agent[2189]: 2025-09-12 23:54:53 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2239) started Sep 12 23:54:53.337924 ntpd[1993]: Listen normally on 7 eth0 [fe80::47f:34ff:fe62:842d%2]:123 Sep 12 23:54:53.338419 ntpd[1993]: 12 Sep 23:54:53 ntpd[1993]: Listen normally on 7 eth0 [fe80::47f:34ff:fe62:842d%2]:123 Sep 12 23:54:53.417133 amazon-ssm-agent[2189]: 2025-09-12 23:54:53 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Sep 12 23:54:53.541586 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 23:54:53.542055 (kubelet)[2254]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 23:54:53.545415 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 12 23:54:53.548478 systemd[1]: Startup finished in 1.207s (kernel) + 7.208s (initrd) + 9.574s (userspace) = 17.989s. 
Sep 12 23:54:54.812014 kubelet[2254]: E0912 23:54:54.811943 2254 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 23:54:54.816831 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 23:54:54.817193 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 23:54:54.819460 systemd[1]: kubelet.service: Consumed 1.455s CPU time. Sep 12 23:54:56.730345 systemd-resolved[1931]: Clock change detected. Flushing caches. Sep 12 23:54:58.718079 systemd[1]: Started sshd@1-172.31.25.8:22-147.75.109.163:57990.service - OpenSSH per-connection server daemon (147.75.109.163:57990). Sep 12 23:54:58.891073 sshd[2268]: Accepted publickey for core from 147.75.109.163 port 57990 ssh2: RSA SHA256:hzqoQUQMDNGIX4spfLoTi9cnhX+EaAcejntAjTQoGoc Sep 12 23:54:58.893871 sshd[2268]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:54:58.903349 systemd-logind[1999]: New session 2 of user core. Sep 12 23:54:58.910873 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 12 23:54:59.039912 sshd[2268]: pam_unix(sshd:session): session closed for user core Sep 12 23:54:59.047342 systemd[1]: sshd@1-172.31.25.8:22-147.75.109.163:57990.service: Deactivated successfully. Sep 12 23:54:59.051675 systemd[1]: session-2.scope: Deactivated successfully. Sep 12 23:54:59.053075 systemd-logind[1999]: Session 2 logged out. Waiting for processes to exit. Sep 12 23:54:59.055627 systemd-logind[1999]: Removed session 2. Sep 12 23:54:59.081114 systemd[1]: Started sshd@2-172.31.25.8:22-147.75.109.163:57994.service - OpenSSH per-connection server daemon (147.75.109.163:57994). Sep 12 23:54:59.255819 sshd[2275]: Accepted publickey for core from 147.75.109.163 port 57994 ssh2: RSA SHA256:hzqoQUQMDNGIX4spfLoTi9cnhX+EaAcejntAjTQoGoc Sep 12 23:54:59.258639 sshd[2275]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:54:59.269934 systemd-logind[1999]: New session 3 of user core. Sep 12 23:54:59.281848 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 12 23:54:59.400555 sshd[2275]: pam_unix(sshd:session): session closed for user core Sep 12 23:54:59.408155 systemd[1]: sshd@2-172.31.25.8:22-147.75.109.163:57994.service: Deactivated successfully. Sep 12 23:54:59.412798 systemd[1]: session-3.scope: Deactivated successfully. Sep 12 23:54:59.414504 systemd-logind[1999]: Session 3 logged out. Waiting for processes to exit. Sep 12 23:54:59.417002 systemd-logind[1999]: Removed session 3. Sep 12 23:54:59.441052 systemd[1]: Started sshd@3-172.31.25.8:22-147.75.109.163:58010.service - OpenSSH per-connection server daemon (147.75.109.163:58010). Sep 12 23:54:59.617286 sshd[2282]: Accepted publickey for core from 147.75.109.163 port 58010 ssh2: RSA SHA256:hzqoQUQMDNGIX4spfLoTi9cnhX+EaAcejntAjTQoGoc Sep 12 23:54:59.620130 sshd[2282]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:54:59.630812 systemd-logind[1999]: New session 4 of user core. Sep 12 23:54:59.637911 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 12 23:54:59.766992 sshd[2282]: pam_unix(sshd:session): session closed for user core Sep 12 23:54:59.774866 systemd-logind[1999]: Session 4 logged out. Waiting for processes to exit. 
Sep 12 23:54:59.776638 systemd[1]: sshd@3-172.31.25.8:22-147.75.109.163:58010.service: Deactivated successfully. Sep 12 23:54:59.780076 systemd[1]: session-4.scope: Deactivated successfully. Sep 12 23:54:59.782139 systemd-logind[1999]: Removed session 4. Sep 12 23:54:59.806091 systemd[1]: Started sshd@4-172.31.25.8:22-147.75.109.163:58018.service - OpenSSH per-connection server daemon (147.75.109.163:58018). Sep 12 23:54:59.979874 sshd[2289]: Accepted publickey for core from 147.75.109.163 port 58018 ssh2: RSA SHA256:hzqoQUQMDNGIX4spfLoTi9cnhX+EaAcejntAjTQoGoc Sep 12 23:54:59.982503 sshd[2289]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:54:59.991325 systemd-logind[1999]: New session 5 of user core. Sep 12 23:55:00.002863 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 12 23:55:00.123847 sudo[2292]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 12 23:55:00.124576 sudo[2292]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 23:55:00.142647 sudo[2292]: pam_unix(sudo:session): session closed for user root Sep 12 23:55:00.166586 sshd[2289]: pam_unix(sshd:session): session closed for user core Sep 12 23:55:00.174609 systemd[1]: sshd@4-172.31.25.8:22-147.75.109.163:58018.service: Deactivated successfully. Sep 12 23:55:00.180007 systemd[1]: session-5.scope: Deactivated successfully. Sep 12 23:55:00.182070 systemd-logind[1999]: Session 5 logged out. Waiting for processes to exit. Sep 12 23:55:00.184295 systemd-logind[1999]: Removed session 5. Sep 12 23:55:00.210047 systemd[1]: Started sshd@5-172.31.25.8:22-147.75.109.163:34148.service - OpenSSH per-connection server daemon (147.75.109.163:34148). Sep 12 23:55:00.377694 sshd[2297]: Accepted publickey for core from 147.75.109.163 port 34148 ssh2: RSA SHA256:hzqoQUQMDNGIX4spfLoTi9cnhX+EaAcejntAjTQoGoc Sep 12 23:55:00.380093 sshd[2297]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:55:00.388768 systemd-logind[1999]: New session 6 of user core. Sep 12 23:55:00.400866 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 12 23:55:00.507608 sudo[2301]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 12 23:55:00.508349 sudo[2301]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 23:55:00.515449 sudo[2301]: pam_unix(sudo:session): session closed for user root Sep 12 23:55:00.526828 sudo[2300]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 12 23:55:00.527571 sudo[2300]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 23:55:00.551107 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Sep 12 23:55:00.566710 auditctl[2304]: No rules Sep 12 23:55:00.567625 systemd[1]: audit-rules.service: Deactivated successfully. Sep 12 23:55:00.568674 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Sep 12 23:55:00.578381 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 12 23:55:00.638939 augenrules[2322]: No rules Sep 12 23:55:00.640873 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. 
Sep 12 23:55:00.643497 sudo[2300]: pam_unix(sudo:session): session closed for user root Sep 12 23:55:00.667941 sshd[2297]: pam_unix(sshd:session): session closed for user core Sep 12 23:55:00.674389 systemd[1]: sshd@5-172.31.25.8:22-147.75.109.163:34148.service: Deactivated successfully. Sep 12 23:55:00.678007 systemd[1]: session-6.scope: Deactivated successfully. Sep 12 23:55:00.679748 systemd-logind[1999]: Session 6 logged out. Waiting for processes to exit. Sep 12 23:55:00.681806 systemd-logind[1999]: Removed session 6. Sep 12 23:55:00.709121 systemd[1]: Started sshd@6-172.31.25.8:22-147.75.109.163:34156.service - OpenSSH per-connection server daemon (147.75.109.163:34156). Sep 12 23:55:00.884222 sshd[2330]: Accepted publickey for core from 147.75.109.163 port 34156 ssh2: RSA SHA256:hzqoQUQMDNGIX4spfLoTi9cnhX+EaAcejntAjTQoGoc Sep 12 23:55:00.887007 sshd[2330]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:55:00.896241 systemd-logind[1999]: New session 7 of user core. Sep 12 23:55:00.903892 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 12 23:55:01.009753 sudo[2333]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 12 23:55:01.010443 sudo[2333]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 23:55:02.181914 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 23:55:02.182276 systemd[1]: kubelet.service: Consumed 1.455s CPU time. Sep 12 23:55:02.193047 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 23:55:02.264804 systemd[1]: Reloading requested from client PID 2366 ('systemctl') (unit session-7.scope)... Sep 12 23:55:02.264839 systemd[1]: Reloading... Sep 12 23:55:02.514672 zram_generator::config[2409]: No configuration found. Sep 12 23:55:02.759288 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 23:55:02.932719 systemd[1]: Reloading finished in 667 ms. Sep 12 23:55:03.029052 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 12 23:55:03.029261 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 12 23:55:03.029840 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 23:55:03.037424 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 23:55:03.364470 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 23:55:03.379079 (kubelet)[2469]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 23:55:03.451205 kubelet[2469]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 23:55:03.451205 kubelet[2469]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 12 23:55:03.451205 kubelet[2469]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 12 23:55:03.451811 kubelet[2469]: I0912 23:55:03.451305 2469 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 23:55:04.158572 kubelet[2469]: I0912 23:55:04.157713 2469 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 12 23:55:04.158572 kubelet[2469]: I0912 23:55:04.157760 2469 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 23:55:04.158572 kubelet[2469]: I0912 23:55:04.158184 2469 server.go:934] "Client rotation is on, will bootstrap in background" Sep 12 23:55:04.207314 kubelet[2469]: I0912 23:55:04.207254 2469 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 23:55:04.225311 kubelet[2469]: E0912 23:55:04.225229 2469 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 12 23:55:04.225311 kubelet[2469]: I0912 23:55:04.225309 2469 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 12 23:55:04.232696 kubelet[2469]: I0912 23:55:04.232639 2469 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 12 23:55:04.234562 kubelet[2469]: I0912 23:55:04.234395 2469 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 12 23:55:04.234820 kubelet[2469]: I0912 23:55:04.234743 2469 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 23:55:04.235139 kubelet[2469]: I0912 23:55:04.234813 2469 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172.31.25.8","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 12 23:55:04.235321 kubelet[2469]: I0912 23:55:04.235274 2469 topology_manager.go:138] "Creating 
topology manager with none policy" Sep 12 23:55:04.235321 kubelet[2469]: I0912 23:55:04.235302 2469 container_manager_linux.go:300] "Creating device plugin manager" Sep 12 23:55:04.236457 kubelet[2469]: I0912 23:55:04.235860 2469 state_mem.go:36] "Initialized new in-memory state store" Sep 12 23:55:04.237874 kubelet[2469]: I0912 23:55:04.237811 2469 kubelet.go:408] "Attempting to sync node with API server" Sep 12 23:55:04.237874 kubelet[2469]: I0912 23:55:04.237879 2469 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 23:55:04.238040 kubelet[2469]: I0912 23:55:04.237920 2469 kubelet.go:314] "Adding apiserver pod source" Sep 12 23:55:04.238157 kubelet[2469]: I0912 23:55:04.238118 2469 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 23:55:04.239233 kubelet[2469]: E0912 23:55:04.239187 2469 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 12 23:55:04.239475 kubelet[2469]: E0912 23:55:04.239433 2469 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 12 23:55:04.246594 kubelet[2469]: I0912 23:55:04.245334 2469 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 12 23:55:04.246891 kubelet[2469]: I0912 23:55:04.246826 2469 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 12 23:55:04.247122 kubelet[2469]: W0912 23:55:04.247067 2469 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 12 23:55:04.249213 kubelet[2469]: I0912 23:55:04.249152 2469 server.go:1274] "Started kubelet" Sep 12 23:55:04.255552 kubelet[2469]: W0912 23:55:04.254629 2469 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Sep 12 23:55:04.255552 kubelet[2469]: E0912 23:55:04.254742 2469 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Sep 12 23:55:04.255552 kubelet[2469]: W0912 23:55:04.254887 2469 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "172.31.25.8" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Sep 12 23:55:04.255552 kubelet[2469]: E0912 23:55:04.254919 2469 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"172.31.25.8\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Sep 12 23:55:04.256657 kubelet[2469]: I0912 23:55:04.256597 2469 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 23:55:04.257764 kubelet[2469]: I0912 23:55:04.257662 2469 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 23:55:04.258295 kubelet[2469]: I0912 23:55:04.258237 2469 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 23:55:04.260041 
kubelet[2469]: I0912 23:55:04.259993 2469 server.go:449] "Adding debug handlers to kubelet server" Sep 12 23:55:04.264397 kubelet[2469]: I0912 23:55:04.264315 2469 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 23:55:04.268012 kubelet[2469]: I0912 23:55:04.267927 2469 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 23:55:04.273846 kubelet[2469]: I0912 23:55:04.273773 2469 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 12 23:55:04.274425 kubelet[2469]: E0912 23:55:04.274182 2469 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.25.8\" not found" Sep 12 23:55:04.276328 kubelet[2469]: I0912 23:55:04.276255 2469 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 12 23:55:04.276504 kubelet[2469]: I0912 23:55:04.276404 2469 reconciler.go:26] "Reconciler: start to sync state" Sep 12 23:55:04.281867 kubelet[2469]: I0912 23:55:04.281202 2469 factory.go:221] Registration of the systemd container factory successfully Sep 12 23:55:04.281867 kubelet[2469]: I0912 23:55:04.281446 2469 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 23:55:04.292637 kubelet[2469]: I0912 23:55:04.291876 2469 factory.go:221] Registration of the containerd container factory successfully Sep 12 23:55:04.293940 kubelet[2469]: E0912 23:55:04.293884 2469 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 12 23:55:04.321607 kubelet[2469]: E0912 23:55:04.321563 2469 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172.31.25.8\" not found" node="172.31.25.8" Sep 12 23:55:04.330351 kubelet[2469]: I0912 23:55:04.330315 2469 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 12 23:55:04.331000 kubelet[2469]: I0912 23:55:04.330955 2469 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 12 23:55:04.331217 kubelet[2469]: I0912 23:55:04.331193 2469 state_mem.go:36] "Initialized new in-memory state store" Sep 12 23:55:04.337227 kubelet[2469]: I0912 23:55:04.337183 2469 policy_none.go:49] "None policy: Start" Sep 12 23:55:04.339763 kubelet[2469]: I0912 23:55:04.339694 2469 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 12 23:55:04.339763 kubelet[2469]: I0912 23:55:04.339747 2469 state_mem.go:35] "Initializing new in-memory state store" Sep 12 23:55:04.363399 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 12 23:55:04.374761 kubelet[2469]: E0912 23:55:04.374694 2469 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.25.8\" not found" Sep 12 23:55:04.389279 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 12 23:55:04.397616 kubelet[2469]: I0912 23:55:04.397066 2469 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 12 23:55:04.403317 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 12 23:55:04.408688 kubelet[2469]: I0912 23:55:04.408068 2469 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 12 23:55:04.408688 kubelet[2469]: I0912 23:55:04.408118 2469 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 12 23:55:04.408688 kubelet[2469]: I0912 23:55:04.408147 2469 kubelet.go:2321] "Starting kubelet main sync loop" Sep 12 23:55:04.408688 kubelet[2469]: E0912 23:55:04.408215 2469 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 23:55:04.421357 kubelet[2469]: I0912 23:55:04.421312 2469 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 12 23:55:04.423067 kubelet[2469]: I0912 23:55:04.423023 2469 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 23:55:04.423478 kubelet[2469]: I0912 23:55:04.423383 2469 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 23:55:04.424566 kubelet[2469]: I0912 23:55:04.424104 2469 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 23:55:04.430121 kubelet[2469]: E0912 23:55:04.430082 2469 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.31.25.8\" not found" Sep 12 23:55:04.528101 kubelet[2469]: I0912 23:55:04.527467 2469 kubelet_node_status.go:72] "Attempting to register node" node="172.31.25.8" Sep 12 23:55:04.571815 kubelet[2469]: I0912 23:55:04.571761 2469 kubelet_node_status.go:75] "Successfully registered node" node="172.31.25.8" Sep 12 23:55:04.571815 kubelet[2469]: E0912 23:55:04.571821 2469 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"172.31.25.8\": node \"172.31.25.8\" not found" Sep 12 23:55:04.629730 kubelet[2469]: E0912 23:55:04.629635 2469 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.25.8\" not found" Sep 12 23:55:04.730059 kubelet[2469]: E0912 23:55:04.729898 2469 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.25.8\" not found" Sep 12 23:55:04.830150 kubelet[2469]: E0912 23:55:04.830081 2469 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.25.8\" not found" Sep 12 23:55:04.931039 kubelet[2469]: E0912 23:55:04.930944 2469 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.25.8\" not found" Sep 12 23:55:04.935261 sudo[2333]: pam_unix(sudo:session): session closed for user root Sep 12 23:55:04.960845 sshd[2330]: pam_unix(sshd:session): session closed for user core Sep 12 23:55:04.967641 systemd[1]: sshd@6-172.31.25.8:22-147.75.109.163:34156.service: Deactivated successfully. Sep 12 23:55:04.973312 systemd[1]: session-7.scope: Deactivated successfully. Sep 12 23:55:04.976768 systemd-logind[1999]: Session 7 logged out. Waiting for processes to exit. Sep 12 23:55:04.980142 systemd-logind[1999]: Removed session 7. 
Sep 12 23:55:05.032121 kubelet[2469]: E0912 23:55:05.032056 2469 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.25.8\" not found" Sep 12 23:55:05.132949 kubelet[2469]: E0912 23:55:05.132884 2469 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.25.8\" not found" Sep 12 23:55:05.162343 kubelet[2469]: I0912 23:55:05.162261 2469 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Sep 12 23:55:05.162875 kubelet[2469]: W0912 23:55:05.162832 2469 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Sep 12 23:55:05.163016 kubelet[2469]: W0912 23:55:05.162935 2469 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Sep 12 23:55:05.233237 kubelet[2469]: E0912 23:55:05.233084 2469 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.25.8\" not found" Sep 12 23:55:05.240651 kubelet[2469]: E0912 23:55:05.240575 2469 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 12 23:55:05.334251 kubelet[2469]: E0912 23:55:05.334188 2469 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.25.8\" not found" Sep 12 23:55:05.436215 kubelet[2469]: I0912 23:55:05.436158 2469 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Sep 12 23:55:05.437323 containerd[2018]: time="2025-09-12T23:55:05.436713799Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 12 23:55:05.437910 kubelet[2469]: I0912 23:55:05.437042 2469 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Sep 12 23:55:06.239686 kubelet[2469]: I0912 23:55:06.239623 2469 apiserver.go:52] "Watching apiserver" Sep 12 23:55:06.240769 kubelet[2469]: E0912 23:55:06.240721 2469 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 12 23:55:06.247610 kubelet[2469]: E0912 23:55:06.246570 2469 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sm4qz" podUID="d02977a6-a9b2-4f00-a209-ffd36b3b9de2" Sep 12 23:55:06.259258 systemd[1]: Created slice kubepods-besteffort-pod55f81d42_b17a_4b48_93e1_45ffedd14e9a.slice - libcontainer container kubepods-besteffort-pod55f81d42_b17a_4b48_93e1_45ffedd14e9a.slice. Sep 12 23:55:06.281292 kubelet[2469]: I0912 23:55:06.281257 2469 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 12 23:55:06.288765 systemd[1]: Created slice kubepods-besteffort-pod54914f00_c291_4c15_b4a9_efa3d9f17293.slice - libcontainer container kubepods-besteffort-pod54914f00_c291_4c15_b4a9_efa3d9f17293.slice. 
Sep 12 23:55:06.290642 kubelet[2469]: I0912 23:55:06.290418 2469 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/54914f00-c291-4c15-b4a9-efa3d9f17293-xtables-lock\") pod \"calico-node-2z5f9\" (UID: \"54914f00-c291-4c15-b4a9-efa3d9f17293\") " pod="calico-system/calico-node-2z5f9" Sep 12 23:55:06.290642 kubelet[2469]: I0912 23:55:06.290503 2469 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/55f81d42-b17a-4b48-93e1-45ffedd14e9a-kube-proxy\") pod \"kube-proxy-snx4w\" (UID: \"55f81d42-b17a-4b48-93e1-45ffedd14e9a\") " pod="kube-system/kube-proxy-snx4w" Sep 12 23:55:06.290642 kubelet[2469]: I0912 23:55:06.290593 2469 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/54914f00-c291-4c15-b4a9-efa3d9f17293-node-certs\") pod \"calico-node-2z5f9\" (UID: \"54914f00-c291-4c15-b4a9-efa3d9f17293\") " pod="calico-system/calico-node-2z5f9" Sep 12 23:55:06.291674 kubelet[2469]: I0912 23:55:06.290960 2469 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkpct\" (UniqueName: \"kubernetes.io/projected/55f81d42-b17a-4b48-93e1-45ffedd14e9a-kube-api-access-xkpct\") pod \"kube-proxy-snx4w\" (UID: \"55f81d42-b17a-4b48-93e1-45ffedd14e9a\") " pod="kube-system/kube-proxy-snx4w" Sep 12 23:55:06.291674 kubelet[2469]: I0912 23:55:06.291052 2469 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/54914f00-c291-4c15-b4a9-efa3d9f17293-cni-net-dir\") pod \"calico-node-2z5f9\" (UID: \"54914f00-c291-4c15-b4a9-efa3d9f17293\") " pod="calico-system/calico-node-2z5f9" Sep 12 23:55:06.291674 kubelet[2469]: I0912 23:55:06.291120 2469 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/54914f00-c291-4c15-b4a9-efa3d9f17293-policysync\") pod \"calico-node-2z5f9\" (UID: \"54914f00-c291-4c15-b4a9-efa3d9f17293\") " pod="calico-system/calico-node-2z5f9" Sep 12 23:55:06.291674 kubelet[2469]: I0912 23:55:06.291161 2469 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/54914f00-c291-4c15-b4a9-efa3d9f17293-var-lib-calico\") pod \"calico-node-2z5f9\" (UID: \"54914f00-c291-4c15-b4a9-efa3d9f17293\") " pod="calico-system/calico-node-2z5f9" Sep 12 23:55:06.291674 kubelet[2469]: I0912 23:55:06.291228 2469 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d02977a6-a9b2-4f00-a209-ffd36b3b9de2-socket-dir\") pod \"csi-node-driver-sm4qz\" (UID: \"d02977a6-a9b2-4f00-a209-ffd36b3b9de2\") " pod="calico-system/csi-node-driver-sm4qz" Sep 12 23:55:06.292037 kubelet[2469]: I0912 23:55:06.291268 2469 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jksq\" (UniqueName: \"kubernetes.io/projected/d02977a6-a9b2-4f00-a209-ffd36b3b9de2-kube-api-access-7jksq\") pod \"csi-node-driver-sm4qz\" (UID: \"d02977a6-a9b2-4f00-a209-ffd36b3b9de2\") " pod="calico-system/csi-node-driver-sm4qz" Sep 12 23:55:06.292037 kubelet[2469]: I0912 23:55:06.291321 2469 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/d02977a6-a9b2-4f00-a209-ffd36b3b9de2-varrun\") pod \"csi-node-driver-sm4qz\" (UID: \"d02977a6-a9b2-4f00-a209-ffd36b3b9de2\") " pod="calico-system/csi-node-driver-sm4qz" Sep 12 23:55:06.292037 kubelet[2469]: I0912 23:55:06.291361 2469 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/55f81d42-b17a-4b48-93e1-45ffedd14e9a-xtables-lock\") pod \"kube-proxy-snx4w\" (UID: \"55f81d42-b17a-4b48-93e1-45ffedd14e9a\") " pod="kube-system/kube-proxy-snx4w" Sep 12 23:55:06.292037 kubelet[2469]: I0912 23:55:06.291397 2469 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/55f81d42-b17a-4b48-93e1-45ffedd14e9a-lib-modules\") pod \"kube-proxy-snx4w\" (UID: \"55f81d42-b17a-4b48-93e1-45ffedd14e9a\") " pod="kube-system/kube-proxy-snx4w" Sep 12 23:55:06.292037 kubelet[2469]: I0912 23:55:06.291452 2469 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/54914f00-c291-4c15-b4a9-efa3d9f17293-cni-bin-dir\") pod \"calico-node-2z5f9\" (UID: \"54914f00-c291-4c15-b4a9-efa3d9f17293\") " pod="calico-system/calico-node-2z5f9" Sep 12 23:55:06.292277 kubelet[2469]: I0912 23:55:06.291492 2469 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/54914f00-c291-4c15-b4a9-efa3d9f17293-cni-log-dir\") pod \"calico-node-2z5f9\" (UID: \"54914f00-c291-4c15-b4a9-efa3d9f17293\") " pod="calico-system/calico-node-2z5f9" Sep 12 23:55:06.292277 kubelet[2469]: I0912 23:55:06.291693 2469 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/54914f00-c291-4c15-b4a9-efa3d9f17293-flexvol-driver-host\") pod \"calico-node-2z5f9\" (UID: \"54914f00-c291-4c15-b4a9-efa3d9f17293\") " pod="calico-system/calico-node-2z5f9" Sep 12 23:55:06.292277 kubelet[2469]: I0912 23:55:06.291772 2469 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d02977a6-a9b2-4f00-a209-ffd36b3b9de2-kubelet-dir\") pod \"csi-node-driver-sm4qz\" (UID: \"d02977a6-a9b2-4f00-a209-ffd36b3b9de2\") " pod="calico-system/csi-node-driver-sm4qz" Sep 12 23:55:06.292277 kubelet[2469]: I0912 23:55:06.291821 2469 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d02977a6-a9b2-4f00-a209-ffd36b3b9de2-registration-dir\") pod \"csi-node-driver-sm4qz\" (UID: \"d02977a6-a9b2-4f00-a209-ffd36b3b9de2\") " pod="calico-system/csi-node-driver-sm4qz" Sep 12 23:55:06.292277 kubelet[2469]: I0912 23:55:06.291882 2469 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/54914f00-c291-4c15-b4a9-efa3d9f17293-lib-modules\") pod \"calico-node-2z5f9\" (UID: \"54914f00-c291-4c15-b4a9-efa3d9f17293\") " pod="calico-system/calico-node-2z5f9" Sep 12 23:55:06.292507 kubelet[2469]: I0912 23:55:06.291946 2469 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-8hfmg\" (UniqueName: \"kubernetes.io/projected/54914f00-c291-4c15-b4a9-efa3d9f17293-kube-api-access-8hfmg\") pod \"calico-node-2z5f9\" (UID: \"54914f00-c291-4c15-b4a9-efa3d9f17293\") " pod="calico-system/calico-node-2z5f9" Sep 12 23:55:06.292507 kubelet[2469]: I0912 23:55:06.291984 2469 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/54914f00-c291-4c15-b4a9-efa3d9f17293-tigera-ca-bundle\") pod \"calico-node-2z5f9\" (UID: \"54914f00-c291-4c15-b4a9-efa3d9f17293\") " pod="calico-system/calico-node-2z5f9" Sep 12 23:55:06.292507 kubelet[2469]: I0912 23:55:06.292047 2469 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/54914f00-c291-4c15-b4a9-efa3d9f17293-var-run-calico\") pod \"calico-node-2z5f9\" (UID: \"54914f00-c291-4c15-b4a9-efa3d9f17293\") " pod="calico-system/calico-node-2z5f9" Sep 12 23:55:06.404859 kubelet[2469]: E0912 23:55:06.404807 2469 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:55:06.404859 kubelet[2469]: W0912 23:55:06.404848 2469 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:55:06.405050 kubelet[2469]: E0912 23:55:06.404915 2469 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:55:06.429899 kubelet[2469]: E0912 23:55:06.429848 2469 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:55:06.429899 kubelet[2469]: W0912 23:55:06.429886 2469 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:55:06.430056 kubelet[2469]: E0912 23:55:06.429922 2469 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:55:06.434540 kubelet[2469]: E0912 23:55:06.431756 2469 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:55:06.434540 kubelet[2469]: W0912 23:55:06.431794 2469 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:55:06.434540 kubelet[2469]: E0912 23:55:06.431825 2469 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:55:06.441706 kubelet[2469]: E0912 23:55:06.441670 2469 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:55:06.441893 kubelet[2469]: W0912 23:55:06.441867 2469 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:55:06.442002 kubelet[2469]: E0912 23:55:06.441979 2469 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:55:06.585983 containerd[2018]: time="2025-09-12T23:55:06.585804296Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-snx4w,Uid:55f81d42-b17a-4b48-93e1-45ffedd14e9a,Namespace:kube-system,Attempt:0,}" Sep 12 23:55:06.596611 containerd[2018]: time="2025-09-12T23:55:06.596418033Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-2z5f9,Uid:54914f00-c291-4c15-b4a9-efa3d9f17293,Namespace:calico-system,Attempt:0,}" Sep 12 23:55:07.198464 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2699529550.mount: Deactivated successfully. Sep 12 23:55:07.213337 containerd[2018]: time="2025-09-12T23:55:07.213255476Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 23:55:07.218006 containerd[2018]: time="2025-09-12T23:55:07.217931636Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 23:55:07.220949 containerd[2018]: time="2025-09-12T23:55:07.220887512Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Sep 12 23:55:07.222944 containerd[2018]: time="2025-09-12T23:55:07.222908936Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 12 23:55:07.225856 containerd[2018]: time="2025-09-12T23:55:07.225555068Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 23:55:07.234555 containerd[2018]: time="2025-09-12T23:55:07.233015804Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 23:55:07.235084 containerd[2018]: time="2025-09-12T23:55:07.235018016Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 649.101688ms" Sep 12 23:55:07.236287 containerd[2018]: time="2025-09-12T23:55:07.236219468Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 639.664191ms" Sep 12 23:55:07.241162 kubelet[2469]: E0912 23:55:07.241119 2469 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 12 23:55:07.435359 containerd[2018]: time="2025-09-12T23:55:07.434786013Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 23:55:07.435359 containerd[2018]: time="2025-09-12T23:55:07.434862909Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 23:55:07.435359 containerd[2018]: time="2025-09-12T23:55:07.434901285Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:55:07.435359 containerd[2018]: time="2025-09-12T23:55:07.435040077Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:55:07.438892 containerd[2018]: time="2025-09-12T23:55:07.434110569Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 23:55:07.438892 containerd[2018]: time="2025-09-12T23:55:07.434265345Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 23:55:07.438892 containerd[2018]: time="2025-09-12T23:55:07.434426229Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:55:07.438892 containerd[2018]: time="2025-09-12T23:55:07.434767473Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:55:07.584839 systemd[1]: Started cri-containerd-01fd5e2b148e1c82538edf0d6501228ba69e720088203845223d7478295d2f9d.scope - libcontainer container 01fd5e2b148e1c82538edf0d6501228ba69e720088203845223d7478295d2f9d. Sep 12 23:55:07.588555 systemd[1]: Started cri-containerd-f4817df2122eaa139dd21645e9ea3bf03a77a9efde0d6e733f7bc7ee62027172.scope - libcontainer container f4817df2122eaa139dd21645e9ea3bf03a77a9efde0d6e733f7bc7ee62027172. 
Sep 12 23:55:07.656248 containerd[2018]: time="2025-09-12T23:55:07.656164630Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-snx4w,Uid:55f81d42-b17a-4b48-93e1-45ffedd14e9a,Namespace:kube-system,Attempt:0,} returns sandbox id \"01fd5e2b148e1c82538edf0d6501228ba69e720088203845223d7478295d2f9d\"" Sep 12 23:55:07.664997 containerd[2018]: time="2025-09-12T23:55:07.664577542Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-2z5f9,Uid:54914f00-c291-4c15-b4a9-efa3d9f17293,Namespace:calico-system,Attempt:0,} returns sandbox id \"f4817df2122eaa139dd21645e9ea3bf03a77a9efde0d6e733f7bc7ee62027172\"" Sep 12 23:55:07.666907 containerd[2018]: time="2025-09-12T23:55:07.666573790Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\"" Sep 12 23:55:08.243557 kubelet[2469]: E0912 23:55:08.242885 2469 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 12 23:55:08.414137 kubelet[2469]: E0912 23:55:08.411251 2469 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sm4qz" podUID="d02977a6-a9b2-4f00-a209-ffd36b3b9de2" Sep 12 23:55:09.025709 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3016667329.mount: Deactivated successfully. Sep 12 23:55:09.243882 kubelet[2469]: E0912 23:55:09.243785 2469 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 12 23:55:09.604949 containerd[2018]: time="2025-09-12T23:55:09.603570959Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:55:09.604949 containerd[2018]: time="2025-09-12T23:55:09.604886459Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.13: active requests=0, bytes read=26954907" Sep 12 23:55:09.605805 containerd[2018]: time="2025-09-12T23:55:09.605747291Z" level=info msg="ImageCreate event name:\"sha256:c15699f0b7002450249485b10f20211982dfd2bec4d61c86c35acebc659e794e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:55:09.610630 containerd[2018]: time="2025-09-12T23:55:09.610558283Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:55:09.612195 containerd[2018]: time="2025-09-12T23:55:09.612142691Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.13\" with image id \"sha256:c15699f0b7002450249485b10f20211982dfd2bec4d61c86c35acebc659e794e\", repo tag \"registry.k8s.io/kube-proxy:v1.31.13\", repo digest \"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\", size \"26953926\" in 1.945504797s" Sep 12 23:55:09.612352 containerd[2018]: time="2025-09-12T23:55:09.612321827Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\" returns image reference \"sha256:c15699f0b7002450249485b10f20211982dfd2bec4d61c86c35acebc659e794e\"" Sep 12 23:55:09.614294 containerd[2018]: time="2025-09-12T23:55:09.614211923Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 12 23:55:09.617423 containerd[2018]: time="2025-09-12T23:55:09.617360904Z" level=info 
msg="CreateContainer within sandbox \"01fd5e2b148e1c82538edf0d6501228ba69e720088203845223d7478295d2f9d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 12 23:55:09.640457 containerd[2018]: time="2025-09-12T23:55:09.640381788Z" level=info msg="CreateContainer within sandbox \"01fd5e2b148e1c82538edf0d6501228ba69e720088203845223d7478295d2f9d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"68b9dbf097a351c36a5c3400e1dbe3f0d8d77f65dedf3dcdbad782016092b30a\"" Sep 12 23:55:09.641688 containerd[2018]: time="2025-09-12T23:55:09.641638608Z" level=info msg="StartContainer for \"68b9dbf097a351c36a5c3400e1dbe3f0d8d77f65dedf3dcdbad782016092b30a\"" Sep 12 23:55:09.712824 systemd[1]: Started cri-containerd-68b9dbf097a351c36a5c3400e1dbe3f0d8d77f65dedf3dcdbad782016092b30a.scope - libcontainer container 68b9dbf097a351c36a5c3400e1dbe3f0d8d77f65dedf3dcdbad782016092b30a. Sep 12 23:55:09.766467 containerd[2018]: time="2025-09-12T23:55:09.766332324Z" level=info msg="StartContainer for \"68b9dbf097a351c36a5c3400e1dbe3f0d8d77f65dedf3dcdbad782016092b30a\" returns successfully" Sep 12 23:55:10.245768 kubelet[2469]: E0912 23:55:10.244828 2469 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 12 23:55:10.410027 kubelet[2469]: E0912 23:55:10.409470 2469 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sm4qz" podUID="d02977a6-a9b2-4f00-a209-ffd36b3b9de2" Sep 12 23:55:10.477872 kubelet[2469]: I0912 23:55:10.477787 2469 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-snx4w" podStartSLOduration=4.527520203 podStartE2EDuration="6.477769332s" podCreationTimestamp="2025-09-12 23:55:04 +0000 UTC" firstStartedPulling="2025-09-12 23:55:07.66370591 +0000 UTC m=+4.277994850" lastFinishedPulling="2025-09-12 23:55:09.613955039 +0000 UTC m=+6.228243979" observedRunningTime="2025-09-12 23:55:10.474425772 +0000 UTC m=+7.088714724" watchObservedRunningTime="2025-09-12 23:55:10.477769332 +0000 UTC m=+7.092058272" Sep 12 23:55:10.512402 kubelet[2469]: E0912 23:55:10.512165 2469 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:55:10.512402 kubelet[2469]: W0912 23:55:10.512241 2469 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:55:10.512402 kubelet[2469]: E0912 23:55:10.512273 2469 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:55:10.513572 kubelet[2469]: E0912 23:55:10.513342 2469 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:55:10.513572 kubelet[2469]: W0912 23:55:10.513369 2469 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:55:10.513572 kubelet[2469]: E0912 23:55:10.513396 2469 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:55:10.514330 kubelet[2469]: E0912 23:55:10.514178 2469 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:55:10.514330 kubelet[2469]: W0912 23:55:10.514250 2469 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:55:10.514330 kubelet[2469]: E0912 23:55:10.514285 2469 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:55:10.516029 kubelet[2469]: E0912 23:55:10.515732 2469 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:55:10.516029 kubelet[2469]: W0912 23:55:10.515773 2469 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:55:10.516029 kubelet[2469]: E0912 23:55:10.515805 2469 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:55:10.516982 kubelet[2469]: E0912 23:55:10.516929 2469 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:55:10.517122 kubelet[2469]: W0912 23:55:10.517036 2469 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:55:10.517122 kubelet[2469]: E0912 23:55:10.517102 2469 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:55:10.518207 kubelet[2469]: E0912 23:55:10.518170 2469 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:55:10.518350 kubelet[2469]: W0912 23:55:10.518203 2469 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:55:10.518409 kubelet[2469]: E0912 23:55:10.518360 2469 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:55:10.519196 kubelet[2469]: E0912 23:55:10.519163 2469 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:55:10.519285 kubelet[2469]: W0912 23:55:10.519198 2469 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:55:10.519285 kubelet[2469]: E0912 23:55:10.519227 2469 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:55:10.520023 kubelet[2469]: E0912 23:55:10.519958 2469 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:55:10.520023 kubelet[2469]: W0912 23:55:10.520022 2469 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:55:10.520153 kubelet[2469]: E0912 23:55:10.520049 2469 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:55:10.520922 kubelet[2469]: E0912 23:55:10.520888 2469 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:55:10.520922 kubelet[2469]: W0912 23:55:10.520921 2469 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:55:10.522388 kubelet[2469]: E0912 23:55:10.520947 2469 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:55:10.522388 kubelet[2469]: E0912 23:55:10.521303 2469 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:55:10.522388 kubelet[2469]: W0912 23:55:10.521319 2469 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:55:10.522388 kubelet[2469]: E0912 23:55:10.521338 2469 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:55:10.522388 kubelet[2469]: E0912 23:55:10.521748 2469 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:55:10.522388 kubelet[2469]: W0912 23:55:10.521765 2469 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:55:10.522388 kubelet[2469]: E0912 23:55:10.521782 2469 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:55:10.522388 kubelet[2469]: E0912 23:55:10.522132 2469 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:55:10.522388 kubelet[2469]: W0912 23:55:10.522148 2469 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:55:10.522388 kubelet[2469]: E0912 23:55:10.522166 2469 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:55:10.523019 kubelet[2469]: E0912 23:55:10.522871 2469 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:55:10.523019 kubelet[2469]: W0912 23:55:10.522893 2469 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:55:10.523019 kubelet[2469]: E0912 23:55:10.522917 2469 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:55:10.523531 kubelet[2469]: E0912 23:55:10.523483 2469 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:55:10.523615 kubelet[2469]: W0912 23:55:10.523544 2469 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:55:10.523615 kubelet[2469]: E0912 23:55:10.523570 2469 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:55:10.523918 kubelet[2469]: E0912 23:55:10.523892 2469 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:55:10.523988 kubelet[2469]: W0912 23:55:10.523918 2469 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:55:10.523988 kubelet[2469]: E0912 23:55:10.523969 2469 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:55:10.524664 kubelet[2469]: E0912 23:55:10.524632 2469 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:55:10.524664 kubelet[2469]: W0912 23:55:10.524663 2469 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:55:10.524806 kubelet[2469]: E0912 23:55:10.524689 2469 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:55:10.525153 kubelet[2469]: E0912 23:55:10.525126 2469 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:55:10.525218 kubelet[2469]: W0912 23:55:10.525152 2469 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:55:10.525369 kubelet[2469]: E0912 23:55:10.525252 2469 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:55:10.525843 kubelet[2469]: E0912 23:55:10.525812 2469 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:55:10.525914 kubelet[2469]: W0912 23:55:10.525842 2469 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:55:10.525914 kubelet[2469]: E0912 23:55:10.525866 2469 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:55:10.526194 kubelet[2469]: E0912 23:55:10.526169 2469 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:55:10.526254 kubelet[2469]: W0912 23:55:10.526194 2469 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:55:10.526254 kubelet[2469]: E0912 23:55:10.526216 2469 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:55:10.526937 kubelet[2469]: E0912 23:55:10.526905 2469 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:55:10.526937 kubelet[2469]: W0912 23:55:10.526935 2469 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:55:10.527102 kubelet[2469]: E0912 23:55:10.526962 2469 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:55:10.527663 kubelet[2469]: E0912 23:55:10.527630 2469 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:55:10.527767 kubelet[2469]: W0912 23:55:10.527662 2469 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:55:10.527767 kubelet[2469]: E0912 23:55:10.527691 2469 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:55:10.528108 kubelet[2469]: E0912 23:55:10.528081 2469 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:55:10.528182 kubelet[2469]: W0912 23:55:10.528108 2469 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:55:10.528182 kubelet[2469]: E0912 23:55:10.528144 2469 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:55:10.528638 kubelet[2469]: E0912 23:55:10.528610 2469 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:55:10.528732 kubelet[2469]: W0912 23:55:10.528638 2469 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:55:10.528732 kubelet[2469]: E0912 23:55:10.528676 2469 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:55:10.529017 kubelet[2469]: E0912 23:55:10.528991 2469 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:55:10.529090 kubelet[2469]: W0912 23:55:10.529016 2469 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:55:10.529090 kubelet[2469]: E0912 23:55:10.529054 2469 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:55:10.529394 kubelet[2469]: E0912 23:55:10.529369 2469 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:55:10.529479 kubelet[2469]: W0912 23:55:10.529394 2469 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:55:10.529670 kubelet[2469]: E0912 23:55:10.529582 2469 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:55:10.530353 kubelet[2469]: E0912 23:55:10.530320 2469 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:55:10.530449 kubelet[2469]: W0912 23:55:10.530352 2469 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:55:10.530449 kubelet[2469]: E0912 23:55:10.530392 2469 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:55:10.530808 kubelet[2469]: E0912 23:55:10.530782 2469 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:55:10.530882 kubelet[2469]: W0912 23:55:10.530808 2469 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:55:10.530882 kubelet[2469]: E0912 23:55:10.530841 2469 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:55:10.531229 kubelet[2469]: E0912 23:55:10.531203 2469 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:55:10.531304 kubelet[2469]: W0912 23:55:10.531229 2469 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:55:10.531461 kubelet[2469]: E0912 23:55:10.531378 2469 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:55:10.531771 kubelet[2469]: E0912 23:55:10.531744 2469 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:55:10.531847 kubelet[2469]: W0912 23:55:10.531772 2469 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:55:10.531847 kubelet[2469]: E0912 23:55:10.531808 2469 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:55:10.532840 kubelet[2469]: E0912 23:55:10.532418 2469 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:55:10.532840 kubelet[2469]: W0912 23:55:10.532443 2469 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:55:10.532840 kubelet[2469]: E0912 23:55:10.532480 2469 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:55:10.533147 kubelet[2469]: E0912 23:55:10.533116 2469 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:55:10.533224 kubelet[2469]: W0912 23:55:10.533147 2469 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:55:10.533224 kubelet[2469]: E0912 23:55:10.533187 2469 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:55:10.533669 kubelet[2469]: E0912 23:55:10.533502 2469 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:55:10.533669 kubelet[2469]: W0912 23:55:10.533590 2469 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:55:10.533669 kubelet[2469]: E0912 23:55:10.533654 2469 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:55:10.784859 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount22512537.mount: Deactivated successfully. Sep 12 23:55:10.936583 containerd[2018]: time="2025-09-12T23:55:10.936095450Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:55:10.939169 containerd[2018]: time="2025-09-12T23:55:10.939076898Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=5636193" Sep 12 23:55:10.941879 containerd[2018]: time="2025-09-12T23:55:10.941790998Z" level=info msg="ImageCreate event name:\"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:55:10.947596 containerd[2018]: time="2025-09-12T23:55:10.947498774Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:55:10.950713 containerd[2018]: time="2025-09-12T23:55:10.950624426Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5636015\" in 1.336303783s" Sep 12 23:55:10.950860 containerd[2018]: time="2025-09-12T23:55:10.950717030Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\"" Sep 12 23:55:10.963811 containerd[2018]: time="2025-09-12T23:55:10.963720614Z" level=info msg="CreateContainer within sandbox \"f4817df2122eaa139dd21645e9ea3bf03a77a9efde0d6e733f7bc7ee62027172\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 12 23:55:10.989723 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount868209923.mount: Deactivated successfully. 
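The "Observed pod startup duration" entry for kube-system/kube-proxy-snx4w a few entries above can be reproduced from its own timestamps: the end-to-end duration is the watch-observed running time minus the pod creation timestamp, and the figures are consistent with the SLO value being that end-to-end duration minus the time spent pulling images. A small check using the values copied from the log (illustrative only):

    // startup_latency_check.go -- recompute the kube-proxy-snx4w startup figures
    // from the pod_startup_latency_tracker entry above (timestamps copied verbatim).
    package main

    import (
        "fmt"
        "time"
    )

    func mustParse(v string) time.Time {
        t, err := time.Parse("2006-01-02 15:04:05.999999999", v)
        if err != nil {
            panic(err)
        }
        return t
    }

    func main() {
        created := mustParse("2025-09-12 23:55:04")            // podCreationTimestamp
        firstPull := mustParse("2025-09-12 23:55:07.66370591")  // firstStartedPulling
        lastPull := mustParse("2025-09-12 23:55:09.613955039")  // lastFinishedPulling
        running := mustParse("2025-09-12 23:55:10.477769332")   // watchObservedRunningTime

        e2e := running.Sub(created)        // 6.477769332s  (podStartE2EDuration)
        pulling := lastPull.Sub(firstPull) // 1.950249129s spent pulling images
        fmt.Println("E2E:", e2e)
        fmt.Println("SLO (E2E minus pulling):", e2e-pulling) // 4.527520203s
    }

Running this prints 6.477769332s and 4.527520203s, matching podStartE2EDuration and podStartSLOduration in the entry.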
Sep 12 23:55:10.993092 containerd[2018]: time="2025-09-12T23:55:10.992999354Z" level=info msg="CreateContainer within sandbox \"f4817df2122eaa139dd21645e9ea3bf03a77a9efde0d6e733f7bc7ee62027172\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"a5064c9198b897523e3eeb490ca8bede98eb20748395074dc5293ab53c7d4a82\"" Sep 12 23:55:10.995471 containerd[2018]: time="2025-09-12T23:55:10.994229882Z" level=info msg="StartContainer for \"a5064c9198b897523e3eeb490ca8bede98eb20748395074dc5293ab53c7d4a82\"" Sep 12 23:55:11.051875 systemd[1]: Started cri-containerd-a5064c9198b897523e3eeb490ca8bede98eb20748395074dc5293ab53c7d4a82.scope - libcontainer container a5064c9198b897523e3eeb490ca8bede98eb20748395074dc5293ab53c7d4a82. Sep 12 23:55:11.105705 containerd[2018]: time="2025-09-12T23:55:11.105642755Z" level=info msg="StartContainer for \"a5064c9198b897523e3eeb490ca8bede98eb20748395074dc5293ab53c7d4a82\" returns successfully" Sep 12 23:55:11.135816 systemd[1]: cri-containerd-a5064c9198b897523e3eeb490ca8bede98eb20748395074dc5293ab53c7d4a82.scope: Deactivated successfully. Sep 12 23:55:11.246495 kubelet[2469]: E0912 23:55:11.246452 2469 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 12 23:55:11.378557 containerd[2018]: time="2025-09-12T23:55:11.378093480Z" level=info msg="shim disconnected" id=a5064c9198b897523e3eeb490ca8bede98eb20748395074dc5293ab53c7d4a82 namespace=k8s.io Sep 12 23:55:11.378557 containerd[2018]: time="2025-09-12T23:55:11.378168828Z" level=warning msg="cleaning up after shim disconnected" id=a5064c9198b897523e3eeb490ca8bede98eb20748395074dc5293ab53c7d4a82 namespace=k8s.io Sep 12 23:55:11.378557 containerd[2018]: time="2025-09-12T23:55:11.378190320Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 23:55:11.400037 containerd[2018]: time="2025-09-12T23:55:11.398457888Z" level=warning msg="cleanup warnings time=\"2025-09-12T23:55:11Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Sep 12 23:55:11.459209 containerd[2018]: time="2025-09-12T23:55:11.459144433Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 12 23:55:11.735091 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a5064c9198b897523e3eeb490ca8bede98eb20748395074dc5293ab53c7d4a82-rootfs.mount: Deactivated successfully. 
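The a5064c91... container above is calico-node's flexvol-driver init container: it copies the uds FlexVolume driver into the host path mounted from the "flexvol-driver-host" volume and exits straight away, which is why its scope deactivates, the shim disconnects, and the runc cleanup warning appears right after a successful StartContainer; the warning looks like teardown noise from the short-lived container rather than a failure. Notably, the FlexVolume probe errors do not recur after this point. A diagnostic sketch one could run on the node to confirm the driver is now present and answers "init" (path copied from the log; not part of Calico):

    // check_flexvol.go -- confirm the uds FlexVolume driver is installed and
    // replies to "init" with JSON (diagnostic sketch only).
    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    const driver = "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"

    func main() {
        out, err := exec.Command(driver, "init").Output()
        if err != nil {
            fmt.Println("driver call failed:", err) // the condition the kubelet was hitting earlier
            return
        }
        var status map[string]interface{}
        if err := json.Unmarshal(out, &status); err != nil {
            fmt.Println("non-JSON output:", err)
            return
        }
        fmt.Println("driver init reply:", status)
    }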
Sep 12 23:55:12.247387 kubelet[2469]: E0912 23:55:12.247308 2469 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 12 23:55:12.409241 kubelet[2469]: E0912 23:55:12.408773 2469 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sm4qz" podUID="d02977a6-a9b2-4f00-a209-ffd36b3b9de2" Sep 12 23:55:13.247773 kubelet[2469]: E0912 23:55:13.247548 2469 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 12 23:55:14.247877 kubelet[2469]: E0912 23:55:14.247761 2469 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 12 23:55:14.396556 containerd[2018]: time="2025-09-12T23:55:14.394600335Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:55:14.397303 containerd[2018]: time="2025-09-12T23:55:14.396493611Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=65913477" Sep 12 23:55:14.397613 containerd[2018]: time="2025-09-12T23:55:14.397559715Z" level=info msg="ImageCreate event name:\"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:55:14.401577 containerd[2018]: time="2025-09-12T23:55:14.401473023Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:55:14.403571 containerd[2018]: time="2025-09-12T23:55:14.403468095Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"67282718\" in 2.944248194s" Sep 12 23:55:14.403822 containerd[2018]: time="2025-09-12T23:55:14.403778451Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\"" Sep 12 23:55:14.409153 containerd[2018]: time="2025-09-12T23:55:14.409081923Z" level=info msg="CreateContainer within sandbox \"f4817df2122eaa139dd21645e9ea3bf03a77a9efde0d6e733f7bc7ee62027172\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 12 23:55:14.409441 kubelet[2469]: E0912 23:55:14.409356 2469 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sm4qz" podUID="d02977a6-a9b2-4f00-a209-ffd36b3b9de2" Sep 12 23:55:14.435310 containerd[2018]: time="2025-09-12T23:55:14.435238035Z" level=info msg="CreateContainer within sandbox \"f4817df2122eaa139dd21645e9ea3bf03a77a9efde0d6e733f7bc7ee62027172\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"19fa6c5cfdef89fa1bec53ef39b85d3dc9c60b7c39f3f9dca2c3fec597f12ed8\"" Sep 12 23:55:14.436861 
containerd[2018]: time="2025-09-12T23:55:14.436803831Z" level=info msg="StartContainer for \"19fa6c5cfdef89fa1bec53ef39b85d3dc9c60b7c39f3f9dca2c3fec597f12ed8\"" Sep 12 23:55:14.520162 systemd[1]: run-containerd-runc-k8s.io-19fa6c5cfdef89fa1bec53ef39b85d3dc9c60b7c39f3f9dca2c3fec597f12ed8-runc.kiU1FQ.mount: Deactivated successfully. Sep 12 23:55:14.534199 systemd[1]: Started cri-containerd-19fa6c5cfdef89fa1bec53ef39b85d3dc9c60b7c39f3f9dca2c3fec597f12ed8.scope - libcontainer container 19fa6c5cfdef89fa1bec53ef39b85d3dc9c60b7c39f3f9dca2c3fec597f12ed8. Sep 12 23:55:14.595618 containerd[2018]: time="2025-09-12T23:55:14.595209784Z" level=info msg="StartContainer for \"19fa6c5cfdef89fa1bec53ef39b85d3dc9c60b7c39f3f9dca2c3fec597f12ed8\" returns successfully" Sep 12 23:55:15.248207 kubelet[2469]: E0912 23:55:15.248080 2469 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 12 23:55:15.642433 containerd[2018]: time="2025-09-12T23:55:15.642265157Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 12 23:55:15.647916 systemd[1]: cri-containerd-19fa6c5cfdef89fa1bec53ef39b85d3dc9c60b7c39f3f9dca2c3fec597f12ed8.scope: Deactivated successfully. Sep 12 23:55:15.650364 systemd[1]: cri-containerd-19fa6c5cfdef89fa1bec53ef39b85d3dc9c60b7c39f3f9dca2c3fec597f12ed8.scope: Consumed 1.017s CPU time. Sep 12 23:55:15.688585 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-19fa6c5cfdef89fa1bec53ef39b85d3dc9c60b7c39f3f9dca2c3fec597f12ed8-rootfs.mount: Deactivated successfully. Sep 12 23:55:15.710582 kubelet[2469]: I0912 23:55:15.709676 2469 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Sep 12 23:55:16.237257 containerd[2018]: time="2025-09-12T23:55:16.237165832Z" level=info msg="shim disconnected" id=19fa6c5cfdef89fa1bec53ef39b85d3dc9c60b7c39f3f9dca2c3fec597f12ed8 namespace=k8s.io Sep 12 23:55:16.237257 containerd[2018]: time="2025-09-12T23:55:16.237248248Z" level=warning msg="cleaning up after shim disconnected" id=19fa6c5cfdef89fa1bec53ef39b85d3dc9c60b7c39f3f9dca2c3fec597f12ed8 namespace=k8s.io Sep 12 23:55:16.237843 containerd[2018]: time="2025-09-12T23:55:16.237273712Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 23:55:16.248435 kubelet[2469]: E0912 23:55:16.248349 2469 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 12 23:55:16.420169 systemd[1]: Created slice kubepods-besteffort-podd02977a6_a9b2_4f00_a209_ffd36b3b9de2.slice - libcontainer container kubepods-besteffort-podd02977a6_a9b2_4f00_a209_ffd36b3b9de2.slice. 
Sep 12 23:55:16.424956 containerd[2018]: time="2025-09-12T23:55:16.424892213Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sm4qz,Uid:d02977a6-a9b2-4f00-a209-ffd36b3b9de2,Namespace:calico-system,Attempt:0,}" Sep 12 23:55:16.523441 containerd[2018]: time="2025-09-12T23:55:16.521973150Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 12 23:55:16.580827 containerd[2018]: time="2025-09-12T23:55:16.580763850Z" level=error msg="Failed to destroy network for sandbox \"3f09cb0dfb761e21e1252871b9504c58802e944d6a2e0cdf055d8115188eb370\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:55:16.582586 containerd[2018]: time="2025-09-12T23:55:16.582171210Z" level=error msg="encountered an error cleaning up failed sandbox \"3f09cb0dfb761e21e1252871b9504c58802e944d6a2e0cdf055d8115188eb370\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:55:16.584718 containerd[2018]: time="2025-09-12T23:55:16.584626314Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sm4qz,Uid:d02977a6-a9b2-4f00-a209-ffd36b3b9de2,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3f09cb0dfb761e21e1252871b9504c58802e944d6a2e0cdf055d8115188eb370\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:55:16.585233 kubelet[2469]: E0912 23:55:16.585159 2469 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f09cb0dfb761e21e1252871b9504c58802e944d6a2e0cdf055d8115188eb370\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:55:16.585377 kubelet[2469]: E0912 23:55:16.585264 2469 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f09cb0dfb761e21e1252871b9504c58802e944d6a2e0cdf055d8115188eb370\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-sm4qz" Sep 12 23:55:16.585377 kubelet[2469]: E0912 23:55:16.585301 2469 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f09cb0dfb761e21e1252871b9504c58802e944d6a2e0cdf055d8115188eb370\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-sm4qz" Sep 12 23:55:16.585505 kubelet[2469]: E0912 23:55:16.585382 2469 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-sm4qz_calico-system(d02977a6-a9b2-4f00-a209-ffd36b3b9de2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"csi-node-driver-sm4qz_calico-system(d02977a6-a9b2-4f00-a209-ffd36b3b9de2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3f09cb0dfb761e21e1252871b9504c58802e944d6a2e0cdf055d8115188eb370\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-sm4qz" podUID="d02977a6-a9b2-4f00-a209-ffd36b3b9de2" Sep 12 23:55:16.585868 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3f09cb0dfb761e21e1252871b9504c58802e944d6a2e0cdf055d8115188eb370-shm.mount: Deactivated successfully. Sep 12 23:55:17.249203 kubelet[2469]: E0912 23:55:17.249134 2469 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 12 23:55:17.519161 kubelet[2469]: I0912 23:55:17.518094 2469 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3f09cb0dfb761e21e1252871b9504c58802e944d6a2e0cdf055d8115188eb370" Sep 12 23:55:17.520084 containerd[2018]: time="2025-09-12T23:55:17.519969235Z" level=info msg="StopPodSandbox for \"3f09cb0dfb761e21e1252871b9504c58802e944d6a2e0cdf055d8115188eb370\"" Sep 12 23:55:17.520786 containerd[2018]: time="2025-09-12T23:55:17.520268527Z" level=info msg="Ensure that sandbox 3f09cb0dfb761e21e1252871b9504c58802e944d6a2e0cdf055d8115188eb370 in task-service has been cleanup successfully" Sep 12 23:55:17.574804 containerd[2018]: time="2025-09-12T23:55:17.574627471Z" level=error msg="StopPodSandbox for \"3f09cb0dfb761e21e1252871b9504c58802e944d6a2e0cdf055d8115188eb370\" failed" error="failed to destroy network for sandbox \"3f09cb0dfb761e21e1252871b9504c58802e944d6a2e0cdf055d8115188eb370\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:55:17.576119 kubelet[2469]: E0912 23:55:17.575721 2469 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3f09cb0dfb761e21e1252871b9504c58802e944d6a2e0cdf055d8115188eb370\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3f09cb0dfb761e21e1252871b9504c58802e944d6a2e0cdf055d8115188eb370" Sep 12 23:55:17.576119 kubelet[2469]: E0912 23:55:17.575817 2469 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3f09cb0dfb761e21e1252871b9504c58802e944d6a2e0cdf055d8115188eb370"} Sep 12 23:55:17.576119 kubelet[2469]: E0912 23:55:17.575904 2469 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d02977a6-a9b2-4f00-a209-ffd36b3b9de2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3f09cb0dfb761e21e1252871b9504c58802e944d6a2e0cdf055d8115188eb370\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 23:55:17.576119 kubelet[2469]: E0912 23:55:17.575984 2469 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d02977a6-a9b2-4f00-a209-ffd36b3b9de2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = 
failed to destroy network for sandbox \\\"3f09cb0dfb761e21e1252871b9504c58802e944d6a2e0cdf055d8115188eb370\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-sm4qz" podUID="d02977a6-a9b2-4f00-a209-ffd36b3b9de2" Sep 12 23:55:18.250104 kubelet[2469]: E0912 23:55:18.249908 2469 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 12 23:55:18.335080 systemd[1]: Created slice kubepods-besteffort-pod2190141f_17c2_4525_8bd3_99903586602a.slice - libcontainer container kubepods-besteffort-pod2190141f_17c2_4525_8bd3_99903586602a.slice. Sep 12 23:55:18.338969 kubelet[2469]: W0912 23:55:18.338712 2469 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:172.31.25.8" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node '172.31.25.8' and this object Sep 12 23:55:18.338969 kubelet[2469]: E0912 23:55:18.338820 2469 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:172.31.25.8\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node '172.31.25.8' and this object" logger="UnhandledError" Sep 12 23:55:18.484257 kubelet[2469]: I0912 23:55:18.484166 2469 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpj24\" (UniqueName: \"kubernetes.io/projected/2190141f-17c2-4525-8bd3-99903586602a-kube-api-access-xpj24\") pod \"nginx-deployment-8587fbcb89-tjw74\" (UID: \"2190141f-17c2-4525-8bd3-99903586602a\") " pod="default/nginx-deployment-8587fbcb89-tjw74" Sep 12 23:55:19.243810 containerd[2018]: time="2025-09-12T23:55:19.243749071Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-tjw74,Uid:2190141f-17c2-4525-8bd3-99903586602a,Namespace:default,Attempt:0,}" Sep 12 23:55:19.251039 kubelet[2469]: E0912 23:55:19.250619 2469 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 12 23:55:19.433951 containerd[2018]: time="2025-09-12T23:55:19.433708304Z" level=error msg="Failed to destroy network for sandbox \"0f326ba7ea352785ac22756062c4fa2697a72e0a3098dadef93bb9d8e228999a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:55:19.434778 containerd[2018]: time="2025-09-12T23:55:19.434348456Z" level=error msg="encountered an error cleaning up failed sandbox \"0f326ba7ea352785ac22756062c4fa2697a72e0a3098dadef93bb9d8e228999a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:55:19.434778 containerd[2018]: time="2025-09-12T23:55:19.434433044Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-tjw74,Uid:2190141f-17c2-4525-8bd3-99903586602a,Namespace:default,Attempt:0,} failed, error" 
error="failed to setup network for sandbox \"0f326ba7ea352785ac22756062c4fa2697a72e0a3098dadef93bb9d8e228999a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:55:19.437690 kubelet[2469]: E0912 23:55:19.436918 2469 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0f326ba7ea352785ac22756062c4fa2697a72e0a3098dadef93bb9d8e228999a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:55:19.437690 kubelet[2469]: E0912 23:55:19.437008 2469 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0f326ba7ea352785ac22756062c4fa2697a72e0a3098dadef93bb9d8e228999a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-tjw74" Sep 12 23:55:19.437690 kubelet[2469]: E0912 23:55:19.437043 2469 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0f326ba7ea352785ac22756062c4fa2697a72e0a3098dadef93bb9d8e228999a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-tjw74" Sep 12 23:55:19.438296 kubelet[2469]: E0912 23:55:19.437133 2469 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-tjw74_default(2190141f-17c2-4525-8bd3-99903586602a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-tjw74_default(2190141f-17c2-4525-8bd3-99903586602a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0f326ba7ea352785ac22756062c4fa2697a72e0a3098dadef93bb9d8e228999a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-tjw74" podUID="2190141f-17c2-4525-8bd3-99903586602a" Sep 12 23:55:19.440047 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0f326ba7ea352785ac22756062c4fa2697a72e0a3098dadef93bb9d8e228999a-shm.mount: Deactivated successfully. 
Sep 12 23:55:19.532167 kubelet[2469]: I0912 23:55:19.530978 2469 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0f326ba7ea352785ac22756062c4fa2697a72e0a3098dadef93bb9d8e228999a" Sep 12 23:55:19.534000 containerd[2018]: time="2025-09-12T23:55:19.533597433Z" level=info msg="StopPodSandbox for \"0f326ba7ea352785ac22756062c4fa2697a72e0a3098dadef93bb9d8e228999a\"" Sep 12 23:55:19.534156 containerd[2018]: time="2025-09-12T23:55:19.534046173Z" level=info msg="Ensure that sandbox 0f326ba7ea352785ac22756062c4fa2697a72e0a3098dadef93bb9d8e228999a in task-service has been cleanup successfully" Sep 12 23:55:19.615151 containerd[2018]: time="2025-09-12T23:55:19.614041533Z" level=error msg="StopPodSandbox for \"0f326ba7ea352785ac22756062c4fa2697a72e0a3098dadef93bb9d8e228999a\" failed" error="failed to destroy network for sandbox \"0f326ba7ea352785ac22756062c4fa2697a72e0a3098dadef93bb9d8e228999a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:55:19.615353 kubelet[2469]: E0912 23:55:19.614912 2469 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0f326ba7ea352785ac22756062c4fa2697a72e0a3098dadef93bb9d8e228999a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0f326ba7ea352785ac22756062c4fa2697a72e0a3098dadef93bb9d8e228999a" Sep 12 23:55:19.615353 kubelet[2469]: E0912 23:55:19.614979 2469 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0f326ba7ea352785ac22756062c4fa2697a72e0a3098dadef93bb9d8e228999a"} Sep 12 23:55:19.615353 kubelet[2469]: E0912 23:55:19.615042 2469 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2190141f-17c2-4525-8bd3-99903586602a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0f326ba7ea352785ac22756062c4fa2697a72e0a3098dadef93bb9d8e228999a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 23:55:19.615353 kubelet[2469]: E0912 23:55:19.615084 2469 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2190141f-17c2-4525-8bd3-99903586602a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0f326ba7ea352785ac22756062c4fa2697a72e0a3098dadef93bb9d8e228999a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-tjw74" podUID="2190141f-17c2-4525-8bd3-99903586602a" Sep 12 23:55:20.251167 kubelet[2469]: E0912 23:55:20.251116 2469 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 12 23:55:20.632838 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Sep 12 23:55:21.252327 kubelet[2469]: E0912 23:55:21.252265 2469 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 12 23:55:22.253255 kubelet[2469]: E0912 23:55:22.253128 2469 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 12 23:55:23.254535 kubelet[2469]: E0912 23:55:23.253698 2469 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 12 23:55:23.278389 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3392261646.mount: Deactivated successfully. Sep 12 23:55:23.327932 containerd[2018]: time="2025-09-12T23:55:23.327870144Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:55:23.329834 containerd[2018]: time="2025-09-12T23:55:23.329635476Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=151100457" Sep 12 23:55:23.329834 containerd[2018]: time="2025-09-12T23:55:23.329770608Z" level=info msg="ImageCreate event name:\"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:55:23.333561 containerd[2018]: time="2025-09-12T23:55:23.333314892Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:55:23.335551 containerd[2018]: time="2025-09-12T23:55:23.334748628Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"151100319\" in 6.812592778s" Sep 12 23:55:23.335551 containerd[2018]: time="2025-09-12T23:55:23.334809324Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\"" Sep 12 23:55:23.360384 containerd[2018]: time="2025-09-12T23:55:23.360328308Z" level=info msg="CreateContainer within sandbox \"f4817df2122eaa139dd21645e9ea3bf03a77a9efde0d6e733f7bc7ee62027172\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 12 23:55:23.380104 containerd[2018]: time="2025-09-12T23:55:23.380024580Z" level=info msg="CreateContainer within sandbox \"f4817df2122eaa139dd21645e9ea3bf03a77a9efde0d6e733f7bc7ee62027172\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"0d8c4a4020cc85ed09cb80c54cfc7e181cde00dd7d3f97fe5bb974c80bb409b8\"" Sep 12 23:55:23.381061 containerd[2018]: time="2025-09-12T23:55:23.381011496Z" level=info msg="StartContainer for \"0d8c4a4020cc85ed09cb80c54cfc7e181cde00dd7d3f97fe5bb974c80bb409b8\"" Sep 12 23:55:23.430276 systemd[1]: Started cri-containerd-0d8c4a4020cc85ed09cb80c54cfc7e181cde00dd7d3f97fe5bb974c80bb409b8.scope - libcontainer container 0d8c4a4020cc85ed09cb80c54cfc7e181cde00dd7d3f97fe5bb974c80bb409b8. 
Sep 12 23:55:23.498639 containerd[2018]: time="2025-09-12T23:55:23.498290052Z" level=info msg="StartContainer for \"0d8c4a4020cc85ed09cb80c54cfc7e181cde00dd7d3f97fe5bb974c80bb409b8\" returns successfully" Sep 12 23:55:23.580497 kubelet[2469]: I0912 23:55:23.579793 2469 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-2z5f9" podStartSLOduration=3.911339699 podStartE2EDuration="19.579750649s" podCreationTimestamp="2025-09-12 23:55:04 +0000 UTC" firstStartedPulling="2025-09-12 23:55:07.66827623 +0000 UTC m=+4.282565158" lastFinishedPulling="2025-09-12 23:55:23.336687168 +0000 UTC m=+19.950976108" observedRunningTime="2025-09-12 23:55:23.579225649 +0000 UTC m=+20.193514613" watchObservedRunningTime="2025-09-12 23:55:23.579750649 +0000 UTC m=+20.194039589" Sep 12 23:55:23.761592 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 12 23:55:23.761744 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Sep 12 23:55:24.238188 kubelet[2469]: E0912 23:55:24.238087 2469 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 12 23:55:24.254779 kubelet[2469]: E0912 23:55:24.254679 2469 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 12 23:55:24.553289 kubelet[2469]: I0912 23:55:24.553243 2469 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 23:55:25.255542 kubelet[2469]: E0912 23:55:25.255448 2469 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 12 23:55:25.882595 kernel: bpftool[3243]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Sep 12 23:55:26.204254 (udev-worker)[3095]: Network interface NamePolicy= disabled on kernel command line. Sep 12 23:55:26.211434 systemd-networkd[1930]: vxlan.calico: Link UP Sep 12 23:55:26.211458 systemd-networkd[1930]: vxlan.calico: Gained carrier Sep 12 23:55:26.256382 kubelet[2469]: E0912 23:55:26.256016 2469 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 12 23:55:26.267905 (udev-worker)[3096]: Network interface NamePolicy= disabled on kernel command line. 
Sep 12 23:55:27.256361 kubelet[2469]: E0912 23:55:27.256287 2469 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 12 23:55:27.695304 systemd-networkd[1930]: vxlan.calico: Gained IPv6LL Sep 12 23:55:28.257746 kubelet[2469]: E0912 23:55:28.257460 2469 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 12 23:55:29.258454 kubelet[2469]: E0912 23:55:29.258387 2469 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 12 23:55:29.730087 ntpd[1993]: Listen normally on 8 vxlan.calico 192.168.10.128:123 Sep 12 23:55:29.730954 ntpd[1993]: 12 Sep 23:55:29 ntpd[1993]: Listen normally on 8 vxlan.calico 192.168.10.128:123 Sep 12 23:55:29.730954 ntpd[1993]: 12 Sep 23:55:29 ntpd[1993]: Listen normally on 9 vxlan.calico [fe80::64fb:64ff:fedb:35f3%3]:123 Sep 12 23:55:29.730211 ntpd[1993]: Listen normally on 9 vxlan.calico [fe80::64fb:64ff:fedb:35f3%3]:123 Sep 12 23:55:30.259188 kubelet[2469]: E0912 23:55:30.259144 2469 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 12 23:55:31.260858 kubelet[2469]: E0912 23:55:31.260794 2469 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 12 23:55:32.261588 kubelet[2469]: E0912 23:55:32.261485 2469 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 12 23:55:32.411379 containerd[2018]: time="2025-09-12T23:55:32.410910669Z" level=info msg="StopPodSandbox for \"3f09cb0dfb761e21e1252871b9504c58802e944d6a2e0cdf055d8115188eb370\"" Sep 12 23:55:32.596722 containerd[2018]: 2025-09-12 23:55:32.497 [INFO][3333] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3f09cb0dfb761e21e1252871b9504c58802e944d6a2e0cdf055d8115188eb370" Sep 12 23:55:32.596722 containerd[2018]: 2025-09-12 23:55:32.499 [INFO][3333] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3f09cb0dfb761e21e1252871b9504c58802e944d6a2e0cdf055d8115188eb370" iface="eth0" netns="/var/run/netns/cni-7da152cd-8717-a57b-2213-c798a06e5540" Sep 12 23:55:32.596722 containerd[2018]: 2025-09-12 23:55:32.499 [INFO][3333] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3f09cb0dfb761e21e1252871b9504c58802e944d6a2e0cdf055d8115188eb370" iface="eth0" netns="/var/run/netns/cni-7da152cd-8717-a57b-2213-c798a06e5540" Sep 12 23:55:32.596722 containerd[2018]: 2025-09-12 23:55:32.500 [INFO][3333] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="3f09cb0dfb761e21e1252871b9504c58802e944d6a2e0cdf055d8115188eb370" iface="eth0" netns="/var/run/netns/cni-7da152cd-8717-a57b-2213-c798a06e5540" Sep 12 23:55:32.596722 containerd[2018]: 2025-09-12 23:55:32.500 [INFO][3333] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3f09cb0dfb761e21e1252871b9504c58802e944d6a2e0cdf055d8115188eb370" Sep 12 23:55:32.596722 containerd[2018]: 2025-09-12 23:55:32.500 [INFO][3333] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3f09cb0dfb761e21e1252871b9504c58802e944d6a2e0cdf055d8115188eb370" Sep 12 23:55:32.596722 containerd[2018]: 2025-09-12 23:55:32.571 [INFO][3340] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3f09cb0dfb761e21e1252871b9504c58802e944d6a2e0cdf055d8115188eb370" HandleID="k8s-pod-network.3f09cb0dfb761e21e1252871b9504c58802e944d6a2e0cdf055d8115188eb370" Workload="172.31.25.8-k8s-csi--node--driver--sm4qz-eth0" Sep 12 23:55:32.596722 containerd[2018]: 2025-09-12 23:55:32.572 [INFO][3340] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:55:32.596722 containerd[2018]: 2025-09-12 23:55:32.573 [INFO][3340] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 23:55:32.596722 containerd[2018]: 2025-09-12 23:55:32.585 [WARNING][3340] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3f09cb0dfb761e21e1252871b9504c58802e944d6a2e0cdf055d8115188eb370" HandleID="k8s-pod-network.3f09cb0dfb761e21e1252871b9504c58802e944d6a2e0cdf055d8115188eb370" Workload="172.31.25.8-k8s-csi--node--driver--sm4qz-eth0" Sep 12 23:55:32.596722 containerd[2018]: 2025-09-12 23:55:32.586 [INFO][3340] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3f09cb0dfb761e21e1252871b9504c58802e944d6a2e0cdf055d8115188eb370" HandleID="k8s-pod-network.3f09cb0dfb761e21e1252871b9504c58802e944d6a2e0cdf055d8115188eb370" Workload="172.31.25.8-k8s-csi--node--driver--sm4qz-eth0" Sep 12 23:55:32.596722 containerd[2018]: 2025-09-12 23:55:32.588 [INFO][3340] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 23:55:32.596722 containerd[2018]: 2025-09-12 23:55:32.593 [INFO][3333] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3f09cb0dfb761e21e1252871b9504c58802e944d6a2e0cdf055d8115188eb370" Sep 12 23:55:32.599214 containerd[2018]: time="2025-09-12T23:55:32.598665574Z" level=info msg="TearDown network for sandbox \"3f09cb0dfb761e21e1252871b9504c58802e944d6a2e0cdf055d8115188eb370\" successfully" Sep 12 23:55:32.599214 containerd[2018]: time="2025-09-12T23:55:32.598736770Z" level=info msg="StopPodSandbox for \"3f09cb0dfb761e21e1252871b9504c58802e944d6a2e0cdf055d8115188eb370\" returns successfully" Sep 12 23:55:32.600288 containerd[2018]: time="2025-09-12T23:55:32.600223366Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sm4qz,Uid:d02977a6-a9b2-4f00-a209-ffd36b3b9de2,Namespace:calico-system,Attempt:1,}" Sep 12 23:55:32.601806 systemd[1]: run-netns-cni\x2d7da152cd\x2d8717\x2da57b\x2d2213\x2dc798a06e5540.mount: Deactivated successfully. Sep 12 23:55:32.807381 systemd-networkd[1930]: calicd945af7968: Link UP Sep 12 23:55:32.807845 systemd-networkd[1930]: calicd945af7968: Gained carrier Sep 12 23:55:32.814415 (udev-worker)[3366]: Network interface NamePolicy= disabled on kernel command line. 
Sep 12 23:55:32.838134 containerd[2018]: 2025-09-12 23:55:32.694 [INFO][3348] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.25.8-k8s-csi--node--driver--sm4qz-eth0 csi-node-driver- calico-system d02977a6-a9b2-4f00-a209-ffd36b3b9de2 1222 0 2025-09-12 23:55:04 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:856c6b598f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 172.31.25.8 csi-node-driver-sm4qz eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calicd945af7968 [] [] }} ContainerID="437b5c5a012ef92f6642e3fab3b8f08e44f2607bbfdf5747a73d239881b342f5" Namespace="calico-system" Pod="csi-node-driver-sm4qz" WorkloadEndpoint="172.31.25.8-k8s-csi--node--driver--sm4qz-" Sep 12 23:55:32.838134 containerd[2018]: 2025-09-12 23:55:32.695 [INFO][3348] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="437b5c5a012ef92f6642e3fab3b8f08e44f2607bbfdf5747a73d239881b342f5" Namespace="calico-system" Pod="csi-node-driver-sm4qz" WorkloadEndpoint="172.31.25.8-k8s-csi--node--driver--sm4qz-eth0" Sep 12 23:55:32.838134 containerd[2018]: 2025-09-12 23:55:32.738 [INFO][3359] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="437b5c5a012ef92f6642e3fab3b8f08e44f2607bbfdf5747a73d239881b342f5" HandleID="k8s-pod-network.437b5c5a012ef92f6642e3fab3b8f08e44f2607bbfdf5747a73d239881b342f5" Workload="172.31.25.8-k8s-csi--node--driver--sm4qz-eth0" Sep 12 23:55:32.838134 containerd[2018]: 2025-09-12 23:55:32.738 [INFO][3359] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="437b5c5a012ef92f6642e3fab3b8f08e44f2607bbfdf5747a73d239881b342f5" HandleID="k8s-pod-network.437b5c5a012ef92f6642e3fab3b8f08e44f2607bbfdf5747a73d239881b342f5" Workload="172.31.25.8-k8s-csi--node--driver--sm4qz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002caff0), Attrs:map[string]string{"namespace":"calico-system", "node":"172.31.25.8", "pod":"csi-node-driver-sm4qz", "timestamp":"2025-09-12 23:55:32.738111958 +0000 UTC"}, Hostname:"172.31.25.8", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 23:55:32.838134 containerd[2018]: 2025-09-12 23:55:32.738 [INFO][3359] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:55:32.838134 containerd[2018]: 2025-09-12 23:55:32.738 [INFO][3359] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 23:55:32.838134 containerd[2018]: 2025-09-12 23:55:32.738 [INFO][3359] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.25.8' Sep 12 23:55:32.838134 containerd[2018]: 2025-09-12 23:55:32.752 [INFO][3359] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.437b5c5a012ef92f6642e3fab3b8f08e44f2607bbfdf5747a73d239881b342f5" host="172.31.25.8" Sep 12 23:55:32.838134 containerd[2018]: 2025-09-12 23:55:32.759 [INFO][3359] ipam/ipam.go 394: Looking up existing affinities for host host="172.31.25.8" Sep 12 23:55:32.838134 containerd[2018]: 2025-09-12 23:55:32.769 [INFO][3359] ipam/ipam.go 511: Trying affinity for 192.168.10.128/26 host="172.31.25.8" Sep 12 23:55:32.838134 containerd[2018]: 2025-09-12 23:55:32.772 [INFO][3359] ipam/ipam.go 158: Attempting to load block cidr=192.168.10.128/26 host="172.31.25.8" Sep 12 23:55:32.838134 containerd[2018]: 2025-09-12 23:55:32.775 [INFO][3359] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.10.128/26 host="172.31.25.8" Sep 12 23:55:32.838134 containerd[2018]: 2025-09-12 23:55:32.775 [INFO][3359] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.10.128/26 handle="k8s-pod-network.437b5c5a012ef92f6642e3fab3b8f08e44f2607bbfdf5747a73d239881b342f5" host="172.31.25.8" Sep 12 23:55:32.838134 containerd[2018]: 2025-09-12 23:55:32.780 [INFO][3359] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.437b5c5a012ef92f6642e3fab3b8f08e44f2607bbfdf5747a73d239881b342f5 Sep 12 23:55:32.838134 containerd[2018]: 2025-09-12 23:55:32.788 [INFO][3359] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.10.128/26 handle="k8s-pod-network.437b5c5a012ef92f6642e3fab3b8f08e44f2607bbfdf5747a73d239881b342f5" host="172.31.25.8" Sep 12 23:55:32.838134 containerd[2018]: 2025-09-12 23:55:32.797 [INFO][3359] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.10.129/26] block=192.168.10.128/26 handle="k8s-pod-network.437b5c5a012ef92f6642e3fab3b8f08e44f2607bbfdf5747a73d239881b342f5" host="172.31.25.8" Sep 12 23:55:32.838134 containerd[2018]: 2025-09-12 23:55:32.797 [INFO][3359] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.10.129/26] handle="k8s-pod-network.437b5c5a012ef92f6642e3fab3b8f08e44f2607bbfdf5747a73d239881b342f5" host="172.31.25.8" Sep 12 23:55:32.838134 containerd[2018]: 2025-09-12 23:55:32.797 [INFO][3359] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 23:55:32.838134 containerd[2018]: 2025-09-12 23:55:32.797 [INFO][3359] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.10.129/26] IPv6=[] ContainerID="437b5c5a012ef92f6642e3fab3b8f08e44f2607bbfdf5747a73d239881b342f5" HandleID="k8s-pod-network.437b5c5a012ef92f6642e3fab3b8f08e44f2607bbfdf5747a73d239881b342f5" Workload="172.31.25.8-k8s-csi--node--driver--sm4qz-eth0" Sep 12 23:55:32.839286 containerd[2018]: 2025-09-12 23:55:32.801 [INFO][3348] cni-plugin/k8s.go 418: Populated endpoint ContainerID="437b5c5a012ef92f6642e3fab3b8f08e44f2607bbfdf5747a73d239881b342f5" Namespace="calico-system" Pod="csi-node-driver-sm4qz" WorkloadEndpoint="172.31.25.8-k8s-csi--node--driver--sm4qz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.25.8-k8s-csi--node--driver--sm4qz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d02977a6-a9b2-4f00-a209-ffd36b3b9de2", ResourceVersion:"1222", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 55, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.25.8", ContainerID:"", Pod:"csi-node-driver-sm4qz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.10.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calicd945af7968", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:55:32.839286 containerd[2018]: 2025-09-12 23:55:32.801 [INFO][3348] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.10.129/32] ContainerID="437b5c5a012ef92f6642e3fab3b8f08e44f2607bbfdf5747a73d239881b342f5" Namespace="calico-system" Pod="csi-node-driver-sm4qz" WorkloadEndpoint="172.31.25.8-k8s-csi--node--driver--sm4qz-eth0" Sep 12 23:55:32.839286 containerd[2018]: 2025-09-12 23:55:32.801 [INFO][3348] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicd945af7968 ContainerID="437b5c5a012ef92f6642e3fab3b8f08e44f2607bbfdf5747a73d239881b342f5" Namespace="calico-system" Pod="csi-node-driver-sm4qz" WorkloadEndpoint="172.31.25.8-k8s-csi--node--driver--sm4qz-eth0" Sep 12 23:55:32.839286 containerd[2018]: 2025-09-12 23:55:32.809 [INFO][3348] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="437b5c5a012ef92f6642e3fab3b8f08e44f2607bbfdf5747a73d239881b342f5" Namespace="calico-system" Pod="csi-node-driver-sm4qz" WorkloadEndpoint="172.31.25.8-k8s-csi--node--driver--sm4qz-eth0" Sep 12 23:55:32.839286 containerd[2018]: 2025-09-12 23:55:32.810 [INFO][3348] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="437b5c5a012ef92f6642e3fab3b8f08e44f2607bbfdf5747a73d239881b342f5" Namespace="calico-system" Pod="csi-node-driver-sm4qz" 
WorkloadEndpoint="172.31.25.8-k8s-csi--node--driver--sm4qz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.25.8-k8s-csi--node--driver--sm4qz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d02977a6-a9b2-4f00-a209-ffd36b3b9de2", ResourceVersion:"1222", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 55, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.25.8", ContainerID:"437b5c5a012ef92f6642e3fab3b8f08e44f2607bbfdf5747a73d239881b342f5", Pod:"csi-node-driver-sm4qz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.10.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calicd945af7968", MAC:"e6:af:12:7e:e3:05", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:55:32.839286 containerd[2018]: 2025-09-12 23:55:32.831 [INFO][3348] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="437b5c5a012ef92f6642e3fab3b8f08e44f2607bbfdf5747a73d239881b342f5" Namespace="calico-system" Pod="csi-node-driver-sm4qz" WorkloadEndpoint="172.31.25.8-k8s-csi--node--driver--sm4qz-eth0" Sep 12 23:55:32.874407 containerd[2018]: time="2025-09-12T23:55:32.873185111Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 23:55:32.874407 containerd[2018]: time="2025-09-12T23:55:32.873272327Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 23:55:32.874407 containerd[2018]: time="2025-09-12T23:55:32.873299375Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:55:32.874407 containerd[2018]: time="2025-09-12T23:55:32.873446087Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:55:32.923887 systemd[1]: Started cri-containerd-437b5c5a012ef92f6642e3fab3b8f08e44f2607bbfdf5747a73d239881b342f5.scope - libcontainer container 437b5c5a012ef92f6642e3fab3b8f08e44f2607bbfdf5747a73d239881b342f5. 
Sep 12 23:55:32.965551 containerd[2018]: time="2025-09-12T23:55:32.965404415Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sm4qz,Uid:d02977a6-a9b2-4f00-a209-ffd36b3b9de2,Namespace:calico-system,Attempt:1,} returns sandbox id \"437b5c5a012ef92f6642e3fab3b8f08e44f2607bbfdf5747a73d239881b342f5\"" Sep 12 23:55:32.968962 containerd[2018]: time="2025-09-12T23:55:32.968683391Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 12 23:55:33.262397 kubelet[2469]: E0912 23:55:33.262319 2469 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 12 23:55:33.410746 containerd[2018]: time="2025-09-12T23:55:33.410112238Z" level=info msg="StopPodSandbox for \"0f326ba7ea352785ac22756062c4fa2697a72e0a3098dadef93bb9d8e228999a\"" Sep 12 23:55:33.549337 containerd[2018]: 2025-09-12 23:55:33.485 [INFO][3431] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0f326ba7ea352785ac22756062c4fa2697a72e0a3098dadef93bb9d8e228999a" Sep 12 23:55:33.549337 containerd[2018]: 2025-09-12 23:55:33.485 [INFO][3431] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0f326ba7ea352785ac22756062c4fa2697a72e0a3098dadef93bb9d8e228999a" iface="eth0" netns="/var/run/netns/cni-226b8a23-a036-eb61-b5be-1e7682688f8c" Sep 12 23:55:33.549337 containerd[2018]: 2025-09-12 23:55:33.486 [INFO][3431] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0f326ba7ea352785ac22756062c4fa2697a72e0a3098dadef93bb9d8e228999a" iface="eth0" netns="/var/run/netns/cni-226b8a23-a036-eb61-b5be-1e7682688f8c" Sep 12 23:55:33.549337 containerd[2018]: 2025-09-12 23:55:33.486 [INFO][3431] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="0f326ba7ea352785ac22756062c4fa2697a72e0a3098dadef93bb9d8e228999a" iface="eth0" netns="/var/run/netns/cni-226b8a23-a036-eb61-b5be-1e7682688f8c" Sep 12 23:55:33.549337 containerd[2018]: 2025-09-12 23:55:33.486 [INFO][3431] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0f326ba7ea352785ac22756062c4fa2697a72e0a3098dadef93bb9d8e228999a" Sep 12 23:55:33.549337 containerd[2018]: 2025-09-12 23:55:33.486 [INFO][3431] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0f326ba7ea352785ac22756062c4fa2697a72e0a3098dadef93bb9d8e228999a" Sep 12 23:55:33.549337 containerd[2018]: 2025-09-12 23:55:33.521 [INFO][3438] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0f326ba7ea352785ac22756062c4fa2697a72e0a3098dadef93bb9d8e228999a" HandleID="k8s-pod-network.0f326ba7ea352785ac22756062c4fa2697a72e0a3098dadef93bb9d8e228999a" Workload="172.31.25.8-k8s-nginx--deployment--8587fbcb89--tjw74-eth0" Sep 12 23:55:33.549337 containerd[2018]: 2025-09-12 23:55:33.522 [INFO][3438] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:55:33.549337 containerd[2018]: 2025-09-12 23:55:33.522 [INFO][3438] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 23:55:33.549337 containerd[2018]: 2025-09-12 23:55:33.540 [WARNING][3438] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0f326ba7ea352785ac22756062c4fa2697a72e0a3098dadef93bb9d8e228999a" HandleID="k8s-pod-network.0f326ba7ea352785ac22756062c4fa2697a72e0a3098dadef93bb9d8e228999a" Workload="172.31.25.8-k8s-nginx--deployment--8587fbcb89--tjw74-eth0" Sep 12 23:55:33.549337 containerd[2018]: 2025-09-12 23:55:33.540 [INFO][3438] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0f326ba7ea352785ac22756062c4fa2697a72e0a3098dadef93bb9d8e228999a" HandleID="k8s-pod-network.0f326ba7ea352785ac22756062c4fa2697a72e0a3098dadef93bb9d8e228999a" Workload="172.31.25.8-k8s-nginx--deployment--8587fbcb89--tjw74-eth0" Sep 12 23:55:33.549337 containerd[2018]: 2025-09-12 23:55:33.543 [INFO][3438] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 23:55:33.549337 containerd[2018]: 2025-09-12 23:55:33.546 [INFO][3431] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0f326ba7ea352785ac22756062c4fa2697a72e0a3098dadef93bb9d8e228999a" Sep 12 23:55:33.550841 containerd[2018]: time="2025-09-12T23:55:33.550636282Z" level=info msg="TearDown network for sandbox \"0f326ba7ea352785ac22756062c4fa2697a72e0a3098dadef93bb9d8e228999a\" successfully" Sep 12 23:55:33.550841 containerd[2018]: time="2025-09-12T23:55:33.550680322Z" level=info msg="StopPodSandbox for \"0f326ba7ea352785ac22756062c4fa2697a72e0a3098dadef93bb9d8e228999a\" returns successfully" Sep 12 23:55:33.552313 containerd[2018]: time="2025-09-12T23:55:33.551814874Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-tjw74,Uid:2190141f-17c2-4525-8bd3-99903586602a,Namespace:default,Attempt:1,}" Sep 12 23:55:33.602782 systemd[1]: run-containerd-runc-k8s.io-437b5c5a012ef92f6642e3fab3b8f08e44f2607bbfdf5747a73d239881b342f5-runc.xdHhP5.mount: Deactivated successfully. Sep 12 23:55:33.602976 systemd[1]: run-netns-cni\x2d226b8a23\x2da036\x2deb61\x2db5be\x2d1e7682688f8c.mount: Deactivated successfully. 
Sep 12 23:55:33.789884 systemd-networkd[1930]: cali47e5ea9a46c: Link UP Sep 12 23:55:33.792166 systemd-networkd[1930]: cali47e5ea9a46c: Gained carrier Sep 12 23:55:33.820892 containerd[2018]: 2025-09-12 23:55:33.653 [INFO][3446] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.25.8-k8s-nginx--deployment--8587fbcb89--tjw74-eth0 nginx-deployment-8587fbcb89- default 2190141f-17c2-4525-8bd3-99903586602a 1233 0 2025-09-12 23:55:18 +0000 UTC map[app:nginx pod-template-hash:8587fbcb89 projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 172.31.25.8 nginx-deployment-8587fbcb89-tjw74 eth0 default [] [] [kns.default ksa.default.default] cali47e5ea9a46c [] [] }} ContainerID="84b22734f00ee9930edb01a4e397193f3a1b7196ce8d0cc931f01c5ae086077d" Namespace="default" Pod="nginx-deployment-8587fbcb89-tjw74" WorkloadEndpoint="172.31.25.8-k8s-nginx--deployment--8587fbcb89--tjw74-" Sep 12 23:55:33.820892 containerd[2018]: 2025-09-12 23:55:33.653 [INFO][3446] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="84b22734f00ee9930edb01a4e397193f3a1b7196ce8d0cc931f01c5ae086077d" Namespace="default" Pod="nginx-deployment-8587fbcb89-tjw74" WorkloadEndpoint="172.31.25.8-k8s-nginx--deployment--8587fbcb89--tjw74-eth0" Sep 12 23:55:33.820892 containerd[2018]: 2025-09-12 23:55:33.700 [INFO][3458] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="84b22734f00ee9930edb01a4e397193f3a1b7196ce8d0cc931f01c5ae086077d" HandleID="k8s-pod-network.84b22734f00ee9930edb01a4e397193f3a1b7196ce8d0cc931f01c5ae086077d" Workload="172.31.25.8-k8s-nginx--deployment--8587fbcb89--tjw74-eth0" Sep 12 23:55:33.820892 containerd[2018]: 2025-09-12 23:55:33.700 [INFO][3458] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="84b22734f00ee9930edb01a4e397193f3a1b7196ce8d0cc931f01c5ae086077d" HandleID="k8s-pod-network.84b22734f00ee9930edb01a4e397193f3a1b7196ce8d0cc931f01c5ae086077d" Workload="172.31.25.8-k8s-nginx--deployment--8587fbcb89--tjw74-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3650), Attrs:map[string]string{"namespace":"default", "node":"172.31.25.8", "pod":"nginx-deployment-8587fbcb89-tjw74", "timestamp":"2025-09-12 23:55:33.700451351 +0000 UTC"}, Hostname:"172.31.25.8", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 23:55:33.820892 containerd[2018]: 2025-09-12 23:55:33.700 [INFO][3458] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:55:33.820892 containerd[2018]: 2025-09-12 23:55:33.700 [INFO][3458] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 23:55:33.820892 containerd[2018]: 2025-09-12 23:55:33.701 [INFO][3458] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.25.8' Sep 12 23:55:33.820892 containerd[2018]: 2025-09-12 23:55:33.720 [INFO][3458] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.84b22734f00ee9930edb01a4e397193f3a1b7196ce8d0cc931f01c5ae086077d" host="172.31.25.8" Sep 12 23:55:33.820892 containerd[2018]: 2025-09-12 23:55:33.729 [INFO][3458] ipam/ipam.go 394: Looking up existing affinities for host host="172.31.25.8" Sep 12 23:55:33.820892 containerd[2018]: 2025-09-12 23:55:33.740 [INFO][3458] ipam/ipam.go 511: Trying affinity for 192.168.10.128/26 host="172.31.25.8" Sep 12 23:55:33.820892 containerd[2018]: 2025-09-12 23:55:33.746 [INFO][3458] ipam/ipam.go 158: Attempting to load block cidr=192.168.10.128/26 host="172.31.25.8" Sep 12 23:55:33.820892 containerd[2018]: 2025-09-12 23:55:33.750 [INFO][3458] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.10.128/26 host="172.31.25.8" Sep 12 23:55:33.820892 containerd[2018]: 2025-09-12 23:55:33.750 [INFO][3458] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.10.128/26 handle="k8s-pod-network.84b22734f00ee9930edb01a4e397193f3a1b7196ce8d0cc931f01c5ae086077d" host="172.31.25.8" Sep 12 23:55:33.820892 containerd[2018]: 2025-09-12 23:55:33.753 [INFO][3458] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.84b22734f00ee9930edb01a4e397193f3a1b7196ce8d0cc931f01c5ae086077d Sep 12 23:55:33.820892 containerd[2018]: 2025-09-12 23:55:33.762 [INFO][3458] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.10.128/26 handle="k8s-pod-network.84b22734f00ee9930edb01a4e397193f3a1b7196ce8d0cc931f01c5ae086077d" host="172.31.25.8" Sep 12 23:55:33.820892 containerd[2018]: 2025-09-12 23:55:33.780 [INFO][3458] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.10.130/26] block=192.168.10.128/26 handle="k8s-pod-network.84b22734f00ee9930edb01a4e397193f3a1b7196ce8d0cc931f01c5ae086077d" host="172.31.25.8" Sep 12 23:55:33.820892 containerd[2018]: 2025-09-12 23:55:33.780 [INFO][3458] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.10.130/26] handle="k8s-pod-network.84b22734f00ee9930edb01a4e397193f3a1b7196ce8d0cc931f01c5ae086077d" host="172.31.25.8" Sep 12 23:55:33.820892 containerd[2018]: 2025-09-12 23:55:33.780 [INFO][3458] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 23:55:33.820892 containerd[2018]: 2025-09-12 23:55:33.780 [INFO][3458] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.10.130/26] IPv6=[] ContainerID="84b22734f00ee9930edb01a4e397193f3a1b7196ce8d0cc931f01c5ae086077d" HandleID="k8s-pod-network.84b22734f00ee9930edb01a4e397193f3a1b7196ce8d0cc931f01c5ae086077d" Workload="172.31.25.8-k8s-nginx--deployment--8587fbcb89--tjw74-eth0" Sep 12 23:55:33.822014 containerd[2018]: 2025-09-12 23:55:33.783 [INFO][3446] cni-plugin/k8s.go 418: Populated endpoint ContainerID="84b22734f00ee9930edb01a4e397193f3a1b7196ce8d0cc931f01c5ae086077d" Namespace="default" Pod="nginx-deployment-8587fbcb89-tjw74" WorkloadEndpoint="172.31.25.8-k8s-nginx--deployment--8587fbcb89--tjw74-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.25.8-k8s-nginx--deployment--8587fbcb89--tjw74-eth0", GenerateName:"nginx-deployment-8587fbcb89-", Namespace:"default", SelfLink:"", UID:"2190141f-17c2-4525-8bd3-99903586602a", ResourceVersion:"1233", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 55, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8587fbcb89", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.25.8", ContainerID:"", Pod:"nginx-deployment-8587fbcb89-tjw74", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.10.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali47e5ea9a46c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:55:33.822014 containerd[2018]: 2025-09-12 23:55:33.783 [INFO][3446] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.10.130/32] ContainerID="84b22734f00ee9930edb01a4e397193f3a1b7196ce8d0cc931f01c5ae086077d" Namespace="default" Pod="nginx-deployment-8587fbcb89-tjw74" WorkloadEndpoint="172.31.25.8-k8s-nginx--deployment--8587fbcb89--tjw74-eth0" Sep 12 23:55:33.822014 containerd[2018]: 2025-09-12 23:55:33.783 [INFO][3446] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali47e5ea9a46c ContainerID="84b22734f00ee9930edb01a4e397193f3a1b7196ce8d0cc931f01c5ae086077d" Namespace="default" Pod="nginx-deployment-8587fbcb89-tjw74" WorkloadEndpoint="172.31.25.8-k8s-nginx--deployment--8587fbcb89--tjw74-eth0" Sep 12 23:55:33.822014 containerd[2018]: 2025-09-12 23:55:33.792 [INFO][3446] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="84b22734f00ee9930edb01a4e397193f3a1b7196ce8d0cc931f01c5ae086077d" Namespace="default" Pod="nginx-deployment-8587fbcb89-tjw74" WorkloadEndpoint="172.31.25.8-k8s-nginx--deployment--8587fbcb89--tjw74-eth0" Sep 12 23:55:33.822014 containerd[2018]: 2025-09-12 23:55:33.793 [INFO][3446] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="84b22734f00ee9930edb01a4e397193f3a1b7196ce8d0cc931f01c5ae086077d" Namespace="default" Pod="nginx-deployment-8587fbcb89-tjw74" 
WorkloadEndpoint="172.31.25.8-k8s-nginx--deployment--8587fbcb89--tjw74-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.25.8-k8s-nginx--deployment--8587fbcb89--tjw74-eth0", GenerateName:"nginx-deployment-8587fbcb89-", Namespace:"default", SelfLink:"", UID:"2190141f-17c2-4525-8bd3-99903586602a", ResourceVersion:"1233", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 55, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8587fbcb89", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.25.8", ContainerID:"84b22734f00ee9930edb01a4e397193f3a1b7196ce8d0cc931f01c5ae086077d", Pod:"nginx-deployment-8587fbcb89-tjw74", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.10.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali47e5ea9a46c", MAC:"b6:46:f0:12:ca:51", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:55:33.822014 containerd[2018]: 2025-09-12 23:55:33.814 [INFO][3446] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="84b22734f00ee9930edb01a4e397193f3a1b7196ce8d0cc931f01c5ae086077d" Namespace="default" Pod="nginx-deployment-8587fbcb89-tjw74" WorkloadEndpoint="172.31.25.8-k8s-nginx--deployment--8587fbcb89--tjw74-eth0" Sep 12 23:55:33.861197 containerd[2018]: time="2025-09-12T23:55:33.861057108Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 23:55:33.861444 containerd[2018]: time="2025-09-12T23:55:33.861309312Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 23:55:33.861444 containerd[2018]: time="2025-09-12T23:55:33.861373836Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:55:33.861961 containerd[2018]: time="2025-09-12T23:55:33.861665688Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:55:33.904874 systemd[1]: Started cri-containerd-84b22734f00ee9930edb01a4e397193f3a1b7196ce8d0cc931f01c5ae086077d.scope - libcontainer container 84b22734f00ee9930edb01a4e397193f3a1b7196ce8d0cc931f01c5ae086077d. 
Sep 12 23:55:33.968405 containerd[2018]: time="2025-09-12T23:55:33.968355516Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-tjw74,Uid:2190141f-17c2-4525-8bd3-99903586602a,Namespace:default,Attempt:1,} returns sandbox id \"84b22734f00ee9930edb01a4e397193f3a1b7196ce8d0cc931f01c5ae086077d\"" Sep 12 23:55:34.263167 kubelet[2469]: E0912 23:55:34.263108 2469 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 12 23:55:34.428693 kubelet[2469]: I0912 23:55:34.426413 2469 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 23:55:34.737236 systemd-networkd[1930]: calicd945af7968: Gained IPv6LL Sep 12 23:55:34.807319 containerd[2018]: time="2025-09-12T23:55:34.807232633Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:55:34.810163 containerd[2018]: time="2025-09-12T23:55:34.810098833Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8227489" Sep 12 23:55:34.814884 containerd[2018]: time="2025-09-12T23:55:34.814814185Z" level=info msg="ImageCreate event name:\"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:55:34.819707 containerd[2018]: time="2025-09-12T23:55:34.819636217Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:55:34.821653 containerd[2018]: time="2025-09-12T23:55:34.821257429Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"9596730\" in 1.852521274s" Sep 12 23:55:34.821653 containerd[2018]: time="2025-09-12T23:55:34.821314417Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\"" Sep 12 23:55:34.824196 containerd[2018]: time="2025-09-12T23:55:34.823854541Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Sep 12 23:55:34.825140 containerd[2018]: time="2025-09-12T23:55:34.825079741Z" level=info msg="CreateContainer within sandbox \"437b5c5a012ef92f6642e3fab3b8f08e44f2607bbfdf5747a73d239881b342f5\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 12 23:55:34.860429 containerd[2018]: time="2025-09-12T23:55:34.860342605Z" level=info msg="CreateContainer within sandbox \"437b5c5a012ef92f6642e3fab3b8f08e44f2607bbfdf5747a73d239881b342f5\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"9adf94919cb45b26880a6221a1d9331a9a7107f94dc8340457fc48d0580f720f\"" Sep 12 23:55:34.861385 containerd[2018]: time="2025-09-12T23:55:34.861332077Z" level=info msg="StartContainer for \"9adf94919cb45b26880a6221a1d9331a9a7107f94dc8340457fc48d0580f720f\"" Sep 12 23:55:34.923827 systemd[1]: Started cri-containerd-9adf94919cb45b26880a6221a1d9331a9a7107f94dc8340457fc48d0580f720f.scope - libcontainer container 9adf94919cb45b26880a6221a1d9331a9a7107f94dc8340457fc48d0580f720f. 
Sep 12 23:55:34.953562 update_engine[2000]: I20250912 23:55:34.952641 2000 update_attempter.cc:509] Updating boot flags... Sep 12 23:55:34.991277 containerd[2018]: time="2025-09-12T23:55:34.990668930Z" level=info msg="StartContainer for \"9adf94919cb45b26880a6221a1d9331a9a7107f94dc8340457fc48d0580f720f\" returns successfully" Sep 12 23:55:35.040164 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (3608) Sep 12 23:55:35.265857 kubelet[2469]: E0912 23:55:35.265534 2469 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 12 23:55:35.330212 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (3610) Sep 12 23:55:35.508829 systemd-networkd[1930]: cali47e5ea9a46c: Gained IPv6LL Sep 12 23:55:36.265884 kubelet[2469]: E0912 23:55:36.265834 2469 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 12 23:55:37.267069 kubelet[2469]: E0912 23:55:37.267008 2469 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 12 23:55:37.730317 ntpd[1993]: Listen normally on 10 calicd945af7968 [fe80::ecee:eeff:feee:eeee%6]:123 Sep 12 23:55:37.731590 ntpd[1993]: 12 Sep 23:55:37 ntpd[1993]: Listen normally on 10 calicd945af7968 [fe80::ecee:eeff:feee:eeee%6]:123 Sep 12 23:55:37.731590 ntpd[1993]: 12 Sep 23:55:37 ntpd[1993]: Listen normally on 11 cali47e5ea9a46c [fe80::ecee:eeff:feee:eeee%7]:123 Sep 12 23:55:37.730434 ntpd[1993]: Listen normally on 11 cali47e5ea9a46c [fe80::ecee:eeff:feee:eeee%7]:123 Sep 12 23:55:38.253962 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2610206425.mount: Deactivated successfully. 
Sep 12 23:55:38.267978 kubelet[2469]: E0912 23:55:38.267901 2469 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 12 23:55:39.268710 kubelet[2469]: E0912 23:55:39.268641 2469 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 12 23:55:39.858935 containerd[2018]: time="2025-09-12T23:55:39.858844530Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:55:39.861321 containerd[2018]: time="2025-09-12T23:55:39.861180462Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=69986522" Sep 12 23:55:39.864565 containerd[2018]: time="2025-09-12T23:55:39.863596722Z" level=info msg="ImageCreate event name:\"sha256:9fddf21fd9c2634e7bf6e633e36b0fb227f6cd5fbe1b3334a16de3ab50f31e5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:55:39.873007 containerd[2018]: time="2025-09-12T23:55:39.871996674Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:883ca821a91fc20bcde818eeee4e1ed55ef63a020d6198ecd5a03af5a4eac530\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:55:39.874561 containerd[2018]: time="2025-09-12T23:55:39.874442958Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:9fddf21fd9c2634e7bf6e633e36b0fb227f6cd5fbe1b3334a16de3ab50f31e5e\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:883ca821a91fc20bcde818eeee4e1ed55ef63a020d6198ecd5a03af5a4eac530\", size \"69986400\" in 5.050530241s" Sep 12 23:55:39.874561 containerd[2018]: time="2025-09-12T23:55:39.874546818Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:9fddf21fd9c2634e7bf6e633e36b0fb227f6cd5fbe1b3334a16de3ab50f31e5e\"" Sep 12 23:55:39.879248 containerd[2018]: time="2025-09-12T23:55:39.879161694Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 12 23:55:39.880886 containerd[2018]: time="2025-09-12T23:55:39.880811262Z" level=info msg="CreateContainer within sandbox \"84b22734f00ee9930edb01a4e397193f3a1b7196ce8d0cc931f01c5ae086077d\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Sep 12 23:55:39.914705 containerd[2018]: time="2025-09-12T23:55:39.914462274Z" level=info msg="CreateContainer within sandbox \"84b22734f00ee9930edb01a4e397193f3a1b7196ce8d0cc931f01c5ae086077d\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"955fb7ce51f34724ae889734bb59ce5b536fc44fe8a61b15ef7233a30ed7ab3c\"" Sep 12 23:55:39.916566 containerd[2018]: time="2025-09-12T23:55:39.916298910Z" level=info msg="StartContainer for \"955fb7ce51f34724ae889734bb59ce5b536fc44fe8a61b15ef7233a30ed7ab3c\"" Sep 12 23:55:39.978908 systemd[1]: Started cri-containerd-955fb7ce51f34724ae889734bb59ce5b536fc44fe8a61b15ef7233a30ed7ab3c.scope - libcontainer container 955fb7ce51f34724ae889734bb59ce5b536fc44fe8a61b15ef7233a30ed7ab3c. 
Sep 12 23:55:40.029467 containerd[2018]: time="2025-09-12T23:55:40.029077683Z" level=info msg="StartContainer for \"955fb7ce51f34724ae889734bb59ce5b536fc44fe8a61b15ef7233a30ed7ab3c\" returns successfully" Sep 12 23:55:40.269668 kubelet[2469]: E0912 23:55:40.269586 2469 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 12 23:55:41.270015 kubelet[2469]: E0912 23:55:41.269944 2469 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 12 23:55:41.311091 containerd[2018]: time="2025-09-12T23:55:41.311011385Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:55:41.312582 containerd[2018]: time="2025-09-12T23:55:41.312482525Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=13761208" Sep 12 23:55:41.313426 containerd[2018]: time="2025-09-12T23:55:41.313358093Z" level=info msg="ImageCreate event name:\"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:55:41.317609 containerd[2018]: time="2025-09-12T23:55:41.317473649Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:55:41.319319 containerd[2018]: time="2025-09-12T23:55:41.319076825Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"15130401\" in 1.439575015s" Sep 12 23:55:41.319319 containerd[2018]: time="2025-09-12T23:55:41.319139873Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\"" Sep 12 23:55:41.323836 containerd[2018]: time="2025-09-12T23:55:41.323776721Z" level=info msg="CreateContainer within sandbox \"437b5c5a012ef92f6642e3fab3b8f08e44f2607bbfdf5747a73d239881b342f5\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 12 23:55:41.345638 containerd[2018]: time="2025-09-12T23:55:41.345495893Z" level=info msg="CreateContainer within sandbox \"437b5c5a012ef92f6642e3fab3b8f08e44f2607bbfdf5747a73d239881b342f5\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"0201ffed28812f11a0bf8b7838ef35739a9003a014fcb9f20c55104377189740\"" Sep 12 23:55:41.346805 containerd[2018]: time="2025-09-12T23:55:41.346749617Z" level=info msg="StartContainer for \"0201ffed28812f11a0bf8b7838ef35739a9003a014fcb9f20c55104377189740\"" Sep 12 23:55:41.415871 systemd[1]: Started cri-containerd-0201ffed28812f11a0bf8b7838ef35739a9003a014fcb9f20c55104377189740.scope - libcontainer container 0201ffed28812f11a0bf8b7838ef35739a9003a014fcb9f20c55104377189740. 
Sep 12 23:55:41.474753 containerd[2018]: time="2025-09-12T23:55:41.474628986Z" level=info msg="StartContainer for \"0201ffed28812f11a0bf8b7838ef35739a9003a014fcb9f20c55104377189740\" returns successfully" Sep 12 23:55:41.686894 kubelet[2469]: I0912 23:55:41.686681 2469 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-sm4qz" podStartSLOduration=29.333513321 podStartE2EDuration="37.686658343s" podCreationTimestamp="2025-09-12 23:55:04 +0000 UTC" firstStartedPulling="2025-09-12 23:55:32.967781183 +0000 UTC m=+29.582070123" lastFinishedPulling="2025-09-12 23:55:41.320926205 +0000 UTC m=+37.935215145" observedRunningTime="2025-09-12 23:55:41.686655991 +0000 UTC m=+38.300944967" watchObservedRunningTime="2025-09-12 23:55:41.686658343 +0000 UTC m=+38.300947367" Sep 12 23:55:41.687105 kubelet[2469]: I0912 23:55:41.686941 2469 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-8587fbcb89-tjw74" podStartSLOduration=17.780283129 podStartE2EDuration="23.686925607s" podCreationTimestamp="2025-09-12 23:55:18 +0000 UTC" firstStartedPulling="2025-09-12 23:55:33.970954908 +0000 UTC m=+30.585243836" lastFinishedPulling="2025-09-12 23:55:39.877597386 +0000 UTC m=+36.491886314" observedRunningTime="2025-09-12 23:55:40.668988762 +0000 UTC m=+37.283277726" watchObservedRunningTime="2025-09-12 23:55:41.686925607 +0000 UTC m=+38.301214547" Sep 12 23:55:42.270204 kubelet[2469]: E0912 23:55:42.270131 2469 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 12 23:55:42.453204 kubelet[2469]: I0912 23:55:42.453151 2469 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 12 23:55:42.453204 kubelet[2469]: I0912 23:55:42.453201 2469 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 12 23:55:43.270391 kubelet[2469]: E0912 23:55:43.270315 2469 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 12 23:55:44.238723 kubelet[2469]: E0912 23:55:44.238661 2469 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 12 23:55:44.271479 kubelet[2469]: E0912 23:55:44.271430 2469 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 12 23:55:45.272166 kubelet[2469]: E0912 23:55:45.272103 2469 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 12 23:55:46.272897 kubelet[2469]: E0912 23:55:46.272831 2469 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 12 23:55:47.273907 kubelet[2469]: E0912 23:55:47.273834 2469 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 12 23:55:48.275084 kubelet[2469]: E0912 23:55:48.275004 2469 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 12 23:55:48.393767 systemd[1]: Created slice kubepods-besteffort-podace03908_0082_4816_b69c_7ad346bf2261.slice - libcontainer container kubepods-besteffort-podace03908_0082_4816_b69c_7ad346bf2261.slice. 
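The two pod_startup_latency_tracker entries above report both an end-to-end duration and a shorter SLO duration. For csi-node-driver-sm4qz the numbers line up with end-to-end time minus the image-pull window; a small Go sketch that reproduces them from the timestamps quoted in the log (timestamps copied verbatim, the program is illustrative only):

// startup_latency.go - recompute podStartE2EDuration and podStartSLOduration
// for csi-node-driver-sm4qz from the timestamps in the log entry above.
package main

import (
    "fmt"
    "time"
)

func mustParse(s string) time.Time {
    t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
    if err != nil {
        panic(err)
    }
    return t
}

func main() {
    created := mustParse("2025-09-12 23:55:04 +0000 UTC")
    firstPull := mustParse("2025-09-12 23:55:32.967781183 +0000 UTC")
    lastPull := mustParse("2025-09-12 23:55:41.320926205 +0000 UTC")
    running := mustParse("2025-09-12 23:55:41.686658343 +0000 UTC")

    e2e := running.Sub(created)          // podStartE2EDuration
    slo := e2e - lastPull.Sub(firstPull) // end-to-end minus time spent pulling images
    fmt.Println("E2E:", e2e, "SLO:", slo)
}

It prints 37.686658343s and 29.333513321s, matching podStartE2EDuration and podStartSLOduration in the log.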
Sep 12 23:55:48.477946 kubelet[2469]: I0912 23:55:48.477832 2469 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mlnt\" (UniqueName: \"kubernetes.io/projected/ace03908-0082-4816-b69c-7ad346bf2261-kube-api-access-8mlnt\") pod \"nfs-server-provisioner-0\" (UID: \"ace03908-0082-4816-b69c-7ad346bf2261\") " pod="default/nfs-server-provisioner-0" Sep 12 23:55:48.477946 kubelet[2469]: I0912 23:55:48.477913 2469 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/ace03908-0082-4816-b69c-7ad346bf2261-data\") pod \"nfs-server-provisioner-0\" (UID: \"ace03908-0082-4816-b69c-7ad346bf2261\") " pod="default/nfs-server-provisioner-0" Sep 12 23:55:48.699991 containerd[2018]: time="2025-09-12T23:55:48.699852986Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:ace03908-0082-4816-b69c-7ad346bf2261,Namespace:default,Attempt:0,}" Sep 12 23:55:48.951672 systemd-networkd[1930]: cali60e51b789ff: Link UP Sep 12 23:55:48.952095 systemd-networkd[1930]: cali60e51b789ff: Gained carrier Sep 12 23:55:48.957294 (udev-worker)[3947]: Network interface NamePolicy= disabled on kernel command line. Sep 12 23:55:48.977290 containerd[2018]: 2025-09-12 23:55:48.803 [INFO][3929] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.25.8-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default ace03908-0082-4816-b69c-7ad346bf2261 1341 0 2025-09-12 23:55:48 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 172.31.25.8 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] [] }} ContainerID="5bb4f97ea0809fdc881ad79a8863d6db122dfa86839b4ae99858b18d0a25362c" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.25.8-k8s-nfs--server--provisioner--0-" Sep 12 23:55:48.977290 containerd[2018]: 2025-09-12 23:55:48.803 [INFO][3929] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5bb4f97ea0809fdc881ad79a8863d6db122dfa86839b4ae99858b18d0a25362c" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.25.8-k8s-nfs--server--provisioner--0-eth0" Sep 12 23:55:48.977290 containerd[2018]: 2025-09-12 23:55:48.856 [INFO][3940] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5bb4f97ea0809fdc881ad79a8863d6db122dfa86839b4ae99858b18d0a25362c" HandleID="k8s-pod-network.5bb4f97ea0809fdc881ad79a8863d6db122dfa86839b4ae99858b18d0a25362c" Workload="172.31.25.8-k8s-nfs--server--provisioner--0-eth0" Sep 12 23:55:48.977290 containerd[2018]: 2025-09-12 23:55:48.856 [INFO][3940] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5bb4f97ea0809fdc881ad79a8863d6db122dfa86839b4ae99858b18d0a25362c" 
HandleID="k8s-pod-network.5bb4f97ea0809fdc881ad79a8863d6db122dfa86839b4ae99858b18d0a25362c" Workload="172.31.25.8-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c1610), Attrs:map[string]string{"namespace":"default", "node":"172.31.25.8", "pod":"nfs-server-provisioner-0", "timestamp":"2025-09-12 23:55:48.856415366 +0000 UTC"}, Hostname:"172.31.25.8", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 23:55:48.977290 containerd[2018]: 2025-09-12 23:55:48.856 [INFO][3940] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:55:48.977290 containerd[2018]: 2025-09-12 23:55:48.856 [INFO][3940] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 23:55:48.977290 containerd[2018]: 2025-09-12 23:55:48.856 [INFO][3940] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.25.8' Sep 12 23:55:48.977290 containerd[2018]: 2025-09-12 23:55:48.870 [INFO][3940] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5bb4f97ea0809fdc881ad79a8863d6db122dfa86839b4ae99858b18d0a25362c" host="172.31.25.8" Sep 12 23:55:48.977290 containerd[2018]: 2025-09-12 23:55:48.881 [INFO][3940] ipam/ipam.go 394: Looking up existing affinities for host host="172.31.25.8" Sep 12 23:55:48.977290 containerd[2018]: 2025-09-12 23:55:48.891 [INFO][3940] ipam/ipam.go 511: Trying affinity for 192.168.10.128/26 host="172.31.25.8" Sep 12 23:55:48.977290 containerd[2018]: 2025-09-12 23:55:48.895 [INFO][3940] ipam/ipam.go 158: Attempting to load block cidr=192.168.10.128/26 host="172.31.25.8" Sep 12 23:55:48.977290 containerd[2018]: 2025-09-12 23:55:48.900 [INFO][3940] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.10.128/26 host="172.31.25.8" Sep 12 23:55:48.977290 containerd[2018]: 2025-09-12 23:55:48.900 [INFO][3940] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.10.128/26 handle="k8s-pod-network.5bb4f97ea0809fdc881ad79a8863d6db122dfa86839b4ae99858b18d0a25362c" host="172.31.25.8" Sep 12 23:55:48.977290 containerd[2018]: 2025-09-12 23:55:48.902 [INFO][3940] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5bb4f97ea0809fdc881ad79a8863d6db122dfa86839b4ae99858b18d0a25362c Sep 12 23:55:48.977290 containerd[2018]: 2025-09-12 23:55:48.911 [INFO][3940] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.10.128/26 handle="k8s-pod-network.5bb4f97ea0809fdc881ad79a8863d6db122dfa86839b4ae99858b18d0a25362c" host="172.31.25.8" Sep 12 23:55:48.977290 containerd[2018]: 2025-09-12 23:55:48.942 [INFO][3940] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.10.131/26] block=192.168.10.128/26 handle="k8s-pod-network.5bb4f97ea0809fdc881ad79a8863d6db122dfa86839b4ae99858b18d0a25362c" host="172.31.25.8" Sep 12 23:55:48.977290 containerd[2018]: 2025-09-12 23:55:48.942 [INFO][3940] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.10.131/26] handle="k8s-pod-network.5bb4f97ea0809fdc881ad79a8863d6db122dfa86839b4ae99858b18d0a25362c" host="172.31.25.8" Sep 12 23:55:48.977290 containerd[2018]: 2025-09-12 23:55:48.942 [INFO][3940] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 23:55:48.977290 containerd[2018]: 2025-09-12 23:55:48.942 [INFO][3940] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.10.131/26] IPv6=[] ContainerID="5bb4f97ea0809fdc881ad79a8863d6db122dfa86839b4ae99858b18d0a25362c" HandleID="k8s-pod-network.5bb4f97ea0809fdc881ad79a8863d6db122dfa86839b4ae99858b18d0a25362c" Workload="172.31.25.8-k8s-nfs--server--provisioner--0-eth0" Sep 12 23:55:48.981003 containerd[2018]: 2025-09-12 23:55:48.945 [INFO][3929] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5bb4f97ea0809fdc881ad79a8863d6db122dfa86839b4ae99858b18d0a25362c" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.25.8-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.25.8-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"ace03908-0082-4816-b69c-7ad346bf2261", ResourceVersion:"1341", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 55, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.25.8", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.10.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:55:48.981003 containerd[2018]: 2025-09-12 23:55:48.945 [INFO][3929] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.10.131/32] ContainerID="5bb4f97ea0809fdc881ad79a8863d6db122dfa86839b4ae99858b18d0a25362c" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.25.8-k8s-nfs--server--provisioner--0-eth0" Sep 12 23:55:48.981003 containerd[2018]: 2025-09-12 23:55:48.945 [INFO][3929] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="5bb4f97ea0809fdc881ad79a8863d6db122dfa86839b4ae99858b18d0a25362c" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.25.8-k8s-nfs--server--provisioner--0-eth0" Sep 12 23:55:48.981003 containerd[2018]: 2025-09-12 23:55:48.951 [INFO][3929] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5bb4f97ea0809fdc881ad79a8863d6db122dfa86839b4ae99858b18d0a25362c" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.25.8-k8s-nfs--server--provisioner--0-eth0" Sep 12 23:55:48.981385 containerd[2018]: 2025-09-12 23:55:48.953 [INFO][3929] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5bb4f97ea0809fdc881ad79a8863d6db122dfa86839b4ae99858b18d0a25362c" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.25.8-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.25.8-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"ace03908-0082-4816-b69c-7ad346bf2261", ResourceVersion:"1341", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 55, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.25.8", ContainerID:"5bb4f97ea0809fdc881ad79a8863d6db122dfa86839b4ae99858b18d0a25362c", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.10.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"92:70:1a:10:4a:82", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:55:48.981385 containerd[2018]: 2025-09-12 23:55:48.973 [INFO][3929] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5bb4f97ea0809fdc881ad79a8863d6db122dfa86839b4ae99858b18d0a25362c" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.25.8-k8s-nfs--server--provisioner--0-eth0" Sep 12 23:55:49.020728 containerd[2018]: time="2025-09-12T23:55:49.020545583Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 23:55:49.020728 containerd[2018]: time="2025-09-12T23:55:49.020675135Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 23:55:49.021091 containerd[2018]: time="2025-09-12T23:55:49.020705711Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:55:49.021091 containerd[2018]: time="2025-09-12T23:55:49.020880731Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:55:49.065851 systemd[1]: Started cri-containerd-5bb4f97ea0809fdc881ad79a8863d6db122dfa86839b4ae99858b18d0a25362c.scope - libcontainer container 5bb4f97ea0809fdc881ad79a8863d6db122dfa86839b4ae99858b18d0a25362c. 
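In the endpoint dump above the nfs-server-provisioner ports are printed as hex Go struct values (Port:0x801 and so on). A tiny Go snippet mapping them back to the decimal ports declared in the earlier "found existing endpoint" line (illustrative only):

// ports.go - decode the hex port values from the WorkloadEndpoint dump above.
package main

import "fmt"

func main() {
    ports := []struct {
        name string
        port uint16
    }{
        {"nfs", 0x801},       // 2049
        {"nlockmgr", 0x8023}, // 32803
        {"mountd", 0x4e50},   // 20048
        {"rquotad", 0x36b},   // 875
        {"rpcbind", 0x6f},    // 111
        {"statd", 0x296},     // 662
    }
    for _, p := range ports {
        fmt.Printf("%-9s %d\n", p.name, p.port)
    }
}

The hex values decode to the same 2049/32803/20048/875/111/662 set listed for the pod's TCP and UDP ports.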
Sep 12 23:55:49.125303 containerd[2018]: time="2025-09-12T23:55:49.125155764Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:ace03908-0082-4816-b69c-7ad346bf2261,Namespace:default,Attempt:0,} returns sandbox id \"5bb4f97ea0809fdc881ad79a8863d6db122dfa86839b4ae99858b18d0a25362c\"" Sep 12 23:55:49.127890 containerd[2018]: time="2025-09-12T23:55:49.127668108Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Sep 12 23:55:49.275489 kubelet[2469]: E0912 23:55:49.275426 2469 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 12 23:55:50.276340 kubelet[2469]: E0912 23:55:50.276273 2469 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 12 23:55:50.735689 systemd-networkd[1930]: cali60e51b789ff: Gained IPv6LL Sep 12 23:55:51.278706 kubelet[2469]: E0912 23:55:51.278637 2469 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 12 23:55:51.675704 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3327537115.mount: Deactivated successfully. Sep 12 23:55:52.279802 kubelet[2469]: E0912 23:55:52.279618 2469 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 12 23:55:53.280212 kubelet[2469]: E0912 23:55:53.280154 2469 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 12 23:55:53.730326 ntpd[1993]: Listen normally on 12 cali60e51b789ff [fe80::ecee:eeff:feee:eeee%8]:123 Sep 12 23:55:53.731279 ntpd[1993]: 12 Sep 23:55:53 ntpd[1993]: Listen normally on 12 cali60e51b789ff [fe80::ecee:eeff:feee:eeee%8]:123 Sep 12 23:55:54.281282 kubelet[2469]: E0912 23:55:54.281200 2469 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 12 23:55:54.902011 containerd[2018]: time="2025-09-12T23:55:54.901927028Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:55:54.904093 containerd[2018]: time="2025-09-12T23:55:54.904003352Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=87373623" Sep 12 23:55:54.906635 containerd[2018]: time="2025-09-12T23:55:54.906505508Z" level=info msg="ImageCreate event name:\"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:55:54.914940 containerd[2018]: time="2025-09-12T23:55:54.914655921Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:55:54.914940 containerd[2018]: time="2025-09-12T23:55:54.914926221Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"87371201\" in 5.787199937s" Sep 12 23:55:54.915463 containerd[2018]: 
time="2025-09-12T23:55:54.914979165Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\"" Sep 12 23:55:54.921555 containerd[2018]: time="2025-09-12T23:55:54.921464433Z" level=info msg="CreateContainer within sandbox \"5bb4f97ea0809fdc881ad79a8863d6db122dfa86839b4ae99858b18d0a25362c\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Sep 12 23:55:54.950057 containerd[2018]: time="2025-09-12T23:55:54.949970289Z" level=info msg="CreateContainer within sandbox \"5bb4f97ea0809fdc881ad79a8863d6db122dfa86839b4ae99858b18d0a25362c\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"f6dec1a6b95cea28f549343a1ecf36f569aeb42b05724bf64de986fe0effc039\"" Sep 12 23:55:54.951452 containerd[2018]: time="2025-09-12T23:55:54.951403245Z" level=info msg="StartContainer for \"f6dec1a6b95cea28f549343a1ecf36f569aeb42b05724bf64de986fe0effc039\"" Sep 12 23:55:55.013869 systemd[1]: Started cri-containerd-f6dec1a6b95cea28f549343a1ecf36f569aeb42b05724bf64de986fe0effc039.scope - libcontainer container f6dec1a6b95cea28f549343a1ecf36f569aeb42b05724bf64de986fe0effc039. Sep 12 23:55:55.063257 containerd[2018]: time="2025-09-12T23:55:55.062715665Z" level=info msg="StartContainer for \"f6dec1a6b95cea28f549343a1ecf36f569aeb42b05724bf64de986fe0effc039\" returns successfully" Sep 12 23:55:55.281801 kubelet[2469]: E0912 23:55:55.281718 2469 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 12 23:55:56.282439 kubelet[2469]: E0912 23:55:56.282374 2469 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 12 23:55:57.282929 kubelet[2469]: E0912 23:55:57.282870 2469 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 12 23:55:58.283324 kubelet[2469]: E0912 23:55:58.283262 2469 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 12 23:55:59.283656 kubelet[2469]: E0912 23:55:59.283592 2469 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 12 23:56:00.284362 kubelet[2469]: E0912 23:56:00.284285 2469 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 12 23:56:01.284730 kubelet[2469]: E0912 23:56:01.284656 2469 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 12 23:56:02.285375 kubelet[2469]: E0912 23:56:02.285303 2469 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 12 23:56:03.285543 kubelet[2469]: E0912 23:56:03.285448 2469 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 12 23:56:04.238403 kubelet[2469]: E0912 23:56:04.238333 2469 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 12 23:56:04.286044 kubelet[2469]: E0912 23:56:04.285973 2469 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 12 23:56:04.293625 containerd[2018]: time="2025-09-12T23:56:04.292929291Z" level=info msg="StopPodSandbox for 
\"3f09cb0dfb761e21e1252871b9504c58802e944d6a2e0cdf055d8115188eb370\"" Sep 12 23:56:04.418144 containerd[2018]: 2025-09-12 23:56:04.355 [WARNING][4100] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="3f09cb0dfb761e21e1252871b9504c58802e944d6a2e0cdf055d8115188eb370" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.25.8-k8s-csi--node--driver--sm4qz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d02977a6-a9b2-4f00-a209-ffd36b3b9de2", ResourceVersion:"1299", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 55, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.25.8", ContainerID:"437b5c5a012ef92f6642e3fab3b8f08e44f2607bbfdf5747a73d239881b342f5", Pod:"csi-node-driver-sm4qz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.10.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calicd945af7968", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:56:04.418144 containerd[2018]: 2025-09-12 23:56:04.357 [INFO][4100] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3f09cb0dfb761e21e1252871b9504c58802e944d6a2e0cdf055d8115188eb370" Sep 12 23:56:04.418144 containerd[2018]: 2025-09-12 23:56:04.357 [INFO][4100] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3f09cb0dfb761e21e1252871b9504c58802e944d6a2e0cdf055d8115188eb370" iface="eth0" netns="" Sep 12 23:56:04.418144 containerd[2018]: 2025-09-12 23:56:04.357 [INFO][4100] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3f09cb0dfb761e21e1252871b9504c58802e944d6a2e0cdf055d8115188eb370" Sep 12 23:56:04.418144 containerd[2018]: 2025-09-12 23:56:04.357 [INFO][4100] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3f09cb0dfb761e21e1252871b9504c58802e944d6a2e0cdf055d8115188eb370" Sep 12 23:56:04.418144 containerd[2018]: 2025-09-12 23:56:04.393 [INFO][4107] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3f09cb0dfb761e21e1252871b9504c58802e944d6a2e0cdf055d8115188eb370" HandleID="k8s-pod-network.3f09cb0dfb761e21e1252871b9504c58802e944d6a2e0cdf055d8115188eb370" Workload="172.31.25.8-k8s-csi--node--driver--sm4qz-eth0" Sep 12 23:56:04.418144 containerd[2018]: 2025-09-12 23:56:04.393 [INFO][4107] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:56:04.418144 containerd[2018]: 2025-09-12 23:56:04.394 [INFO][4107] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 23:56:04.418144 containerd[2018]: 2025-09-12 23:56:04.407 [WARNING][4107] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3f09cb0dfb761e21e1252871b9504c58802e944d6a2e0cdf055d8115188eb370" HandleID="k8s-pod-network.3f09cb0dfb761e21e1252871b9504c58802e944d6a2e0cdf055d8115188eb370" Workload="172.31.25.8-k8s-csi--node--driver--sm4qz-eth0" Sep 12 23:56:04.418144 containerd[2018]: 2025-09-12 23:56:04.407 [INFO][4107] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3f09cb0dfb761e21e1252871b9504c58802e944d6a2e0cdf055d8115188eb370" HandleID="k8s-pod-network.3f09cb0dfb761e21e1252871b9504c58802e944d6a2e0cdf055d8115188eb370" Workload="172.31.25.8-k8s-csi--node--driver--sm4qz-eth0" Sep 12 23:56:04.418144 containerd[2018]: 2025-09-12 23:56:04.409 [INFO][4107] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 23:56:04.418144 containerd[2018]: 2025-09-12 23:56:04.415 [INFO][4100] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3f09cb0dfb761e21e1252871b9504c58802e944d6a2e0cdf055d8115188eb370" Sep 12 23:56:04.419030 containerd[2018]: time="2025-09-12T23:56:04.418193668Z" level=info msg="TearDown network for sandbox \"3f09cb0dfb761e21e1252871b9504c58802e944d6a2e0cdf055d8115188eb370\" successfully" Sep 12 23:56:04.419030 containerd[2018]: time="2025-09-12T23:56:04.418233268Z" level=info msg="StopPodSandbox for \"3f09cb0dfb761e21e1252871b9504c58802e944d6a2e0cdf055d8115188eb370\" returns successfully" Sep 12 23:56:04.419462 containerd[2018]: time="2025-09-12T23:56:04.419397328Z" level=info msg="RemovePodSandbox for \"3f09cb0dfb761e21e1252871b9504c58802e944d6a2e0cdf055d8115188eb370\"" Sep 12 23:56:04.419613 containerd[2018]: time="2025-09-12T23:56:04.419477692Z" level=info msg="Forcibly stopping sandbox \"3f09cb0dfb761e21e1252871b9504c58802e944d6a2e0cdf055d8115188eb370\"" Sep 12 23:56:04.483254 systemd[1]: run-containerd-runc-k8s.io-0d8c4a4020cc85ed09cb80c54cfc7e181cde00dd7d3f97fe5bb974c80bb409b8-runc.H583MR.mount: Deactivated successfully. Sep 12 23:56:04.605190 containerd[2018]: 2025-09-12 23:56:04.520 [WARNING][4123] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3f09cb0dfb761e21e1252871b9504c58802e944d6a2e0cdf055d8115188eb370" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.25.8-k8s-csi--node--driver--sm4qz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d02977a6-a9b2-4f00-a209-ffd36b3b9de2", ResourceVersion:"1299", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 55, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.25.8", ContainerID:"437b5c5a012ef92f6642e3fab3b8f08e44f2607bbfdf5747a73d239881b342f5", Pod:"csi-node-driver-sm4qz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.10.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calicd945af7968", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:56:04.605190 containerd[2018]: 2025-09-12 23:56:04.520 [INFO][4123] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3f09cb0dfb761e21e1252871b9504c58802e944d6a2e0cdf055d8115188eb370" Sep 12 23:56:04.605190 containerd[2018]: 2025-09-12 23:56:04.520 [INFO][4123] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3f09cb0dfb761e21e1252871b9504c58802e944d6a2e0cdf055d8115188eb370" iface="eth0" netns="" Sep 12 23:56:04.605190 containerd[2018]: 2025-09-12 23:56:04.520 [INFO][4123] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3f09cb0dfb761e21e1252871b9504c58802e944d6a2e0cdf055d8115188eb370" Sep 12 23:56:04.605190 containerd[2018]: 2025-09-12 23:56:04.520 [INFO][4123] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3f09cb0dfb761e21e1252871b9504c58802e944d6a2e0cdf055d8115188eb370" Sep 12 23:56:04.605190 containerd[2018]: 2025-09-12 23:56:04.566 [INFO][4152] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3f09cb0dfb761e21e1252871b9504c58802e944d6a2e0cdf055d8115188eb370" HandleID="k8s-pod-network.3f09cb0dfb761e21e1252871b9504c58802e944d6a2e0cdf055d8115188eb370" Workload="172.31.25.8-k8s-csi--node--driver--sm4qz-eth0" Sep 12 23:56:04.605190 containerd[2018]: 2025-09-12 23:56:04.566 [INFO][4152] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:56:04.605190 containerd[2018]: 2025-09-12 23:56:04.567 [INFO][4152] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 23:56:04.605190 containerd[2018]: 2025-09-12 23:56:04.581 [WARNING][4152] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3f09cb0dfb761e21e1252871b9504c58802e944d6a2e0cdf055d8115188eb370" HandleID="k8s-pod-network.3f09cb0dfb761e21e1252871b9504c58802e944d6a2e0cdf055d8115188eb370" Workload="172.31.25.8-k8s-csi--node--driver--sm4qz-eth0" Sep 12 23:56:04.605190 containerd[2018]: 2025-09-12 23:56:04.581 [INFO][4152] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3f09cb0dfb761e21e1252871b9504c58802e944d6a2e0cdf055d8115188eb370" HandleID="k8s-pod-network.3f09cb0dfb761e21e1252871b9504c58802e944d6a2e0cdf055d8115188eb370" Workload="172.31.25.8-k8s-csi--node--driver--sm4qz-eth0" Sep 12 23:56:04.605190 containerd[2018]: 2025-09-12 23:56:04.597 [INFO][4152] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 23:56:04.605190 containerd[2018]: 2025-09-12 23:56:04.601 [INFO][4123] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3f09cb0dfb761e21e1252871b9504c58802e944d6a2e0cdf055d8115188eb370" Sep 12 23:56:04.607286 containerd[2018]: time="2025-09-12T23:56:04.605255417Z" level=info msg="TearDown network for sandbox \"3f09cb0dfb761e21e1252871b9504c58802e944d6a2e0cdf055d8115188eb370\" successfully" Sep 12 23:56:04.611967 containerd[2018]: time="2025-09-12T23:56:04.611897921Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3f09cb0dfb761e21e1252871b9504c58802e944d6a2e0cdf055d8115188eb370\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 23:56:04.612127 containerd[2018]: time="2025-09-12T23:56:04.611990129Z" level=info msg="RemovePodSandbox \"3f09cb0dfb761e21e1252871b9504c58802e944d6a2e0cdf055d8115188eb370\" returns successfully" Sep 12 23:56:04.613819 containerd[2018]: time="2025-09-12T23:56:04.613358309Z" level=info msg="StopPodSandbox for \"0f326ba7ea352785ac22756062c4fa2697a72e0a3098dadef93bb9d8e228999a\"" Sep 12 23:56:04.761950 containerd[2018]: 2025-09-12 23:56:04.696 [WARNING][4169] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0f326ba7ea352785ac22756062c4fa2697a72e0a3098dadef93bb9d8e228999a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.25.8-k8s-nginx--deployment--8587fbcb89--tjw74-eth0", GenerateName:"nginx-deployment-8587fbcb89-", Namespace:"default", SelfLink:"", UID:"2190141f-17c2-4525-8bd3-99903586602a", ResourceVersion:"1290", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 55, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8587fbcb89", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.25.8", ContainerID:"84b22734f00ee9930edb01a4e397193f3a1b7196ce8d0cc931f01c5ae086077d", Pod:"nginx-deployment-8587fbcb89-tjw74", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.10.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali47e5ea9a46c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:56:04.761950 containerd[2018]: 2025-09-12 23:56:04.697 [INFO][4169] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0f326ba7ea352785ac22756062c4fa2697a72e0a3098dadef93bb9d8e228999a" Sep 12 23:56:04.761950 containerd[2018]: 2025-09-12 23:56:04.697 [INFO][4169] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0f326ba7ea352785ac22756062c4fa2697a72e0a3098dadef93bb9d8e228999a" iface="eth0" netns="" Sep 12 23:56:04.761950 containerd[2018]: 2025-09-12 23:56:04.697 [INFO][4169] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0f326ba7ea352785ac22756062c4fa2697a72e0a3098dadef93bb9d8e228999a" Sep 12 23:56:04.761950 containerd[2018]: 2025-09-12 23:56:04.697 [INFO][4169] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0f326ba7ea352785ac22756062c4fa2697a72e0a3098dadef93bb9d8e228999a" Sep 12 23:56:04.761950 containerd[2018]: 2025-09-12 23:56:04.740 [INFO][4176] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0f326ba7ea352785ac22756062c4fa2697a72e0a3098dadef93bb9d8e228999a" HandleID="k8s-pod-network.0f326ba7ea352785ac22756062c4fa2697a72e0a3098dadef93bb9d8e228999a" Workload="172.31.25.8-k8s-nginx--deployment--8587fbcb89--tjw74-eth0" Sep 12 23:56:04.761950 containerd[2018]: 2025-09-12 23:56:04.740 [INFO][4176] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:56:04.761950 containerd[2018]: 2025-09-12 23:56:04.740 [INFO][4176] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 23:56:04.761950 containerd[2018]: 2025-09-12 23:56:04.754 [WARNING][4176] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0f326ba7ea352785ac22756062c4fa2697a72e0a3098dadef93bb9d8e228999a" HandleID="k8s-pod-network.0f326ba7ea352785ac22756062c4fa2697a72e0a3098dadef93bb9d8e228999a" Workload="172.31.25.8-k8s-nginx--deployment--8587fbcb89--tjw74-eth0" Sep 12 23:56:04.761950 containerd[2018]: 2025-09-12 23:56:04.754 [INFO][4176] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0f326ba7ea352785ac22756062c4fa2697a72e0a3098dadef93bb9d8e228999a" HandleID="k8s-pod-network.0f326ba7ea352785ac22756062c4fa2697a72e0a3098dadef93bb9d8e228999a" Workload="172.31.25.8-k8s-nginx--deployment--8587fbcb89--tjw74-eth0" Sep 12 23:56:04.761950 containerd[2018]: 2025-09-12 23:56:04.757 [INFO][4176] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 23:56:04.761950 containerd[2018]: 2025-09-12 23:56:04.759 [INFO][4169] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0f326ba7ea352785ac22756062c4fa2697a72e0a3098dadef93bb9d8e228999a" Sep 12 23:56:04.763214 containerd[2018]: time="2025-09-12T23:56:04.762984317Z" level=info msg="TearDown network for sandbox \"0f326ba7ea352785ac22756062c4fa2697a72e0a3098dadef93bb9d8e228999a\" successfully" Sep 12 23:56:04.763214 containerd[2018]: time="2025-09-12T23:56:04.763044689Z" level=info msg="StopPodSandbox for \"0f326ba7ea352785ac22756062c4fa2697a72e0a3098dadef93bb9d8e228999a\" returns successfully" Sep 12 23:56:04.763988 containerd[2018]: time="2025-09-12T23:56:04.763912877Z" level=info msg="RemovePodSandbox for \"0f326ba7ea352785ac22756062c4fa2697a72e0a3098dadef93bb9d8e228999a\"" Sep 12 23:56:04.764104 containerd[2018]: time="2025-09-12T23:56:04.763986161Z" level=info msg="Forcibly stopping sandbox \"0f326ba7ea352785ac22756062c4fa2697a72e0a3098dadef93bb9d8e228999a\"" Sep 12 23:56:04.896350 containerd[2018]: 2025-09-12 23:56:04.839 [WARNING][4190] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0f326ba7ea352785ac22756062c4fa2697a72e0a3098dadef93bb9d8e228999a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.25.8-k8s-nginx--deployment--8587fbcb89--tjw74-eth0", GenerateName:"nginx-deployment-8587fbcb89-", Namespace:"default", SelfLink:"", UID:"2190141f-17c2-4525-8bd3-99903586602a", ResourceVersion:"1290", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 55, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8587fbcb89", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.25.8", ContainerID:"84b22734f00ee9930edb01a4e397193f3a1b7196ce8d0cc931f01c5ae086077d", Pod:"nginx-deployment-8587fbcb89-tjw74", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.10.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali47e5ea9a46c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:56:04.896350 containerd[2018]: 2025-09-12 23:56:04.840 [INFO][4190] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0f326ba7ea352785ac22756062c4fa2697a72e0a3098dadef93bb9d8e228999a" Sep 12 23:56:04.896350 containerd[2018]: 2025-09-12 23:56:04.840 [INFO][4190] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0f326ba7ea352785ac22756062c4fa2697a72e0a3098dadef93bb9d8e228999a" iface="eth0" netns="" Sep 12 23:56:04.896350 containerd[2018]: 2025-09-12 23:56:04.840 [INFO][4190] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0f326ba7ea352785ac22756062c4fa2697a72e0a3098dadef93bb9d8e228999a" Sep 12 23:56:04.896350 containerd[2018]: 2025-09-12 23:56:04.840 [INFO][4190] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0f326ba7ea352785ac22756062c4fa2697a72e0a3098dadef93bb9d8e228999a" Sep 12 23:56:04.896350 containerd[2018]: 2025-09-12 23:56:04.875 [INFO][4197] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0f326ba7ea352785ac22756062c4fa2697a72e0a3098dadef93bb9d8e228999a" HandleID="k8s-pod-network.0f326ba7ea352785ac22756062c4fa2697a72e0a3098dadef93bb9d8e228999a" Workload="172.31.25.8-k8s-nginx--deployment--8587fbcb89--tjw74-eth0" Sep 12 23:56:04.896350 containerd[2018]: 2025-09-12 23:56:04.875 [INFO][4197] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:56:04.896350 containerd[2018]: 2025-09-12 23:56:04.876 [INFO][4197] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 23:56:04.896350 containerd[2018]: 2025-09-12 23:56:04.888 [WARNING][4197] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0f326ba7ea352785ac22756062c4fa2697a72e0a3098dadef93bb9d8e228999a" HandleID="k8s-pod-network.0f326ba7ea352785ac22756062c4fa2697a72e0a3098dadef93bb9d8e228999a" Workload="172.31.25.8-k8s-nginx--deployment--8587fbcb89--tjw74-eth0" Sep 12 23:56:04.896350 containerd[2018]: 2025-09-12 23:56:04.888 [INFO][4197] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0f326ba7ea352785ac22756062c4fa2697a72e0a3098dadef93bb9d8e228999a" HandleID="k8s-pod-network.0f326ba7ea352785ac22756062c4fa2697a72e0a3098dadef93bb9d8e228999a" Workload="172.31.25.8-k8s-nginx--deployment--8587fbcb89--tjw74-eth0" Sep 12 23:56:04.896350 containerd[2018]: 2025-09-12 23:56:04.891 [INFO][4197] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 23:56:04.896350 containerd[2018]: 2025-09-12 23:56:04.893 [INFO][4190] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0f326ba7ea352785ac22756062c4fa2697a72e0a3098dadef93bb9d8e228999a" Sep 12 23:56:04.896350 containerd[2018]: time="2025-09-12T23:56:04.896316378Z" level=info msg="TearDown network for sandbox \"0f326ba7ea352785ac22756062c4fa2697a72e0a3098dadef93bb9d8e228999a\" successfully" Sep 12 23:56:04.902262 containerd[2018]: time="2025-09-12T23:56:04.902147430Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0f326ba7ea352785ac22756062c4fa2697a72e0a3098dadef93bb9d8e228999a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 23:56:04.902262 containerd[2018]: time="2025-09-12T23:56:04.902250222Z" level=info msg="RemovePodSandbox \"0f326ba7ea352785ac22756062c4fa2697a72e0a3098dadef93bb9d8e228999a\" returns successfully" Sep 12 23:56:05.286868 kubelet[2469]: E0912 23:56:05.286805 2469 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 12 23:56:06.287744 kubelet[2469]: E0912 23:56:06.287682 2469 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 12 23:56:07.288941 kubelet[2469]: E0912 23:56:07.288871 2469 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 12 23:56:08.289656 kubelet[2469]: E0912 23:56:08.289574 2469 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 12 23:56:09.290008 kubelet[2469]: E0912 23:56:09.289942 2469 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 12 23:56:10.290343 kubelet[2469]: E0912 23:56:10.290274 2469 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 12 23:56:11.290713 kubelet[2469]: E0912 23:56:11.290634 2469 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 12 23:56:12.291394 kubelet[2469]: E0912 23:56:12.291316 2469 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 12 23:56:13.292610 kubelet[2469]: E0912 23:56:13.292547 2469 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 12 23:56:14.293156 kubelet[2469]: E0912 23:56:14.293074 2469 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 12 23:56:15.294325 
kubelet[2469]: E0912 23:56:15.294198 2469 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 12 23:56:16.295113 kubelet[2469]: E0912 23:56:16.294738 2469 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 12 23:56:17.295645 kubelet[2469]: E0912 23:56:17.295587 2469 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 12 23:56:18.296853 kubelet[2469]: E0912 23:56:18.296775 2469 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 12 23:56:19.297050 kubelet[2469]: E0912 23:56:19.296968 2469 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 12 23:56:19.645448 kubelet[2469]: I0912 23:56:19.645220 2469 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=25.853295186 podStartE2EDuration="31.645199507s" podCreationTimestamp="2025-09-12 23:55:48 +0000 UTC" firstStartedPulling="2025-09-12 23:55:49.126958308 +0000 UTC m=+45.741247272" lastFinishedPulling="2025-09-12 23:55:54.918862653 +0000 UTC m=+51.533151593" observedRunningTime="2025-09-12 23:55:55.708589112 +0000 UTC m=+52.322878076" watchObservedRunningTime="2025-09-12 23:56:19.645199507 +0000 UTC m=+76.259488447" Sep 12 23:56:19.656822 systemd[1]: Created slice kubepods-besteffort-poda1f036f5_4958_4b3e_b822_8b6b45312f0e.slice - libcontainer container kubepods-besteffort-poda1f036f5_4958_4b3e_b822_8b6b45312f0e.slice. Sep 12 23:56:19.843164 kubelet[2469]: I0912 23:56:19.843005 2469 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-5d94301e-0ee9-4494-b2f9-91c29cf19d11\" (UniqueName: \"kubernetes.io/nfs/a1f036f5-4958-4b3e-b822-8b6b45312f0e-pvc-5d94301e-0ee9-4494-b2f9-91c29cf19d11\") pod \"test-pod-1\" (UID: \"a1f036f5-4958-4b3e-b822-8b6b45312f0e\") " pod="default/test-pod-1" Sep 12 23:56:19.843164 kubelet[2469]: I0912 23:56:19.843093 2469 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gd2jj\" (UniqueName: \"kubernetes.io/projected/a1f036f5-4958-4b3e-b822-8b6b45312f0e-kube-api-access-gd2jj\") pod \"test-pod-1\" (UID: \"a1f036f5-4958-4b3e-b822-8b6b45312f0e\") " pod="default/test-pod-1" Sep 12 23:56:19.981020 kernel: FS-Cache: Loaded Sep 12 23:56:20.028647 kernel: RPC: Registered named UNIX socket transport module. Sep 12 23:56:20.028780 kernel: RPC: Registered udp transport module. Sep 12 23:56:20.028829 kernel: RPC: Registered tcp transport module. Sep 12 23:56:20.028872 kernel: RPC: Registered tcp-with-tls transport module. Sep 12 23:56:20.029700 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. 
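The kubelet error that repeats throughout this log ("Unable to read config path ... /etc/kubernetes/manifests") only means that the static-pod manifest directory named by kubelet's staticPodPath does not exist on this node, so the file-based pod source is ignored and re-polled. A minimal Go sketch of that existence check (illustrative; this is not kubelet source):

// manifests_check.go - check whether the static-pod manifest directory exists,
// mirroring the condition behind the repeated kubelet error above.
package main

import (
    "fmt"
    "os"
)

func main() {
    const path = "/etc/kubernetes/manifests"
    if _, err := os.Stat(path); os.IsNotExist(err) {
        fmt.Println(path, "does not exist; kubelet logs the error and keeps polling")
        return
    }
    fmt.Println(path, "exists; manifests placed here would be picked up as static pods")
}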
Sep 12 23:56:20.297560 kubelet[2469]: E0912 23:56:20.297362 2469 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 12 23:56:20.354966 kernel: NFS: Registering the id_resolver key type Sep 12 23:56:20.355122 kernel: Key type id_resolver registered Sep 12 23:56:20.355160 kernel: Key type id_legacy registered Sep 12 23:56:20.394212 nfsidmap[4244]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal' Sep 12 23:56:20.401121 nfsidmap[4245]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal' Sep 12 23:56:20.564461 containerd[2018]: time="2025-09-12T23:56:20.563615300Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:a1f036f5-4958-4b3e-b822-8b6b45312f0e,Namespace:default,Attempt:0,}" Sep 12 23:56:20.773271 (udev-worker)[4230]: Network interface NamePolicy= disabled on kernel command line. Sep 12 23:56:20.776381 systemd-networkd[1930]: cali5ec59c6bf6e: Link UP Sep 12 23:56:20.779240 systemd-networkd[1930]: cali5ec59c6bf6e: Gained carrier Sep 12 23:56:20.801787 containerd[2018]: 2025-09-12 23:56:20.656 [INFO][4246] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.25.8-k8s-test--pod--1-eth0 default a1f036f5-4958-4b3e-b822-8b6b45312f0e 1450 0 2025-09-12 23:55:49 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 172.31.25.8 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] [] }} ContainerID="3efe9e36667efc0aa7608f0ba4c510bcc25299dcd3afca4973a82cebe43c0d1d" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.25.8-k8s-test--pod--1-" Sep 12 23:56:20.801787 containerd[2018]: 2025-09-12 23:56:20.656 [INFO][4246] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3efe9e36667efc0aa7608f0ba4c510bcc25299dcd3afca4973a82cebe43c0d1d" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.25.8-k8s-test--pod--1-eth0" Sep 12 23:56:20.801787 containerd[2018]: 2025-09-12 23:56:20.704 [INFO][4258] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3efe9e36667efc0aa7608f0ba4c510bcc25299dcd3afca4973a82cebe43c0d1d" HandleID="k8s-pod-network.3efe9e36667efc0aa7608f0ba4c510bcc25299dcd3afca4973a82cebe43c0d1d" Workload="172.31.25.8-k8s-test--pod--1-eth0" Sep 12 23:56:20.801787 containerd[2018]: 2025-09-12 23:56:20.704 [INFO][4258] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3efe9e36667efc0aa7608f0ba4c510bcc25299dcd3afca4973a82cebe43c0d1d" HandleID="k8s-pod-network.3efe9e36667efc0aa7608f0ba4c510bcc25299dcd3afca4973a82cebe43c0d1d" Workload="172.31.25.8-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002cb7a0), Attrs:map[string]string{"namespace":"default", "node":"172.31.25.8", "pod":"test-pod-1", "timestamp":"2025-09-12 23:56:20.704386293 +0000 UTC"}, Hostname:"172.31.25.8", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 23:56:20.801787 containerd[2018]: 2025-09-12 23:56:20.704 [INFO][4258] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 12 23:56:20.801787 containerd[2018]: 2025-09-12 23:56:20.705 [INFO][4258] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 12 23:56:20.801787 containerd[2018]: 2025-09-12 23:56:20.705 [INFO][4258] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.25.8'
Sep 12 23:56:20.801787 containerd[2018]: 2025-09-12 23:56:20.721 [INFO][4258] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3efe9e36667efc0aa7608f0ba4c510bcc25299dcd3afca4973a82cebe43c0d1d" host="172.31.25.8"
Sep 12 23:56:20.801787 containerd[2018]: 2025-09-12 23:56:20.730 [INFO][4258] ipam/ipam.go 394: Looking up existing affinities for host host="172.31.25.8"
Sep 12 23:56:20.801787 containerd[2018]: 2025-09-12 23:56:20.737 [INFO][4258] ipam/ipam.go 511: Trying affinity for 192.168.10.128/26 host="172.31.25.8"
Sep 12 23:56:20.801787 containerd[2018]: 2025-09-12 23:56:20.740 [INFO][4258] ipam/ipam.go 158: Attempting to load block cidr=192.168.10.128/26 host="172.31.25.8"
Sep 12 23:56:20.801787 containerd[2018]: 2025-09-12 23:56:20.743 [INFO][4258] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.10.128/26 host="172.31.25.8"
Sep 12 23:56:20.801787 containerd[2018]: 2025-09-12 23:56:20.744 [INFO][4258] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.10.128/26 handle="k8s-pod-network.3efe9e36667efc0aa7608f0ba4c510bcc25299dcd3afca4973a82cebe43c0d1d" host="172.31.25.8"
Sep 12 23:56:20.801787 containerd[2018]: 2025-09-12 23:56:20.746 [INFO][4258] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3efe9e36667efc0aa7608f0ba4c510bcc25299dcd3afca4973a82cebe43c0d1d
Sep 12 23:56:20.801787 containerd[2018]: 2025-09-12 23:56:20.753 [INFO][4258] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.10.128/26 handle="k8s-pod-network.3efe9e36667efc0aa7608f0ba4c510bcc25299dcd3afca4973a82cebe43c0d1d" host="172.31.25.8"
Sep 12 23:56:20.801787 containerd[2018]: 2025-09-12 23:56:20.765 [INFO][4258] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.10.132/26] block=192.168.10.128/26 handle="k8s-pod-network.3efe9e36667efc0aa7608f0ba4c510bcc25299dcd3afca4973a82cebe43c0d1d" host="172.31.25.8"
Sep 12 23:56:20.801787 containerd[2018]: 2025-09-12 23:56:20.766 [INFO][4258] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.10.132/26] handle="k8s-pod-network.3efe9e36667efc0aa7608f0ba4c510bcc25299dcd3afca4973a82cebe43c0d1d" host="172.31.25.8"
Sep 12 23:56:20.801787 containerd[2018]: 2025-09-12 23:56:20.766 [INFO][4258] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 12 23:56:20.801787 containerd[2018]: 2025-09-12 23:56:20.766 [INFO][4258] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.10.132/26] IPv6=[] ContainerID="3efe9e36667efc0aa7608f0ba4c510bcc25299dcd3afca4973a82cebe43c0d1d" HandleID="k8s-pod-network.3efe9e36667efc0aa7608f0ba4c510bcc25299dcd3afca4973a82cebe43c0d1d" Workload="172.31.25.8-k8s-test--pod--1-eth0"
Sep 12 23:56:20.801787 containerd[2018]: 2025-09-12 23:56:20.769 [INFO][4246] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3efe9e36667efc0aa7608f0ba4c510bcc25299dcd3afca4973a82cebe43c0d1d" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.25.8-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.25.8-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"a1f036f5-4958-4b3e-b822-8b6b45312f0e", ResourceVersion:"1450", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 55, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.25.8", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.10.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 23:56:20.803179 containerd[2018]: 2025-09-12 23:56:20.769 [INFO][4246] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.10.132/32] ContainerID="3efe9e36667efc0aa7608f0ba4c510bcc25299dcd3afca4973a82cebe43c0d1d" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.25.8-k8s-test--pod--1-eth0"
Sep 12 23:56:20.803179 containerd[2018]: 2025-09-12 23:56:20.769 [INFO][4246] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="3efe9e36667efc0aa7608f0ba4c510bcc25299dcd3afca4973a82cebe43c0d1d" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.25.8-k8s-test--pod--1-eth0"
Sep 12 23:56:20.803179 containerd[2018]: 2025-09-12 23:56:20.781 [INFO][4246] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3efe9e36667efc0aa7608f0ba4c510bcc25299dcd3afca4973a82cebe43c0d1d" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.25.8-k8s-test--pod--1-eth0"
Sep 12 23:56:20.803179 containerd[2018]: 2025-09-12 23:56:20.781 [INFO][4246] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3efe9e36667efc0aa7608f0ba4c510bcc25299dcd3afca4973a82cebe43c0d1d" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.25.8-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.25.8-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"a1f036f5-4958-4b3e-b822-8b6b45312f0e", ResourceVersion:"1450", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 55, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.25.8", ContainerID:"3efe9e36667efc0aa7608f0ba4c510bcc25299dcd3afca4973a82cebe43c0d1d", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.10.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"6e:f3:ff:ce:53:59", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 23:56:20.803179 containerd[2018]: 2025-09-12 23:56:20.795 [INFO][4246] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3efe9e36667efc0aa7608f0ba4c510bcc25299dcd3afca4973a82cebe43c0d1d" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.25.8-k8s-test--pod--1-eth0"
Sep 12 23:56:20.843785 containerd[2018]: time="2025-09-12T23:56:20.843464241Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 12 23:56:20.845502 containerd[2018]: time="2025-09-12T23:56:20.845153457Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 12 23:56:20.845502 containerd[2018]: time="2025-09-12T23:56:20.845217177Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 23:56:20.846933 containerd[2018]: time="2025-09-12T23:56:20.845425713Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 23:56:20.878924 systemd[1]: Started cri-containerd-3efe9e36667efc0aa7608f0ba4c510bcc25299dcd3afca4973a82cebe43c0d1d.scope - libcontainer container 3efe9e36667efc0aa7608f0ba4c510bcc25299dcd3afca4973a82cebe43c0d1d.
Sep 12 23:56:20.956161 containerd[2018]: time="2025-09-12T23:56:20.956012062Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:a1f036f5-4958-4b3e-b822-8b6b45312f0e,Namespace:default,Attempt:0,} returns sandbox id \"3efe9e36667efc0aa7608f0ba4c510bcc25299dcd3afca4973a82cebe43c0d1d\""
Sep 12 23:56:20.959956 containerd[2018]: time="2025-09-12T23:56:20.959505250Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Sep 12 23:56:21.269584 containerd[2018]: time="2025-09-12T23:56:21.269186611Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 23:56:21.272177 containerd[2018]: time="2025-09-12T23:56:21.271377079Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61"
Sep 12 23:56:21.278036 containerd[2018]: time="2025-09-12T23:56:21.277949767Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:9fddf21fd9c2634e7bf6e633e36b0fb227f6cd5fbe1b3334a16de3ab50f31e5e\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:883ca821a91fc20bcde818eeee4e1ed55ef63a020d6198ecd5a03af5a4eac530\", size \"69986400\" in 318.075433ms"
Sep 12 23:56:21.278036 containerd[2018]: time="2025-09-12T23:56:21.278020195Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:9fddf21fd9c2634e7bf6e633e36b0fb227f6cd5fbe1b3334a16de3ab50f31e5e\""
Sep 12 23:56:21.281927 containerd[2018]: time="2025-09-12T23:56:21.281857795Z" level=info msg="CreateContainer within sandbox \"3efe9e36667efc0aa7608f0ba4c510bcc25299dcd3afca4973a82cebe43c0d1d\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Sep 12 23:56:21.298392 kubelet[2469]: E0912 23:56:21.298322 2469 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 12 23:56:21.313182 containerd[2018]: time="2025-09-12T23:56:21.313117256Z" level=info msg="CreateContainer within sandbox \"3efe9e36667efc0aa7608f0ba4c510bcc25299dcd3afca4973a82cebe43c0d1d\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"0ceeabeea6d38ed9fdbdd4aa262c99facc0a9f1fcc09ae9b3720717a7ba55abf\""
Sep 12 23:56:21.314605 containerd[2018]: time="2025-09-12T23:56:21.314550536Z" level=info msg="StartContainer for \"0ceeabeea6d38ed9fdbdd4aa262c99facc0a9f1fcc09ae9b3720717a7ba55abf\""
Sep 12 23:56:21.377876 systemd[1]: Started cri-containerd-0ceeabeea6d38ed9fdbdd4aa262c99facc0a9f1fcc09ae9b3720717a7ba55abf.scope - libcontainer container 0ceeabeea6d38ed9fdbdd4aa262c99facc0a9f1fcc09ae9b3720717a7ba55abf.
Sep 12 23:56:21.451315 containerd[2018]: time="2025-09-12T23:56:21.451227920Z" level=info msg="StartContainer for \"0ceeabeea6d38ed9fdbdd4aa262c99facc0a9f1fcc09ae9b3720717a7ba55abf\" returns successfully"
Sep 12 23:56:22.298773 kubelet[2469]: E0912 23:56:22.298703 2469 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 12 23:56:22.350933 systemd-networkd[1930]: cali5ec59c6bf6e: Gained IPv6LL
Sep 12 23:56:23.299336 kubelet[2469]: E0912 23:56:23.299236 2469 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 12 23:56:24.238995 kubelet[2469]: E0912 23:56:24.238920 2469 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 12 23:56:24.300432 kubelet[2469]: E0912 23:56:24.300365 2469 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 12 23:56:24.730187 ntpd[1993]: Listen normally on 13 cali5ec59c6bf6e [fe80::ecee:eeff:feee:eeee%9]:123
Sep 12 23:56:24.732254 ntpd[1993]: 12 Sep 23:56:24 ntpd[1993]: Listen normally on 13 cali5ec59c6bf6e [fe80::ecee:eeff:feee:eeee%9]:123
Sep 12 23:56:25.301546 kubelet[2469]: E0912 23:56:25.301462 2469 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 12 23:56:26.302481 kubelet[2469]: E0912 23:56:26.302417 2469 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 12 23:56:27.302675 kubelet[2469]: E0912 23:56:27.302614 2469 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 12 23:56:28.303212 kubelet[2469]: E0912 23:56:28.303147 2469 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 12 23:56:29.303859 kubelet[2469]: E0912 23:56:29.303793 2469 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 12 23:56:30.304692 kubelet[2469]: E0912 23:56:30.304615 2469 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 12 23:56:31.305404 kubelet[2469]: E0912 23:56:31.305340 2469 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 12 23:56:32.305820 kubelet[2469]: E0912 23:56:32.305761 2469 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 12 23:56:33.306755 kubelet[2469]: E0912 23:56:33.306689 2469 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 12 23:56:34.307666 kubelet[2469]: E0912 23:56:34.307606 2469 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 12 23:56:35.308236 kubelet[2469]: E0912 23:56:35.308167 2469 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 12 23:56:36.308644 kubelet[2469]: E0912 23:56:36.308573 2469 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 12 23:56:37.309642 kubelet[2469]: E0912 23:56:37.309562 2469 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 12 23:56:38.310604 kubelet[2469]: E0912 23:56:38.310530 2469 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 12 23:56:39.311377 kubelet[2469]: E0912 23:56:39.311311 2469 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 12 23:56:40.311885 kubelet[2469]: E0912 23:56:40.311817 2469 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 12 23:56:41.312818 kubelet[2469]: E0912 23:56:41.312753 2469 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 12 23:56:42.312964 kubelet[2469]: E0912 23:56:42.312903 2469 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 12 23:56:43.314103 kubelet[2469]: E0912 23:56:43.314038 2469 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 12 23:56:44.238839 kubelet[2469]: E0912 23:56:44.238775 2469 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 12 23:56:44.314533 kubelet[2469]: E0912 23:56:44.314460 2469 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 12 23:56:45.315587 kubelet[2469]: E0912 23:56:45.315497 2469 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 12 23:56:46.316444 kubelet[2469]: E0912 23:56:46.316376 2469 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 12 23:56:46.384071 kubelet[2469]: E0912 23:56:46.383930 2469 controller.go:195] "Failed to update lease" err="Put \"https://172.31.25.118:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.25.8?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Sep 12 23:56:47.317200 kubelet[2469]: E0912 23:56:47.317135 2469 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 12 23:56:48.317535 kubelet[2469]: E0912 23:56:48.317453 2469 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 12 23:56:49.318481 kubelet[2469]: E0912 23:56:49.318415 2469 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 12 23:56:50.319377 kubelet[2469]: E0912 23:56:50.319314 2469 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 12 23:56:51.320296 kubelet[2469]: E0912 23:56:51.320231 2469 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 12 23:56:52.320994 kubelet[2469]: E0912 23:56:52.320926 2469 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 12 23:56:53.321791 kubelet[2469]: E0912 23:56:53.321716 2469 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 12 23:56:54.322428 kubelet[2469]: E0912 23:56:54.322360 2469 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 12 23:56:55.323033 kubelet[2469]: E0912 23:56:55.322922 2469 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 12 23:56:56.323586 kubelet[2469]: E0912 23:56:56.323506 2469 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 12 23:56:56.384571 kubelet[2469]: E0912 23:56:56.384426 2469 controller.go:195] "Failed to update lease" err="Put \"https://172.31.25.118:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.25.8?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Sep 12 23:56:57.324646 kubelet[2469]: E0912 23:56:57.324579 2469 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 12 23:56:58.325432 kubelet[2469]: E0912 23:56:58.325370 2469 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 12 23:56:59.326144 kubelet[2469]: E0912 23:56:59.326066 2469 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 12 23:57:00.326325 kubelet[2469]: E0912 23:57:00.326257 2469 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 12 23:57:01.326687 kubelet[2469]: E0912 23:57:01.326618 2469 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 12 23:57:02.327064 kubelet[2469]: E0912 23:57:02.326986 2469 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 12 23:57:03.327206 kubelet[2469]: E0912 23:57:03.327112 2469 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 12 23:57:04.238450 kubelet[2469]: E0912 23:57:04.238389 2469 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 12 23:57:04.327842 kubelet[2469]: E0912 23:57:04.327780 2469 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 12 23:57:05.328819 kubelet[2469]: E0912 23:57:05.328754 2469 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 12 23:57:06.329541 kubelet[2469]: E0912 23:57:06.329432 2469 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 12 23:57:06.386046 kubelet[2469]: E0912 23:57:06.385533 2469 controller.go:195] "Failed to update lease" err="Put \"https://172.31.25.118:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.25.8?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Sep 12 23:57:07.330479 kubelet[2469]: E0912 23:57:07.330404 2469 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 12 23:57:08.331667 kubelet[2469]: E0912 23:57:08.331597 2469 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 12 23:57:09.332481 kubelet[2469]: E0912 23:57:09.332411 2469 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 12 23:57:10.332884 kubelet[2469]: E0912 23:57:10.332809 2469 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 12 23:57:11.333488 kubelet[2469]: E0912 23:57:11.333415 2469 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 12 23:57:11.660833 kubelet[2469]: E0912 23:57:11.658301 2469 controller.go:195] "Failed to update lease" err="Put \"https://172.31.25.118:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.25.8?timeout=10s\": unexpected EOF"
Sep 12 23:57:11.664425 kubelet[2469]: E0912 23:57:11.658568 2469 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.25.118:6443/api/v1/namespaces/calico-system/events\": unexpected EOF" event=<
Sep 12 23:57:11.664425 kubelet[2469]: &Event{ObjectMeta:{calico-node-2z5f9.1864ae52a55a0eb6 calico-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:calico-system,Name:calico-node-2z5f9,UID:54914f00-c291-4c15-b4a9-efa3d9f17293,APIVersion:v1,ResourceVersion:973,FieldPath:spec.containers{calico-node},},Reason:Unhealthy,Message:Readiness probe failed: 2025-09-12 23:57:04.559 [INFO][344] node/health.go 202: Number of node(s) with BGP peering established = 0
Sep 12 23:57:11.664425 kubelet[2469]: calico/node is not ready: BIRD is not ready: BGP not established with 172.31.25.118
Sep 12 23:57:11.664425 kubelet[2469]: ,Source:EventSource{Component:kubelet,Host:172.31.25.8,},FirstTimestamp:2025-09-12 23:57:04.566030006 +0000 UTC m=+121.180318970,LastTimestamp:2025-09-12 23:57:04.566030006 +0000 UTC m=+121.180318970,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.25.8,}
Sep 12 23:57:11.664425 kubelet[2469]: >
Sep 12 23:57:11.667432 kubelet[2469]: E0912 23:57:11.667338 2469 controller.go:195] "Failed to update lease" err="Put \"https://172.31.25.118:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.25.8?timeout=10s\": read tcp 172.31.25.8:44448->172.31.25.118:6443: read: connection reset by peer"
Sep 12 23:57:11.667432 kubelet[2469]: I0912 23:57:11.667401 2469 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
Sep 12 23:57:11.668133 kubelet[2469]: E0912 23:57:11.668087 2469 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.25.118:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.25.8?timeout=10s\": dial tcp 172.31.25.118:6443: connect: connection refused" interval="200ms"
Sep 12 23:57:11.870351 kubelet[2469]: E0912 23:57:11.870300 2469 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.25.118:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.25.8?timeout=10s\": dial tcp 172.31.25.118:6443: connect: connection refused" interval="400ms"
Sep 12 23:57:12.271446 kubelet[2469]: E0912 23:57:12.271376 2469 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.25.118:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.25.8?timeout=10s\": dial tcp 172.31.25.118:6443: connect: connection refused" interval="800ms"
Sep 12 23:57:12.334164 kubelet[2469]: E0912 23:57:12.334085 2469 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 12 23:57:13.334874 kubelet[2469]: E0912 23:57:13.334810 2469 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 12 23:57:14.335425 kubelet[2469]: E0912 23:57:14.335364 2469 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 12 23:57:15.336362 kubelet[2469]: E0912 23:57:15.336295 2469 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 12 23:57:16.336795 kubelet[2469]: E0912 23:57:16.336717 2469 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 12 23:57:17.337671 kubelet[2469]: E0912 23:57:17.337593 2469 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 12 23:57:18.338500 kubelet[2469]: E0912 23:57:18.338418 2469 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 12 23:57:19.339601 kubelet[2469]: E0912 23:57:19.339549 2469 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 12 23:57:20.339787 kubelet[2469]: E0912 23:57:20.339723 2469 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"