Sep 5 23:54:15.252467 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Sep 5 23:54:15.252517 kernel: Linux version 6.6.103-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Sep 5 22:30:47 -00 2025
Sep 5 23:54:15.252543 kernel: KASLR disabled due to lack of seed
Sep 5 23:54:15.252561 kernel: efi: EFI v2.7 by EDK II
Sep 5 23:54:15.252577 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7affea98 MEMRESERVE=0x7852ee18
Sep 5 23:54:15.252593 kernel: ACPI: Early table checksum verification disabled
Sep 5 23:54:15.252611 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Sep 5 23:54:15.252627 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Sep 5 23:54:15.252643 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Sep 5 23:54:15.252659 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Sep 5 23:54:15.252679 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Sep 5 23:54:15.252695 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Sep 5 23:54:15.252711 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Sep 5 23:54:15.252728 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Sep 5 23:54:15.252747 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Sep 5 23:54:15.252767 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Sep 5 23:54:15.252785 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Sep 5 23:54:15.252802 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Sep 5 23:54:15.252818 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Sep 5 23:54:15.252835 kernel: printk: bootconsole [uart0] enabled
Sep 5 23:54:15.252852 kernel: NUMA: Failed to initialise from firmware
Sep 5 23:54:15.252869 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Sep 5 23:54:15.252886 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Sep 5 23:54:15.252902 kernel: Zone ranges:
Sep 5 23:54:15.252919 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Sep 5 23:54:15.252936 kernel: DMA32 empty
Sep 5 23:54:15.252956 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Sep 5 23:54:15.252973 kernel: Movable zone start for each node
Sep 5 23:54:15.252989 kernel: Early memory node ranges
Sep 5 23:54:15.253006 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Sep 5 23:54:15.253023 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Sep 5 23:54:15.253039 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Sep 5 23:54:15.253056 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Sep 5 23:54:15.253073 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Sep 5 23:54:15.253089 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Sep 5 23:54:15.253106 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Sep 5 23:54:15.253122 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Sep 5 23:54:15.253139 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Sep 5 23:54:15.253159 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Sep 5 23:54:15.253177 kernel: psci: probing for conduit method from ACPI.
Sep 5 23:54:15.253200 kernel: psci: PSCIv1.0 detected in firmware.
Sep 5 23:54:15.253218 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 5 23:54:15.253235 kernel: psci: Trusted OS migration not required
Sep 5 23:54:15.253257 kernel: psci: SMC Calling Convention v1.1
Sep 5 23:54:15.253275 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001)
Sep 5 23:54:15.253293 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
Sep 5 23:54:15.253310 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
Sep 5 23:54:15.253328 kernel: pcpu-alloc: [0] 0 [0] 1
Sep 5 23:54:15.253345 kernel: Detected PIPT I-cache on CPU0
Sep 5 23:54:15.253392 kernel: CPU features: detected: GIC system register CPU interface
Sep 5 23:54:15.253411 kernel: CPU features: detected: Spectre-v2
Sep 5 23:54:15.253429 kernel: CPU features: detected: Spectre-v3a
Sep 5 23:54:15.253447 kernel: CPU features: detected: Spectre-BHB
Sep 5 23:54:15.253464 kernel: CPU features: detected: ARM erratum 1742098
Sep 5 23:54:15.253488 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Sep 5 23:54:15.253506 kernel: alternatives: applying boot alternatives
Sep 5 23:54:15.253526 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=ac831c89fe9ee7829b7371dadfb138f8d0e2b31ae3a5a920e0eba13bbab016c3
Sep 5 23:54:15.253545 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 5 23:54:15.253563 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 5 23:54:15.253581 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 5 23:54:15.253599 kernel: Fallback order for Node 0: 0
Sep 5 23:54:15.253616 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Sep 5 23:54:15.253634 kernel: Policy zone: Normal
Sep 5 23:54:15.253651 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 5 23:54:15.253669 kernel: software IO TLB: area num 2.
Sep 5 23:54:15.253691 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Sep 5 23:54:15.253709 kernel: Memory: 3820088K/4030464K available (10304K kernel code, 2186K rwdata, 8108K rodata, 39424K init, 897K bss, 210376K reserved, 0K cma-reserved)
Sep 5 23:54:15.253727 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 5 23:54:15.253744 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 5 23:54:15.253763 kernel: rcu: RCU event tracing is enabled.
Sep 5 23:54:15.253782 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 5 23:54:15.253800 kernel: Trampoline variant of Tasks RCU enabled.
Sep 5 23:54:15.253818 kernel: Tracing variant of Tasks RCU enabled.
Sep 5 23:54:15.253837 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 5 23:54:15.253857 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 5 23:54:15.253876 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 5 23:54:15.253898 kernel: GICv3: 96 SPIs implemented
Sep 5 23:54:15.253918 kernel: GICv3: 0 Extended SPIs implemented
Sep 5 23:54:15.253936 kernel: Root IRQ handler: gic_handle_irq
Sep 5 23:54:15.253954 kernel: GICv3: GICv3 features: 16 PPIs
Sep 5 23:54:15.253973 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Sep 5 23:54:15.253992 kernel: ITS [mem 0x10080000-0x1009ffff]
Sep 5 23:54:15.254010 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1)
Sep 5 23:54:15.254029 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1)
Sep 5 23:54:15.254048 kernel: GICv3: using LPI property table @0x00000004000d0000
Sep 5 23:54:15.254066 kernel: ITS: Using hypervisor restricted LPI range [128]
Sep 5 23:54:15.254085 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000
Sep 5 23:54:15.254104 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 5 23:54:15.254128 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Sep 5 23:54:15.254146 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Sep 5 23:54:15.254164 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Sep 5 23:54:15.254182 kernel: Console: colour dummy device 80x25
Sep 5 23:54:15.254201 kernel: printk: console [tty1] enabled
Sep 5 23:54:15.254219 kernel: ACPI: Core revision 20230628
Sep 5 23:54:15.254237 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Sep 5 23:54:15.254255 kernel: pid_max: default: 32768 minimum: 301
Sep 5 23:54:15.254273 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Sep 5 23:54:15.254295 kernel: landlock: Up and running.
Sep 5 23:54:15.254313 kernel: SELinux: Initializing.
Sep 5 23:54:15.254331 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 5 23:54:15.254403 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 5 23:54:15.254432 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 5 23:54:15.254451 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 5 23:54:15.254470 kernel: rcu: Hierarchical SRCU implementation.
Sep 5 23:54:15.254488 kernel: rcu: Max phase no-delay instances is 400.
Sep 5 23:54:15.254507 kernel: Platform MSI: ITS@0x10080000 domain created
Sep 5 23:54:15.254531 kernel: PCI/MSI: ITS@0x10080000 domain created
Sep 5 23:54:15.254550 kernel: Remapping and enabling EFI services.
Sep 5 23:54:15.254568 kernel: smp: Bringing up secondary CPUs ...
Sep 5 23:54:15.254586 kernel: Detected PIPT I-cache on CPU1
Sep 5 23:54:15.254604 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Sep 5 23:54:15.254622 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000
Sep 5 23:54:15.254641 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Sep 5 23:54:15.254659 kernel: smp: Brought up 1 node, 2 CPUs
Sep 5 23:54:15.254676 kernel: SMP: Total of 2 processors activated.
Sep 5 23:54:15.254694 kernel: CPU features: detected: 32-bit EL0 Support
Sep 5 23:54:15.254716 kernel: CPU features: detected: 32-bit EL1 Support
Sep 5 23:54:15.254734 kernel: CPU features: detected: CRC32 instructions
Sep 5 23:54:15.254763 kernel: CPU: All CPU(s) started at EL1
Sep 5 23:54:15.254785 kernel: alternatives: applying system-wide alternatives
Sep 5 23:54:15.254804 kernel: devtmpfs: initialized
Sep 5 23:54:15.254823 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 5 23:54:15.254841 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 5 23:54:15.254860 kernel: pinctrl core: initialized pinctrl subsystem
Sep 5 23:54:15.254879 kernel: SMBIOS 3.0.0 present.
Sep 5 23:54:15.254903 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Sep 5 23:54:15.254922 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 5 23:54:15.254942 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 5 23:54:15.254961 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 5 23:54:15.254981 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 5 23:54:15.255000 kernel: audit: initializing netlink subsys (disabled)
Sep 5 23:54:15.255019 kernel: audit: type=2000 audit(0.286:1): state=initialized audit_enabled=0 res=1
Sep 5 23:54:15.255042 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 5 23:54:15.255061 kernel: cpuidle: using governor menu
Sep 5 23:54:15.255080 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 5 23:54:15.255099 kernel: ASID allocator initialised with 65536 entries
Sep 5 23:54:15.255117 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 5 23:54:15.255137 kernel: Serial: AMBA PL011 UART driver
Sep 5 23:54:15.255155 kernel: Modules: 17488 pages in range for non-PLT usage
Sep 5 23:54:15.255174 kernel: Modules: 509008 pages in range for PLT usage
Sep 5 23:54:15.255193 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 5 23:54:15.255216 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Sep 5 23:54:15.255235 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Sep 5 23:54:15.255255 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Sep 5 23:54:15.255273 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 5 23:54:15.255293 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Sep 5 23:54:15.255312 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Sep 5 23:54:15.255331 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Sep 5 23:54:15.255377 kernel: ACPI: Added _OSI(Module Device)
Sep 5 23:54:15.255404 kernel: ACPI: Added _OSI(Processor Device)
Sep 5 23:54:15.255458 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 5 23:54:15.255480 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 5 23:54:15.255499 kernel: ACPI: Interpreter enabled
Sep 5 23:54:15.255518 kernel: ACPI: Using GIC for interrupt routing
Sep 5 23:54:15.255537 kernel: ACPI: MCFG table detected, 1 entries
Sep 5 23:54:15.255556 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Sep 5 23:54:15.255858 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 5 23:54:15.256098 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Sep 5 23:54:15.256312 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Sep 5 23:54:15.257134 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Sep 5 23:54:15.257398 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Sep 5 23:54:15.260135 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Sep 5 23:54:15.260176 kernel: acpiphp: Slot [1] registered
Sep 5 23:54:15.260197 kernel: acpiphp: Slot [2] registered
Sep 5 23:54:15.260217 kernel: acpiphp: Slot [3] registered
Sep 5 23:54:15.260236 kernel: acpiphp: Slot [4] registered
Sep 5 23:54:15.260266 kernel: acpiphp: Slot [5] registered
Sep 5 23:54:15.260285 kernel: acpiphp: Slot [6] registered
Sep 5 23:54:15.260304 kernel: acpiphp: Slot [7] registered
Sep 5 23:54:15.260322 kernel: acpiphp: Slot [8] registered
Sep 5 23:54:15.260341 kernel: acpiphp: Slot [9] registered
Sep 5 23:54:15.260397 kernel: acpiphp: Slot [10] registered
Sep 5 23:54:15.260419 kernel: acpiphp: Slot [11] registered
Sep 5 23:54:15.260437 kernel: acpiphp: Slot [12] registered
Sep 5 23:54:15.260456 kernel: acpiphp: Slot [13] registered
Sep 5 23:54:15.260475 kernel: acpiphp: Slot [14] registered
Sep 5 23:54:15.260501 kernel: acpiphp: Slot [15] registered
Sep 5 23:54:15.260520 kernel: acpiphp: Slot [16] registered
Sep 5 23:54:15.260539 kernel: acpiphp: Slot [17] registered
Sep 5 23:54:15.260558 kernel: acpiphp: Slot [18] registered
Sep 5 23:54:15.260576 kernel: acpiphp: Slot [19] registered
Sep 5 23:54:15.260595 kernel: acpiphp: Slot [20] registered
Sep 5 23:54:15.260613 kernel: acpiphp: Slot [21] registered
Sep 5 23:54:15.260632 kernel: acpiphp: Slot [22] registered
Sep 5 23:54:15.260650 kernel: acpiphp: Slot [23] registered
Sep 5 23:54:15.260673 kernel: acpiphp: Slot [24] registered
Sep 5 23:54:15.260692 kernel: acpiphp: Slot [25] registered
Sep 5 23:54:15.260710 kernel: acpiphp: Slot [26] registered
Sep 5 23:54:15.260729 kernel: acpiphp: Slot [27] registered
Sep 5 23:54:15.260748 kernel: acpiphp: Slot [28] registered
Sep 5 23:54:15.260766 kernel: acpiphp: Slot [29] registered
Sep 5 23:54:15.260784 kernel: acpiphp: Slot [30] registered
Sep 5 23:54:15.260803 kernel: acpiphp: Slot [31] registered
Sep 5 23:54:15.260822 kernel: PCI host bridge to bus 0000:00
Sep 5 23:54:15.261075 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Sep 5 23:54:15.261264 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Sep 5 23:54:15.263769 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Sep 5 23:54:15.264008 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Sep 5 23:54:15.264249 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Sep 5 23:54:15.265601 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Sep 5 23:54:15.265837 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Sep 5 23:54:15.266083 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Sep 5 23:54:15.266296 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Sep 5 23:54:15.267413 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Sep 5 23:54:15.267658 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Sep 5 23:54:15.267863 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Sep 5 23:54:15.268089 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Sep 5 23:54:15.268307 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Sep 5 23:54:15.268576 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Sep 5 23:54:15.268793 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Sep 5 23:54:15.269002 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Sep 5 23:54:15.269219 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Sep 5 23:54:15.270168 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Sep 5 23:54:15.270512 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Sep 5 23:54:15.270708 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Sep 5 23:54:15.270896 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Sep 5 23:54:15.271075 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Sep 5 23:54:15.271100 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Sep 5 23:54:15.271120 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Sep 5 23:54:15.271140 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Sep 5 23:54:15.271159 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Sep 5 23:54:15.271177 kernel: iommu: Default domain type: Translated
Sep 5 23:54:15.271196 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 5 23:54:15.271220 kernel: efivars: Registered efivars operations
Sep 5 23:54:15.271239 kernel: vgaarb: loaded
Sep 5 23:54:15.271258 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 5 23:54:15.271276 kernel: VFS: Disk quotas dquot_6.6.0
Sep 5 23:54:15.271295 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 5 23:54:15.271314 kernel: pnp: PnP ACPI init
Sep 5 23:54:15.271545 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Sep 5 23:54:15.271573 kernel: pnp: PnP ACPI: found 1 devices
Sep 5 23:54:15.271600 kernel: NET: Registered PF_INET protocol family
Sep 5 23:54:15.271620 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 5 23:54:15.271640 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 5 23:54:15.271659 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 5 23:54:15.271678 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 5 23:54:15.271697 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 5 23:54:15.271717 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 5 23:54:15.271736 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 5 23:54:15.271754 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 5 23:54:15.271777 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 5 23:54:15.271796 kernel: PCI: CLS 0 bytes, default 64
Sep 5 23:54:15.271815 kernel: kvm [1]: HYP mode not available
Sep 5 23:54:15.271833 kernel: Initialise system trusted keyrings
Sep 5 23:54:15.271852 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 5 23:54:15.271871 kernel: Key type asymmetric registered
Sep 5 23:54:15.271890 kernel: Asymmetric key parser 'x509' registered
Sep 5 23:54:15.271930 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 5 23:54:15.271952 kernel: io scheduler mq-deadline registered
Sep 5 23:54:15.271977 kernel: io scheduler kyber registered
Sep 5 23:54:15.271996 kernel: io scheduler bfq registered
Sep 5 23:54:15.272212 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Sep 5 23:54:15.272241 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Sep 5 23:54:15.272260 kernel: ACPI: button: Power Button [PWRB]
Sep 5 23:54:15.272280 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Sep 5 23:54:15.272298 kernel: ACPI: button: Sleep Button [SLPB]
Sep 5 23:54:15.272317 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 5 23:54:15.272342 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Sep 5 23:54:15.274664 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Sep 5 23:54:15.274694 kernel: printk: console [ttyS0] disabled
Sep 5 23:54:15.274715 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Sep 5 23:54:15.274734 kernel: printk: console [ttyS0] enabled
Sep 5 23:54:15.274753 kernel: printk: bootconsole [uart0] disabled
Sep 5 23:54:15.274771 kernel: thunder_xcv, ver 1.0
Sep 5 23:54:15.274790 kernel: thunder_bgx, ver 1.0
Sep 5 23:54:15.274808 kernel: nicpf, ver 1.0
Sep 5 23:54:15.274835 kernel: nicvf, ver 1.0
Sep 5 23:54:15.275060 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep 5 23:54:15.275253 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-05T23:54:14 UTC (1757116454)
Sep 5 23:54:15.275279 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 5 23:54:15.275299 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Sep 5 23:54:15.275318 kernel: watchdog: Delayed init of the lockup detector failed: -19
Sep 5 23:54:15.275337 kernel: watchdog: Hard watchdog permanently disabled
Sep 5 23:54:15.275377 kernel: NET: Registered PF_INET6 protocol family
Sep 5 23:54:15.275406 kernel: Segment Routing with IPv6
Sep 5 23:54:15.275426 kernel: In-situ OAM (IOAM) with IPv6
Sep 5 23:54:15.275445 kernel: NET: Registered PF_PACKET protocol family
Sep 5 23:54:15.275464 kernel: Key type dns_resolver registered
Sep 5 23:54:15.275483 kernel: registered taskstats version 1
Sep 5 23:54:15.275502 kernel: Loading compiled-in X.509 certificates
Sep 5 23:54:15.275520 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.103-flatcar: 5b16e1dfa86dac534548885fd675b87757ff9e20'
Sep 5 23:54:15.275539 kernel: Key type .fscrypt registered
Sep 5 23:54:15.275557 kernel: Key type fscrypt-provisioning registered
Sep 5 23:54:15.275579 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 5 23:54:15.275599 kernel: ima: Allocated hash algorithm: sha1
Sep 5 23:54:15.275617 kernel: ima: No architecture policies found
Sep 5 23:54:15.275636 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep 5 23:54:15.275655 kernel: clk: Disabling unused clocks
Sep 5 23:54:15.275673 kernel: Freeing unused kernel memory: 39424K
Sep 5 23:54:15.275692 kernel: Run /init as init process
Sep 5 23:54:15.275711 kernel: with arguments:
Sep 5 23:54:15.275730 kernel: /init
Sep 5 23:54:15.275748 kernel: with environment:
Sep 5 23:54:15.275771 kernel: HOME=/
Sep 5 23:54:15.275790 kernel: TERM=linux
Sep 5 23:54:15.275809 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 5 23:54:15.275832 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 5 23:54:15.275856 systemd[1]: Detected virtualization amazon.
Sep 5 23:54:15.275877 systemd[1]: Detected architecture arm64.
Sep 5 23:54:15.275897 systemd[1]: Running in initrd.
Sep 5 23:54:15.275943 systemd[1]: No hostname configured, using default hostname.
Sep 5 23:54:15.275965 systemd[1]: Hostname set to .
Sep 5 23:54:15.275987 systemd[1]: Initializing machine ID from VM UUID.
Sep 5 23:54:15.276008 systemd[1]: Queued start job for default target initrd.target.
Sep 5 23:54:15.276028 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 5 23:54:15.276049 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 5 23:54:15.276071 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 5 23:54:15.276092 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 5 23:54:15.276117 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 5 23:54:15.276139 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 5 23:54:15.276163 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 5 23:54:15.276184 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 5 23:54:15.276205 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 5 23:54:15.276226 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 5 23:54:15.276247 systemd[1]: Reached target paths.target - Path Units.
Sep 5 23:54:15.276272 systemd[1]: Reached target slices.target - Slice Units.
Sep 5 23:54:15.276293 systemd[1]: Reached target swap.target - Swaps.
Sep 5 23:54:15.276313 systemd[1]: Reached target timers.target - Timer Units.
Sep 5 23:54:15.276334 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 5 23:54:15.279423 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 5 23:54:15.279467 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 5 23:54:15.279489 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Sep 5 23:54:15.279510 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 5 23:54:15.279531 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 5 23:54:15.279563 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 5 23:54:15.279584 systemd[1]: Reached target sockets.target - Socket Units.
Sep 5 23:54:15.279605 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 5 23:54:15.279625 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 5 23:54:15.279646 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 5 23:54:15.279667 systemd[1]: Starting systemd-fsck-usr.service...
Sep 5 23:54:15.279687 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 5 23:54:15.279708 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 5 23:54:15.279733 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 5 23:54:15.279754 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 5 23:54:15.279775 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 5 23:54:15.279795 systemd[1]: Finished systemd-fsck-usr.service.
Sep 5 23:54:15.279864 systemd-journald[251]: Collecting audit messages is disabled.
Sep 5 23:54:15.279935 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 5 23:54:15.279960 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 5 23:54:15.279980 kernel: Bridge firewalling registered
Sep 5 23:54:15.280001 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 5 23:54:15.280027 systemd-journald[251]: Journal started
Sep 5 23:54:15.280066 systemd-journald[251]: Runtime Journal (/run/log/journal/ec24e4afcfbddd42d34fbf6deb3211c4) is 8.0M, max 75.3M, 67.3M free.
Sep 5 23:54:15.216228 systemd-modules-load[252]: Inserted module 'overlay'
Sep 5 23:54:15.287796 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 5 23:54:15.265454 systemd-modules-load[252]: Inserted module 'br_netfilter'
Sep 5 23:54:15.297033 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 5 23:54:15.297989 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 5 23:54:15.306608 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 5 23:54:15.324831 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 5 23:54:15.335593 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 5 23:54:15.356255 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 5 23:54:15.365461 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 5 23:54:15.383686 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 5 23:54:15.397778 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 5 23:54:15.406189 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 5 23:54:15.411935 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 5 23:54:15.428809 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 5 23:54:15.446219 dracut-cmdline[284]: dracut-dracut-053
Sep 5 23:54:15.455419 dracut-cmdline[284]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=ac831c89fe9ee7829b7371dadfb138f8d0e2b31ae3a5a920e0eba13bbab016c3
Sep 5 23:54:15.515550 systemd-resolved[288]: Positive Trust Anchors:
Sep 5 23:54:15.516097 systemd-resolved[288]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 5 23:54:15.516162 systemd-resolved[288]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 5 23:54:15.622076 kernel: SCSI subsystem initialized
Sep 5 23:54:15.628472 kernel: Loading iSCSI transport class v2.0-870.
Sep 5 23:54:15.641469 kernel: iscsi: registered transport (tcp)
Sep 5 23:54:15.663963 kernel: iscsi: registered transport (qla4xxx)
Sep 5 23:54:15.664038 kernel: QLogic iSCSI HBA Driver
Sep 5 23:54:15.744386 kernel: random: crng init done
Sep 5 23:54:15.743823 systemd-resolved[288]: Defaulting to hostname 'linux'.
Sep 5 23:54:15.746968 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 5 23:54:15.752112 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 5 23:54:15.774905 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 5 23:54:15.793395 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 5 23:54:15.823865 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 5 23:54:15.823970 kernel: device-mapper: uevent: version 1.0.3
Sep 5 23:54:15.825795 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Sep 5 23:54:15.891399 kernel: raid6: neonx8 gen() 6708 MB/s
Sep 5 23:54:15.908385 kernel: raid6: neonx4 gen() 6556 MB/s
Sep 5 23:54:15.925385 kernel: raid6: neonx2 gen() 5458 MB/s
Sep 5 23:54:15.942384 kernel: raid6: neonx1 gen() 3960 MB/s
Sep 5 23:54:15.959385 kernel: raid6: int64x8 gen() 3832 MB/s
Sep 5 23:54:15.976385 kernel: raid6: int64x4 gen() 3708 MB/s
Sep 5 23:54:15.993385 kernel: raid6: int64x2 gen() 3607 MB/s
Sep 5 23:54:16.011318 kernel: raid6: int64x1 gen() 2765 MB/s
Sep 5 23:54:16.011371 kernel: raid6: using algorithm neonx8 gen() 6708 MB/s
Sep 5 23:54:16.029293 kernel: raid6: .... xor() 4844 MB/s, rmw enabled
Sep 5 23:54:16.029336 kernel: raid6: using neon recovery algorithm
Sep 5 23:54:16.037395 kernel: xor: measuring software checksum speed
Sep 5 23:54:16.037468 kernel: 8regs : 10328 MB/sec
Sep 5 23:54:16.039385 kernel: 32regs : 10996 MB/sec
Sep 5 23:54:16.041590 kernel: arm64_neon : 8971 MB/sec
Sep 5 23:54:16.041630 kernel: xor: using function: 32regs (10996 MB/sec)
Sep 5 23:54:16.125398 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 5 23:54:16.145133 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 5 23:54:16.154610 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 5 23:54:16.195205 systemd-udevd[469]: Using default interface naming scheme 'v255'.
Sep 5 23:54:16.202934 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 5 23:54:16.218729 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 5 23:54:16.255417 dracut-pre-trigger[474]: rd.md=0: removing MD RAID activation
Sep 5 23:54:16.312954 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 5 23:54:16.326693 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 5 23:54:16.448331 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 5 23:54:16.463671 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 5 23:54:16.505952 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 5 23:54:16.513913 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 5 23:54:16.516655 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 5 23:54:16.520286 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 5 23:54:16.535481 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 5 23:54:16.582945 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 5 23:54:16.658002 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 5 23:54:16.658868 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 5 23:54:16.671146 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 5 23:54:16.676655 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 5 23:54:16.677045 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 5 23:54:16.688140 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 5 23:54:16.703781 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 5 23:54:16.721297 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Sep 5 23:54:16.721338 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Sep 5 23:54:16.721676 kernel: ena 0000:00:05.0: ENA device version: 0.10
Sep 5 23:54:16.721920 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Sep 5 23:54:16.731419 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:6a:d4:81:7e:eb
Sep 5 23:54:16.734311 (udev-worker)[519]: Network interface NamePolicy= disabled on kernel command line.
Sep 5 23:54:16.759430 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Sep 5 23:54:16.762080 kernel: nvme nvme0: pci function 0000:00:04.0
Sep 5 23:54:16.764512 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 5 23:54:16.777398 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Sep 5 23:54:16.779644 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 5 23:54:16.795670 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 5 23:54:16.795739 kernel: GPT:9289727 != 16777215
Sep 5 23:54:16.795775 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 5 23:54:16.796945 kernel: GPT:9289727 != 16777215
Sep 5 23:54:16.797808 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 5 23:54:16.800437 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 5 23:54:16.810712 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 5 23:54:16.903390 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/nvme0n1p6 scanned by (udev-worker) (523)
Sep 5 23:54:16.910058 kernel: BTRFS: device fsid 045c118e-b098-46f0-884a-43665575c70e devid 1 transid 37 /dev/nvme0n1p3 scanned by (udev-worker) (541)
Sep 5 23:54:16.975722 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Sep 5 23:54:17.023986 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Sep 5 23:54:17.053158 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Sep 5 23:54:17.068825 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Sep 5 23:54:17.072118 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Sep 5 23:54:17.091709 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 5 23:54:17.105022 disk-uuid[660]: Primary Header is updated.
Sep 5 23:54:17.105022 disk-uuid[660]: Secondary Entries is updated.
Sep 5 23:54:17.105022 disk-uuid[660]: Secondary Header is updated.
Sep 5 23:54:17.114461 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 5 23:54:17.125632 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 5 23:54:17.137396 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 5 23:54:18.147449 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 5 23:54:18.149173 disk-uuid[661]: The operation has completed successfully.
Sep 5 23:54:18.332944 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 5 23:54:18.333182 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 5 23:54:18.379082 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 5 23:54:18.391111 sh[1003]: Success
Sep 5 23:54:18.411383 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Sep 5 23:54:18.520877 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 5 23:54:18.539577 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 5 23:54:18.548619 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 5 23:54:18.575759 kernel: BTRFS info (device dm-0): first mount of filesystem 045c118e-b098-46f0-884a-43665575c70e
Sep 5 23:54:18.575821 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Sep 5 23:54:18.578384 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep 5 23:54:18.578420 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 5 23:54:18.579469 kernel: BTRFS info (device dm-0): using free space tree
Sep 5 23:54:18.692402 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Sep 5 23:54:18.727690 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 5 23:54:18.732206 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 5 23:54:18.745598 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 5 23:54:18.755940 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 5 23:54:18.777668 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 7395d4d5-ecb1-4acb-b5a4-3e846eddb858
Sep 5 23:54:18.777759 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Sep 5 23:54:18.777796 kernel: BTRFS info (device nvme0n1p6): using free space tree
Sep 5 23:54:18.794390 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep 5 23:54:18.815547 systemd[1]: mnt-oem.mount: Deactivated successfully.
Sep 5 23:54:18.818215 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 7395d4d5-ecb1-4acb-b5a4-3e846eddb858
Sep 5 23:54:18.831082 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 5 23:54:18.841753 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 5 23:54:18.942610 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 5 23:54:18.954777 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 5 23:54:19.019486 systemd-networkd[1195]: lo: Link UP
Sep 5 23:54:19.019507 systemd-networkd[1195]: lo: Gained carrier
Sep 5 23:54:19.023154 systemd-networkd[1195]: Enumeration completed
Sep 5 23:54:19.024261 systemd-networkd[1195]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 5 23:54:19.025793 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 5 23:54:19.029005 systemd-networkd[1195]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 5 23:54:19.029946 systemd[1]: Reached target network.target - Network.
Sep 5 23:54:19.043415 systemd-networkd[1195]: eth0: Link UP
Sep 5 23:54:19.043423 systemd-networkd[1195]: eth0: Gained carrier
Sep 5 23:54:19.043440 systemd-networkd[1195]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 5 23:54:19.061441 systemd-networkd[1195]: eth0: DHCPv4 address 172.31.22.173/20, gateway 172.31.16.1 acquired from 172.31.16.1
Sep 5 23:54:19.304129 ignition[1112]: Ignition 2.19.0
Sep 5 23:54:19.304681 ignition[1112]: Stage: fetch-offline
Sep 5 23:54:19.306821 ignition[1112]: no configs at "/usr/lib/ignition/base.d"
Sep 5 23:54:19.306845 ignition[1112]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 5 23:54:19.308565 ignition[1112]: Ignition finished successfully
Sep 5 23:54:19.315824 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 5 23:54:19.327768 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Sep 5 23:54:19.351475 ignition[1206]: Ignition 2.19.0
Sep 5 23:54:19.351503 ignition[1206]: Stage: fetch
Sep 5 23:54:19.352162 ignition[1206]: no configs at "/usr/lib/ignition/base.d"
Sep 5 23:54:19.352188 ignition[1206]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 5 23:54:19.352395 ignition[1206]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 5 23:54:19.364187 ignition[1206]: PUT result: OK
Sep 5 23:54:19.369543 ignition[1206]: parsed url from cmdline: ""
Sep 5 23:54:19.369679 ignition[1206]: no config URL provided
Sep 5 23:54:19.369698 ignition[1206]: reading system config file "/usr/lib/ignition/user.ign"
Sep 5 23:54:19.369748 ignition[1206]: no config at "/usr/lib/ignition/user.ign"
Sep 5 23:54:19.369788 ignition[1206]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 5 23:54:19.379579 ignition[1206]: PUT result: OK
Sep 5 23:54:19.379670 ignition[1206]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Sep 5 23:54:19.383886 ignition[1206]: GET result: OK
Sep 5 23:54:19.383996 ignition[1206]: parsing config with SHA512: 3f2f608b064e48709c9f63db100db60521ffd9699fa0070d776c68717bd4ea4ac75a1df45a7c122cc937e8589288609176cd5ebb1bf200f3060a37339076d7d3
Sep 5 23:54:19.390578 unknown[1206]: fetched base config from "system"
Sep 5 23:54:19.391390 unknown[1206]: fetched base config from "system"
Sep 5 23:54:19.391957 ignition[1206]: fetch: fetch complete
Sep 5 23:54:19.391410 unknown[1206]: fetched user config from "aws"
Sep 5 23:54:19.391968 ignition[1206]: fetch: fetch passed
Sep 5 23:54:19.392054 ignition[1206]: Ignition finished successfully
Sep 5 23:54:19.405414 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Sep 5 23:54:19.415779 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 5 23:54:19.444420 ignition[1212]: Ignition 2.19.0
Sep 5 23:54:19.444448 ignition[1212]: Stage: kargs
Sep 5 23:54:19.446253 ignition[1212]: no configs at "/usr/lib/ignition/base.d"
Sep 5 23:54:19.446693 ignition[1212]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 5 23:54:19.446861 ignition[1212]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 5 23:54:19.449546 ignition[1212]: PUT result: OK
Sep 5 23:54:19.457833 ignition[1212]: kargs: kargs passed
Sep 5 23:54:19.457978 ignition[1212]: Ignition finished successfully
Sep 5 23:54:19.465071 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 5 23:54:19.475895 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 5 23:54:19.501795 ignition[1219]: Ignition 2.19.0
Sep 5 23:54:19.502341 ignition[1219]: Stage: disks
Sep 5 23:54:19.503495 ignition[1219]: no configs at "/usr/lib/ignition/base.d"
Sep 5 23:54:19.503521 ignition[1219]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 5 23:54:19.503679 ignition[1219]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 5 23:54:19.512219 ignition[1219]: PUT result: OK
Sep 5 23:54:19.516239 ignition[1219]: disks: disks passed
Sep 5 23:54:19.516406 ignition[1219]: Ignition finished successfully
Sep 5 23:54:19.523712 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 5 23:54:19.528378 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 5 23:54:19.530999 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 5 23:54:19.535803 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 5 23:54:19.540472 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 5 23:54:19.543088 systemd[1]: Reached target basic.target - Basic System.
Sep 5 23:54:19.556670 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 5 23:54:19.604163 systemd-fsck[1228]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Sep 5 23:54:19.611239 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 5 23:54:19.624714 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 5 23:54:19.704405 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 72e55cb0-8368-4871-a3a0-8637412e72e8 r/w with ordered data mode. Quota mode: none.
Sep 5 23:54:19.705982 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 5 23:54:19.710151 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 5 23:54:19.733492 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 5 23:54:19.740564 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 5 23:54:19.746959 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 5 23:54:19.747068 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 5 23:54:19.747116 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 5 23:54:19.770386 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/nvme0n1p6 scanned by mount (1247)
Sep 5 23:54:19.774370 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 7395d4d5-ecb1-4acb-b5a4-3e846eddb858
Sep 5 23:54:19.774420 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Sep 5 23:54:19.774447 kernel: BTRFS info (device nvme0n1p6): using free space tree
Sep 5 23:54:19.781537 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 5 23:54:19.789683 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 5 23:54:19.798389 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep 5 23:54:19.806748 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 5 23:54:20.165872 initrd-setup-root[1271]: cut: /sysroot/etc/passwd: No such file or directory
Sep 5 23:54:20.188537 initrd-setup-root[1278]: cut: /sysroot/etc/group: No such file or directory
Sep 5 23:54:20.197972 initrd-setup-root[1285]: cut: /sysroot/etc/shadow: No such file or directory
Sep 5 23:54:20.207786 initrd-setup-root[1292]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 5 23:54:20.589934 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 5 23:54:20.599586 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 5 23:54:20.608217 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 5 23:54:20.632104 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 5 23:54:20.634876 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 7395d4d5-ecb1-4acb-b5a4-3e846eddb858
Sep 5 23:54:20.664931 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 5 23:54:20.678394 ignition[1359]: INFO : Ignition 2.19.0
Sep 5 23:54:20.678394 ignition[1359]: INFO : Stage: mount
Sep 5 23:54:20.682283 ignition[1359]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 5 23:54:20.682283 ignition[1359]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 5 23:54:20.682283 ignition[1359]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 5 23:54:20.690954 ignition[1359]: INFO : PUT result: OK
Sep 5 23:54:20.696049 ignition[1359]: INFO : mount: mount passed
Sep 5 23:54:20.696049 ignition[1359]: INFO : Ignition finished successfully
Sep 5 23:54:20.696804 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 5 23:54:20.714763 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 5 23:54:20.732469 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 5 23:54:20.769921 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by mount (1371)
Sep 5 23:54:20.769986 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 7395d4d5-ecb1-4acb-b5a4-3e846eddb858
Sep 5 23:54:20.770013 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Sep 5 23:54:20.772807 kernel: BTRFS info (device nvme0n1p6): using free space tree
Sep 5 23:54:20.778397 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep 5 23:54:20.781442 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 5 23:54:20.824813 ignition[1388]: INFO : Ignition 2.19.0
Sep 5 23:54:20.824813 ignition[1388]: INFO : Stage: files
Sep 5 23:54:20.828477 ignition[1388]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 5 23:54:20.828477 ignition[1388]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 5 23:54:20.828477 ignition[1388]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 5 23:54:20.836402 ignition[1388]: INFO : PUT result: OK
Sep 5 23:54:20.840857 ignition[1388]: DEBUG : files: compiled without relabeling support, skipping
Sep 5 23:54:20.843760 ignition[1388]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 5 23:54:20.843760 ignition[1388]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 5 23:54:20.875005 ignition[1388]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 5 23:54:20.878228 ignition[1388]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 5 23:54:20.881606 unknown[1388]: wrote ssh authorized keys file for user: core
Sep 5 23:54:20.884075 ignition[1388]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 5 23:54:20.886984 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Sep 5 23:54:20.886984 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Sep 5 23:54:20.886984 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Sep 5 23:54:20.898454 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 5 23:54:20.898454 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 5 23:54:20.898454 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 5 23:54:20.898454 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 5 23:54:20.898454 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 5 23:54:20.898454 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 5 23:54:20.898454 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1
Sep 5 23:54:21.044642 systemd-networkd[1195]: eth0: Gained IPv6LL
Sep 5 23:54:21.393160 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK
Sep 5 23:54:21.770071 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 5 23:54:21.770071 ignition[1388]: INFO : files: op(8): [started] processing unit "containerd.service"
Sep 5 23:54:21.777270 ignition[1388]: INFO : files: op(8): op(9): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Sep 5 23:54:21.777270 ignition[1388]: INFO : files: op(8): op(9): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Sep 5 23:54:21.777270 ignition[1388]: INFO : files: op(8): [finished] processing unit "containerd.service"
Sep 5 23:54:21.777270 ignition[1388]: INFO : files: createResultFile: createFiles: op(a): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 5 23:54:21.777270 ignition[1388]: INFO : files: createResultFile: createFiles: op(a): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 5 23:54:21.777270 ignition[1388]: INFO : files: files passed
Sep 5 23:54:21.777270 ignition[1388]: INFO : Ignition finished successfully
Sep 5 23:54:21.782440 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 5 23:54:21.799732 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 5 23:54:21.810853 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 5 23:54:21.834834 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 5 23:54:21.837689 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 5 23:54:21.858158 initrd-setup-root-after-ignition[1417]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 5 23:54:21.858158 initrd-setup-root-after-ignition[1417]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 5 23:54:21.865429 initrd-setup-root-after-ignition[1421]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 5 23:54:21.871130 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 5 23:54:21.874340 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 5 23:54:21.892191 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 5 23:54:21.942277 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 5 23:54:21.943639 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 5 23:54:21.947637 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 5 23:54:21.952246 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 5 23:54:21.954607 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 5 23:54:21.969607 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 5 23:54:22.000451 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 5 23:54:22.012664 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 5 23:54:22.046033 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 5 23:54:22.046473 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 5 23:54:22.054415 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 5 23:54:22.060737 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 5 23:54:22.065137 systemd[1]: Stopped target timers.target - Timer Units.
Sep 5 23:54:22.069159 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 5 23:54:22.069292 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 5 23:54:22.075817 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 5 23:54:22.078047 systemd[1]: Stopped target basic.target - Basic System.
Sep 5 23:54:22.079980 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 5 23:54:22.082253 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 5 23:54:22.084955 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 5 23:54:22.087382 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 5 23:54:22.089614 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 5 23:54:22.094621 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 5 23:54:22.096837 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 5 23:54:22.098925 systemd[1]: Stopped target swap.target - Swaps.
Sep 5 23:54:22.100936 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 5 23:54:22.101040 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 5 23:54:22.108590 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 5 23:54:22.125577 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 5 23:54:22.125667 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 5 23:54:22.132062 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 5 23:54:22.134630 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 5 23:54:22.134739 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 5 23:54:22.145269 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 5 23:54:22.145928 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 5 23:54:22.149754 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 5 23:54:22.150574 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 5 23:54:22.167900 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 5 23:54:22.199558 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 5 23:54:22.201680 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 5 23:54:22.201793 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 5 23:54:22.204506 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 5 23:54:22.204608 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 5 23:54:22.236957 ignition[1442]: INFO : Ignition 2.19.0
Sep 5 23:54:22.240064 ignition[1442]: INFO : Stage: umount
Sep 5 23:54:22.242241 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 5 23:54:22.248294 ignition[1442]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 5 23:54:22.250693 ignition[1442]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 5 23:54:22.253676 ignition[1442]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 5 23:54:22.255767 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 5 23:54:22.256010 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 5 23:54:22.265254 ignition[1442]: INFO : PUT result: OK
Sep 5 23:54:22.269655 ignition[1442]: INFO : umount: umount passed
Sep 5 23:54:22.272056 ignition[1442]: INFO : Ignition finished successfully
Sep 5 23:54:22.273221 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 5 23:54:22.273433 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 5 23:54:22.283530 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 5 23:54:22.283712 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 5 23:54:22.289780 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 5 23:54:22.289874 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 5 23:54:22.292136 systemd[1]: ignition-fetch.service: Deactivated successfully.
Sep 5 23:54:22.292218 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Sep 5 23:54:22.296680 systemd[1]: Stopped target network.target - Network.
Sep 5 23:54:22.300185 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 5 23:54:22.300651 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 5 23:54:22.304948 systemd[1]: Stopped target paths.target - Path Units.
Sep 5 23:54:22.308442 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 5 23:54:22.310396 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 5 23:54:22.310516 systemd[1]: Stopped target slices.target - Slice Units.
Sep 5 23:54:22.314701 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 5 23:54:22.318668 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 5 23:54:22.318749 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 5 23:54:22.325144 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 5 23:54:22.325653 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 5 23:54:22.329439 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 5 23:54:22.329538 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 5 23:54:22.331378 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 5 23:54:22.331460 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 5 23:54:22.338784 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 5 23:54:22.338871 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 5 23:54:22.343104 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 5 23:54:22.346090 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 5 23:54:22.358430 systemd-networkd[1195]: eth0: DHCPv6 lease lost
Sep 5 23:54:22.366448 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 5 23:54:22.367018 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 5 23:54:22.374458 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 5 23:54:22.374663 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 5 23:54:22.386416 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 5 23:54:22.386528 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 5 23:54:22.413524 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 5 23:54:22.415554 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 5 23:54:22.415665 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 5 23:54:22.418647 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 5 23:54:22.418727 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 5 23:54:22.422501 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 5 23:54:22.422598 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 5 23:54:22.425384 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 5 23:54:22.425473 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 5 23:54:22.428716 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 5 23:54:22.478257 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 5 23:54:22.478806 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 5 23:54:22.485098 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 5 23:54:22.485182 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 5 23:54:22.488091 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 5 23:54:22.488162 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 5 23:54:22.490701 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 5 23:54:22.490799 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 5 23:54:22.509026 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 5 23:54:22.509134 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 5 23:54:22.511732 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 5 23:54:22.511820 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 5 23:54:22.531581 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 5 23:54:22.534195 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 5 23:54:22.534308 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 5 23:54:22.537829 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Sep 5 23:54:22.537910 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 5 23:54:22.540792 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 5 23:54:22.540869 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 5 23:54:22.545290 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 5 23:54:22.545409 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 5 23:54:22.559306 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 5 23:54:22.559515 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 5 23:54:22.594977 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 5 23:54:22.595398 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 5 23:54:22.604877 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 5 23:54:22.615781 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 5 23:54:22.642300 systemd[1]: Switching root.
Sep 5 23:54:22.671461 systemd-journald[251]: Journal stopped
Sep 5 23:54:24.874774 systemd-journald[251]: Received SIGTERM from PID 1 (systemd).
Sep 5 23:54:24.874919 kernel: SELinux: policy capability network_peer_controls=1
Sep 5 23:54:24.874962 kernel: SELinux: policy capability open_perms=1
Sep 5 23:54:24.874991 kernel: SELinux: policy capability extended_socket_class=1
Sep 5 23:54:24.875024 kernel: SELinux: policy capability always_check_network=0
Sep 5 23:54:24.875053 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 5 23:54:24.875084 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 5 23:54:24.875113 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 5 23:54:24.875143 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 5 23:54:24.875172 kernel: audit: type=1403 audit(1757116463.297:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 5 23:54:24.875209 systemd[1]: Successfully loaded SELinux policy in 51.073ms.
Sep 5 23:54:24.875254 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 23.299ms.
Sep 5 23:54:24.875292 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 5 23:54:24.875322 systemd[1]: Detected virtualization amazon.
Sep 5 23:54:24.875600 systemd[1]: Detected architecture arm64.
Sep 5 23:54:24.877797 systemd[1]: Detected first boot.
Sep 5 23:54:24.877838 systemd[1]: Initializing machine ID from VM UUID.
Sep 5 23:54:24.877871 zram_generator::config[1505]: No configuration found.
Sep 5 23:54:24.877908 systemd[1]: Populated /etc with preset unit settings.
Sep 5 23:54:24.877951 systemd[1]: Queued start job for default target multi-user.target.
Sep 5 23:54:24.877984 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Sep 5 23:54:24.878023 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 5 23:54:24.878055 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 5 23:54:24.878088 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 5 23:54:24.878117 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 5 23:54:24.878150 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 5 23:54:24.878180 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 5 23:54:24.878211 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 5 23:54:24.878242 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 5 23:54:24.878277 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 5 23:54:24.878309 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 5 23:54:24.878340 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 5 23:54:24.881543 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 5 23:54:24.881590 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 5 23:54:24.881622 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 5 23:54:24.881654 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 5 23:54:24.881686 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 5 23:54:24.881717 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 5 23:54:24.881753 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 5 23:54:24.881786 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 5 23:54:24.881816 systemd[1]: Reached target slices.target - Slice Units.
Sep 5 23:54:24.881847 systemd[1]: Reached target swap.target - Swaps.
Sep 5 23:54:24.881877 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 5 23:54:24.881907 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 5 23:54:24.881940 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 5 23:54:24.881972 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Sep 5 23:54:24.882006 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 5 23:54:24.882036 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 5 23:54:24.882068 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 5 23:54:24.882098 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 5 23:54:24.882129 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 5 23:54:24.882159 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 5 23:54:24.882191 systemd[1]: Mounting media.mount - External Media Directory...
Sep 5 23:54:24.882225 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 5 23:54:24.882256 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 5 23:54:24.882291 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 5 23:54:24.882325 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 5 23:54:24.885518 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 5 23:54:24.885586 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 5 23:54:24.885617 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 5 23:54:24.885648 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 5 23:54:24.885680 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 5 23:54:24.885712 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 5 23:54:24.885745 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 5 23:54:24.885784 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 5 23:54:24.885815 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 5 23:54:24.885847 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Sep 5 23:54:24.885881 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Sep 5 23:54:24.885913 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 5 23:54:24.885943 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 5 23:54:24.885974 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 5 23:54:24.886004 kernel: fuse: init (API version 7.39)
Sep 5 23:54:24.886038 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 5 23:54:24.886077 kernel: loop: module loaded
Sep 5 23:54:24.886105 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 5 23:54:24.886135 kernel: ACPI: bus type drm_connector registered
Sep 5 23:54:24.886166 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 5 23:54:24.886196 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 5 23:54:24.886225 systemd[1]: Mounted media.mount - External Media Directory.
Sep 5 23:54:24.886254 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 5 23:54:24.886285 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 5 23:54:24.886319 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 5 23:54:24.888396 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 5 23:54:24.888459 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 5 23:54:24.888490 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 5 23:54:24.888521 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 5 23:54:24.888552 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 5 23:54:24.888581 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 5 23:54:24.888613 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 5 23:54:24.888650 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 5 23:54:24.888682 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 5 23:54:24.888714 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 5 23:54:24.888744 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 5 23:54:24.888773 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 5 23:54:24.888802 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 5 23:54:24.888836 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 5 23:54:24.888920 systemd-journald[1601]: Collecting audit messages is disabled.
Sep 5 23:54:24.888973 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 5 23:54:24.889003 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 5 23:54:24.889032 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 5 23:54:24.889062 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 5 23:54:24.889093 systemd-journald[1601]: Journal started
Sep 5 23:54:24.889146 systemd-journald[1601]: Runtime Journal (/run/log/journal/ec24e4afcfbddd42d34fbf6deb3211c4) is 8.0M, max 75.3M, 67.3M free.
Sep 5 23:54:24.907894 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 5 23:54:24.907969 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 5 23:54:24.939880 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 5 23:54:24.945823 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 5 23:54:24.955012 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 5 23:54:24.955100 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 5 23:54:24.976380 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 5 23:54:25.009375 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 5 23:54:25.009462 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 5 23:54:25.020954 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 5 23:54:25.023915 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 5 23:54:25.033775 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 5 23:54:25.041983 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 5 23:54:25.093526 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 5 23:54:25.104969 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 5 23:54:25.112717 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 5 23:54:25.129330 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 5 23:54:25.149331 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Sep 5 23:54:25.164956 systemd-tmpfiles[1635]: ACLs are not supported, ignoring.
Sep 5 23:54:25.165006 systemd-tmpfiles[1635]: ACLs are not supported, ignoring.
Sep 5 23:54:25.168518 systemd-journald[1601]: Time spent on flushing to /var/log/journal/ec24e4afcfbddd42d34fbf6deb3211c4 is 41.199ms for 887 entries.
Sep 5 23:54:25.168518 systemd-journald[1601]: System Journal (/var/log/journal/ec24e4afcfbddd42d34fbf6deb3211c4) is 8.0M, max 195.6M, 187.6M free.
Sep 5 23:54:25.215939 systemd-journald[1601]: Received client request to flush runtime journal.
Sep 5 23:54:25.193031 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 5 23:54:25.207640 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 5 23:54:25.216618 udevadm[1665]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Sep 5 23:54:25.225094 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 5 23:54:25.279862 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 5 23:54:25.295687 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 5 23:54:25.329833 systemd-tmpfiles[1675]: ACLs are not supported, ignoring.
Sep 5 23:54:25.329874 systemd-tmpfiles[1675]: ACLs are not supported, ignoring.
Sep 5 23:54:25.341209 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 5 23:54:26.000039 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 5 23:54:26.009670 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 5 23:54:26.074008 systemd-udevd[1681]: Using default interface naming scheme 'v255'.
Sep 5 23:54:26.109442 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 5 23:54:26.130695 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 5 23:54:26.167942 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 5 23:54:26.241000 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
Sep 5 23:54:26.251706 (udev-worker)[1686]: Network interface NamePolicy= disabled on kernel command line.
Sep 5 23:54:26.326822 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 5 23:54:26.491295 systemd-networkd[1689]: lo: Link UP
Sep 5 23:54:26.491322 systemd-networkd[1689]: lo: Gained carrier
Sep 5 23:54:26.494915 systemd-networkd[1689]: Enumeration completed
Sep 5 23:54:26.495141 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 5 23:54:26.500863 systemd-networkd[1689]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 5 23:54:26.500886 systemd-networkd[1689]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 5 23:54:26.505657 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 5 23:54:26.510109 systemd-networkd[1689]: eth0: Link UP
Sep 5 23:54:26.510628 systemd-networkd[1689]: eth0: Gained carrier
Sep 5 23:54:26.510661 systemd-networkd[1689]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 5 23:54:26.521347 systemd-networkd[1689]: eth0: DHCPv4 address 172.31.22.173/20, gateway 172.31.16.1 acquired from 172.31.16.1
Sep 5 23:54:26.545805 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (1703)
Sep 5 23:54:26.601164 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 5 23:54:26.764055 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 5 23:54:26.781151 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Sep 5 23:54:26.822268 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Sep 5 23:54:26.843608 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Sep 5 23:54:26.864474 lvm[1810]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 5 23:54:26.905774 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Sep 5 23:54:26.912144 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 5 23:54:26.920670 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Sep 5 23:54:26.936823 lvm[1813]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 5 23:54:26.975932 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Sep 5 23:54:26.981527 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 5 23:54:26.984272 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 5 23:54:26.984333 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 5 23:54:26.986944 systemd[1]: Reached target machines.target - Containers.
Sep 5 23:54:26.990909 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Sep 5 23:54:27.002737 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 5 23:54:27.009639 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 5 23:54:27.014835 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 5 23:54:27.026677 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 5 23:54:27.042044 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Sep 5 23:54:27.055998 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 5 23:54:27.061183 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 5 23:54:27.067392 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 5 23:54:27.112454 kernel: loop0: detected capacity change from 0 to 114432
Sep 5 23:54:27.122285 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 5 23:54:27.124197 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Sep 5 23:54:27.146468 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 5 23:54:27.170405 kernel: loop1: detected capacity change from 0 to 203944
Sep 5 23:54:27.287407 kernel: loop2: detected capacity change from 0 to 114328
Sep 5 23:54:27.349679 kernel: loop3: detected capacity change from 0 to 52536
Sep 5 23:54:27.452431 kernel: loop4: detected capacity change from 0 to 114432
Sep 5 23:54:27.480429 kernel: loop5: detected capacity change from 0 to 203944
Sep 5 23:54:27.511639 kernel: loop6: detected capacity change from 0 to 114328
Sep 5 23:54:27.531397 kernel: loop7: detected capacity change from 0 to 52536
Sep 5 23:54:27.548907 (sd-merge)[1835]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Sep 5 23:54:27.550958 (sd-merge)[1835]: Merged extensions into '/usr'.
Sep 5 23:54:27.560342 systemd[1]: Reloading requested from client PID 1822 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 5 23:54:27.560408 systemd[1]: Reloading...
Sep 5 23:54:27.636744 systemd-networkd[1689]: eth0: Gained IPv6LL
Sep 5 23:54:27.681380 ldconfig[1817]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 5 23:54:27.719402 zram_generator::config[1868]: No configuration found.
Sep 5 23:54:27.969455 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 5 23:54:28.120617 systemd[1]: Reloading finished in 559 ms.
Sep 5 23:54:28.146543 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Sep 5 23:54:28.153475 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 5 23:54:28.156742 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 5 23:54:28.176699 systemd[1]: Starting ensure-sysext.service...
Sep 5 23:54:28.184649 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 5 23:54:28.196597 systemd[1]: Reloading requested from client PID 1924 ('systemctl') (unit ensure-sysext.service)...
Sep 5 23:54:28.196793 systemd[1]: Reloading...
Sep 5 23:54:28.233268 systemd-tmpfiles[1925]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 5 23:54:28.233988 systemd-tmpfiles[1925]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 5 23:54:28.235862 systemd-tmpfiles[1925]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 5 23:54:28.236457 systemd-tmpfiles[1925]: ACLs are not supported, ignoring.
Sep 5 23:54:28.236612 systemd-tmpfiles[1925]: ACLs are not supported, ignoring.
Sep 5 23:54:28.242531 systemd-tmpfiles[1925]: Detected autofs mount point /boot during canonicalization of boot.
Sep 5 23:54:28.242558 systemd-tmpfiles[1925]: Skipping /boot
Sep 5 23:54:28.264534 systemd-tmpfiles[1925]: Detected autofs mount point /boot during canonicalization of boot.
Sep 5 23:54:28.264555 systemd-tmpfiles[1925]: Skipping /boot
Sep 5 23:54:28.357464 zram_generator::config[1952]: No configuration found.
Sep 5 23:54:28.595436 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 5 23:54:28.746714 systemd[1]: Reloading finished in 549 ms.
Sep 5 23:54:28.778951 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 5 23:54:28.800682 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Sep 5 23:54:28.815525 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 5 23:54:28.829753 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 5 23:54:28.837881 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 5 23:54:28.849708 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 5 23:54:28.874915 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 5 23:54:28.885946 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 5 23:54:28.893863 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 5 23:54:28.913860 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 5 23:54:28.918430 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 5 23:54:28.937336 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 5 23:54:28.941022 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 5 23:54:28.952183 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 5 23:54:28.960327 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 5 23:54:28.961495 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 5 23:54:28.975620 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 5 23:54:28.986025 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 5 23:54:29.008545 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 5 23:54:29.010951 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 5 23:54:29.011332 systemd[1]: Reached target time-set.target - System Time Set.
Sep 5 23:54:29.020817 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 5 23:54:29.028618 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 5 23:54:29.029017 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 5 23:54:29.035039 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 5 23:54:29.037275 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 5 23:54:29.044108 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 5 23:54:29.044506 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 5 23:54:29.059603 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 5 23:54:29.060035 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 5 23:54:29.077755 systemd[1]: Finished ensure-sysext.service.
Sep 5 23:54:29.082738 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 5 23:54:29.082859 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 5 23:54:29.092766 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 5 23:54:29.112300 augenrules[2056]: No rules
Sep 5 23:54:29.114227 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Sep 5 23:54:29.125932 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 5 23:54:29.135886 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 5 23:54:29.151179 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 5 23:54:29.209791 systemd-resolved[2017]: Positive Trust Anchors:
Sep 5 23:54:29.209829 systemd-resolved[2017]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 5 23:54:29.209897 systemd-resolved[2017]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 5 23:54:29.222629 systemd-resolved[2017]: Defaulting to hostname 'linux'.
Sep 5 23:54:29.226107 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 5 23:54:29.229079 systemd[1]: Reached target network.target - Network.
Sep 5 23:54:29.231031 systemd[1]: Reached target network-online.target - Network is Online.
Sep 5 23:54:29.233472 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 5 23:54:29.236040 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 5 23:54:29.238428 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 5 23:54:29.241240 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 5 23:54:29.244164 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 5 23:54:29.246647 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 5 23:54:29.249415 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 5 23:54:29.252435 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 5 23:54:29.252608 systemd[1]: Reached target paths.target - Path Units. Sep 5 23:54:29.254620 systemd[1]: Reached target timers.target - Timer Units. Sep 5 23:54:29.257972 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 5 23:54:29.263457 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 5 23:54:29.268230 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 5 23:54:29.273231 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 5 23:54:29.275755 systemd[1]: Reached target sockets.target - Socket Units. Sep 5 23:54:29.277913 systemd[1]: Reached target basic.target - Basic System. Sep 5 23:54:29.280287 systemd[1]: System is tainted: cgroupsv1 Sep 5 23:54:29.280384 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 5 23:54:29.280439 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 5 23:54:29.284034 systemd[1]: Starting containerd.service - containerd container runtime... Sep 5 23:54:29.297736 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Sep 5 23:54:29.303837 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 5 23:54:29.319221 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 5 23:54:29.337559 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 5 23:54:29.345975 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 5 23:54:29.358746 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Sep 5 23:54:29.363447 jq[2074]: false Sep 5 23:54:29.366655 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 5 23:54:29.380854 systemd[1]: Started ntpd.service - Network Time Service. Sep 5 23:54:29.396831 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 5 23:54:29.417870 systemd[1]: Starting setup-oem.service - Setup OEM... Sep 5 23:54:29.427307 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 5 23:54:29.443866 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 5 23:54:29.463507 dbus-daemon[2072]: [system] SELinux support is enabled Sep 5 23:54:29.469650 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 5 23:54:29.478154 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 5 23:54:29.483454 dbus-daemon[2072]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1689 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Sep 5 23:54:29.502957 systemd[1]: Starting update-engine.service - Update Engine... Sep 5 23:54:29.510158 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 5 23:54:29.520649 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 5 23:54:29.542033 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 5 23:54:29.545632 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. 
Sep 5 23:54:29.559545 extend-filesystems[2075]: Found loop4 Sep 5 23:54:29.559545 extend-filesystems[2075]: Found loop5 Sep 5 23:54:29.559545 extend-filesystems[2075]: Found loop6 Sep 5 23:54:29.559545 extend-filesystems[2075]: Found loop7 Sep 5 23:54:29.559545 extend-filesystems[2075]: Found nvme0n1 Sep 5 23:54:29.559545 extend-filesystems[2075]: Found nvme0n1p1 Sep 5 23:54:29.559545 extend-filesystems[2075]: Found nvme0n1p2 Sep 5 23:54:29.559545 extend-filesystems[2075]: Found nvme0n1p3 Sep 5 23:54:29.566929 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 5 23:54:29.606637 extend-filesystems[2075]: Found usr Sep 5 23:54:29.606637 extend-filesystems[2075]: Found nvme0n1p4 Sep 5 23:54:29.606637 extend-filesystems[2075]: Found nvme0n1p6 Sep 5 23:54:29.606637 extend-filesystems[2075]: Found nvme0n1p7 Sep 5 23:54:29.606637 extend-filesystems[2075]: Found nvme0n1p9 Sep 5 23:54:29.606637 extend-filesystems[2075]: Checking size of /dev/nvme0n1p9 Sep 5 23:54:29.649996 jq[2096]: true Sep 5 23:54:29.574744 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 5 23:54:29.624555 ntpd[2081]: ntpd 4.2.8p17@1.4004-o Fri Sep 5 21:57:21 UTC 2025 (1): Starting Sep 5 23:54:29.661237 ntpd[2081]: 5 Sep 23:54:29 ntpd[2081]: ntpd 4.2.8p17@1.4004-o Fri Sep 5 21:57:21 UTC 2025 (1): Starting Sep 5 23:54:29.661237 ntpd[2081]: 5 Sep 23:54:29 ntpd[2081]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Sep 5 23:54:29.661237 ntpd[2081]: 5 Sep 23:54:29 ntpd[2081]: ---------------------------------------------------- Sep 5 23:54:29.661237 ntpd[2081]: 5 Sep 23:54:29 ntpd[2081]: ntp-4 is maintained by Network Time Foundation, Sep 5 23:54:29.661237 ntpd[2081]: 5 Sep 23:54:29 ntpd[2081]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Sep 5 23:54:29.661237 ntpd[2081]: 5 Sep 23:54:29 ntpd[2081]: corporation. 
Support and training for ntp-4 are Sep 5 23:54:29.661237 ntpd[2081]: 5 Sep 23:54:29 ntpd[2081]: available at https://www.nwtime.org/support Sep 5 23:54:29.661237 ntpd[2081]: 5 Sep 23:54:29 ntpd[2081]: ---------------------------------------------------- Sep 5 23:54:29.661237 ntpd[2081]: 5 Sep 23:54:29 ntpd[2081]: proto: precision = 0.108 usec (-23) Sep 5 23:54:29.661237 ntpd[2081]: 5 Sep 23:54:29 ntpd[2081]: basedate set to 2025-08-24 Sep 5 23:54:29.661237 ntpd[2081]: 5 Sep 23:54:29 ntpd[2081]: gps base set to 2025-08-24 (week 2381) Sep 5 23:54:29.661237 ntpd[2081]: 5 Sep 23:54:29 ntpd[2081]: Listen and drop on 0 v6wildcard [::]:123 Sep 5 23:54:29.661237 ntpd[2081]: 5 Sep 23:54:29 ntpd[2081]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Sep 5 23:54:29.661237 ntpd[2081]: 5 Sep 23:54:29 ntpd[2081]: Listen normally on 2 lo 127.0.0.1:123 Sep 5 23:54:29.661237 ntpd[2081]: 5 Sep 23:54:29 ntpd[2081]: Listen normally on 3 eth0 172.31.22.173:123 Sep 5 23:54:29.661237 ntpd[2081]: 5 Sep 23:54:29 ntpd[2081]: Listen normally on 4 lo [::1]:123 Sep 5 23:54:29.661237 ntpd[2081]: 5 Sep 23:54:29 ntpd[2081]: Listen normally on 5 eth0 [fe80::46a:d4ff:fe81:7eeb%2]:123 Sep 5 23:54:29.661237 ntpd[2081]: 5 Sep 23:54:29 ntpd[2081]: Listening on routing socket on fd #22 for interface updates Sep 5 23:54:29.677816 update_engine[2094]: I20250905 23:54:29.630257 2094 main.cc:92] Flatcar Update Engine starting Sep 5 23:54:29.677816 update_engine[2094]: I20250905 23:54:29.641211 2094 update_check_scheduler.cc:74] Next update check in 10m23s Sep 5 23:54:29.624602 ntpd[2081]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Sep 5 23:54:29.677269 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Sep 5 23:54:29.764006 extend-filesystems[2075]: Resized partition /dev/nvme0n1p9 Sep 5 23:54:29.770566 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Sep 5 23:54:29.770617 coreos-metadata[2071]: Sep 05 23:54:29.742 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Sep 5 23:54:29.770617 coreos-metadata[2071]: Sep 05 23:54:29.743 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Sep 5 23:54:29.770617 coreos-metadata[2071]: Sep 05 23:54:29.743 INFO Fetch successful Sep 5 23:54:29.770617 coreos-metadata[2071]: Sep 05 23:54:29.743 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Sep 5 23:54:29.770617 coreos-metadata[2071]: Sep 05 23:54:29.743 INFO Fetch successful Sep 5 23:54:29.770617 coreos-metadata[2071]: Sep 05 23:54:29.743 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Sep 5 23:54:29.770617 coreos-metadata[2071]: Sep 05 23:54:29.743 INFO Fetch successful Sep 5 23:54:29.770617 coreos-metadata[2071]: Sep 05 23:54:29.743 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Sep 5 23:54:29.770617 coreos-metadata[2071]: Sep 05 23:54:29.743 INFO Fetch successful Sep 5 23:54:29.770617 coreos-metadata[2071]: Sep 05 23:54:29.743 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Sep 5 23:54:29.770617 coreos-metadata[2071]: Sep 05 23:54:29.743 INFO Fetch failed with 404: resource not found Sep 5 23:54:29.770617 coreos-metadata[2071]: Sep 05 23:54:29.743 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Sep 5 23:54:29.770617 coreos-metadata[2071]: Sep 05 23:54:29.743 INFO Fetch successful Sep 5 23:54:29.770617 coreos-metadata[2071]: Sep 05 23:54:29.743 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Sep 5 23:54:29.770617 coreos-metadata[2071]: Sep 05 23:54:29.743 INFO Fetch 
successful Sep 5 23:54:29.770617 coreos-metadata[2071]: Sep 05 23:54:29.743 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Sep 5 23:54:29.770617 coreos-metadata[2071]: Sep 05 23:54:29.743 INFO Fetch successful Sep 5 23:54:29.770617 coreos-metadata[2071]: Sep 05 23:54:29.743 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Sep 5 23:54:29.770617 coreos-metadata[2071]: Sep 05 23:54:29.743 INFO Fetch successful Sep 5 23:54:29.770617 coreos-metadata[2071]: Sep 05 23:54:29.743 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Sep 5 23:54:29.770617 coreos-metadata[2071]: Sep 05 23:54:29.743 INFO Fetch successful Sep 5 23:54:29.624622 ntpd[2081]: ---------------------------------------------------- Sep 5 23:54:29.800071 ntpd[2081]: 5 Sep 23:54:29 ntpd[2081]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 5 23:54:29.800071 ntpd[2081]: 5 Sep 23:54:29 ntpd[2081]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 5 23:54:29.821138 extend-filesystems[2123]: resize2fs 1.47.1 (20-May-2024) Sep 5 23:54:29.780286 systemd[1]: motdgen.service: Deactivated successfully. Sep 5 23:54:29.624641 ntpd[2081]: ntp-4 is maintained by Network Time Foundation, Sep 5 23:54:29.780835 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 5 23:54:29.624660 ntpd[2081]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Sep 5 23:54:29.828883 jq[2115]: true Sep 5 23:54:29.790958 (ntainerd)[2120]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 5 23:54:29.624679 ntpd[2081]: corporation. 
Support and training for ntp-4 are Sep 5 23:54:29.624697 ntpd[2081]: available at https://www.nwtime.org/support Sep 5 23:54:29.624716 ntpd[2081]: ---------------------------------------------------- Sep 5 23:54:29.632161 ntpd[2081]: proto: precision = 0.108 usec (-23) Sep 5 23:54:29.632604 ntpd[2081]: basedate set to 2025-08-24 Sep 5 23:54:29.632630 ntpd[2081]: gps base set to 2025-08-24 (week 2381) Sep 5 23:54:29.651591 ntpd[2081]: Listen and drop on 0 v6wildcard [::]:123 Sep 5 23:54:29.651673 ntpd[2081]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Sep 5 23:54:29.651951 ntpd[2081]: Listen normally on 2 lo 127.0.0.1:123 Sep 5 23:54:29.652022 ntpd[2081]: Listen normally on 3 eth0 172.31.22.173:123 Sep 5 23:54:29.652090 ntpd[2081]: Listen normally on 4 lo [::1]:123 Sep 5 23:54:29.652158 ntpd[2081]: Listen normally on 5 eth0 [fe80::46a:d4ff:fe81:7eeb%2]:123 Sep 5 23:54:29.652218 ntpd[2081]: Listening on routing socket on fd #22 for interface updates Sep 5 23:54:29.717602 ntpd[2081]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 5 23:54:29.717653 ntpd[2081]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 5 23:54:29.870995 dbus-daemon[2072]: [system] Successfully activated service 'org.freedesktop.systemd1' Sep 5 23:54:29.893586 systemd[1]: Finished setup-oem.service - Setup OEM. Sep 5 23:54:29.902492 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Sep 5 23:54:29.913970 systemd[1]: Started update-engine.service - Update Engine. Sep 5 23:54:29.927998 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 5 23:54:29.944962 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Sep 5 23:54:29.949747 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Sep 5 23:54:29.949914 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 5 23:54:29.949956 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 5 23:54:29.978419 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Sep 5 23:54:29.996648 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Sep 5 23:54:29.999037 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 5 23:54:29.999081 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 5 23:54:30.003481 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 5 23:54:30.022519 extend-filesystems[2123]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Sep 5 23:54:30.022519 extend-filesystems[2123]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 5 23:54:30.022519 extend-filesystems[2123]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Sep 5 23:54:30.010662 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 5 23:54:30.060905 extend-filesystems[2075]: Resized filesystem in /dev/nvme0n1p9 Sep 5 23:54:30.031117 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 5 23:54:30.031716 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 5 23:54:30.101508 systemd-logind[2089]: Watching system buttons on /dev/input/event0 (Power Button) Sep 5 23:54:30.101561 systemd-logind[2089]: Watching system buttons on /dev/input/event1 (Sleep Button) Sep 5 23:54:30.101923 systemd-logind[2089]: New seat seat0. 
Sep 5 23:54:30.107451 systemd[1]: Started systemd-logind.service - User Login Management. Sep 5 23:54:30.154525 bash[2180]: Updated "/home/core/.ssh/authorized_keys" Sep 5 23:54:30.163291 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 5 23:54:30.185965 systemd[1]: Starting sshkeys.service... Sep 5 23:54:30.257612 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Sep 5 23:54:30.269618 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (2184) Sep 5 23:54:30.275832 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Sep 5 23:54:30.298756 amazon-ssm-agent[2158]: Initializing new seelog logger Sep 5 23:54:30.347700 amazon-ssm-agent[2158]: New Seelog Logger Creation Complete Sep 5 23:54:30.347700 amazon-ssm-agent[2158]: 2025/09/05 23:54:30 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 5 23:54:30.347700 amazon-ssm-agent[2158]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 5 23:54:30.347700 amazon-ssm-agent[2158]: 2025/09/05 23:54:30 processing appconfig overrides Sep 5 23:54:30.347700 amazon-ssm-agent[2158]: 2025/09/05 23:54:30 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 5 23:54:30.347700 amazon-ssm-agent[2158]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 5 23:54:30.347700 amazon-ssm-agent[2158]: 2025/09/05 23:54:30 processing appconfig overrides Sep 5 23:54:30.347700 amazon-ssm-agent[2158]: 2025/09/05 23:54:30 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 5 23:54:30.347700 amazon-ssm-agent[2158]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Sep 5 23:54:30.347700 amazon-ssm-agent[2158]: 2025/09/05 23:54:30 processing appconfig overrides Sep 5 23:54:30.347700 amazon-ssm-agent[2158]: 2025-09-05 23:54:30 INFO Proxy environment variables: Sep 5 23:54:30.347700 amazon-ssm-agent[2158]: 2025/09/05 23:54:30 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 5 23:54:30.347700 amazon-ssm-agent[2158]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 5 23:54:30.347700 amazon-ssm-agent[2158]: 2025/09/05 23:54:30 processing appconfig overrides Sep 5 23:54:30.414386 amazon-ssm-agent[2158]: 2025-09-05 23:54:30 INFO https_proxy: Sep 5 23:54:30.516459 amazon-ssm-agent[2158]: 2025-09-05 23:54:30 INFO http_proxy: Sep 5 23:54:30.615792 amazon-ssm-agent[2158]: 2025-09-05 23:54:30 INFO no_proxy: Sep 5 23:54:30.689179 coreos-metadata[2198]: Sep 05 23:54:30.687 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Sep 5 23:54:30.689179 coreos-metadata[2198]: Sep 05 23:54:30.689 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Sep 5 23:54:30.694228 coreos-metadata[2198]: Sep 05 23:54:30.692 INFO Fetch successful Sep 5 23:54:30.694228 coreos-metadata[2198]: Sep 05 23:54:30.692 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Sep 5 23:54:30.694228 coreos-metadata[2198]: Sep 05 23:54:30.694 INFO Fetch successful Sep 5 23:54:30.707105 unknown[2198]: wrote ssh authorized keys file for user: core Sep 5 23:54:30.727118 amazon-ssm-agent[2158]: 2025-09-05 23:54:30 INFO Checking if agent identity type OnPrem can be assumed Sep 5 23:54:30.776436 update-ssh-keys[2292]: Updated "/home/core/.ssh/authorized_keys" Sep 5 23:54:30.781427 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Sep 5 23:54:30.806290 systemd[1]: Finished sshkeys.service. 
Sep 5 23:54:30.827997 amazon-ssm-agent[2158]: 2025-09-05 23:54:30 INFO Checking if agent identity type EC2 can be assumed Sep 5 23:54:30.910584 locksmithd[2170]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 5 23:54:30.930389 amazon-ssm-agent[2158]: 2025-09-05 23:54:30 INFO Agent will take identity from EC2 Sep 5 23:54:31.003415 containerd[2120]: time="2025-09-05T23:54:31.003235773Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Sep 5 23:54:31.008282 dbus-daemon[2072]: [system] Successfully activated service 'org.freedesktop.hostname1' Sep 5 23:54:31.008543 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Sep 5 23:54:31.014756 dbus-daemon[2072]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=2166 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Sep 5 23:54:31.027910 amazon-ssm-agent[2158]: 2025-09-05 23:54:30 INFO [amazon-ssm-agent] using named pipe channel for IPC Sep 5 23:54:31.032033 systemd[1]: Starting polkit.service - Authorization Manager... Sep 5 23:54:31.060410 polkitd[2311]: Started polkitd version 121 Sep 5 23:54:31.085908 polkitd[2311]: Loading rules from directory /etc/polkit-1/rules.d Sep 5 23:54:31.086033 polkitd[2311]: Loading rules from directory /usr/share/polkit-1/rules.d Sep 5 23:54:31.090251 polkitd[2311]: Finished loading, compiling and executing 2 rules Sep 5 23:54:31.091683 dbus-daemon[2072]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Sep 5 23:54:31.092102 systemd[1]: Started polkit.service - Authorization Manager. Sep 5 23:54:31.098409 polkitd[2311]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Sep 5 23:54:31.115728 containerd[2120]: time="2025-09-05T23:54:31.115609270Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Sep 5 23:54:31.121926 containerd[2120]: time="2025-09-05T23:54:31.121847782Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.103-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 5 23:54:31.121926 containerd[2120]: time="2025-09-05T23:54:31.121917958Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 5 23:54:31.122114 containerd[2120]: time="2025-09-05T23:54:31.121954318Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 5 23:54:31.122504 containerd[2120]: time="2025-09-05T23:54:31.122249902Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Sep 5 23:54:31.122504 containerd[2120]: time="2025-09-05T23:54:31.122303194Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 5 23:54:31.122504 containerd[2120]: time="2025-09-05T23:54:31.122485186Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 5 23:54:31.122671 containerd[2120]: time="2025-09-05T23:54:31.122516278Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 5 23:54:31.122930 containerd[2120]: time="2025-09-05T23:54:31.122877622Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 5 23:54:31.122987 containerd[2120]: time="2025-09-05T23:54:31.122924314Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 5 23:54:31.122987 containerd[2120]: time="2025-09-05T23:54:31.122957530Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 5 23:54:31.123086 containerd[2120]: time="2025-09-05T23:54:31.122985046Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 5 23:54:31.124381 containerd[2120]: time="2025-09-05T23:54:31.123146398Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 5 23:54:31.126375 containerd[2120]: time="2025-09-05T23:54:31.125070802Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 5 23:54:31.126375 containerd[2120]: time="2025-09-05T23:54:31.126153106Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 5 23:54:31.126375 containerd[2120]: time="2025-09-05T23:54:31.126194578Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 5 23:54:31.127561 containerd[2120]: time="2025-09-05T23:54:31.127507930Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Sep 5 23:54:31.127691 containerd[2120]: time="2025-09-05T23:54:31.127648630Z" level=info msg="metadata content store policy set" policy=shared Sep 5 23:54:31.128468 amazon-ssm-agent[2158]: 2025-09-05 23:54:30 INFO [amazon-ssm-agent] using named pipe channel for IPC Sep 5 23:54:31.128947 systemd-resolved[2017]: System hostname changed to 'ip-172-31-22-173'. Sep 5 23:54:31.128948 systemd-hostnamed[2166]: Hostname set to (transient) Sep 5 23:54:31.134491 containerd[2120]: time="2025-09-05T23:54:31.134416930Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 5 23:54:31.134693 containerd[2120]: time="2025-09-05T23:54:31.134525890Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 5 23:54:31.134693 containerd[2120]: time="2025-09-05T23:54:31.134640322Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 5 23:54:31.134859 containerd[2120]: time="2025-09-05T23:54:31.134699230Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 5 23:54:31.134859 containerd[2120]: time="2025-09-05T23:54:31.134735146Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 5 23:54:31.135034 containerd[2120]: time="2025-09-05T23:54:31.134992102Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 5 23:54:31.137378 containerd[2120]: time="2025-09-05T23:54:31.135824350Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 5 23:54:31.137378 containerd[2120]: time="2025-09-05T23:54:31.136152034Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." 
type=io.containerd.runtime.v2 Sep 5 23:54:31.137378 containerd[2120]: time="2025-09-05T23:54:31.136187998Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 5 23:54:31.137378 containerd[2120]: time="2025-09-05T23:54:31.136219990Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 5 23:54:31.137378 containerd[2120]: time="2025-09-05T23:54:31.136253026Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 5 23:54:31.137378 containerd[2120]: time="2025-09-05T23:54:31.136282990Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 5 23:54:31.137378 containerd[2120]: time="2025-09-05T23:54:31.136314610Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 5 23:54:31.137378 containerd[2120]: time="2025-09-05T23:54:31.136346110Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 5 23:54:31.137378 containerd[2120]: time="2025-09-05T23:54:31.136407094Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 5 23:54:31.137378 containerd[2120]: time="2025-09-05T23:54:31.136440730Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 5 23:54:31.137378 containerd[2120]: time="2025-09-05T23:54:31.136480342Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 5 23:54:31.137378 containerd[2120]: time="2025-09-05T23:54:31.136509694Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 Sep 5 23:54:31.137378 containerd[2120]: time="2025-09-05T23:54:31.136549930Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 5 23:54:31.137378 containerd[2120]: time="2025-09-05T23:54:31.136590466Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 5 23:54:31.138037 containerd[2120]: time="2025-09-05T23:54:31.136621402Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 5 23:54:31.138037 containerd[2120]: time="2025-09-05T23:54:31.136652614Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 5 23:54:31.138037 containerd[2120]: time="2025-09-05T23:54:31.136683130Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 5 23:54:31.138037 containerd[2120]: time="2025-09-05T23:54:31.136714354Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 5 23:54:31.138037 containerd[2120]: time="2025-09-05T23:54:31.136744306Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 5 23:54:31.138037 containerd[2120]: time="2025-09-05T23:54:31.136777282Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 5 23:54:31.138037 containerd[2120]: time="2025-09-05T23:54:31.136808938Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 5 23:54:31.138037 containerd[2120]: time="2025-09-05T23:54:31.136853962Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 5 23:54:31.138037 containerd[2120]: time="2025-09-05T23:54:31.136884706Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." 
type=io.containerd.grpc.v1
Sep 5 23:54:31.138037 containerd[2120]: time="2025-09-05T23:54:31.136913350Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Sep 5 23:54:31.138037 containerd[2120]: time="2025-09-05T23:54:31.136941406Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Sep 5 23:54:31.138037 containerd[2120]: time="2025-09-05T23:54:31.136975474Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Sep 5 23:54:31.138037 containerd[2120]: time="2025-09-05T23:54:31.137018602Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Sep 5 23:54:31.138037 containerd[2120]: time="2025-09-05T23:54:31.137049130Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Sep 5 23:54:31.138037 containerd[2120]: time="2025-09-05T23:54:31.137075266Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Sep 5 23:54:31.138676 containerd[2120]: time="2025-09-05T23:54:31.137315050Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Sep 5 23:54:31.141259 containerd[2120]: time="2025-09-05T23:54:31.138751426Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Sep 5 23:54:31.141259 containerd[2120]: time="2025-09-05T23:54:31.140871262Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Sep 5 23:54:31.141259 containerd[2120]: time="2025-09-05T23:54:31.141043318Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Sep 5 23:54:31.141259 containerd[2120]: time="2025-09-05T23:54:31.141113002Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Sep 5 23:54:31.141259 containerd[2120]: time="2025-09-05T23:54:31.141176938Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Sep 5 23:54:31.141259 containerd[2120]: time="2025-09-05T23:54:31.141217174Z" level=info msg="NRI interface is disabled by configuration."
Sep 5 23:54:31.143716 containerd[2120]: time="2025-09-05T23:54:31.141769186Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Sep 5 23:54:31.147136 containerd[2120]: time="2025-09-05T23:54:31.147001138Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Sep 5 23:54:31.147568 containerd[2120]: time="2025-09-05T23:54:31.147514870Z" level=info msg="Connect containerd service"
Sep 5 23:54:31.148112 containerd[2120]: time="2025-09-05T23:54:31.148060750Z" level=info msg="using legacy CRI server"
Sep 5 23:54:31.148243 containerd[2120]: time="2025-09-05T23:54:31.148216402Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Sep 5 23:54:31.149123 containerd[2120]: time="2025-09-05T23:54:31.149017990Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Sep 5 23:54:31.154382 containerd[2120]: time="2025-09-05T23:54:31.151544218Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 5 23:54:31.154382 containerd[2120]: time="2025-09-05T23:54:31.151893946Z" level=info msg="Start subscribing containerd event"
Sep 5 23:54:31.154382 containerd[2120]: time="2025-09-05T23:54:31.151969222Z" level=info msg="Start recovering state"
Sep 5 23:54:31.154382 containerd[2120]: time="2025-09-05T23:54:31.152091046Z" level=info msg="Start event monitor"
Sep 5 23:54:31.154382 containerd[2120]: time="2025-09-05T23:54:31.152116366Z" level=info msg="Start snapshots syncer"
Sep 5 23:54:31.154382 containerd[2120]: time="2025-09-05T23:54:31.152138938Z" level=info msg="Start cni network conf syncer for default"
Sep 5 23:54:31.154382 containerd[2120]: time="2025-09-05T23:54:31.152157178Z" level=info msg="Start streaming server"
Sep 5 23:54:31.156423 containerd[2120]: time="2025-09-05T23:54:31.152348002Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Sep 5 23:54:31.156717 containerd[2120]: time="2025-09-05T23:54:31.156669874Z" level=info msg=serving... address=/run/containerd/containerd.sock
Sep 5 23:54:31.156991 containerd[2120]: time="2025-09-05T23:54:31.156936850Z" level=info msg="containerd successfully booted in 0.157483s"
Sep 5 23:54:31.157092 systemd[1]: Started containerd.service - containerd container runtime.
Sep 5 23:54:31.227526 amazon-ssm-agent[2158]: 2025-09-05 23:54:30 INFO [amazon-ssm-agent] using named pipe channel for IPC
Sep 5 23:54:31.326813 amazon-ssm-agent[2158]: 2025-09-05 23:54:30 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0
Sep 5 23:54:31.426180 amazon-ssm-agent[2158]: 2025-09-05 23:54:30 INFO [amazon-ssm-agent] OS: linux, Arch: arm64
Sep 5 23:54:31.471233 amazon-ssm-agent[2158]: 2025-09-05 23:54:30 INFO [amazon-ssm-agent] Starting Core Agent
Sep 5 23:54:31.471509 amazon-ssm-agent[2158]: 2025-09-05 23:54:30 INFO [amazon-ssm-agent] registrar detected. Attempting registration
Sep 5 23:54:31.471798 amazon-ssm-agent[2158]: 2025-09-05 23:54:30 INFO [Registrar] Starting registrar module
Sep 5 23:54:31.471798 amazon-ssm-agent[2158]: 2025-09-05 23:54:30 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration
Sep 5 23:54:31.471798 amazon-ssm-agent[2158]: 2025-09-05 23:54:31 INFO [EC2Identity] EC2 registration was successful.
Sep 5 23:54:31.471798 amazon-ssm-agent[2158]: 2025-09-05 23:54:31 INFO [CredentialRefresher] credentialRefresher has started
Sep 5 23:54:31.471798 amazon-ssm-agent[2158]: 2025-09-05 23:54:31 INFO [CredentialRefresher] Starting credentials refresher loop
Sep 5 23:54:31.471798 amazon-ssm-agent[2158]: 2025-09-05 23:54:31 INFO EC2RoleProvider Successfully connected with instance profile role credentials
Sep 5 23:54:31.526621 amazon-ssm-agent[2158]: 2025-09-05 23:54:31 INFO [CredentialRefresher] Next credential rotation will be in 31.824977108 minutes
Sep 5 23:54:32.522470 amazon-ssm-agent[2158]: 2025-09-05 23:54:32 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process
Sep 5 23:54:32.622702 amazon-ssm-agent[2158]: 2025-09-05 23:54:32 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2327) started
Sep 5 23:54:32.724217 amazon-ssm-agent[2158]: 2025-09-05 23:54:32 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds
Sep 5 23:54:32.783883 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 5 23:54:32.813070 (kubelet)[2344]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 5 23:54:32.865187 sshd_keygen[2124]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 5 23:54:32.909394 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Sep 5 23:54:32.924879 systemd[1]: Starting issuegen.service - Generate /run/issue...
Sep 5 23:54:32.932925 systemd[1]: Started sshd@0-172.31.22.173:22-139.178.68.195:36546.service - OpenSSH per-connection server daemon (139.178.68.195:36546).
Sep 5 23:54:32.946189 systemd[1]: issuegen.service: Deactivated successfully.
Sep 5 23:54:32.947785 systemd[1]: Finished issuegen.service - Generate /run/issue.
Sep 5 23:54:32.960818 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Sep 5 23:54:33.000592 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Sep 5 23:54:33.015008 systemd[1]: Started getty@tty1.service - Getty on tty1.
Sep 5 23:54:33.027980 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Sep 5 23:54:33.036946 systemd[1]: Reached target getty.target - Login Prompts.
Sep 5 23:54:33.042388 systemd[1]: Reached target multi-user.target - Multi-User System.
Sep 5 23:54:33.046142 systemd[1]: Startup finished in 9.657s (kernel) + 9.799s (userspace) = 19.456s.
Sep 5 23:54:33.150323 sshd[2356]: Accepted publickey for core from 139.178.68.195 port 36546 ssh2: RSA SHA256:vADW7QTWQ4wuHdKF8jUL6KxfiBYQUAY2qUkO4wqdhJM
Sep 5 23:54:33.154776 sshd[2356]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 23:54:33.174903 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Sep 5 23:54:33.184189 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Sep 5 23:54:33.190547 systemd-logind[2089]: New session 1 of user core.
Sep 5 23:54:33.217943 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Sep 5 23:54:33.230011 systemd[1]: Starting user@500.service - User Manager for UID 500...
Sep 5 23:54:33.247993 (systemd)[2379]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Sep 5 23:54:33.475448 systemd[2379]: Queued start job for default target default.target.
Sep 5 23:54:33.476964 systemd[2379]: Created slice app.slice - User Application Slice.
Sep 5 23:54:33.477028 systemd[2379]: Reached target paths.target - Paths.
Sep 5 23:54:33.477061 systemd[2379]: Reached target timers.target - Timers.
Sep 5 23:54:33.486505 systemd[2379]: Starting dbus.socket - D-Bus User Message Bus Socket...
Sep 5 23:54:33.504610 systemd[2379]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Sep 5 23:54:33.504785 systemd[2379]: Reached target sockets.target - Sockets.
Sep 5 23:54:33.505838 systemd[2379]: Reached target basic.target - Basic System.
Sep 5 23:54:33.505947 systemd[2379]: Reached target default.target - Main User Target.
Sep 5 23:54:33.506011 systemd[2379]: Startup finished in 246ms.
Sep 5 23:54:33.506731 systemd[1]: Started user@500.service - User Manager for UID 500.
Sep 5 23:54:33.518935 systemd[1]: Started session-1.scope - Session 1 of User core.
Sep 5 23:54:33.674857 systemd[1]: Started sshd@1-172.31.22.173:22-139.178.68.195:55108.service - OpenSSH per-connection server daemon (139.178.68.195:55108).
Sep 5 23:54:33.861737 sshd[2391]: Accepted publickey for core from 139.178.68.195 port 55108 ssh2: RSA SHA256:vADW7QTWQ4wuHdKF8jUL6KxfiBYQUAY2qUkO4wqdhJM
Sep 5 23:54:33.864942 sshd[2391]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 23:54:33.874754 systemd-logind[2089]: New session 2 of user core.
Sep 5 23:54:33.880998 systemd[1]: Started session-2.scope - Session 2 of User core.
Sep 5 23:54:34.010090 sshd[2391]: pam_unix(sshd:session): session closed for user core
Sep 5 23:54:34.016778 systemd-logind[2089]: Session 2 logged out. Waiting for processes to exit.
Sep 5 23:54:34.018279 systemd[1]: sshd@1-172.31.22.173:22-139.178.68.195:55108.service: Deactivated successfully.
Sep 5 23:54:34.026588 systemd[1]: session-2.scope: Deactivated successfully.
Sep 5 23:54:34.029689 systemd-logind[2089]: Removed session 2.
Sep 5 23:54:34.040831 systemd[1]: Started sshd@2-172.31.22.173:22-139.178.68.195:55120.service - OpenSSH per-connection server daemon (139.178.68.195:55120).
Sep 5 23:54:34.159957 kubelet[2344]: E0905 23:54:34.159785 2344 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 5 23:54:34.167871 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 5 23:54:34.168567 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 5 23:54:34.224675 sshd[2399]: Accepted publickey for core from 139.178.68.195 port 55120 ssh2: RSA SHA256:vADW7QTWQ4wuHdKF8jUL6KxfiBYQUAY2qUkO4wqdhJM
Sep 5 23:54:34.227247 sshd[2399]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 23:54:34.236726 systemd-logind[2089]: New session 3 of user core.
Sep 5 23:54:34.248969 systemd[1]: Started session-3.scope - Session 3 of User core.
Sep 5 23:54:34.368741 sshd[2399]: pam_unix(sshd:session): session closed for user core
Sep 5 23:54:34.375850 systemd[1]: sshd@2-172.31.22.173:22-139.178.68.195:55120.service: Deactivated successfully.
Sep 5 23:54:34.380623 systemd[1]: session-3.scope: Deactivated successfully.
Sep 5 23:54:34.382378 systemd-logind[2089]: Session 3 logged out. Waiting for processes to exit.
Sep 5 23:54:34.384045 systemd-logind[2089]: Removed session 3.
Sep 5 23:54:34.403014 systemd[1]: Started sshd@3-172.31.22.173:22-139.178.68.195:55130.service - OpenSSH per-connection server daemon (139.178.68.195:55130).
Sep 5 23:54:34.572568 sshd[2411]: Accepted publickey for core from 139.178.68.195 port 55130 ssh2: RSA SHA256:vADW7QTWQ4wuHdKF8jUL6KxfiBYQUAY2qUkO4wqdhJM
Sep 5 23:54:34.575661 sshd[2411]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 23:54:34.583704 systemd-logind[2089]: New session 4 of user core.
Sep 5 23:54:34.593826 systemd[1]: Started session-4.scope - Session 4 of User core.
Sep 5 23:54:34.724438 sshd[2411]: pam_unix(sshd:session): session closed for user core
Sep 5 23:54:34.731581 systemd-logind[2089]: Session 4 logged out. Waiting for processes to exit.
Sep 5 23:54:34.732842 systemd[1]: sshd@3-172.31.22.173:22-139.178.68.195:55130.service: Deactivated successfully.
Sep 5 23:54:34.737193 systemd[1]: session-4.scope: Deactivated successfully.
Sep 5 23:54:34.739674 systemd-logind[2089]: Removed session 4.
Sep 5 23:54:34.755863 systemd[1]: Started sshd@4-172.31.22.173:22-139.178.68.195:55146.service - OpenSSH per-connection server daemon (139.178.68.195:55146).
Sep 5 23:54:34.932525 sshd[2419]: Accepted publickey for core from 139.178.68.195 port 55146 ssh2: RSA SHA256:vADW7QTWQ4wuHdKF8jUL6KxfiBYQUAY2qUkO4wqdhJM
Sep 5 23:54:34.935561 sshd[2419]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 23:54:34.943948 systemd-logind[2089]: New session 5 of user core.
Sep 5 23:54:34.948974 systemd[1]: Started session-5.scope - Session 5 of User core.
Sep 5 23:54:35.070772 sudo[2423]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Sep 5 23:54:35.071448 sudo[2423]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 5 23:54:35.086999 sudo[2423]: pam_unix(sudo:session): session closed for user root
Sep 5 23:54:35.111196 sshd[2419]: pam_unix(sshd:session): session closed for user core
Sep 5 23:54:35.116954 systemd-logind[2089]: Session 5 logged out. Waiting for processes to exit.
Sep 5 23:54:35.118167 systemd[1]: sshd@4-172.31.22.173:22-139.178.68.195:55146.service: Deactivated successfully.
Sep 5 23:54:35.125841 systemd[1]: session-5.scope: Deactivated successfully.
Sep 5 23:54:35.128307 systemd-logind[2089]: Removed session 5.
Sep 5 23:54:35.140891 systemd[1]: Started sshd@5-172.31.22.173:22-139.178.68.195:55150.service - OpenSSH per-connection server daemon (139.178.68.195:55150).
Sep 5 23:54:35.324308 sshd[2428]: Accepted publickey for core from 139.178.68.195 port 55150 ssh2: RSA SHA256:vADW7QTWQ4wuHdKF8jUL6KxfiBYQUAY2qUkO4wqdhJM
Sep 5 23:54:35.326905 sshd[2428]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 23:54:35.336484 systemd-logind[2089]: New session 6 of user core.
Sep 5 23:54:35.339874 systemd[1]: Started session-6.scope - Session 6 of User core.
Sep 5 23:54:35.447770 sudo[2433]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Sep 5 23:54:35.448433 sudo[2433]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 5 23:54:35.455658 sudo[2433]: pam_unix(sudo:session): session closed for user root
Sep 5 23:54:35.465516 sudo[2432]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Sep 5 23:54:35.466146 sudo[2432]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 5 23:54:35.490824 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Sep 5 23:54:35.504397 auditctl[2436]: No rules
Sep 5 23:54:35.507140 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 5 23:54:35.507746 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Sep 5 23:54:35.516048 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Sep 5 23:54:35.563990 augenrules[2455]: No rules
Sep 5 23:54:35.567738 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Sep 5 23:54:35.570552 sudo[2432]: pam_unix(sudo:session): session closed for user root
Sep 5 23:54:35.594336 sshd[2428]: pam_unix(sshd:session): session closed for user core
Sep 5 23:54:35.602335 systemd[1]: sshd@5-172.31.22.173:22-139.178.68.195:55150.service: Deactivated successfully.
Sep 5 23:54:35.607835 systemd-logind[2089]: Session 6 logged out. Waiting for processes to exit.
Sep 5 23:54:35.608111 systemd[1]: session-6.scope: Deactivated successfully.
Sep 5 23:54:35.610624 systemd-logind[2089]: Removed session 6.
Sep 5 23:54:35.626877 systemd[1]: Started sshd@6-172.31.22.173:22-139.178.68.195:55166.service - OpenSSH per-connection server daemon (139.178.68.195:55166).
Sep 5 23:54:35.796509 sshd[2464]: Accepted publickey for core from 139.178.68.195 port 55166 ssh2: RSA SHA256:vADW7QTWQ4wuHdKF8jUL6KxfiBYQUAY2qUkO4wqdhJM
Sep 5 23:54:35.799184 sshd[2464]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 23:54:35.807891 systemd-logind[2089]: New session 7 of user core.
Sep 5 23:54:35.817942 systemd[1]: Started session-7.scope - Session 7 of User core.
Sep 5 23:54:35.925551 sudo[2468]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 5 23:54:35.926198 sudo[2468]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 5 23:54:36.264728 systemd-resolved[2017]: Clock change detected. Flushing caches.
Sep 5 23:54:36.736036 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 5 23:54:36.746818 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 5 23:54:36.805805 systemd[1]: Reloading requested from client PID 2502 ('systemctl') (unit session-7.scope)...
Sep 5 23:54:36.805841 systemd[1]: Reloading...
Sep 5 23:54:37.029349 zram_generator::config[2543]: No configuration found.
Sep 5 23:54:37.291910 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 5 23:54:37.459872 systemd[1]: Reloading finished in 653 ms.
Sep 5 23:54:37.542694 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Sep 5 23:54:37.542906 systemd[1]: kubelet.service: Failed with result 'signal'.
Sep 5 23:54:37.544074 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 5 23:54:37.555724 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 5 23:54:37.866690 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 5 23:54:37.885038 (kubelet)[2617]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 5 23:54:37.957849 kubelet[2617]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 5 23:54:37.960348 kubelet[2617]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Sep 5 23:54:37.960348 kubelet[2617]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 5 23:54:37.960348 kubelet[2617]: I0905 23:54:37.958537 2617 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 5 23:54:39.538383 kubelet[2617]: I0905 23:54:39.538087 2617 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Sep 5 23:54:39.538383 kubelet[2617]: I0905 23:54:39.538136 2617 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 5 23:54:39.539059 kubelet[2617]: I0905 23:54:39.538576 2617 server.go:934] "Client rotation is on, will bootstrap in background"
Sep 5 23:54:39.582798 kubelet[2617]: I0905 23:54:39.582068 2617 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 5 23:54:39.606184 kubelet[2617]: E0905 23:54:39.605996 2617 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Sep 5 23:54:39.606184 kubelet[2617]: I0905 23:54:39.606044 2617 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Sep 5 23:54:39.613155 kubelet[2617]: I0905 23:54:39.612766 2617 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 5 23:54:39.615173 kubelet[2617]: I0905 23:54:39.615110 2617 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Sep 5 23:54:39.615503 kubelet[2617]: I0905 23:54:39.615435 2617 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 5 23:54:39.615837 kubelet[2617]: I0905 23:54:39.615492 2617 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172.31.22.173","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1}
Sep 5 23:54:39.616036 kubelet[2617]: I0905 23:54:39.615965 2617 topology_manager.go:138] "Creating topology manager with none policy"
Sep 5 23:54:39.616036 kubelet[2617]: I0905 23:54:39.615992 2617 container_manager_linux.go:300] "Creating device plugin manager"
Sep 5 23:54:39.616531 kubelet[2617]: I0905 23:54:39.616488 2617 state_mem.go:36] "Initialized new in-memory state store"
Sep 5 23:54:39.621345 kubelet[2617]: I0905 23:54:39.620944 2617 kubelet.go:408] "Attempting to sync node with API server"
Sep 5 23:54:39.621345 kubelet[2617]: I0905 23:54:39.620994 2617 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 5 23:54:39.621345 kubelet[2617]: I0905 23:54:39.621034 2617 kubelet.go:314] "Adding apiserver pod source"
Sep 5 23:54:39.621345 kubelet[2617]: I0905 23:54:39.621220 2617 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 5 23:54:39.622683 kubelet[2617]: E0905 23:54:39.622648 2617 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 5 23:54:39.622854 kubelet[2617]: E0905 23:54:39.622833 2617 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 5 23:54:39.628703 kubelet[2617]: I0905 23:54:39.628670 2617 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Sep 5 23:54:39.631169 kubelet[2617]: I0905 23:54:39.631111 2617 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 5 23:54:39.631555 kubelet[2617]: W0905 23:54:39.631514 2617 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 5 23:54:39.633703 kubelet[2617]: I0905 23:54:39.633651 2617 server.go:1274] "Started kubelet"
Sep 5 23:54:39.639679 kubelet[2617]: I0905 23:54:39.639357 2617 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 5 23:54:39.647330 kubelet[2617]: I0905 23:54:39.647224 2617 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Sep 5 23:54:39.649206 kubelet[2617]: I0905 23:54:39.649149 2617 server.go:449] "Adding debug handlers to kubelet server"
Sep 5 23:54:39.651126 kubelet[2617]: I0905 23:54:39.650976 2617 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 5 23:54:39.651437 kubelet[2617]: I0905 23:54:39.651391 2617 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 5 23:54:39.653346 kubelet[2617]: I0905 23:54:39.651955 2617 volume_manager.go:289] "Starting Kubelet Volume Manager"
Sep 5 23:54:39.653346 kubelet[2617]: E0905 23:54:39.652543 2617 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.22.173\" not found"
Sep 5 23:54:39.653346 kubelet[2617]: I0905 23:54:39.652828 2617 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Sep 5 23:54:39.653346 kubelet[2617]: I0905 23:54:39.652908 2617 reconciler.go:26] "Reconciler: start to sync state"
Sep 5 23:54:39.661344 kubelet[2617]: E0905 23:54:39.657603 2617 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 5 23:54:39.661344 kubelet[2617]: I0905 23:54:39.658002 2617 factory.go:221] Registration of the systemd container factory successfully
Sep 5 23:54:39.661344 kubelet[2617]: I0905 23:54:39.658165 2617 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 5 23:54:39.661344 kubelet[2617]: I0905 23:54:39.660116 2617 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 5 23:54:39.663327 kubelet[2617]: E0905 23:54:39.663235 2617 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172.31.22.173\" not found" node="172.31.22.173"
Sep 5 23:54:39.666357 kubelet[2617]: I0905 23:54:39.665627 2617 factory.go:221] Registration of the containerd container factory successfully
Sep 5 23:54:39.728286 kubelet[2617]: I0905 23:54:39.728230 2617 cpu_manager.go:214] "Starting CPU manager" policy="none"
Sep 5 23:54:39.730419 kubelet[2617]: I0905 23:54:39.730363 2617 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Sep 5 23:54:39.730533 kubelet[2617]: I0905 23:54:39.730453 2617 state_mem.go:36] "Initialized new in-memory state store"
Sep 5 23:54:39.734978 kubelet[2617]: I0905 23:54:39.734919 2617 policy_none.go:49] "None policy: Start"
Sep 5 23:54:39.737465 kubelet[2617]: I0905 23:54:39.737395 2617 memory_manager.go:170] "Starting memorymanager" policy="None"
Sep 5 23:54:39.737465 kubelet[2617]: I0905 23:54:39.737454 2617 state_mem.go:35] "Initializing new in-memory state store"
Sep 5 23:54:39.751348 kubelet[2617]: I0905 23:54:39.749433 2617 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 5 23:54:39.751348 kubelet[2617]: I0905 23:54:39.749728 2617 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 5 23:54:39.751348 kubelet[2617]: I0905 23:54:39.749748 2617 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 5 23:54:39.754958 kubelet[2617]: I0905 23:54:39.754899 2617 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 5 23:54:39.761363 kubelet[2617]: E0905 23:54:39.760804 2617 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.31.22.173\" not found"
Sep 5 23:54:39.773731 kubelet[2617]: I0905 23:54:39.773648 2617 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 5 23:54:39.776374 kubelet[2617]: I0905 23:54:39.776297 2617 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 5 23:54:39.776620 kubelet[2617]: I0905 23:54:39.776587 2617 status_manager.go:217] "Starting to sync pod status with apiserver"
Sep 5 23:54:39.776744 kubelet[2617]: I0905 23:54:39.776726 2617 kubelet.go:2321] "Starting kubelet main sync loop"
Sep 5 23:54:39.777106 kubelet[2617]: E0905 23:54:39.777079 2617 kubelet.go:2345] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Sep 5 23:54:39.852155 kubelet[2617]: I0905 23:54:39.851521 2617 kubelet_node_status.go:72] "Attempting to register node" node="172.31.22.173"
Sep 5 23:54:39.862238 kubelet[2617]: I0905 23:54:39.862179 2617 kubelet_node_status.go:75] "Successfully registered node" node="172.31.22.173"
Sep 5 23:54:39.949427 sudo[2468]: pam_unix(sudo:session): session closed for user root
Sep 5 23:54:39.974606 sshd[2464]: pam_unix(sshd:session): session closed for user core
Sep 5 23:54:39.977771 kubelet[2617]: I0905 23:54:39.977580 2617 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24"
Sep 5 23:54:39.978773 containerd[2120]: time="2025-09-05T23:54:39.978707178Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Sep 5 23:54:39.981018 kubelet[2617]: I0905 23:54:39.979156 2617 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24"
Sep 5 23:54:39.982579 systemd[1]: sshd@6-172.31.22.173:22-139.178.68.195:55166.service: Deactivated successfully.
Sep 5 23:54:39.991615 systemd-logind[2089]: Session 7 logged out. Waiting for processes to exit.
Sep 5 23:54:39.991818 systemd[1]: session-7.scope: Deactivated successfully.
Sep 5 23:54:39.996247 systemd-logind[2089]: Removed session 7.
Sep 5 23:54:40.541929 kubelet[2617]: I0905 23:54:40.541869 2617 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Sep 5 23:54:40.542765 kubelet[2617]: W0905 23:54:40.542071 2617 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
Sep 5 23:54:40.542765 kubelet[2617]: W0905 23:54:40.542127 2617 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
Sep 5 23:54:40.542765 kubelet[2617]: W0905 23:54:40.542171 2617 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
Sep 5 23:54:40.623400 kubelet[2617]: E0905 23:54:40.623336 2617 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 5 23:54:40.623400 kubelet[2617]: I0905 23:54:40.623388 2617 apiserver.go:52] "Watching apiserver"
Sep 5 23:54:40.631346 kubelet[2617]: E0905 23:54:40.629788 2617 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tfdsb" podUID="d9af57f1-fd8a-41b6-bc88-120433372f08"
Sep 5 23:54:40.656092 kubelet[2617]: I0905 23:54:40.656035 2617 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Sep 5 23:54:40.657498 kubelet[2617]: I0905 23:54:40.657464 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fgbf\" (UniqueName: \"kubernetes.io/projected/194bea03-e9a9-4677-b488-2fde364ba650-kube-api-access-2fgbf\") pod \"calico-node-wdvzh\" (UID: \"194bea03-e9a9-4677-b488-2fde364ba650\") " pod="calico-system/calico-node-wdvzh"
Sep 5 23:54:40.657691 kubelet[2617]: I0905 23:54:40.657666 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d9af57f1-fd8a-41b6-bc88-120433372f08-kubelet-dir\") pod \"csi-node-driver-tfdsb\" (UID: \"d9af57f1-fd8a-41b6-bc88-120433372f08\") " pod="calico-system/csi-node-driver-tfdsb"
Sep 5 23:54:40.657820 kubelet[2617]: I0905 23:54:40.657798 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d9af57f1-fd8a-41b6-bc88-120433372f08-registration-dir\") pod \"csi-node-driver-tfdsb\" (UID: \"d9af57f1-fd8a-41b6-bc88-120433372f08\") " pod="calico-system/csi-node-driver-tfdsb"
Sep 5 23:54:40.657987 kubelet[2617]: I0905 23:54:40.657961 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d9af57f1-fd8a-41b6-bc88-120433372f08-socket-dir\") pod \"csi-node-driver-tfdsb\" (UID: \"d9af57f1-fd8a-41b6-bc88-120433372f08\") " pod="calico-system/csi-node-driver-tfdsb"
Sep 5 23:54:40.658118 kubelet[2617]: I0905 23:54:40.658096 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/194bea03-e9a9-4677-b488-2fde364ba650-cni-log-dir\") pod \"calico-node-wdvzh\" (UID: \"194bea03-e9a9-4677-b488-2fde364ba650\") " pod="calico-system/calico-node-wdvzh"
Sep 5 23:54:40.658262 kubelet[2617]: I0905 23:54:40.658239 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/194bea03-e9a9-4677-b488-2fde364ba650-cni-net-dir\") pod \"calico-node-wdvzh\" (UID: \"194bea03-e9a9-4677-b488-2fde364ba650\") " pod="calico-system/calico-node-wdvzh"
Sep 5 23:54:40.658423 kubelet[2617]: I0905 23:54:40.658401 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/194bea03-e9a9-4677-b488-2fde364ba650-var-lib-calico\") pod \"calico-node-wdvzh\" (UID: \"194bea03-e9a9-4677-b488-2fde364ba650\") " pod="calico-system/calico-node-wdvzh"
Sep 5 23:54:40.658557 kubelet[2617]: I0905 23:54:40.658535 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/255e524d-5897-4b6e-92a7-5765871d1629-kube-proxy\") pod \"kube-proxy-hxdf5\" (UID: \"255e524d-5897-4b6e-92a7-5765871d1629\") " pod="kube-system/kube-proxy-hxdf5"
Sep 5 23:54:40.658732 kubelet[2617]: I0905 23:54:40.658708 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/194bea03-e9a9-4677-b488-2fde364ba650-policysync\") pod \"calico-node-wdvzh\" (UID: \"194bea03-e9a9-4677-b488-2fde364ba650\") " pod="calico-system/calico-node-wdvzh"
Sep 5 23:54:40.658876 kubelet[2617]: I0905 23:54:40.658854 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/194bea03-e9a9-4677-b488-2fde364ba650-flexvol-driver-host\") pod \"calico-node-wdvzh\" (UID: \"194bea03-e9a9-4677-b488-2fde364ba650\") " pod="calico-system/calico-node-wdvzh"
Sep 5 23:54:40.659007 kubelet[2617]: I0905 23:54:40.658985 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/194bea03-e9a9-4677-b488-2fde364ba650-lib-modules\") pod \"calico-node-wdvzh\" (UID: \"194bea03-e9a9-4677-b488-2fde364ba650\") " pod="calico-system/calico-node-wdvzh"
Sep 5 23:54:40.659141 kubelet[2617]: I0905 23:54:40.659119 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/194bea03-e9a9-4677-b488-2fde364ba650-tigera-ca-bundle\") pod \"calico-node-wdvzh\" (UID: \"194bea03-e9a9-4677-b488-2fde364ba650\") " pod="calico-system/calico-node-wdvzh"
Sep 5 23:54:40.659285 kubelet[2617]: I0905 23:54:40.659262 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/194bea03-e9a9-4677-b488-2fde364ba650-var-run-calico\") pod \"calico-node-wdvzh\" (UID: \"194bea03-e9a9-4677-b488-2fde364ba650\") " pod="calico-system/calico-node-wdvzh"
Sep 5 23:54:40.659444 kubelet[2617]: I0905 23:54:40.659421 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4888h\" (UniqueName: \"kubernetes.io/projected/d9af57f1-fd8a-41b6-bc88-120433372f08-kube-api-access-4888h\") pod \"csi-node-driver-tfdsb\" (UID: \"d9af57f1-fd8a-41b6-bc88-120433372f08\") " pod="calico-system/csi-node-driver-tfdsb"
Sep 5 23:54:40.659576 kubelet[2617]: I0905
23:54:40.659554 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/255e524d-5897-4b6e-92a7-5765871d1629-xtables-lock\") pod \"kube-proxy-hxdf5\" (UID: \"255e524d-5897-4b6e-92a7-5765871d1629\") " pod="kube-system/kube-proxy-hxdf5" Sep 5 23:54:40.659734 kubelet[2617]: I0905 23:54:40.659710 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9jh4\" (UniqueName: \"kubernetes.io/projected/255e524d-5897-4b6e-92a7-5765871d1629-kube-api-access-d9jh4\") pod \"kube-proxy-hxdf5\" (UID: \"255e524d-5897-4b6e-92a7-5765871d1629\") " pod="kube-system/kube-proxy-hxdf5" Sep 5 23:54:40.659871 kubelet[2617]: I0905 23:54:40.659847 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/194bea03-e9a9-4677-b488-2fde364ba650-cni-bin-dir\") pod \"calico-node-wdvzh\" (UID: \"194bea03-e9a9-4677-b488-2fde364ba650\") " pod="calico-system/calico-node-wdvzh" Sep 5 23:54:40.660013 kubelet[2617]: I0905 23:54:40.659991 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/d9af57f1-fd8a-41b6-bc88-120433372f08-varrun\") pod \"csi-node-driver-tfdsb\" (UID: \"d9af57f1-fd8a-41b6-bc88-120433372f08\") " pod="calico-system/csi-node-driver-tfdsb" Sep 5 23:54:40.660140 kubelet[2617]: I0905 23:54:40.660118 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/255e524d-5897-4b6e-92a7-5765871d1629-lib-modules\") pod \"kube-proxy-hxdf5\" (UID: \"255e524d-5897-4b6e-92a7-5765871d1629\") " pod="kube-system/kube-proxy-hxdf5" Sep 5 23:54:40.660357 kubelet[2617]: I0905 23:54:40.660232 2617 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/194bea03-e9a9-4677-b488-2fde364ba650-node-certs\") pod \"calico-node-wdvzh\" (UID: \"194bea03-e9a9-4677-b488-2fde364ba650\") " pod="calico-system/calico-node-wdvzh" Sep 5 23:54:40.660357 kubelet[2617]: I0905 23:54:40.660272 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/194bea03-e9a9-4677-b488-2fde364ba650-xtables-lock\") pod \"calico-node-wdvzh\" (UID: \"194bea03-e9a9-4677-b488-2fde364ba650\") " pod="calico-system/calico-node-wdvzh" Sep 5 23:54:40.774786 kubelet[2617]: E0905 23:54:40.774625 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:54:40.774786 kubelet[2617]: W0905 23:54:40.774662 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:54:40.774786 kubelet[2617]: E0905 23:54:40.774695 2617 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:54:40.784805 kubelet[2617]: E0905 23:54:40.784460 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:54:40.784805 kubelet[2617]: W0905 23:54:40.784497 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:54:40.784805 kubelet[2617]: E0905 23:54:40.784528 2617 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 23:54:40.801916 kubelet[2617]: E0905 23:54:40.801478 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:54:40.801916 kubelet[2617]: W0905 23:54:40.801512 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:54:40.801916 kubelet[2617]: E0905 23:54:40.801543 2617 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:54:40.812758 kubelet[2617]: E0905 23:54:40.812635 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:54:40.812758 kubelet[2617]: W0905 23:54:40.812671 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:54:40.812758 kubelet[2617]: E0905 23:54:40.812702 2617 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 23:54:40.939362 containerd[2120]: time="2025-09-05T23:54:40.939275119Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hxdf5,Uid:255e524d-5897-4b6e-92a7-5765871d1629,Namespace:kube-system,Attempt:0,}" Sep 5 23:54:40.940153 containerd[2120]: time="2025-09-05T23:54:40.939277831Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-wdvzh,Uid:194bea03-e9a9-4677-b488-2fde364ba650,Namespace:calico-system,Attempt:0,}" Sep 5 23:54:41.477487 containerd[2120]: time="2025-09-05T23:54:41.477424061Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 5 23:54:41.481974 containerd[2120]: time="2025-09-05T23:54:41.481885529Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Sep 5 23:54:41.483420 containerd[2120]: time="2025-09-05T23:54:41.483353549Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 5 23:54:41.486260 containerd[2120]: time="2025-09-05T23:54:41.486188477Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 5 23:54:41.487593 containerd[2120]: time="2025-09-05T23:54:41.487496273Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 5 23:54:41.493624 containerd[2120]: time="2025-09-05T23:54:41.493536065Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} 
labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 5 23:54:41.495960 containerd[2120]: time="2025-09-05T23:54:41.495561929Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 556.136822ms" Sep 5 23:54:41.499993 containerd[2120]: time="2025-09-05T23:54:41.499907622Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 560.318775ms" Sep 5 23:54:41.623725 kubelet[2617]: E0905 23:54:41.623634 2617 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 5 23:54:41.670167 containerd[2120]: time="2025-09-05T23:54:41.669692994Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 23:54:41.670167 containerd[2120]: time="2025-09-05T23:54:41.669787446Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 23:54:41.670167 containerd[2120]: time="2025-09-05T23:54:41.669843546Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:54:41.670167 containerd[2120]: time="2025-09-05T23:54:41.670067334Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:54:41.673942 containerd[2120]: time="2025-09-05T23:54:41.673470450Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 23:54:41.673942 containerd[2120]: time="2025-09-05T23:54:41.673569162Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 23:54:41.673942 containerd[2120]: time="2025-09-05T23:54:41.673608006Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:54:41.675885 containerd[2120]: time="2025-09-05T23:54:41.675404922Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:54:41.782417 kubelet[2617]: E0905 23:54:41.779611 2617 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tfdsb" podUID="d9af57f1-fd8a-41b6-bc88-120433372f08" Sep 5 23:54:41.784946 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount537391959.mount: Deactivated successfully. Sep 5 23:54:41.827452 systemd[1]: run-containerd-runc-k8s.io-8dd1ee6e6bae61e9cafdc6b1f76736d0c42309697fb5333af7fea4bdebf9c02d-runc.rsIKeF.mount: Deactivated successfully. 
Sep 5 23:54:41.888619 containerd[2120]: time="2025-09-05T23:54:41.888555739Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hxdf5,Uid:255e524d-5897-4b6e-92a7-5765871d1629,Namespace:kube-system,Attempt:0,} returns sandbox id \"8dd1ee6e6bae61e9cafdc6b1f76736d0c42309697fb5333af7fea4bdebf9c02d\""
Sep 5 23:54:41.897556 containerd[2120]: time="2025-09-05T23:54:41.897359431Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\""
Sep 5 23:54:41.907698 containerd[2120]: time="2025-09-05T23:54:41.907114004Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-wdvzh,Uid:194bea03-e9a9-4677-b488-2fde364ba650,Namespace:calico-system,Attempt:0,} returns sandbox id \"9fd7bb8e2ca4862dc9a0ef1d461ebddef02707eb812d95b2c5b3323488a2e1be\""
Sep 5 23:54:42.624336 kubelet[2617]: E0905 23:54:42.624249 2617 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 5 23:54:43.246239 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4207233646.mount: Deactivated successfully.
Sep 5 23:54:43.625427 kubelet[2617]: E0905 23:54:43.625284 2617 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 5 23:54:43.778907 kubelet[2617]: E0905 23:54:43.778844 2617 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tfdsb" podUID="d9af57f1-fd8a-41b6-bc88-120433372f08"
Sep 5 23:54:43.819876 containerd[2120]: time="2025-09-05T23:54:43.819821277Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 23:54:43.822759 containerd[2120]: time="2025-09-05T23:54:43.822662601Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.12: active requests=0, bytes read=26916095"
Sep 5 23:54:43.823969 containerd[2120]: time="2025-09-05T23:54:43.823895385Z" level=info msg="ImageCreate event name:\"sha256:25c7652bd0d893b147dce9135dc6a68c37da76f9a20dceec1d520782031b2f36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 23:54:43.830606 containerd[2120]: time="2025-09-05T23:54:43.830506557Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 23:54:43.832022 containerd[2120]: time="2025-09-05T23:54:43.831956049Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.12\" with image id \"sha256:25c7652bd0d893b147dce9135dc6a68c37da76f9a20dceec1d520782031b2f36\", repo tag \"registry.k8s.io/kube-proxy:v1.31.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6\", size \"26915114\" in 1.934400262s"
Sep 5 23:54:43.832022 containerd[2120]: time="2025-09-05T23:54:43.832018125Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\" returns image reference \"sha256:25c7652bd0d893b147dce9135dc6a68c37da76f9a20dceec1d520782031b2f36\""
Sep 5 23:54:43.834204 containerd[2120]: time="2025-09-05T23:54:43.833928645Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\""
Sep 5 23:54:43.837275 containerd[2120]: time="2025-09-05T23:54:43.837220965Z" level=info msg="CreateContainer within sandbox \"8dd1ee6e6bae61e9cafdc6b1f76736d0c42309697fb5333af7fea4bdebf9c02d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Sep 5 23:54:43.861023 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1678654678.mount: Deactivated successfully.
Sep 5 23:54:43.865505 containerd[2120]: time="2025-09-05T23:54:43.865262829Z" level=info msg="CreateContainer within sandbox \"8dd1ee6e6bae61e9cafdc6b1f76736d0c42309697fb5333af7fea4bdebf9c02d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"8099b1b48e97d5ffce4b69b3e608dc26daa8e4f7e63af620851c0c891005f7f4\""
Sep 5 23:54:43.867324 containerd[2120]: time="2025-09-05T23:54:43.867262473Z" level=info msg="StartContainer for \"8099b1b48e97d5ffce4b69b3e608dc26daa8e4f7e63af620851c0c891005f7f4\""
Sep 5 23:54:43.983743 containerd[2120]: time="2025-09-05T23:54:43.982439170Z" level=info msg="StartContainer for \"8099b1b48e97d5ffce4b69b3e608dc26daa8e4f7e63af620851c0c891005f7f4\" returns successfully"
Sep 5 23:54:44.246157 systemd[1]: run-containerd-runc-k8s.io-8099b1b48e97d5ffce4b69b3e608dc26daa8e4f7e63af620851c0c891005f7f4-runc.Nia2LI.mount: Deactivated successfully.
Sep 5 23:54:44.626913 kubelet[2617]: E0905 23:54:44.626738 2617 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 5 23:54:44.852721 kubelet[2617]: I0905 23:54:44.852612 2617 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-hxdf5" podStartSLOduration=3.9137262919999998 podStartE2EDuration="5.852592678s" podCreationTimestamp="2025-09-05 23:54:39 +0000 UTC" firstStartedPulling="2025-09-05 23:54:41.894879115 +0000 UTC m=+4.003645148" lastFinishedPulling="2025-09-05 23:54:43.833745501 +0000 UTC m=+5.942511534" observedRunningTime="2025-09-05 23:54:44.852554794 +0000 UTC m=+6.961320875" watchObservedRunningTime="2025-09-05 23:54:44.852592678 +0000 UTC m=+6.961358735" Sep 5 23:54:44.885300 kubelet[2617]: E0905 23:54:44.885155 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:54:44.885300 kubelet[2617]: W0905 23:54:44.885193 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:54:44.885300 kubelet[2617]: E0905 23:54:44.885225 2617 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 23:54:44.886000 kubelet[2617]: E0905 23:54:44.885969 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:54:44.886084 kubelet[2617]: W0905 23:54:44.886000 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:54:44.886084 kubelet[2617]: E0905 23:54:44.886029 2617 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:54:44.886386 kubelet[2617]: E0905 23:54:44.886356 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:54:44.886386 kubelet[2617]: W0905 23:54:44.886383 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:54:44.886608 kubelet[2617]: E0905 23:54:44.886405 2617 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 23:54:44.886770 kubelet[2617]: E0905 23:54:44.886724 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:54:44.886770 kubelet[2617]: W0905 23:54:44.886750 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:54:44.886978 kubelet[2617]: E0905 23:54:44.886772 2617 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:54:44.887102 kubelet[2617]: E0905 23:54:44.887077 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:54:44.887225 kubelet[2617]: W0905 23:54:44.887102 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:54:44.887225 kubelet[2617]: E0905 23:54:44.887125 2617 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 23:54:44.887466 kubelet[2617]: E0905 23:54:44.887437 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:54:44.887526 kubelet[2617]: W0905 23:54:44.887464 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:54:44.887526 kubelet[2617]: E0905 23:54:44.887486 2617 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:54:44.887785 kubelet[2617]: E0905 23:54:44.887761 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:54:44.887912 kubelet[2617]: W0905 23:54:44.887785 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:54:44.887912 kubelet[2617]: E0905 23:54:44.887805 2617 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 23:54:44.888116 kubelet[2617]: E0905 23:54:44.888091 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:54:44.888189 kubelet[2617]: W0905 23:54:44.888116 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:54:44.888189 kubelet[2617]: E0905 23:54:44.888138 2617 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:54:44.888526 kubelet[2617]: E0905 23:54:44.888500 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:54:44.888624 kubelet[2617]: W0905 23:54:44.888526 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:54:44.888624 kubelet[2617]: E0905 23:54:44.888548 2617 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 23:54:44.888858 kubelet[2617]: E0905 23:54:44.888832 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:54:44.888933 kubelet[2617]: W0905 23:54:44.888857 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:54:44.888933 kubelet[2617]: E0905 23:54:44.888881 2617 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:54:44.889184 kubelet[2617]: E0905 23:54:44.889159 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:54:44.889271 kubelet[2617]: W0905 23:54:44.889183 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:54:44.889271 kubelet[2617]: E0905 23:54:44.889204 2617 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 23:54:44.889561 kubelet[2617]: E0905 23:54:44.889535 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:54:44.889666 kubelet[2617]: W0905 23:54:44.889561 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:54:44.889666 kubelet[2617]: E0905 23:54:44.889582 2617 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:54:44.889937 kubelet[2617]: E0905 23:54:44.889912 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:54:44.889937 kubelet[2617]: W0905 23:54:44.889937 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:54:44.890076 kubelet[2617]: E0905 23:54:44.889962 2617 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 23:54:44.890272 kubelet[2617]: E0905 23:54:44.890247 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:54:44.890378 kubelet[2617]: W0905 23:54:44.890271 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:54:44.890378 kubelet[2617]: E0905 23:54:44.890292 2617 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:54:44.890638 kubelet[2617]: E0905 23:54:44.890612 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:54:44.890716 kubelet[2617]: W0905 23:54:44.890639 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:54:44.890716 kubelet[2617]: E0905 23:54:44.890661 2617 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 23:54:44.890985 kubelet[2617]: E0905 23:54:44.890960 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:54:44.891086 kubelet[2617]: W0905 23:54:44.890985 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:54:44.891086 kubelet[2617]: E0905 23:54:44.891005 2617 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:54:44.891413 kubelet[2617]: E0905 23:54:44.891384 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:54:44.891474 kubelet[2617]: W0905 23:54:44.891412 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:54:44.891474 kubelet[2617]: E0905 23:54:44.891436 2617 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 23:54:44.898424 kubelet[2617]: E0905 23:54:44.898397 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:54:44.898487 kubelet[2617]: W0905 23:54:44.898423 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:54:44.898487 kubelet[2617]: E0905 23:54:44.898445 2617 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:54:45.045028 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1375692832.mount: Deactivated successfully. Sep 5 23:54:45.158476 containerd[2120]: time="2025-09-05T23:54:45.156991508Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:54:45.159251 containerd[2120]: time="2025-09-05T23:54:45.159203048Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=5636193" Sep 5 23:54:45.159498 containerd[2120]: time="2025-09-05T23:54:45.159463400Z" level=info msg="ImageCreate event name:\"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:54:45.162749 containerd[2120]: time="2025-09-05T23:54:45.162697784Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:54:45.164224 containerd[2120]: time="2025-09-05T23:54:45.164167976Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id 
\"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5636015\" in 1.330183591s" Sep 5 23:54:45.164451 containerd[2120]: time="2025-09-05T23:54:45.164414468Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\"" Sep 5 23:54:45.167970 containerd[2120]: time="2025-09-05T23:54:45.167922776Z" level=info msg="CreateContainer within sandbox \"9fd7bb8e2ca4862dc9a0ef1d461ebddef02707eb812d95b2c5b3323488a2e1be\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 5 23:54:45.190618 containerd[2120]: time="2025-09-05T23:54:45.190525088Z" level=info msg="CreateContainer within sandbox \"9fd7bb8e2ca4862dc9a0ef1d461ebddef02707eb812d95b2c5b3323488a2e1be\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"b72e79057eece870e8517ee9485a725e764f51564146cb3055e4439de221f951\"" Sep 5 23:54:45.192380 containerd[2120]: time="2025-09-05T23:54:45.191598296Z" level=info msg="StartContainer for \"b72e79057eece870e8517ee9485a725e764f51564146cb3055e4439de221f951\"" Sep 5 23:54:45.298955 containerd[2120]: time="2025-09-05T23:54:45.298893584Z" level=info msg="StartContainer for \"b72e79057eece870e8517ee9485a725e764f51564146cb3055e4439de221f951\" returns successfully" Sep 5 23:54:45.362156 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b72e79057eece870e8517ee9485a725e764f51564146cb3055e4439de221f951-rootfs.mount: Deactivated successfully. 
Sep 5 23:54:45.545208 containerd[2120]: time="2025-09-05T23:54:45.545040298Z" level=info msg="shim disconnected" id=b72e79057eece870e8517ee9485a725e764f51564146cb3055e4439de221f951 namespace=k8s.io Sep 5 23:54:45.545960 containerd[2120]: time="2025-09-05T23:54:45.545523310Z" level=warning msg="cleaning up after shim disconnected" id=b72e79057eece870e8517ee9485a725e764f51564146cb3055e4439de221f951 namespace=k8s.io Sep 5 23:54:45.545960 containerd[2120]: time="2025-09-05T23:54:45.545556370Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 5 23:54:45.627665 kubelet[2617]: E0905 23:54:45.627604 2617 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 5 23:54:45.778107 kubelet[2617]: E0905 23:54:45.777659 2617 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tfdsb" podUID="d9af57f1-fd8a-41b6-bc88-120433372f08" Sep 5 23:54:45.830408 containerd[2120]: time="2025-09-05T23:54:45.829845791Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 5 23:54:46.627764 kubelet[2617]: E0905 23:54:46.627709 2617 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 5 23:54:47.628565 kubelet[2617]: E0905 23:54:47.628485 2617 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 5 23:54:47.779769 kubelet[2617]: E0905 23:54:47.779088 2617 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tfdsb" podUID="d9af57f1-fd8a-41b6-bc88-120433372f08" 
Sep 5 23:54:48.510169 containerd[2120]: time="2025-09-05T23:54:48.510106272Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:54:48.512370 containerd[2120]: time="2025-09-05T23:54:48.512035860Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=65913477" Sep 5 23:54:48.512370 containerd[2120]: time="2025-09-05T23:54:48.512264592Z" level=info msg="ImageCreate event name:\"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:54:48.516274 containerd[2120]: time="2025-09-05T23:54:48.516209088Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:54:48.519063 containerd[2120]: time="2025-09-05T23:54:48.517835532Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"67282718\" in 2.687930581s" Sep 5 23:54:48.519063 containerd[2120]: time="2025-09-05T23:54:48.517893960Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\"" Sep 5 23:54:48.521466 containerd[2120]: time="2025-09-05T23:54:48.521410956Z" level=info msg="CreateContainer within sandbox \"9fd7bb8e2ca4862dc9a0ef1d461ebddef02707eb812d95b2c5b3323488a2e1be\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 5 23:54:48.541105 containerd[2120]: time="2025-09-05T23:54:48.541027392Z" level=info msg="CreateContainer within 
sandbox \"9fd7bb8e2ca4862dc9a0ef1d461ebddef02707eb812d95b2c5b3323488a2e1be\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"c4289fab4206e09ea571e1bbb382a69526abef8d7f14a75bd9d8f2569063faaf\"" Sep 5 23:54:48.542838 containerd[2120]: time="2025-09-05T23:54:48.542785380Z" level=info msg="StartContainer for \"c4289fab4206e09ea571e1bbb382a69526abef8d7f14a75bd9d8f2569063faaf\"" Sep 5 23:54:48.629711 kubelet[2617]: E0905 23:54:48.629622 2617 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 5 23:54:48.656391 containerd[2120]: time="2025-09-05T23:54:48.656229241Z" level=info msg="StartContainer for \"c4289fab4206e09ea571e1bbb382a69526abef8d7f14a75bd9d8f2569063faaf\" returns successfully" Sep 5 23:54:49.588705 containerd[2120]: time="2025-09-05T23:54:49.588632738Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 5 23:54:49.613434 kubelet[2617]: I0905 23:54:49.613012 2617 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Sep 5 23:54:49.628912 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c4289fab4206e09ea571e1bbb382a69526abef8d7f14a75bd9d8f2569063faaf-rootfs.mount: Deactivated successfully. 
Sep 5 23:54:49.631412 kubelet[2617]: E0905 23:54:49.631350 2617 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 5 23:54:49.772511 containerd[2120]: time="2025-09-05T23:54:49.772424955Z" level=error msg="collecting metrics for c4289fab4206e09ea571e1bbb382a69526abef8d7f14a75bd9d8f2569063faaf" error="cgroups: cgroup deleted: unknown" Sep 5 23:54:49.785103 containerd[2120]: time="2025-09-05T23:54:49.784682103Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tfdsb,Uid:d9af57f1-fd8a-41b6-bc88-120433372f08,Namespace:calico-system,Attempt:0,}" Sep 5 23:54:50.184919 containerd[2120]: time="2025-09-05T23:54:50.184835881Z" level=error msg="Failed to destroy network for sandbox \"6bf18df0d094cc357824a357389e65b9332beb98d25f4c289693aafb42dc6a39\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:54:50.188501 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6bf18df0d094cc357824a357389e65b9332beb98d25f4c289693aafb42dc6a39-shm.mount: Deactivated successfully. 
Sep 5 23:54:50.191788 containerd[2120]: time="2025-09-05T23:54:50.189068677Z" level=error msg="encountered an error cleaning up failed sandbox \"6bf18df0d094cc357824a357389e65b9332beb98d25f4c289693aafb42dc6a39\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:54:50.191788 containerd[2120]: time="2025-09-05T23:54:50.189167701Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tfdsb,Uid:d9af57f1-fd8a-41b6-bc88-120433372f08,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6bf18df0d094cc357824a357389e65b9332beb98d25f4c289693aafb42dc6a39\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:54:50.192001 kubelet[2617]: E0905 23:54:50.189490 2617 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6bf18df0d094cc357824a357389e65b9332beb98d25f4c289693aafb42dc6a39\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:54:50.192001 kubelet[2617]: E0905 23:54:50.189576 2617 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6bf18df0d094cc357824a357389e65b9332beb98d25f4c289693aafb42dc6a39\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-tfdsb" Sep 5 23:54:50.192001 kubelet[2617]: E0905 23:54:50.189609 2617 
kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6bf18df0d094cc357824a357389e65b9332beb98d25f4c289693aafb42dc6a39\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-tfdsb" Sep 5 23:54:50.192178 kubelet[2617]: E0905 23:54:50.189686 2617 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-tfdsb_calico-system(d9af57f1-fd8a-41b6-bc88-120433372f08)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-tfdsb_calico-system(d9af57f1-fd8a-41b6-bc88-120433372f08)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6bf18df0d094cc357824a357389e65b9332beb98d25f4c289693aafb42dc6a39\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-tfdsb" podUID="d9af57f1-fd8a-41b6-bc88-120433372f08" Sep 5 23:54:50.283390 containerd[2120]: time="2025-09-05T23:54:50.282991249Z" level=info msg="shim disconnected" id=c4289fab4206e09ea571e1bbb382a69526abef8d7f14a75bd9d8f2569063faaf namespace=k8s.io Sep 5 23:54:50.283390 containerd[2120]: time="2025-09-05T23:54:50.283115149Z" level=warning msg="cleaning up after shim disconnected" id=c4289fab4206e09ea571e1bbb382a69526abef8d7f14a75bd9d8f2569063faaf namespace=k8s.io Sep 5 23:54:50.283390 containerd[2120]: time="2025-09-05T23:54:50.283136389Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 5 23:54:50.631913 kubelet[2617]: E0905 23:54:50.631766 2617 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 5 23:54:50.847294 kubelet[2617]: I0905 
23:54:50.846336 2617 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6bf18df0d094cc357824a357389e65b9332beb98d25f4c289693aafb42dc6a39" Sep 5 23:54:50.848529 containerd[2120]: time="2025-09-05T23:54:50.847854136Z" level=info msg="StopPodSandbox for \"6bf18df0d094cc357824a357389e65b9332beb98d25f4c289693aafb42dc6a39\"" Sep 5 23:54:50.848529 containerd[2120]: time="2025-09-05T23:54:50.848123848Z" level=info msg="Ensure that sandbox 6bf18df0d094cc357824a357389e65b9332beb98d25f4c289693aafb42dc6a39 in task-service has been cleanup successfully" Sep 5 23:54:50.861111 containerd[2120]: time="2025-09-05T23:54:50.861048556Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 5 23:54:50.903555 containerd[2120]: time="2025-09-05T23:54:50.903125248Z" level=error msg="StopPodSandbox for \"6bf18df0d094cc357824a357389e65b9332beb98d25f4c289693aafb42dc6a39\" failed" error="failed to destroy network for sandbox \"6bf18df0d094cc357824a357389e65b9332beb98d25f4c289693aafb42dc6a39\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:54:50.904016 kubelet[2617]: E0905 23:54:50.903721 2617 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6bf18df0d094cc357824a357389e65b9332beb98d25f4c289693aafb42dc6a39\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6bf18df0d094cc357824a357389e65b9332beb98d25f4c289693aafb42dc6a39" Sep 5 23:54:50.904016 kubelet[2617]: E0905 23:54:50.903819 2617 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6bf18df0d094cc357824a357389e65b9332beb98d25f4c289693aafb42dc6a39"} Sep 5 23:54:50.904016 kubelet[2617]: 
E0905 23:54:50.903902 2617 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d9af57f1-fd8a-41b6-bc88-120433372f08\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6bf18df0d094cc357824a357389e65b9332beb98d25f4c289693aafb42dc6a39\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 5 23:54:50.904016 kubelet[2617]: E0905 23:54:50.903941 2617 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d9af57f1-fd8a-41b6-bc88-120433372f08\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6bf18df0d094cc357824a357389e65b9332beb98d25f4c289693aafb42dc6a39\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-tfdsb" podUID="d9af57f1-fd8a-41b6-bc88-120433372f08" Sep 5 23:54:51.632969 kubelet[2617]: E0905 23:54:51.632903 2617 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 5 23:54:52.633687 kubelet[2617]: E0905 23:54:52.633580 2617 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 5 23:54:53.149382 kubelet[2617]: I0905 23:54:53.149238 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mxfg\" (UniqueName: \"kubernetes.io/projected/269c61c7-d641-41eb-a0a5-10228a03a853-kube-api-access-5mxfg\") pod \"nginx-deployment-8587fbcb89-ttvzr\" (UID: \"269c61c7-d641-41eb-a0a5-10228a03a853\") " pod="default/nginx-deployment-8587fbcb89-ttvzr" Sep 5 23:54:53.332886 containerd[2120]: 
time="2025-09-05T23:54:53.332707876Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-ttvzr,Uid:269c61c7-d641-41eb-a0a5-10228a03a853,Namespace:default,Attempt:0,}" Sep 5 23:54:53.483346 containerd[2120]: time="2025-09-05T23:54:53.482957789Z" level=error msg="Failed to destroy network for sandbox \"828237b29498d3befc1b42c0bd47039f6c50b933b7c805c2f7d40e85ce0ff1ff\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:54:53.487290 containerd[2120]: time="2025-09-05T23:54:53.487035473Z" level=error msg="encountered an error cleaning up failed sandbox \"828237b29498d3befc1b42c0bd47039f6c50b933b7c805c2f7d40e85ce0ff1ff\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:54:53.487647 containerd[2120]: time="2025-09-05T23:54:53.487522421Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-ttvzr,Uid:269c61c7-d641-41eb-a0a5-10228a03a853,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"828237b29498d3befc1b42c0bd47039f6c50b933b7c805c2f7d40e85ce0ff1ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:54:53.488799 kubelet[2617]: E0905 23:54:53.488089 2617 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"828237b29498d3befc1b42c0bd47039f6c50b933b7c805c2f7d40e85ce0ff1ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Sep 5 23:54:53.488799 kubelet[2617]: E0905 23:54:53.488173 2617 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"828237b29498d3befc1b42c0bd47039f6c50b933b7c805c2f7d40e85ce0ff1ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-ttvzr" Sep 5 23:54:53.488799 kubelet[2617]: E0905 23:54:53.488206 2617 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"828237b29498d3befc1b42c0bd47039f6c50b933b7c805c2f7d40e85ce0ff1ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-ttvzr" Sep 5 23:54:53.489138 kubelet[2617]: E0905 23:54:53.488268 2617 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-ttvzr_default(269c61c7-d641-41eb-a0a5-10228a03a853)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-ttvzr_default(269c61c7-d641-41eb-a0a5-10228a03a853)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"828237b29498d3befc1b42c0bd47039f6c50b933b7c805c2f7d40e85ce0ff1ff\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-ttvzr" podUID="269c61c7-d641-41eb-a0a5-10228a03a853" Sep 5 23:54:53.488825 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-828237b29498d3befc1b42c0bd47039f6c50b933b7c805c2f7d40e85ce0ff1ff-shm.mount: 
Deactivated successfully. Sep 5 23:54:53.634650 kubelet[2617]: E0905 23:54:53.634074 2617 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 5 23:54:53.869645 kubelet[2617]: I0905 23:54:53.869051 2617 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="828237b29498d3befc1b42c0bd47039f6c50b933b7c805c2f7d40e85ce0ff1ff" Sep 5 23:54:53.870833 containerd[2120]: time="2025-09-05T23:54:53.870673267Z" level=info msg="StopPodSandbox for \"828237b29498d3befc1b42c0bd47039f6c50b933b7c805c2f7d40e85ce0ff1ff\"" Sep 5 23:54:53.872343 containerd[2120]: time="2025-09-05T23:54:53.872274655Z" level=info msg="Ensure that sandbox 828237b29498d3befc1b42c0bd47039f6c50b933b7c805c2f7d40e85ce0ff1ff in task-service has been cleanup successfully" Sep 5 23:54:53.942132 containerd[2120]: time="2025-09-05T23:54:53.942044335Z" level=error msg="StopPodSandbox for \"828237b29498d3befc1b42c0bd47039f6c50b933b7c805c2f7d40e85ce0ff1ff\" failed" error="failed to destroy network for sandbox \"828237b29498d3befc1b42c0bd47039f6c50b933b7c805c2f7d40e85ce0ff1ff\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:54:53.942872 kubelet[2617]: E0905 23:54:53.942637 2617 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"828237b29498d3befc1b42c0bd47039f6c50b933b7c805c2f7d40e85ce0ff1ff\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="828237b29498d3befc1b42c0bd47039f6c50b933b7c805c2f7d40e85ce0ff1ff" Sep 5 23:54:53.942872 kubelet[2617]: E0905 23:54:53.942704 2617 kuberuntime_manager.go:1479] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"828237b29498d3befc1b42c0bd47039f6c50b933b7c805c2f7d40e85ce0ff1ff"} Sep 5 23:54:53.943262 kubelet[2617]: E0905 23:54:53.942840 2617 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"269c61c7-d641-41eb-a0a5-10228a03a853\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"828237b29498d3befc1b42c0bd47039f6c50b933b7c805c2f7d40e85ce0ff1ff\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 5 23:54:53.943262 kubelet[2617]: E0905 23:54:53.943168 2617 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"269c61c7-d641-41eb-a0a5-10228a03a853\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"828237b29498d3befc1b42c0bd47039f6c50b933b7c805c2f7d40e85ce0ff1ff\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-ttvzr" podUID="269c61c7-d641-41eb-a0a5-10228a03a853" Sep 5 23:54:54.635253 kubelet[2617]: E0905 23:54:54.635173 2617 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 5 23:54:55.635755 kubelet[2617]: E0905 23:54:55.635643 2617 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 5 23:54:56.636541 kubelet[2617]: E0905 23:54:56.636486 2617 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 5 23:54:56.733863 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount800456783.mount: Deactivated successfully. 
Sep 5 23:54:56.783359 containerd[2120]: time="2025-09-05T23:54:56.782722677Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:54:56.784507 containerd[2120]: time="2025-09-05T23:54:56.784380885Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=151100457" Sep 5 23:54:56.786680 containerd[2120]: time="2025-09-05T23:54:56.785453169Z" level=info msg="ImageCreate event name:\"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:54:56.790178 containerd[2120]: time="2025-09-05T23:54:56.790126317Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:54:56.791680 containerd[2120]: time="2025-09-05T23:54:56.791462133Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"151100319\" in 5.930152253s" Sep 5 23:54:56.791680 containerd[2120]: time="2025-09-05T23:54:56.791519901Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\"" Sep 5 23:54:56.815747 containerd[2120]: time="2025-09-05T23:54:56.815248894Z" level=info msg="CreateContainer within sandbox \"9fd7bb8e2ca4862dc9a0ef1d461ebddef02707eb812d95b2c5b3323488a2e1be\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 5 23:54:56.834890 containerd[2120]: time="2025-09-05T23:54:56.834808174Z" level=info msg="CreateContainer 
within sandbox \"9fd7bb8e2ca4862dc9a0ef1d461ebddef02707eb812d95b2c5b3323488a2e1be\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"ac1a8266c36c06dc6445d1653f62716c58f158d0fc8988b8c31ff2000e699dad\"" Sep 5 23:54:56.837351 containerd[2120]: time="2025-09-05T23:54:56.836508994Z" level=info msg="StartContainer for \"ac1a8266c36c06dc6445d1653f62716c58f158d0fc8988b8c31ff2000e699dad\"" Sep 5 23:54:56.942430 containerd[2120]: time="2025-09-05T23:54:56.942273154Z" level=info msg="StartContainer for \"ac1a8266c36c06dc6445d1653f62716c58f158d0fc8988b8c31ff2000e699dad\" returns successfully" Sep 5 23:54:57.192045 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 5 23:54:57.192177 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Sep 5 23:54:57.638581 kubelet[2617]: E0905 23:54:57.638510 2617 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 5 23:54:57.926855 kubelet[2617]: I0905 23:54:57.926743 2617 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-wdvzh" podStartSLOduration=4.043346106 podStartE2EDuration="18.926685227s" podCreationTimestamp="2025-09-05 23:54:39 +0000 UTC" firstStartedPulling="2025-09-05 23:54:41.909854708 +0000 UTC m=+4.018620753" lastFinishedPulling="2025-09-05 23:54:56.793193829 +0000 UTC m=+18.901959874" observedRunningTime="2025-09-05 23:54:57.926426267 +0000 UTC m=+20.035192336" watchObservedRunningTime="2025-09-05 23:54:57.926685227 +0000 UTC m=+20.035451272" Sep 5 23:54:58.638721 kubelet[2617]: E0905 23:54:58.638669 2617 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 5 23:54:59.361413 kernel: bpftool[3424]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Sep 5 23:54:59.621371 kubelet[2617]: E0905 23:54:59.621189 2617 file.go:104] "Unable to read config 
path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 5 23:54:59.640104 kubelet[2617]: E0905 23:54:59.639792 2617 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 5 23:54:59.664891 (udev-worker)[3440]: Network interface NamePolicy= disabled on kernel command line. Sep 5 23:54:59.668040 systemd-networkd[1689]: vxlan.calico: Link UP Sep 5 23:54:59.668048 systemd-networkd[1689]: vxlan.calico: Gained carrier Sep 5 23:54:59.719400 (udev-worker)[3235]: Network interface NamePolicy= disabled on kernel command line. Sep 5 23:55:00.640969 kubelet[2617]: E0905 23:55:00.640905 2617 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 5 23:55:00.805194 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Sep 5 23:55:00.940005 systemd-networkd[1689]: vxlan.calico: Gained IPv6LL Sep 5 23:55:01.641857 kubelet[2617]: E0905 23:55:01.641784 2617 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 5 23:55:02.642984 kubelet[2617]: E0905 23:55:02.642919 2617 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 5 23:55:02.778809 containerd[2120]: time="2025-09-05T23:55:02.778657743Z" level=info msg="StopPodSandbox for \"6bf18df0d094cc357824a357389e65b9332beb98d25f4c289693aafb42dc6a39\"" Sep 5 23:55:02.974031 containerd[2120]: 2025-09-05 23:55:02.872 [INFO][3511] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6bf18df0d094cc357824a357389e65b9332beb98d25f4c289693aafb42dc6a39" Sep 5 23:55:02.974031 containerd[2120]: 2025-09-05 23:55:02.872 [INFO][3511] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="6bf18df0d094cc357824a357389e65b9332beb98d25f4c289693aafb42dc6a39" iface="eth0" netns="/var/run/netns/cni-c9676336-967f-c366-4d70-43dba14235d1" Sep 5 23:55:02.974031 containerd[2120]: 2025-09-05 23:55:02.874 [INFO][3511] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6bf18df0d094cc357824a357389e65b9332beb98d25f4c289693aafb42dc6a39" iface="eth0" netns="/var/run/netns/cni-c9676336-967f-c366-4d70-43dba14235d1" Sep 5 23:55:02.974031 containerd[2120]: 2025-09-05 23:55:02.876 [INFO][3511] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="6bf18df0d094cc357824a357389e65b9332beb98d25f4c289693aafb42dc6a39" iface="eth0" netns="/var/run/netns/cni-c9676336-967f-c366-4d70-43dba14235d1" Sep 5 23:55:02.974031 containerd[2120]: 2025-09-05 23:55:02.876 [INFO][3511] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6bf18df0d094cc357824a357389e65b9332beb98d25f4c289693aafb42dc6a39" Sep 5 23:55:02.974031 containerd[2120]: 2025-09-05 23:55:02.876 [INFO][3511] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6bf18df0d094cc357824a357389e65b9332beb98d25f4c289693aafb42dc6a39" Sep 5 23:55:02.974031 containerd[2120]: 2025-09-05 23:55:02.947 [INFO][3518] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6bf18df0d094cc357824a357389e65b9332beb98d25f4c289693aafb42dc6a39" HandleID="k8s-pod-network.6bf18df0d094cc357824a357389e65b9332beb98d25f4c289693aafb42dc6a39" Workload="172.31.22.173-k8s-csi--node--driver--tfdsb-eth0" Sep 5 23:55:02.974031 containerd[2120]: 2025-09-05 23:55:02.948 [INFO][3518] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:55:02.974031 containerd[2120]: 2025-09-05 23:55:02.948 [INFO][3518] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:55:02.974031 containerd[2120]: 2025-09-05 23:55:02.963 [WARNING][3518] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6bf18df0d094cc357824a357389e65b9332beb98d25f4c289693aafb42dc6a39" HandleID="k8s-pod-network.6bf18df0d094cc357824a357389e65b9332beb98d25f4c289693aafb42dc6a39" Workload="172.31.22.173-k8s-csi--node--driver--tfdsb-eth0" Sep 5 23:55:02.974031 containerd[2120]: 2025-09-05 23:55:02.963 [INFO][3518] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6bf18df0d094cc357824a357389e65b9332beb98d25f4c289693aafb42dc6a39" HandleID="k8s-pod-network.6bf18df0d094cc357824a357389e65b9332beb98d25f4c289693aafb42dc6a39" Workload="172.31.22.173-k8s-csi--node--driver--tfdsb-eth0" Sep 5 23:55:02.974031 containerd[2120]: 2025-09-05 23:55:02.965 [INFO][3518] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:55:02.974031 containerd[2120]: 2025-09-05 23:55:02.971 [INFO][3511] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6bf18df0d094cc357824a357389e65b9332beb98d25f4c289693aafb42dc6a39" Sep 5 23:55:02.976500 containerd[2120]: time="2025-09-05T23:55:02.976438588Z" level=info msg="TearDown network for sandbox \"6bf18df0d094cc357824a357389e65b9332beb98d25f4c289693aafb42dc6a39\" successfully" Sep 5 23:55:02.976500 containerd[2120]: time="2025-09-05T23:55:02.976493368Z" level=info msg="StopPodSandbox for \"6bf18df0d094cc357824a357389e65b9332beb98d25f4c289693aafb42dc6a39\" returns successfully" Sep 5 23:55:02.978444 systemd[1]: run-netns-cni\x2dc9676336\x2d967f\x2dc366\x2d4d70\x2d43dba14235d1.mount: Deactivated successfully. Sep 5 23:55:02.979932 containerd[2120]: time="2025-09-05T23:55:02.979743196Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tfdsb,Uid:d9af57f1-fd8a-41b6-bc88-120433372f08,Namespace:calico-system,Attempt:1,}" Sep 5 23:55:03.229008 systemd-networkd[1689]: cali6aad551e82b: Link UP Sep 5 23:55:03.231204 systemd-networkd[1689]: cali6aad551e82b: Gained carrier Sep 5 23:55:03.238157 (udev-worker)[3545]: Network interface NamePolicy= disabled on kernel command line. 
Sep 5 23:55:03.274487 containerd[2120]: 2025-09-05 23:55:03.070 [INFO][3527] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.22.173-k8s-csi--node--driver--tfdsb-eth0 csi-node-driver- calico-system d9af57f1-fd8a-41b6-bc88-120433372f08 1206 0 2025-09-05 23:54:39 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:856c6b598f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 172.31.22.173 csi-node-driver-tfdsb eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali6aad551e82b [] [] }} ContainerID="0ea2e2b4ef0f9007b12f003c6624f2af85090dea85a75b97ef4457976ed052d0" Namespace="calico-system" Pod="csi-node-driver-tfdsb" WorkloadEndpoint="172.31.22.173-k8s-csi--node--driver--tfdsb-" Sep 5 23:55:03.274487 containerd[2120]: 2025-09-05 23:55:03.070 [INFO][3527] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0ea2e2b4ef0f9007b12f003c6624f2af85090dea85a75b97ef4457976ed052d0" Namespace="calico-system" Pod="csi-node-driver-tfdsb" WorkloadEndpoint="172.31.22.173-k8s-csi--node--driver--tfdsb-eth0" Sep 5 23:55:03.274487 containerd[2120]: 2025-09-05 23:55:03.119 [INFO][3539] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0ea2e2b4ef0f9007b12f003c6624f2af85090dea85a75b97ef4457976ed052d0" HandleID="k8s-pod-network.0ea2e2b4ef0f9007b12f003c6624f2af85090dea85a75b97ef4457976ed052d0" Workload="172.31.22.173-k8s-csi--node--driver--tfdsb-eth0" Sep 5 23:55:03.274487 containerd[2120]: 2025-09-05 23:55:03.119 [INFO][3539] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0ea2e2b4ef0f9007b12f003c6624f2af85090dea85a75b97ef4457976ed052d0" HandleID="k8s-pod-network.0ea2e2b4ef0f9007b12f003c6624f2af85090dea85a75b97ef4457976ed052d0" 
Workload="172.31.22.173-k8s-csi--node--driver--tfdsb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b140), Attrs:map[string]string{"namespace":"calico-system", "node":"172.31.22.173", "pod":"csi-node-driver-tfdsb", "timestamp":"2025-09-05 23:55:03.119046121 +0000 UTC"}, Hostname:"172.31.22.173", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 23:55:03.274487 containerd[2120]: 2025-09-05 23:55:03.119 [INFO][3539] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:55:03.274487 containerd[2120]: 2025-09-05 23:55:03.119 [INFO][3539] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:55:03.274487 containerd[2120]: 2025-09-05 23:55:03.119 [INFO][3539] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.22.173' Sep 5 23:55:03.274487 containerd[2120]: 2025-09-05 23:55:03.141 [INFO][3539] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0ea2e2b4ef0f9007b12f003c6624f2af85090dea85a75b97ef4457976ed052d0" host="172.31.22.173" Sep 5 23:55:03.274487 containerd[2120]: 2025-09-05 23:55:03.150 [INFO][3539] ipam/ipam.go 394: Looking up existing affinities for host host="172.31.22.173" Sep 5 23:55:03.274487 containerd[2120]: 2025-09-05 23:55:03.159 [INFO][3539] ipam/ipam.go 511: Trying affinity for 192.168.13.0/26 host="172.31.22.173" Sep 5 23:55:03.274487 containerd[2120]: 2025-09-05 23:55:03.170 [INFO][3539] ipam/ipam.go 158: Attempting to load block cidr=192.168.13.0/26 host="172.31.22.173" Sep 5 23:55:03.274487 containerd[2120]: 2025-09-05 23:55:03.176 [INFO][3539] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.13.0/26 host="172.31.22.173" Sep 5 23:55:03.274487 containerd[2120]: 2025-09-05 23:55:03.176 [INFO][3539] ipam/ipam.go 1220: Attempting to assign 1 addresses from block 
block=192.168.13.0/26 handle="k8s-pod-network.0ea2e2b4ef0f9007b12f003c6624f2af85090dea85a75b97ef4457976ed052d0" host="172.31.22.173" Sep 5 23:55:03.274487 containerd[2120]: 2025-09-05 23:55:03.189 [INFO][3539] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.0ea2e2b4ef0f9007b12f003c6624f2af85090dea85a75b97ef4457976ed052d0 Sep 5 23:55:03.274487 containerd[2120]: 2025-09-05 23:55:03.199 [INFO][3539] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.13.0/26 handle="k8s-pod-network.0ea2e2b4ef0f9007b12f003c6624f2af85090dea85a75b97ef4457976ed052d0" host="172.31.22.173" Sep 5 23:55:03.274487 containerd[2120]: 2025-09-05 23:55:03.220 [INFO][3539] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.13.1/26] block=192.168.13.0/26 handle="k8s-pod-network.0ea2e2b4ef0f9007b12f003c6624f2af85090dea85a75b97ef4457976ed052d0" host="172.31.22.173" Sep 5 23:55:03.274487 containerd[2120]: 2025-09-05 23:55:03.220 [INFO][3539] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.13.1/26] handle="k8s-pod-network.0ea2e2b4ef0f9007b12f003c6624f2af85090dea85a75b97ef4457976ed052d0" host="172.31.22.173" Sep 5 23:55:03.274487 containerd[2120]: 2025-09-05 23:55:03.220 [INFO][3539] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 5 23:55:03.274487 containerd[2120]: 2025-09-05 23:55:03.220 [INFO][3539] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.13.1/26] IPv6=[] ContainerID="0ea2e2b4ef0f9007b12f003c6624f2af85090dea85a75b97ef4457976ed052d0" HandleID="k8s-pod-network.0ea2e2b4ef0f9007b12f003c6624f2af85090dea85a75b97ef4457976ed052d0" Workload="172.31.22.173-k8s-csi--node--driver--tfdsb-eth0" Sep 5 23:55:03.276693 containerd[2120]: 2025-09-05 23:55:03.223 [INFO][3527] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0ea2e2b4ef0f9007b12f003c6624f2af85090dea85a75b97ef4457976ed052d0" Namespace="calico-system" Pod="csi-node-driver-tfdsb" WorkloadEndpoint="172.31.22.173-k8s-csi--node--driver--tfdsb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.22.173-k8s-csi--node--driver--tfdsb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d9af57f1-fd8a-41b6-bc88-120433372f08", ResourceVersion:"1206", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 54, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.22.173", ContainerID:"", Pod:"csi-node-driver-tfdsb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.13.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6aad551e82b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:55:03.276693 containerd[2120]: 2025-09-05 23:55:03.223 [INFO][3527] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.13.1/32] ContainerID="0ea2e2b4ef0f9007b12f003c6624f2af85090dea85a75b97ef4457976ed052d0" Namespace="calico-system" Pod="csi-node-driver-tfdsb" WorkloadEndpoint="172.31.22.173-k8s-csi--node--driver--tfdsb-eth0" Sep 5 23:55:03.276693 containerd[2120]: 2025-09-05 23:55:03.223 [INFO][3527] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6aad551e82b ContainerID="0ea2e2b4ef0f9007b12f003c6624f2af85090dea85a75b97ef4457976ed052d0" Namespace="calico-system" Pod="csi-node-driver-tfdsb" WorkloadEndpoint="172.31.22.173-k8s-csi--node--driver--tfdsb-eth0" Sep 5 23:55:03.276693 containerd[2120]: 2025-09-05 23:55:03.231 [INFO][3527] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0ea2e2b4ef0f9007b12f003c6624f2af85090dea85a75b97ef4457976ed052d0" Namespace="calico-system" Pod="csi-node-driver-tfdsb" WorkloadEndpoint="172.31.22.173-k8s-csi--node--driver--tfdsb-eth0" Sep 5 23:55:03.276693 containerd[2120]: 2025-09-05 23:55:03.232 [INFO][3527] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0ea2e2b4ef0f9007b12f003c6624f2af85090dea85a75b97ef4457976ed052d0" Namespace="calico-system" Pod="csi-node-driver-tfdsb" WorkloadEndpoint="172.31.22.173-k8s-csi--node--driver--tfdsb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.22.173-k8s-csi--node--driver--tfdsb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d9af57f1-fd8a-41b6-bc88-120433372f08", ResourceVersion:"1206", Generation:0, 
CreationTimestamp:time.Date(2025, time.September, 5, 23, 54, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.22.173", ContainerID:"0ea2e2b4ef0f9007b12f003c6624f2af85090dea85a75b97ef4457976ed052d0", Pod:"csi-node-driver-tfdsb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.13.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6aad551e82b", MAC:"3a:14:82:d5:e0:cb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:55:03.276693 containerd[2120]: 2025-09-05 23:55:03.271 [INFO][3527] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0ea2e2b4ef0f9007b12f003c6624f2af85090dea85a75b97ef4457976ed052d0" Namespace="calico-system" Pod="csi-node-driver-tfdsb" WorkloadEndpoint="172.31.22.173-k8s-csi--node--driver--tfdsb-eth0" Sep 5 23:55:03.301871 containerd[2120]: time="2025-09-05T23:55:03.301669706Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 23:55:03.301871 containerd[2120]: time="2025-09-05T23:55:03.301770530Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 23:55:03.301871 containerd[2120]: time="2025-09-05T23:55:03.301816790Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:55:03.302280 containerd[2120]: time="2025-09-05T23:55:03.302004230Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:55:03.366552 containerd[2120]: time="2025-09-05T23:55:03.366169874Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tfdsb,Uid:d9af57f1-fd8a-41b6-bc88-120433372f08,Namespace:calico-system,Attempt:1,} returns sandbox id \"0ea2e2b4ef0f9007b12f003c6624f2af85090dea85a75b97ef4457976ed052d0\"" Sep 5 23:55:03.373657 containerd[2120]: time="2025-09-05T23:55:03.373426466Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 5 23:55:03.644116 kubelet[2617]: E0905 23:55:03.643959 2617 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 5 23:55:04.644269 kubelet[2617]: E0905 23:55:04.644218 2617 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 5 23:55:04.675877 containerd[2120]: time="2025-09-05T23:55:04.674505173Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:55:04.675877 containerd[2120]: time="2025-09-05T23:55:04.675829685Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8227489" Sep 5 23:55:04.676782 containerd[2120]: time="2025-09-05T23:55:04.676736441Z" level=info msg="ImageCreate event name:\"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:55:04.680498 containerd[2120]: 
time="2025-09-05T23:55:04.680442557Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:55:04.682005 containerd[2120]: time="2025-09-05T23:55:04.681959417Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"9596730\" in 1.308456871s" Sep 5 23:55:04.682152 containerd[2120]: time="2025-09-05T23:55:04.682122509Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\"" Sep 5 23:55:04.686419 containerd[2120]: time="2025-09-05T23:55:04.686354921Z" level=info msg="CreateContainer within sandbox \"0ea2e2b4ef0f9007b12f003c6624f2af85090dea85a75b97ef4457976ed052d0\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 5 23:55:04.706467 containerd[2120]: time="2025-09-05T23:55:04.706375853Z" level=info msg="CreateContainer within sandbox \"0ea2e2b4ef0f9007b12f003c6624f2af85090dea85a75b97ef4457976ed052d0\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"ab94b513ce8b125fd44b0939521565014f1d335585e2e49678a79828c86ddccd\"" Sep 5 23:55:04.708011 containerd[2120]: time="2025-09-05T23:55:04.707957381Z" level=info msg="StartContainer for \"ab94b513ce8b125fd44b0939521565014f1d335585e2e49678a79828c86ddccd\"" Sep 5 23:55:04.767321 systemd[1]: run-containerd-runc-k8s.io-ab94b513ce8b125fd44b0939521565014f1d335585e2e49678a79828c86ddccd-runc.Bc8zlu.mount: Deactivated successfully. 
Sep 5 23:55:04.827243 containerd[2120]: time="2025-09-05T23:55:04.827115629Z" level=info msg="StartContainer for \"ab94b513ce8b125fd44b0939521565014f1d335585e2e49678a79828c86ddccd\" returns successfully" Sep 5 23:55:04.831856 containerd[2120]: time="2025-09-05T23:55:04.830142305Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 5 23:55:05.099832 systemd-networkd[1689]: cali6aad551e82b: Gained IPv6LL Sep 5 23:55:05.645262 kubelet[2617]: E0905 23:55:05.645189 2617 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 5 23:55:06.211465 containerd[2120]: time="2025-09-05T23:55:06.211398592Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:55:06.212964 containerd[2120]: time="2025-09-05T23:55:06.212909020Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=13761208" Sep 5 23:55:06.214504 containerd[2120]: time="2025-09-05T23:55:06.213879628Z" level=info msg="ImageCreate event name:\"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:55:06.217738 containerd[2120]: time="2025-09-05T23:55:06.217657804Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:55:06.219299 containerd[2120]: time="2025-09-05T23:55:06.219247684Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest 
\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"15130401\" in 1.389039751s" Sep 5 23:55:06.219516 containerd[2120]: time="2025-09-05T23:55:06.219482884Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\"" Sep 5 23:55:06.223831 containerd[2120]: time="2025-09-05T23:55:06.223757944Z" level=info msg="CreateContainer within sandbox \"0ea2e2b4ef0f9007b12f003c6624f2af85090dea85a75b97ef4457976ed052d0\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 5 23:55:06.244359 containerd[2120]: time="2025-09-05T23:55:06.243812632Z" level=info msg="CreateContainer within sandbox \"0ea2e2b4ef0f9007b12f003c6624f2af85090dea85a75b97ef4457976ed052d0\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"ebe9c86ecc07d7e93334cfdf654ad052c0fd31d7079f6ec328795c45b673b134\"" Sep 5 23:55:06.244920 containerd[2120]: time="2025-09-05T23:55:06.244820320Z" level=info msg="StartContainer for \"ebe9c86ecc07d7e93334cfdf654ad052c0fd31d7079f6ec328795c45b673b134\"" Sep 5 23:55:06.348575 containerd[2120]: time="2025-09-05T23:55:06.348364565Z" level=info msg="StartContainer for \"ebe9c86ecc07d7e93334cfdf654ad052c0fd31d7079f6ec328795c45b673b134\" returns successfully" Sep 5 23:55:06.646100 kubelet[2617]: E0905 23:55:06.645923 2617 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 5 23:55:06.773362 kubelet[2617]: I0905 23:55:06.773028 2617 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 5 23:55:06.773362 kubelet[2617]: I0905 23:55:06.773072 2617 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at 
endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 5 23:55:06.950329 kubelet[2617]: I0905 23:55:06.950176 2617 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-tfdsb" podStartSLOduration=25.100671554 podStartE2EDuration="27.95015312s" podCreationTimestamp="2025-09-05 23:54:39 +0000 UTC" firstStartedPulling="2025-09-05 23:55:03.371590958 +0000 UTC m=+25.480357003" lastFinishedPulling="2025-09-05 23:55:06.221072536 +0000 UTC m=+28.329838569" observedRunningTime="2025-09-05 23:55:06.947721308 +0000 UTC m=+29.056487365" watchObservedRunningTime="2025-09-05 23:55:06.95015312 +0000 UTC m=+29.058919177" Sep 5 23:55:07.264585 ntpd[2081]: Listen normally on 6 vxlan.calico 192.168.13.0:123 Sep 5 23:55:07.265507 ntpd[2081]: 5 Sep 23:55:07 ntpd[2081]: Listen normally on 6 vxlan.calico 192.168.13.0:123 Sep 5 23:55:07.265507 ntpd[2081]: 5 Sep 23:55:07 ntpd[2081]: Listen normally on 7 vxlan.calico [fe80::6431:cff:fe5f:3912%3]:123 Sep 5 23:55:07.265507 ntpd[2081]: 5 Sep 23:55:07 ntpd[2081]: Listen normally on 8 cali6aad551e82b [fe80::ecee:eeff:feee:eeee%6]:123 Sep 5 23:55:07.264708 ntpd[2081]: Listen normally on 7 vxlan.calico [fe80::6431:cff:fe5f:3912%3]:123 Sep 5 23:55:07.264787 ntpd[2081]: Listen normally on 8 cali6aad551e82b [fe80::ecee:eeff:feee:eeee%6]:123 Sep 5 23:55:07.646441 kubelet[2617]: E0905 23:55:07.646271 2617 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 5 23:55:07.779067 containerd[2120]: time="2025-09-05T23:55:07.778793468Z" level=info msg="StopPodSandbox for \"828237b29498d3befc1b42c0bd47039f6c50b933b7c805c2f7d40e85ce0ff1ff\"" Sep 5 23:55:07.906594 containerd[2120]: 2025-09-05 23:55:07.848 [INFO][3695] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="828237b29498d3befc1b42c0bd47039f6c50b933b7c805c2f7d40e85ce0ff1ff" Sep 5 23:55:07.906594 containerd[2120]: 2025-09-05 23:55:07.849 [INFO][3695] cni-plugin/dataplane_linux.go 
559: Deleting workload's device in netns. ContainerID="828237b29498d3befc1b42c0bd47039f6c50b933b7c805c2f7d40e85ce0ff1ff" iface="eth0" netns="/var/run/netns/cni-d95bb051-64d9-b56c-2d18-46bdec0f6223" Sep 5 23:55:07.906594 containerd[2120]: 2025-09-05 23:55:07.849 [INFO][3695] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="828237b29498d3befc1b42c0bd47039f6c50b933b7c805c2f7d40e85ce0ff1ff" iface="eth0" netns="/var/run/netns/cni-d95bb051-64d9-b56c-2d18-46bdec0f6223" Sep 5 23:55:07.906594 containerd[2120]: 2025-09-05 23:55:07.850 [INFO][3695] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="828237b29498d3befc1b42c0bd47039f6c50b933b7c805c2f7d40e85ce0ff1ff" iface="eth0" netns="/var/run/netns/cni-d95bb051-64d9-b56c-2d18-46bdec0f6223" Sep 5 23:55:07.906594 containerd[2120]: 2025-09-05 23:55:07.850 [INFO][3695] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="828237b29498d3befc1b42c0bd47039f6c50b933b7c805c2f7d40e85ce0ff1ff" Sep 5 23:55:07.906594 containerd[2120]: 2025-09-05 23:55:07.850 [INFO][3695] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="828237b29498d3befc1b42c0bd47039f6c50b933b7c805c2f7d40e85ce0ff1ff" Sep 5 23:55:07.906594 containerd[2120]: 2025-09-05 23:55:07.884 [INFO][3702] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="828237b29498d3befc1b42c0bd47039f6c50b933b7c805c2f7d40e85ce0ff1ff" HandleID="k8s-pod-network.828237b29498d3befc1b42c0bd47039f6c50b933b7c805c2f7d40e85ce0ff1ff" Workload="172.31.22.173-k8s-nginx--deployment--8587fbcb89--ttvzr-eth0" Sep 5 23:55:07.906594 containerd[2120]: 2025-09-05 23:55:07.884 [INFO][3702] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:55:07.906594 containerd[2120]: 2025-09-05 23:55:07.884 [INFO][3702] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 5 23:55:07.906594 containerd[2120]: 2025-09-05 23:55:07.897 [WARNING][3702] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="828237b29498d3befc1b42c0bd47039f6c50b933b7c805c2f7d40e85ce0ff1ff" HandleID="k8s-pod-network.828237b29498d3befc1b42c0bd47039f6c50b933b7c805c2f7d40e85ce0ff1ff" Workload="172.31.22.173-k8s-nginx--deployment--8587fbcb89--ttvzr-eth0" Sep 5 23:55:07.906594 containerd[2120]: 2025-09-05 23:55:07.897 [INFO][3702] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="828237b29498d3befc1b42c0bd47039f6c50b933b7c805c2f7d40e85ce0ff1ff" HandleID="k8s-pod-network.828237b29498d3befc1b42c0bd47039f6c50b933b7c805c2f7d40e85ce0ff1ff" Workload="172.31.22.173-k8s-nginx--deployment--8587fbcb89--ttvzr-eth0" Sep 5 23:55:07.906594 containerd[2120]: 2025-09-05 23:55:07.901 [INFO][3702] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:55:07.906594 containerd[2120]: 2025-09-05 23:55:07.903 [INFO][3695] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="828237b29498d3befc1b42c0bd47039f6c50b933b7c805c2f7d40e85ce0ff1ff" Sep 5 23:55:07.908454 containerd[2120]: time="2025-09-05T23:55:07.908393517Z" level=info msg="TearDown network for sandbox \"828237b29498d3befc1b42c0bd47039f6c50b933b7c805c2f7d40e85ce0ff1ff\" successfully" Sep 5 23:55:07.908454 containerd[2120]: time="2025-09-05T23:55:07.908448981Z" level=info msg="StopPodSandbox for \"828237b29498d3befc1b42c0bd47039f6c50b933b7c805c2f7d40e85ce0ff1ff\" returns successfully" Sep 5 23:55:07.909447 containerd[2120]: time="2025-09-05T23:55:07.909386745Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-ttvzr,Uid:269c61c7-d641-41eb-a0a5-10228a03a853,Namespace:default,Attempt:1,}" Sep 5 23:55:07.914936 systemd[1]: run-netns-cni\x2dd95bb051\x2d64d9\x2db56c\x2d2d18\x2d46bdec0f6223.mount: Deactivated successfully. 
Sep 5 23:55:08.105917 systemd-networkd[1689]: cali147836089cb: Link UP Sep 5 23:55:08.109605 systemd-networkd[1689]: cali147836089cb: Gained carrier Sep 5 23:55:08.122281 (udev-worker)[3729]: Network interface NamePolicy= disabled on kernel command line. Sep 5 23:55:08.143221 containerd[2120]: 2025-09-05 23:55:07.992 [INFO][3709] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.22.173-k8s-nginx--deployment--8587fbcb89--ttvzr-eth0 nginx-deployment-8587fbcb89- default 269c61c7-d641-41eb-a0a5-10228a03a853 1249 0 2025-09-05 23:54:52 +0000 UTC map[app:nginx pod-template-hash:8587fbcb89 projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 172.31.22.173 nginx-deployment-8587fbcb89-ttvzr eth0 default [] [] [kns.default ksa.default.default] cali147836089cb [] [] }} ContainerID="e4230599266fc6da5114b8d30a99381c11f8e347b721544fda15d379d4ce7e52" Namespace="default" Pod="nginx-deployment-8587fbcb89-ttvzr" WorkloadEndpoint="172.31.22.173-k8s-nginx--deployment--8587fbcb89--ttvzr-" Sep 5 23:55:08.143221 containerd[2120]: 2025-09-05 23:55:07.992 [INFO][3709] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e4230599266fc6da5114b8d30a99381c11f8e347b721544fda15d379d4ce7e52" Namespace="default" Pod="nginx-deployment-8587fbcb89-ttvzr" WorkloadEndpoint="172.31.22.173-k8s-nginx--deployment--8587fbcb89--ttvzr-eth0" Sep 5 23:55:08.143221 containerd[2120]: 2025-09-05 23:55:08.035 [INFO][3721] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e4230599266fc6da5114b8d30a99381c11f8e347b721544fda15d379d4ce7e52" HandleID="k8s-pod-network.e4230599266fc6da5114b8d30a99381c11f8e347b721544fda15d379d4ce7e52" Workload="172.31.22.173-k8s-nginx--deployment--8587fbcb89--ttvzr-eth0" Sep 5 23:55:08.143221 containerd[2120]: 2025-09-05 23:55:08.036 [INFO][3721] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="e4230599266fc6da5114b8d30a99381c11f8e347b721544fda15d379d4ce7e52" HandleID="k8s-pod-network.e4230599266fc6da5114b8d30a99381c11f8e347b721544fda15d379d4ce7e52" Workload="172.31.22.173-k8s-nginx--deployment--8587fbcb89--ttvzr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002a1680), Attrs:map[string]string{"namespace":"default", "node":"172.31.22.173", "pod":"nginx-deployment-8587fbcb89-ttvzr", "timestamp":"2025-09-05 23:55:08.035787065 +0000 UTC"}, Hostname:"172.31.22.173", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 23:55:08.143221 containerd[2120]: 2025-09-05 23:55:08.036 [INFO][3721] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:55:08.143221 containerd[2120]: 2025-09-05 23:55:08.036 [INFO][3721] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:55:08.143221 containerd[2120]: 2025-09-05 23:55:08.036 [INFO][3721] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.22.173' Sep 5 23:55:08.143221 containerd[2120]: 2025-09-05 23:55:08.050 [INFO][3721] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e4230599266fc6da5114b8d30a99381c11f8e347b721544fda15d379d4ce7e52" host="172.31.22.173" Sep 5 23:55:08.143221 containerd[2120]: 2025-09-05 23:55:08.057 [INFO][3721] ipam/ipam.go 394: Looking up existing affinities for host host="172.31.22.173" Sep 5 23:55:08.143221 containerd[2120]: 2025-09-05 23:55:08.064 [INFO][3721] ipam/ipam.go 511: Trying affinity for 192.168.13.0/26 host="172.31.22.173" Sep 5 23:55:08.143221 containerd[2120]: 2025-09-05 23:55:08.068 [INFO][3721] ipam/ipam.go 158: Attempting to load block cidr=192.168.13.0/26 host="172.31.22.173" Sep 5 23:55:08.143221 containerd[2120]: 2025-09-05 23:55:08.073 [INFO][3721] ipam/ipam.go 235: Affinity is confirmed and block has been 
loaded cidr=192.168.13.0/26 host="172.31.22.173" Sep 5 23:55:08.143221 containerd[2120]: 2025-09-05 23:55:08.073 [INFO][3721] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.13.0/26 handle="k8s-pod-network.e4230599266fc6da5114b8d30a99381c11f8e347b721544fda15d379d4ce7e52" host="172.31.22.173" Sep 5 23:55:08.143221 containerd[2120]: 2025-09-05 23:55:08.076 [INFO][3721] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e4230599266fc6da5114b8d30a99381c11f8e347b721544fda15d379d4ce7e52 Sep 5 23:55:08.143221 containerd[2120]: 2025-09-05 23:55:08.085 [INFO][3721] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.13.0/26 handle="k8s-pod-network.e4230599266fc6da5114b8d30a99381c11f8e347b721544fda15d379d4ce7e52" host="172.31.22.173" Sep 5 23:55:08.143221 containerd[2120]: 2025-09-05 23:55:08.097 [INFO][3721] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.13.2/26] block=192.168.13.0/26 handle="k8s-pod-network.e4230599266fc6da5114b8d30a99381c11f8e347b721544fda15d379d4ce7e52" host="172.31.22.173" Sep 5 23:55:08.143221 containerd[2120]: 2025-09-05 23:55:08.097 [INFO][3721] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.13.2/26] handle="k8s-pod-network.e4230599266fc6da5114b8d30a99381c11f8e347b721544fda15d379d4ce7e52" host="172.31.22.173" Sep 5 23:55:08.143221 containerd[2120]: 2025-09-05 23:55:08.097 [INFO][3721] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 5 23:55:08.143221 containerd[2120]: 2025-09-05 23:55:08.097 [INFO][3721] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.13.2/26] IPv6=[] ContainerID="e4230599266fc6da5114b8d30a99381c11f8e347b721544fda15d379d4ce7e52" HandleID="k8s-pod-network.e4230599266fc6da5114b8d30a99381c11f8e347b721544fda15d379d4ce7e52" Workload="172.31.22.173-k8s-nginx--deployment--8587fbcb89--ttvzr-eth0" Sep 5 23:55:08.144764 containerd[2120]: 2025-09-05 23:55:08.100 [INFO][3709] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e4230599266fc6da5114b8d30a99381c11f8e347b721544fda15d379d4ce7e52" Namespace="default" Pod="nginx-deployment-8587fbcb89-ttvzr" WorkloadEndpoint="172.31.22.173-k8s-nginx--deployment--8587fbcb89--ttvzr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.22.173-k8s-nginx--deployment--8587fbcb89--ttvzr-eth0", GenerateName:"nginx-deployment-8587fbcb89-", Namespace:"default", SelfLink:"", UID:"269c61c7-d641-41eb-a0a5-10228a03a853", ResourceVersion:"1249", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 54, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8587fbcb89", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.22.173", ContainerID:"", Pod:"nginx-deployment-8587fbcb89-ttvzr", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.13.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali147836089cb", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:55:08.144764 containerd[2120]: 2025-09-05 23:55:08.100 [INFO][3709] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.13.2/32] ContainerID="e4230599266fc6da5114b8d30a99381c11f8e347b721544fda15d379d4ce7e52" Namespace="default" Pod="nginx-deployment-8587fbcb89-ttvzr" WorkloadEndpoint="172.31.22.173-k8s-nginx--deployment--8587fbcb89--ttvzr-eth0" Sep 5 23:55:08.144764 containerd[2120]: 2025-09-05 23:55:08.100 [INFO][3709] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali147836089cb ContainerID="e4230599266fc6da5114b8d30a99381c11f8e347b721544fda15d379d4ce7e52" Namespace="default" Pod="nginx-deployment-8587fbcb89-ttvzr" WorkloadEndpoint="172.31.22.173-k8s-nginx--deployment--8587fbcb89--ttvzr-eth0" Sep 5 23:55:08.144764 containerd[2120]: 2025-09-05 23:55:08.120 [INFO][3709] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e4230599266fc6da5114b8d30a99381c11f8e347b721544fda15d379d4ce7e52" Namespace="default" Pod="nginx-deployment-8587fbcb89-ttvzr" WorkloadEndpoint="172.31.22.173-k8s-nginx--deployment--8587fbcb89--ttvzr-eth0" Sep 5 23:55:08.144764 containerd[2120]: 2025-09-05 23:55:08.121 [INFO][3709] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e4230599266fc6da5114b8d30a99381c11f8e347b721544fda15d379d4ce7e52" Namespace="default" Pod="nginx-deployment-8587fbcb89-ttvzr" WorkloadEndpoint="172.31.22.173-k8s-nginx--deployment--8587fbcb89--ttvzr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.22.173-k8s-nginx--deployment--8587fbcb89--ttvzr-eth0", GenerateName:"nginx-deployment-8587fbcb89-", Namespace:"default", SelfLink:"", UID:"269c61c7-d641-41eb-a0a5-10228a03a853", ResourceVersion:"1249", Generation:0, CreationTimestamp:time.Date(2025, 
time.September, 5, 23, 54, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8587fbcb89", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.22.173", ContainerID:"e4230599266fc6da5114b8d30a99381c11f8e347b721544fda15d379d4ce7e52", Pod:"nginx-deployment-8587fbcb89-ttvzr", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.13.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali147836089cb", MAC:"46:9a:9b:93:13:30", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:55:08.144764 containerd[2120]: 2025-09-05 23:55:08.133 [INFO][3709] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e4230599266fc6da5114b8d30a99381c11f8e347b721544fda15d379d4ce7e52" Namespace="default" Pod="nginx-deployment-8587fbcb89-ttvzr" WorkloadEndpoint="172.31.22.173-k8s-nginx--deployment--8587fbcb89--ttvzr-eth0" Sep 5 23:55:08.183075 containerd[2120]: time="2025-09-05T23:55:08.182712162Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 23:55:08.183414 containerd[2120]: time="2025-09-05T23:55:08.182915034Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 23:55:08.185572 containerd[2120]: time="2025-09-05T23:55:08.185484558Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:55:08.186083 containerd[2120]: time="2025-09-05T23:55:08.185962530Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:55:08.286541 containerd[2120]: time="2025-09-05T23:55:08.286443715Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-ttvzr,Uid:269c61c7-d641-41eb-a0a5-10228a03a853,Namespace:default,Attempt:1,} returns sandbox id \"e4230599266fc6da5114b8d30a99381c11f8e347b721544fda15d379d4ce7e52\"" Sep 5 23:55:08.290495 containerd[2120]: time="2025-09-05T23:55:08.290123947Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Sep 5 23:55:08.647596 kubelet[2617]: E0905 23:55:08.647444 2617 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 5 23:55:08.910085 systemd[1]: run-containerd-runc-k8s.io-e4230599266fc6da5114b8d30a99381c11f8e347b721544fda15d379d4ce7e52-runc.Wf0VzL.mount: Deactivated successfully. Sep 5 23:55:09.452769 systemd-networkd[1689]: cali147836089cb: Gained IPv6LL Sep 5 23:55:09.649616 kubelet[2617]: E0905 23:55:09.649118 2617 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 5 23:55:10.649953 kubelet[2617]: E0905 23:55:10.649885 2617 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 5 23:55:11.485915 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2866330738.mount: Deactivated successfully. 
Sep 5 23:55:11.650134 kubelet[2617]: E0905 23:55:11.650087 2617 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 5 23:55:12.264886 ntpd[2081]: Listen normally on 9 cali147836089cb [fe80::ecee:eeff:feee:eeee%7]:123 Sep 5 23:55:12.265807 ntpd[2081]: 5 Sep 23:55:12 ntpd[2081]: Listen normally on 9 cali147836089cb [fe80::ecee:eeff:feee:eeee%7]:123 Sep 5 23:55:12.651517 kubelet[2617]: E0905 23:55:12.651282 2617 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 5 23:55:12.911364 containerd[2120]: time="2025-09-05T23:55:12.911033858Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:55:12.912925 containerd[2120]: time="2025-09-05T23:55:12.912384986Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=69986522" Sep 5 23:55:12.915797 containerd[2120]: time="2025-09-05T23:55:12.914872394Z" level=info msg="ImageCreate event name:\"sha256:9fddf21fd9c2634e7bf6e633e36b0fb227f6cd5fbe1b3334a16de3ab50f31e5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:55:12.922266 containerd[2120]: time="2025-09-05T23:55:12.922190018Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:883ca821a91fc20bcde818eeee4e1ed55ef63a020d6198ecd5a03af5a4eac530\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:55:12.926774 containerd[2120]: time="2025-09-05T23:55:12.926687630Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:9fddf21fd9c2634e7bf6e633e36b0fb227f6cd5fbe1b3334a16de3ab50f31e5e\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:883ca821a91fc20bcde818eeee4e1ed55ef63a020d6198ecd5a03af5a4eac530\", size \"69986400\" in 4.636505147s" Sep 5 23:55:12.926774 containerd[2120]: 
time="2025-09-05T23:55:12.926761262Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:9fddf21fd9c2634e7bf6e633e36b0fb227f6cd5fbe1b3334a16de3ab50f31e5e\"" Sep 5 23:55:12.931389 containerd[2120]: time="2025-09-05T23:55:12.931215746Z" level=info msg="CreateContainer within sandbox \"e4230599266fc6da5114b8d30a99381c11f8e347b721544fda15d379d4ce7e52\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Sep 5 23:55:12.956216 containerd[2120]: time="2025-09-05T23:55:12.956138318Z" level=info msg="CreateContainer within sandbox \"e4230599266fc6da5114b8d30a99381c11f8e347b721544fda15d379d4ce7e52\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"caccbe1cc050b82694717a230e1b483cac4d9f94982bc3b21832589e4942e8ce\"" Sep 5 23:55:12.957249 containerd[2120]: time="2025-09-05T23:55:12.957033554Z" level=info msg="StartContainer for \"caccbe1cc050b82694717a230e1b483cac4d9f94982bc3b21832589e4942e8ce\"" Sep 5 23:55:13.054840 containerd[2120]: time="2025-09-05T23:55:13.054624202Z" level=info msg="StartContainer for \"caccbe1cc050b82694717a230e1b483cac4d9f94982bc3b21832589e4942e8ce\" returns successfully" Sep 5 23:55:13.652530 kubelet[2617]: E0905 23:55:13.652453 2617 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 5 23:55:13.974112 kubelet[2617]: I0905 23:55:13.974034 2617 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-8587fbcb89-ttvzr" podStartSLOduration=17.334636304 podStartE2EDuration="21.974013363s" podCreationTimestamp="2025-09-05 23:54:52 +0000 UTC" firstStartedPulling="2025-09-05 23:55:08.289335307 +0000 UTC m=+30.398101376" lastFinishedPulling="2025-09-05 23:55:12.92871239 +0000 UTC m=+35.037478435" observedRunningTime="2025-09-05 23:55:13.973564011 +0000 UTC m=+36.082330056" watchObservedRunningTime="2025-09-05 23:55:13.974013363 +0000 UTC m=+36.082779396" Sep 5 23:55:14.233549 
update_engine[2094]: I20250905 23:55:14.233342 2094 update_attempter.cc:509] Updating boot flags... Sep 5 23:55:14.307441 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (3888) Sep 5 23:55:14.557360 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (3879) Sep 5 23:55:14.653105 kubelet[2617]: E0905 23:55:14.653020 2617 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 5 23:55:14.791495 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (3879) Sep 5 23:55:15.653660 kubelet[2617]: E0905 23:55:15.653585 2617 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 5 23:55:16.654399 kubelet[2617]: E0905 23:55:16.654335 2617 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 5 23:55:17.654906 kubelet[2617]: E0905 23:55:17.654844 2617 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 5 23:55:18.655396 kubelet[2617]: E0905 23:55:18.655337 2617 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 5 23:55:19.621320 kubelet[2617]: E0905 23:55:19.621253 2617 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 5 23:55:19.655988 kubelet[2617]: E0905 23:55:19.655939 2617 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 5 23:55:20.657101 kubelet[2617]: E0905 23:55:20.657033 2617 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 5 23:55:21.657596 kubelet[2617]: E0905 23:55:21.657536 2617 
file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 5 23:55:22.549888 kubelet[2617]: I0905 23:55:22.549763 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/d252519b-b52c-4a9e-8ecd-2c5f4c799615-data\") pod \"nfs-server-provisioner-0\" (UID: \"d252519b-b52c-4a9e-8ecd-2c5f4c799615\") " pod="default/nfs-server-provisioner-0" Sep 5 23:55:22.549888 kubelet[2617]: I0905 23:55:22.549840 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qq2pc\" (UniqueName: \"kubernetes.io/projected/d252519b-b52c-4a9e-8ecd-2c5f4c799615-kube-api-access-qq2pc\") pod \"nfs-server-provisioner-0\" (UID: \"d252519b-b52c-4a9e-8ecd-2c5f4c799615\") " pod="default/nfs-server-provisioner-0" Sep 5 23:55:22.658655 kubelet[2617]: E0905 23:55:22.658588 2617 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 5 23:55:22.695704 containerd[2120]: time="2025-09-05T23:55:22.693543730Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:d252519b-b52c-4a9e-8ecd-2c5f4c799615,Namespace:default,Attempt:0,}" Sep 5 23:55:22.960683 systemd-networkd[1689]: cali60e51b789ff: Link UP Sep 5 23:55:22.961153 systemd-networkd[1689]: cali60e51b789ff: Gained carrier Sep 5 23:55:22.966131 (udev-worker)[4196]: Network interface NamePolicy= disabled on kernel command line. 
Sep 5 23:55:22.991495 containerd[2120]: 2025-09-05 23:55:22.784 [INFO][4177] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.22.173-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default d252519b-b52c-4a9e-8ecd-2c5f4c799615 1310 0 2025-09-05 23:55:22 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 172.31.22.173 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] [] }} ContainerID="533ee8fd2d2662e1a5f760ff8e3f32b05c678393e8c18beca5ad2871f21af2b4" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.22.173-k8s-nfs--server--provisioner--0-" Sep 5 23:55:22.991495 containerd[2120]: 2025-09-05 23:55:22.785 [INFO][4177] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="533ee8fd2d2662e1a5f760ff8e3f32b05c678393e8c18beca5ad2871f21af2b4" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.22.173-k8s-nfs--server--provisioner--0-eth0" Sep 5 23:55:22.991495 containerd[2120]: 2025-09-05 23:55:22.843 [INFO][4188] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="533ee8fd2d2662e1a5f760ff8e3f32b05c678393e8c18beca5ad2871f21af2b4" 
HandleID="k8s-pod-network.533ee8fd2d2662e1a5f760ff8e3f32b05c678393e8c18beca5ad2871f21af2b4" Workload="172.31.22.173-k8s-nfs--server--provisioner--0-eth0" Sep 5 23:55:22.991495 containerd[2120]: 2025-09-05 23:55:22.843 [INFO][4188] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="533ee8fd2d2662e1a5f760ff8e3f32b05c678393e8c18beca5ad2871f21af2b4" HandleID="k8s-pod-network.533ee8fd2d2662e1a5f760ff8e3f32b05c678393e8c18beca5ad2871f21af2b4" Workload="172.31.22.173-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3600), Attrs:map[string]string{"namespace":"default", "node":"172.31.22.173", "pod":"nfs-server-provisioner-0", "timestamp":"2025-09-05 23:55:22.843688331 +0000 UTC"}, Hostname:"172.31.22.173", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 23:55:22.991495 containerd[2120]: 2025-09-05 23:55:22.844 [INFO][4188] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:55:22.991495 containerd[2120]: 2025-09-05 23:55:22.844 [INFO][4188] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 5 23:55:22.991495 containerd[2120]: 2025-09-05 23:55:22.844 [INFO][4188] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.22.173' Sep 5 23:55:22.991495 containerd[2120]: 2025-09-05 23:55:22.864 [INFO][4188] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.533ee8fd2d2662e1a5f760ff8e3f32b05c678393e8c18beca5ad2871f21af2b4" host="172.31.22.173" Sep 5 23:55:22.991495 containerd[2120]: 2025-09-05 23:55:22.878 [INFO][4188] ipam/ipam.go 394: Looking up existing affinities for host host="172.31.22.173" Sep 5 23:55:22.991495 containerd[2120]: 2025-09-05 23:55:22.898 [INFO][4188] ipam/ipam.go 511: Trying affinity for 192.168.13.0/26 host="172.31.22.173" Sep 5 23:55:22.991495 containerd[2120]: 2025-09-05 23:55:22.904 [INFO][4188] ipam/ipam.go 158: Attempting to load block cidr=192.168.13.0/26 host="172.31.22.173" Sep 5 23:55:22.991495 containerd[2120]: 2025-09-05 23:55:22.920 [INFO][4188] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.13.0/26 host="172.31.22.173" Sep 5 23:55:22.991495 containerd[2120]: 2025-09-05 23:55:22.920 [INFO][4188] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.13.0/26 handle="k8s-pod-network.533ee8fd2d2662e1a5f760ff8e3f32b05c678393e8c18beca5ad2871f21af2b4" host="172.31.22.173" Sep 5 23:55:22.991495 containerd[2120]: 2025-09-05 23:55:22.926 [INFO][4188] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.533ee8fd2d2662e1a5f760ff8e3f32b05c678393e8c18beca5ad2871f21af2b4 Sep 5 23:55:22.991495 containerd[2120]: 2025-09-05 23:55:22.935 [INFO][4188] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.13.0/26 handle="k8s-pod-network.533ee8fd2d2662e1a5f760ff8e3f32b05c678393e8c18beca5ad2871f21af2b4" host="172.31.22.173" Sep 5 23:55:22.991495 containerd[2120]: 2025-09-05 23:55:22.951 [INFO][4188] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.13.3/26] block=192.168.13.0/26 
handle="k8s-pod-network.533ee8fd2d2662e1a5f760ff8e3f32b05c678393e8c18beca5ad2871f21af2b4" host="172.31.22.173" Sep 5 23:55:22.991495 containerd[2120]: 2025-09-05 23:55:22.951 [INFO][4188] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.13.3/26] handle="k8s-pod-network.533ee8fd2d2662e1a5f760ff8e3f32b05c678393e8c18beca5ad2871f21af2b4" host="172.31.22.173" Sep 5 23:55:22.991495 containerd[2120]: 2025-09-05 23:55:22.951 [INFO][4188] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:55:22.991495 containerd[2120]: 2025-09-05 23:55:22.951 [INFO][4188] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.13.3/26] IPv6=[] ContainerID="533ee8fd2d2662e1a5f760ff8e3f32b05c678393e8c18beca5ad2871f21af2b4" HandleID="k8s-pod-network.533ee8fd2d2662e1a5f760ff8e3f32b05c678393e8c18beca5ad2871f21af2b4" Workload="172.31.22.173-k8s-nfs--server--provisioner--0-eth0" Sep 5 23:55:22.993544 containerd[2120]: 2025-09-05 23:55:22.954 [INFO][4177] cni-plugin/k8s.go 418: Populated endpoint ContainerID="533ee8fd2d2662e1a5f760ff8e3f32b05c678393e8c18beca5ad2871f21af2b4" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.22.173-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.22.173-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"d252519b-b52c-4a9e-8ecd-2c5f4c799615", ResourceVersion:"1310", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 55, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.22.173", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.13.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:55:22.993544 containerd[2120]: 2025-09-05 23:55:22.954 [INFO][4177] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.13.3/32] ContainerID="533ee8fd2d2662e1a5f760ff8e3f32b05c678393e8c18beca5ad2871f21af2b4" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.22.173-k8s-nfs--server--provisioner--0-eth0" Sep 5 23:55:22.993544 containerd[2120]: 2025-09-05 23:55:22.955 [INFO][4177] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="533ee8fd2d2662e1a5f760ff8e3f32b05c678393e8c18beca5ad2871f21af2b4" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.22.173-k8s-nfs--server--provisioner--0-eth0" Sep 5 23:55:22.993544 containerd[2120]: 2025-09-05 23:55:22.959 [INFO][4177] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="533ee8fd2d2662e1a5f760ff8e3f32b05c678393e8c18beca5ad2871f21af2b4" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.22.173-k8s-nfs--server--provisioner--0-eth0" Sep 5 23:55:22.994277 containerd[2120]: 2025-09-05 23:55:22.960 [INFO][4177] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="533ee8fd2d2662e1a5f760ff8e3f32b05c678393e8c18beca5ad2871f21af2b4" Namespace="default" Pod="nfs-server-provisioner-0" 
WorkloadEndpoint="172.31.22.173-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.22.173-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"d252519b-b52c-4a9e-8ecd-2c5f4c799615", ResourceVersion:"1310", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 55, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.22.173", ContainerID:"533ee8fd2d2662e1a5f760ff8e3f32b05c678393e8c18beca5ad2871f21af2b4", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.13.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"ea:d5:77:9d:ed:06", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:55:22.994277 containerd[2120]: 2025-09-05 23:55:22.988 [INFO][4177] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="533ee8fd2d2662e1a5f760ff8e3f32b05c678393e8c18beca5ad2871f21af2b4" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.22.173-k8s-nfs--server--provisioner--0-eth0" Sep 5 23:55:23.031589 containerd[2120]: time="2025-09-05T23:55:23.030548144Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 23:55:23.040036 containerd[2120]: time="2025-09-05T23:55:23.031652984Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 23:55:23.040036 containerd[2120]: time="2025-09-05T23:55:23.031723520Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:55:23.040036 containerd[2120]: time="2025-09-05T23:55:23.031916324Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:55:23.132346 containerd[2120]: time="2025-09-05T23:55:23.132258500Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:d252519b-b52c-4a9e-8ecd-2c5f4c799615,Namespace:default,Attempt:0,} returns sandbox id \"533ee8fd2d2662e1a5f760ff8e3f32b05c678393e8c18beca5ad2871f21af2b4\"" Sep 5 23:55:23.134772 containerd[2120]: time="2025-09-05T23:55:23.134720912Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Sep 5 23:55:23.661355 kubelet[2617]: E0905 23:55:23.660430 2617 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 5 23:55:24.661278 kubelet[2617]: E0905 23:55:24.661198 2617 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 5 23:55:24.812502 systemd-networkd[1689]: cali60e51b789ff: Gained IPv6LL Sep 5 23:55:25.661454 kubelet[2617]: E0905 23:55:25.661390 2617 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 5 23:55:25.725994 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3360073298.mount: Deactivated successfully. 
Sep 5 23:55:26.662133 kubelet[2617]: E0905 23:55:26.661985 2617 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 5 23:55:27.264706 ntpd[2081]: Listen normally on 10 cali60e51b789ff [fe80::ecee:eeff:feee:eeee%8]:123 Sep 5 23:55:27.265268 ntpd[2081]: 5 Sep 23:55:27 ntpd[2081]: Listen normally on 10 cali60e51b789ff [fe80::ecee:eeff:feee:eeee%8]:123 Sep 5 23:55:27.663098 kubelet[2617]: E0905 23:55:27.663030 2617 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 5 23:55:28.664488 kubelet[2617]: E0905 23:55:28.664438 2617 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 5 23:55:28.743380 containerd[2120]: time="2025-09-05T23:55:28.742688656Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:55:28.745360 containerd[2120]: time="2025-09-05T23:55:28.745033888Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=87373623" Sep 5 23:55:28.747659 containerd[2120]: time="2025-09-05T23:55:28.747540184Z" level=info msg="ImageCreate event name:\"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:55:28.755352 containerd[2120]: time="2025-09-05T23:55:28.755246608Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:55:28.758592 containerd[2120]: time="2025-09-05T23:55:28.757241032Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id 
\"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"87371201\" in 5.622458512s" Sep 5 23:55:28.758592 containerd[2120]: time="2025-09-05T23:55:28.757343644Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\"" Sep 5 23:55:28.761985 containerd[2120]: time="2025-09-05T23:55:28.761907700Z" level=info msg="CreateContainer within sandbox \"533ee8fd2d2662e1a5f760ff8e3f32b05c678393e8c18beca5ad2871f21af2b4\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Sep 5 23:55:28.788891 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2108397189.mount: Deactivated successfully. Sep 5 23:55:28.797428 containerd[2120]: time="2025-09-05T23:55:28.797218228Z" level=info msg="CreateContainer within sandbox \"533ee8fd2d2662e1a5f760ff8e3f32b05c678393e8c18beca5ad2871f21af2b4\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"79a98a4726ac28a217b60bff0c9916994443190763ffc168f8844ad43c8e8915\"" Sep 5 23:55:28.798947 containerd[2120]: time="2025-09-05T23:55:28.798575704Z" level=info msg="StartContainer for \"79a98a4726ac28a217b60bff0c9916994443190763ffc168f8844ad43c8e8915\"" Sep 5 23:55:28.901351 containerd[2120]: time="2025-09-05T23:55:28.900118301Z" level=info msg="StartContainer for \"79a98a4726ac28a217b60bff0c9916994443190763ffc168f8844ad43c8e8915\" returns successfully" Sep 5 23:55:29.666134 kubelet[2617]: E0905 23:55:29.666066 2617 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 5 23:55:30.667220 kubelet[2617]: E0905 23:55:30.667160 2617 file_linux.go:61] "Unable to read config path" err="path 
does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 5 23:55:31.667712 kubelet[2617]: E0905 23:55:31.667646 2617 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 5 23:55:32.668324 kubelet[2617]: E0905 23:55:32.668250 2617 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 5 23:55:33.668870 kubelet[2617]: E0905 23:55:33.668809 2617 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 5 23:55:34.669007 kubelet[2617]: E0905 23:55:34.668938 2617 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 5 23:55:35.669876 kubelet[2617]: E0905 23:55:35.669805 2617 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 5 23:55:36.670608 kubelet[2617]: E0905 23:55:36.670545 2617 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 5 23:55:37.671665 kubelet[2617]: E0905 23:55:37.671599 2617 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 5 23:55:38.672016 kubelet[2617]: E0905 23:55:38.671948 2617 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 5 23:55:39.622179 kubelet[2617]: E0905 23:55:39.622122 2617 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 5 23:55:39.657364 containerd[2120]: time="2025-09-05T23:55:39.657216686Z" level=info msg="StopPodSandbox for \"828237b29498d3befc1b42c0bd47039f6c50b933b7c805c2f7d40e85ce0ff1ff\"" Sep 5 23:55:39.672692 kubelet[2617]: E0905 23:55:39.672550 2617 file_linux.go:61] "Unable to read config path" err="path does not exist, 
ignoring" path="/etc/kubernetes/manifests" Sep 5 23:55:39.785771 containerd[2120]: 2025-09-05 23:55:39.718 [WARNING][4363] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="828237b29498d3befc1b42c0bd47039f6c50b933b7c805c2f7d40e85ce0ff1ff" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.22.173-k8s-nginx--deployment--8587fbcb89--ttvzr-eth0", GenerateName:"nginx-deployment-8587fbcb89-", Namespace:"default", SelfLink:"", UID:"269c61c7-d641-41eb-a0a5-10228a03a853", ResourceVersion:"1269", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 54, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8587fbcb89", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.22.173", ContainerID:"e4230599266fc6da5114b8d30a99381c11f8e347b721544fda15d379d4ce7e52", Pod:"nginx-deployment-8587fbcb89-ttvzr", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.13.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali147836089cb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:55:39.785771 containerd[2120]: 2025-09-05 23:55:39.719 [INFO][4363] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="828237b29498d3befc1b42c0bd47039f6c50b933b7c805c2f7d40e85ce0ff1ff" Sep 5 23:55:39.785771 containerd[2120]: 2025-09-05 23:55:39.719 
[INFO][4363] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="828237b29498d3befc1b42c0bd47039f6c50b933b7c805c2f7d40e85ce0ff1ff" iface="eth0" netns="" Sep 5 23:55:39.785771 containerd[2120]: 2025-09-05 23:55:39.719 [INFO][4363] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="828237b29498d3befc1b42c0bd47039f6c50b933b7c805c2f7d40e85ce0ff1ff" Sep 5 23:55:39.785771 containerd[2120]: 2025-09-05 23:55:39.720 [INFO][4363] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="828237b29498d3befc1b42c0bd47039f6c50b933b7c805c2f7d40e85ce0ff1ff" Sep 5 23:55:39.785771 containerd[2120]: 2025-09-05 23:55:39.759 [INFO][4370] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="828237b29498d3befc1b42c0bd47039f6c50b933b7c805c2f7d40e85ce0ff1ff" HandleID="k8s-pod-network.828237b29498d3befc1b42c0bd47039f6c50b933b7c805c2f7d40e85ce0ff1ff" Workload="172.31.22.173-k8s-nginx--deployment--8587fbcb89--ttvzr-eth0" Sep 5 23:55:39.785771 containerd[2120]: 2025-09-05 23:55:39.759 [INFO][4370] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:55:39.785771 containerd[2120]: 2025-09-05 23:55:39.759 [INFO][4370] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:55:39.785771 containerd[2120]: 2025-09-05 23:55:39.774 [WARNING][4370] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="828237b29498d3befc1b42c0bd47039f6c50b933b7c805c2f7d40e85ce0ff1ff" HandleID="k8s-pod-network.828237b29498d3befc1b42c0bd47039f6c50b933b7c805c2f7d40e85ce0ff1ff" Workload="172.31.22.173-k8s-nginx--deployment--8587fbcb89--ttvzr-eth0" Sep 5 23:55:39.785771 containerd[2120]: 2025-09-05 23:55:39.774 [INFO][4370] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="828237b29498d3befc1b42c0bd47039f6c50b933b7c805c2f7d40e85ce0ff1ff" HandleID="k8s-pod-network.828237b29498d3befc1b42c0bd47039f6c50b933b7c805c2f7d40e85ce0ff1ff" Workload="172.31.22.173-k8s-nginx--deployment--8587fbcb89--ttvzr-eth0" Sep 5 23:55:39.785771 containerd[2120]: 2025-09-05 23:55:39.776 [INFO][4370] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:55:39.785771 containerd[2120]: 2025-09-05 23:55:39.782 [INFO][4363] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="828237b29498d3befc1b42c0bd47039f6c50b933b7c805c2f7d40e85ce0ff1ff" Sep 5 23:55:39.785771 containerd[2120]: time="2025-09-05T23:55:39.785595831Z" level=info msg="TearDown network for sandbox \"828237b29498d3befc1b42c0bd47039f6c50b933b7c805c2f7d40e85ce0ff1ff\" successfully" Sep 5 23:55:39.785771 containerd[2120]: time="2025-09-05T23:55:39.785634363Z" level=info msg="StopPodSandbox for \"828237b29498d3befc1b42c0bd47039f6c50b933b7c805c2f7d40e85ce0ff1ff\" returns successfully" Sep 5 23:55:39.789439 containerd[2120]: time="2025-09-05T23:55:39.789358719Z" level=info msg="RemovePodSandbox for \"828237b29498d3befc1b42c0bd47039f6c50b933b7c805c2f7d40e85ce0ff1ff\"" Sep 5 23:55:39.789439 containerd[2120]: time="2025-09-05T23:55:39.789433755Z" level=info msg="Forcibly stopping sandbox \"828237b29498d3befc1b42c0bd47039f6c50b933b7c805c2f7d40e85ce0ff1ff\"" Sep 5 23:55:39.904424 containerd[2120]: 2025-09-05 23:55:39.848 [WARNING][4387] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="828237b29498d3befc1b42c0bd47039f6c50b933b7c805c2f7d40e85ce0ff1ff" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.22.173-k8s-nginx--deployment--8587fbcb89--ttvzr-eth0", GenerateName:"nginx-deployment-8587fbcb89-", Namespace:"default", SelfLink:"", UID:"269c61c7-d641-41eb-a0a5-10228a03a853", ResourceVersion:"1269", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 54, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8587fbcb89", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.22.173", ContainerID:"e4230599266fc6da5114b8d30a99381c11f8e347b721544fda15d379d4ce7e52", Pod:"nginx-deployment-8587fbcb89-ttvzr", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.13.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali147836089cb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:55:39.904424 containerd[2120]: 2025-09-05 23:55:39.848 [INFO][4387] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="828237b29498d3befc1b42c0bd47039f6c50b933b7c805c2f7d40e85ce0ff1ff" Sep 5 23:55:39.904424 containerd[2120]: 2025-09-05 23:55:39.848 [INFO][4387] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="828237b29498d3befc1b42c0bd47039f6c50b933b7c805c2f7d40e85ce0ff1ff" iface="eth0" netns="" Sep 5 23:55:39.904424 containerd[2120]: 2025-09-05 23:55:39.848 [INFO][4387] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="828237b29498d3befc1b42c0bd47039f6c50b933b7c805c2f7d40e85ce0ff1ff" Sep 5 23:55:39.904424 containerd[2120]: 2025-09-05 23:55:39.848 [INFO][4387] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="828237b29498d3befc1b42c0bd47039f6c50b933b7c805c2f7d40e85ce0ff1ff" Sep 5 23:55:39.904424 containerd[2120]: 2025-09-05 23:55:39.883 [INFO][4394] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="828237b29498d3befc1b42c0bd47039f6c50b933b7c805c2f7d40e85ce0ff1ff" HandleID="k8s-pod-network.828237b29498d3befc1b42c0bd47039f6c50b933b7c805c2f7d40e85ce0ff1ff" Workload="172.31.22.173-k8s-nginx--deployment--8587fbcb89--ttvzr-eth0" Sep 5 23:55:39.904424 containerd[2120]: 2025-09-05 23:55:39.883 [INFO][4394] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:55:39.904424 containerd[2120]: 2025-09-05 23:55:39.883 [INFO][4394] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:55:39.904424 containerd[2120]: 2025-09-05 23:55:39.896 [WARNING][4394] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="828237b29498d3befc1b42c0bd47039f6c50b933b7c805c2f7d40e85ce0ff1ff" HandleID="k8s-pod-network.828237b29498d3befc1b42c0bd47039f6c50b933b7c805c2f7d40e85ce0ff1ff" Workload="172.31.22.173-k8s-nginx--deployment--8587fbcb89--ttvzr-eth0" Sep 5 23:55:39.904424 containerd[2120]: 2025-09-05 23:55:39.897 [INFO][4394] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="828237b29498d3befc1b42c0bd47039f6c50b933b7c805c2f7d40e85ce0ff1ff" HandleID="k8s-pod-network.828237b29498d3befc1b42c0bd47039f6c50b933b7c805c2f7d40e85ce0ff1ff" Workload="172.31.22.173-k8s-nginx--deployment--8587fbcb89--ttvzr-eth0" Sep 5 23:55:39.904424 containerd[2120]: 2025-09-05 23:55:39.899 [INFO][4394] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:55:39.904424 containerd[2120]: 2025-09-05 23:55:39.901 [INFO][4387] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="828237b29498d3befc1b42c0bd47039f6c50b933b7c805c2f7d40e85ce0ff1ff" Sep 5 23:55:39.905699 containerd[2120]: time="2025-09-05T23:55:39.904478560Z" level=info msg="TearDown network for sandbox \"828237b29498d3befc1b42c0bd47039f6c50b933b7c805c2f7d40e85ce0ff1ff\" successfully" Sep 5 23:55:39.910975 containerd[2120]: time="2025-09-05T23:55:39.910894480Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"828237b29498d3befc1b42c0bd47039f6c50b933b7c805c2f7d40e85ce0ff1ff\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 5 23:55:39.911357 containerd[2120]: time="2025-09-05T23:55:39.910983088Z" level=info msg="RemovePodSandbox \"828237b29498d3befc1b42c0bd47039f6c50b933b7c805c2f7d40e85ce0ff1ff\" returns successfully" Sep 5 23:55:39.912013 containerd[2120]: time="2025-09-05T23:55:39.911900104Z" level=info msg="StopPodSandbox for \"6bf18df0d094cc357824a357389e65b9332beb98d25f4c289693aafb42dc6a39\"" Sep 5 23:55:40.067732 containerd[2120]: 2025-09-05 23:55:40.008 [WARNING][4408] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="6bf18df0d094cc357824a357389e65b9332beb98d25f4c289693aafb42dc6a39" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.22.173-k8s-csi--node--driver--tfdsb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d9af57f1-fd8a-41b6-bc88-120433372f08", ResourceVersion:"1244", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 54, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.22.173", ContainerID:"0ea2e2b4ef0f9007b12f003c6624f2af85090dea85a75b97ef4457976ed052d0", Pod:"csi-node-driver-tfdsb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.13.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6aad551e82b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:55:40.067732 containerd[2120]: 2025-09-05 23:55:40.009 [INFO][4408] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6bf18df0d094cc357824a357389e65b9332beb98d25f4c289693aafb42dc6a39" Sep 5 23:55:40.067732 containerd[2120]: 2025-09-05 23:55:40.009 [INFO][4408] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6bf18df0d094cc357824a357389e65b9332beb98d25f4c289693aafb42dc6a39" iface="eth0" netns="" Sep 5 23:55:40.067732 containerd[2120]: 2025-09-05 23:55:40.009 [INFO][4408] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6bf18df0d094cc357824a357389e65b9332beb98d25f4c289693aafb42dc6a39" Sep 5 23:55:40.067732 containerd[2120]: 2025-09-05 23:55:40.009 [INFO][4408] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6bf18df0d094cc357824a357389e65b9332beb98d25f4c289693aafb42dc6a39" Sep 5 23:55:40.067732 containerd[2120]: 2025-09-05 23:55:40.047 [INFO][4417] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6bf18df0d094cc357824a357389e65b9332beb98d25f4c289693aafb42dc6a39" HandleID="k8s-pod-network.6bf18df0d094cc357824a357389e65b9332beb98d25f4c289693aafb42dc6a39" Workload="172.31.22.173-k8s-csi--node--driver--tfdsb-eth0" Sep 5 23:55:40.067732 containerd[2120]: 2025-09-05 23:55:40.047 [INFO][4417] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:55:40.067732 containerd[2120]: 2025-09-05 23:55:40.047 [INFO][4417] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:55:40.067732 containerd[2120]: 2025-09-05 23:55:40.060 [WARNING][4417] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6bf18df0d094cc357824a357389e65b9332beb98d25f4c289693aafb42dc6a39" HandleID="k8s-pod-network.6bf18df0d094cc357824a357389e65b9332beb98d25f4c289693aafb42dc6a39" Workload="172.31.22.173-k8s-csi--node--driver--tfdsb-eth0" Sep 5 23:55:40.067732 containerd[2120]: 2025-09-05 23:55:40.060 [INFO][4417] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6bf18df0d094cc357824a357389e65b9332beb98d25f4c289693aafb42dc6a39" HandleID="k8s-pod-network.6bf18df0d094cc357824a357389e65b9332beb98d25f4c289693aafb42dc6a39" Workload="172.31.22.173-k8s-csi--node--driver--tfdsb-eth0" Sep 5 23:55:40.067732 containerd[2120]: 2025-09-05 23:55:40.062 [INFO][4417] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:55:40.067732 containerd[2120]: 2025-09-05 23:55:40.065 [INFO][4408] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6bf18df0d094cc357824a357389e65b9332beb98d25f4c289693aafb42dc6a39" Sep 5 23:55:40.067732 containerd[2120]: time="2025-09-05T23:55:40.067566144Z" level=info msg="TearDown network for sandbox \"6bf18df0d094cc357824a357389e65b9332beb98d25f4c289693aafb42dc6a39\" successfully" Sep 5 23:55:40.067732 containerd[2120]: time="2025-09-05T23:55:40.067603464Z" level=info msg="StopPodSandbox for \"6bf18df0d094cc357824a357389e65b9332beb98d25f4c289693aafb42dc6a39\" returns successfully" Sep 5 23:55:40.069774 containerd[2120]: time="2025-09-05T23:55:40.069188424Z" level=info msg="RemovePodSandbox for \"6bf18df0d094cc357824a357389e65b9332beb98d25f4c289693aafb42dc6a39\"" Sep 5 23:55:40.069774 containerd[2120]: time="2025-09-05T23:55:40.069240432Z" level=info msg="Forcibly stopping sandbox \"6bf18df0d094cc357824a357389e65b9332beb98d25f4c289693aafb42dc6a39\"" Sep 5 23:55:40.229032 containerd[2120]: 2025-09-05 23:55:40.173 [WARNING][4436] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6bf18df0d094cc357824a357389e65b9332beb98d25f4c289693aafb42dc6a39" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.22.173-k8s-csi--node--driver--tfdsb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d9af57f1-fd8a-41b6-bc88-120433372f08", ResourceVersion:"1244", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 54, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.22.173", ContainerID:"0ea2e2b4ef0f9007b12f003c6624f2af85090dea85a75b97ef4457976ed052d0", Pod:"csi-node-driver-tfdsb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.13.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6aad551e82b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:55:40.229032 containerd[2120]: 2025-09-05 23:55:40.173 [INFO][4436] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6bf18df0d094cc357824a357389e65b9332beb98d25f4c289693aafb42dc6a39" Sep 5 23:55:40.229032 containerd[2120]: 2025-09-05 23:55:40.173 [INFO][4436] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="6bf18df0d094cc357824a357389e65b9332beb98d25f4c289693aafb42dc6a39" iface="eth0" netns="" Sep 5 23:55:40.229032 containerd[2120]: 2025-09-05 23:55:40.173 [INFO][4436] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6bf18df0d094cc357824a357389e65b9332beb98d25f4c289693aafb42dc6a39" Sep 5 23:55:40.229032 containerd[2120]: 2025-09-05 23:55:40.173 [INFO][4436] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6bf18df0d094cc357824a357389e65b9332beb98d25f4c289693aafb42dc6a39" Sep 5 23:55:40.229032 containerd[2120]: 2025-09-05 23:55:40.207 [INFO][4444] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6bf18df0d094cc357824a357389e65b9332beb98d25f4c289693aafb42dc6a39" HandleID="k8s-pod-network.6bf18df0d094cc357824a357389e65b9332beb98d25f4c289693aafb42dc6a39" Workload="172.31.22.173-k8s-csi--node--driver--tfdsb-eth0" Sep 5 23:55:40.229032 containerd[2120]: 2025-09-05 23:55:40.208 [INFO][4444] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:55:40.229032 containerd[2120]: 2025-09-05 23:55:40.208 [INFO][4444] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:55:40.229032 containerd[2120]: 2025-09-05 23:55:40.221 [WARNING][4444] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6bf18df0d094cc357824a357389e65b9332beb98d25f4c289693aafb42dc6a39" HandleID="k8s-pod-network.6bf18df0d094cc357824a357389e65b9332beb98d25f4c289693aafb42dc6a39" Workload="172.31.22.173-k8s-csi--node--driver--tfdsb-eth0" Sep 5 23:55:40.229032 containerd[2120]: 2025-09-05 23:55:40.222 [INFO][4444] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6bf18df0d094cc357824a357389e65b9332beb98d25f4c289693aafb42dc6a39" HandleID="k8s-pod-network.6bf18df0d094cc357824a357389e65b9332beb98d25f4c289693aafb42dc6a39" Workload="172.31.22.173-k8s-csi--node--driver--tfdsb-eth0" Sep 5 23:55:40.229032 containerd[2120]: 2025-09-05 23:55:40.224 [INFO][4444] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:55:40.229032 containerd[2120]: 2025-09-05 23:55:40.226 [INFO][4436] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6bf18df0d094cc357824a357389e65b9332beb98d25f4c289693aafb42dc6a39" Sep 5 23:55:40.229916 containerd[2120]: time="2025-09-05T23:55:40.229130881Z" level=info msg="TearDown network for sandbox \"6bf18df0d094cc357824a357389e65b9332beb98d25f4c289693aafb42dc6a39\" successfully" Sep 5 23:55:40.236108 containerd[2120]: time="2025-09-05T23:55:40.235880221Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6bf18df0d094cc357824a357389e65b9332beb98d25f4c289693aafb42dc6a39\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 5 23:55:40.236108 containerd[2120]: time="2025-09-05T23:55:40.235956001Z" level=info msg="RemovePodSandbox \"6bf18df0d094cc357824a357389e65b9332beb98d25f4c289693aafb42dc6a39\" returns successfully" Sep 5 23:55:40.673629 kubelet[2617]: E0905 23:55:40.673573 2617 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 5 23:55:41.674908 kubelet[2617]: E0905 23:55:41.674518 2617 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 5 23:55:42.675718 kubelet[2617]: E0905 23:55:42.675651 2617 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 5 23:55:43.675899 kubelet[2617]: E0905 23:55:43.675808 2617 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 5 23:55:44.676227 kubelet[2617]: E0905 23:55:44.676167 2617 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 5 23:55:45.676765 kubelet[2617]: E0905 23:55:45.676685 2617 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 5 23:55:46.677884 kubelet[2617]: E0905 23:55:46.677825 2617 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 5 23:55:47.678825 kubelet[2617]: E0905 23:55:47.678763 2617 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 5 23:55:48.679319 kubelet[2617]: E0905 23:55:48.679245 2617 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 5 23:55:49.679613 kubelet[2617]: E0905 23:55:49.679536 2617 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Sep 5 23:55:50.680493 kubelet[2617]: E0905 23:55:50.680430 2617 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 5 23:55:51.680883 kubelet[2617]: E0905 23:55:51.680818 2617 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 5 23:55:52.681845 kubelet[2617]: E0905 23:55:52.681786 2617 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 5 23:55:53.428561 kubelet[2617]: I0905 23:55:53.428471 2617 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=25.803110979 podStartE2EDuration="31.428426835s" podCreationTimestamp="2025-09-05 23:55:22 +0000 UTC" firstStartedPulling="2025-09-05 23:55:23.134158052 +0000 UTC m=+45.242924085" lastFinishedPulling="2025-09-05 23:55:28.759473908 +0000 UTC m=+50.868239941" observedRunningTime="2025-09-05 23:55:29.083985278 +0000 UTC m=+51.192751323" watchObservedRunningTime="2025-09-05 23:55:53.428426835 +0000 UTC m=+75.537192892" Sep 5 23:55:53.537395 kubelet[2617]: I0905 23:55:53.537234 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-064559c6-11e7-4f8b-99f9-94ff3afb2538\" (UniqueName: \"kubernetes.io/nfs/fbe07fe6-d514-4a36-b274-05fc1b5b0910-pvc-064559c6-11e7-4f8b-99f9-94ff3afb2538\") pod \"test-pod-1\" (UID: \"fbe07fe6-d514-4a36-b274-05fc1b5b0910\") " pod="default/test-pod-1" Sep 5 23:55:53.537395 kubelet[2617]: I0905 23:55:53.537326 2617 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nz7jx\" (UniqueName: \"kubernetes.io/projected/fbe07fe6-d514-4a36-b274-05fc1b5b0910-kube-api-access-nz7jx\") pod \"test-pod-1\" (UID: \"fbe07fe6-d514-4a36-b274-05fc1b5b0910\") " pod="default/test-pod-1" Sep 5 23:55:53.681536 
kernel: FS-Cache: Loaded Sep 5 23:55:53.682746 kubelet[2617]: E0905 23:55:53.682601 2617 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 5 23:55:53.724765 kernel: RPC: Registered named UNIX socket transport module. Sep 5 23:55:53.724924 kernel: RPC: Registered udp transport module. Sep 5 23:55:53.724961 kernel: RPC: Registered tcp transport module. Sep 5 23:55:53.725595 kernel: RPC: Registered tcp-with-tls transport module. Sep 5 23:55:53.726740 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Sep 5 23:55:54.068765 kernel: NFS: Registering the id_resolver key type Sep 5 23:55:54.068877 kernel: Key type id_resolver registered Sep 5 23:55:54.068957 kernel: Key type id_legacy registered Sep 5 23:55:54.107934 nfsidmap[4489]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal' Sep 5 23:55:54.114298 nfsidmap[4490]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal' Sep 5 23:55:54.335098 containerd[2120]: time="2025-09-05T23:55:54.334991811Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:fbe07fe6-d514-4a36-b274-05fc1b5b0910,Namespace:default,Attempt:0,}" Sep 5 23:55:54.546902 (udev-worker)[4478]: Network interface NamePolicy= disabled on kernel command line. 
Sep 5 23:55:54.548049 systemd-networkd[1689]: cali5ec59c6bf6e: Link UP Sep 5 23:55:54.550077 systemd-networkd[1689]: cali5ec59c6bf6e: Gained carrier Sep 5 23:55:54.569872 containerd[2120]: 2025-09-05 23:55:54.417 [INFO][4492] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.22.173-k8s-test--pod--1-eth0 default fbe07fe6-d514-4a36-b274-05fc1b5b0910 1434 0 2025-09-05 23:55:23 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 172.31.22.173 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] [] }} ContainerID="5f8a9ab3669740c4829d53d05102a5de35930f9362e28d4d94c0811f56be556d" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.22.173-k8s-test--pod--1-" Sep 5 23:55:54.569872 containerd[2120]: 2025-09-05 23:55:54.418 [INFO][4492] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5f8a9ab3669740c4829d53d05102a5de35930f9362e28d4d94c0811f56be556d" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.22.173-k8s-test--pod--1-eth0" Sep 5 23:55:54.569872 containerd[2120]: 2025-09-05 23:55:54.460 [INFO][4504] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5f8a9ab3669740c4829d53d05102a5de35930f9362e28d4d94c0811f56be556d" HandleID="k8s-pod-network.5f8a9ab3669740c4829d53d05102a5de35930f9362e28d4d94c0811f56be556d" Workload="172.31.22.173-k8s-test--pod--1-eth0" Sep 5 23:55:54.569872 containerd[2120]: 2025-09-05 23:55:54.460 [INFO][4504] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5f8a9ab3669740c4829d53d05102a5de35930f9362e28d4d94c0811f56be556d" HandleID="k8s-pod-network.5f8a9ab3669740c4829d53d05102a5de35930f9362e28d4d94c0811f56be556d" Workload="172.31.22.173-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b610), Attrs:map[string]string{"namespace":"default", 
"node":"172.31.22.173", "pod":"test-pod-1", "timestamp":"2025-09-05 23:55:54.46066534 +0000 UTC"}, Hostname:"172.31.22.173", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 23:55:54.569872 containerd[2120]: 2025-09-05 23:55:54.461 [INFO][4504] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:55:54.569872 containerd[2120]: 2025-09-05 23:55:54.461 [INFO][4504] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:55:54.569872 containerd[2120]: 2025-09-05 23:55:54.461 [INFO][4504] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.22.173' Sep 5 23:55:54.569872 containerd[2120]: 2025-09-05 23:55:54.482 [INFO][4504] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5f8a9ab3669740c4829d53d05102a5de35930f9362e28d4d94c0811f56be556d" host="172.31.22.173" Sep 5 23:55:54.569872 containerd[2120]: 2025-09-05 23:55:54.493 [INFO][4504] ipam/ipam.go 394: Looking up existing affinities for host host="172.31.22.173" Sep 5 23:55:54.569872 containerd[2120]: 2025-09-05 23:55:54.503 [INFO][4504] ipam/ipam.go 511: Trying affinity for 192.168.13.0/26 host="172.31.22.173" Sep 5 23:55:54.569872 containerd[2120]: 2025-09-05 23:55:54.507 [INFO][4504] ipam/ipam.go 158: Attempting to load block cidr=192.168.13.0/26 host="172.31.22.173" Sep 5 23:55:54.569872 containerd[2120]: 2025-09-05 23:55:54.511 [INFO][4504] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.13.0/26 host="172.31.22.173" Sep 5 23:55:54.569872 containerd[2120]: 2025-09-05 23:55:54.511 [INFO][4504] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.13.0/26 handle="k8s-pod-network.5f8a9ab3669740c4829d53d05102a5de35930f9362e28d4d94c0811f56be556d" host="172.31.22.173" Sep 5 23:55:54.569872 containerd[2120]: 2025-09-05 23:55:54.516 
[INFO][4504] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5f8a9ab3669740c4829d53d05102a5de35930f9362e28d4d94c0811f56be556d Sep 5 23:55:54.569872 containerd[2120]: 2025-09-05 23:55:54.525 [INFO][4504] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.13.0/26 handle="k8s-pod-network.5f8a9ab3669740c4829d53d05102a5de35930f9362e28d4d94c0811f56be556d" host="172.31.22.173" Sep 5 23:55:54.569872 containerd[2120]: 2025-09-05 23:55:54.539 [INFO][4504] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.13.4/26] block=192.168.13.0/26 handle="k8s-pod-network.5f8a9ab3669740c4829d53d05102a5de35930f9362e28d4d94c0811f56be556d" host="172.31.22.173" Sep 5 23:55:54.569872 containerd[2120]: 2025-09-05 23:55:54.539 [INFO][4504] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.13.4/26] handle="k8s-pod-network.5f8a9ab3669740c4829d53d05102a5de35930f9362e28d4d94c0811f56be556d" host="172.31.22.173" Sep 5 23:55:54.569872 containerd[2120]: 2025-09-05 23:55:54.539 [INFO][4504] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 5 23:55:54.569872 containerd[2120]: 2025-09-05 23:55:54.539 [INFO][4504] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.13.4/26] IPv6=[] ContainerID="5f8a9ab3669740c4829d53d05102a5de35930f9362e28d4d94c0811f56be556d" HandleID="k8s-pod-network.5f8a9ab3669740c4829d53d05102a5de35930f9362e28d4d94c0811f56be556d" Workload="172.31.22.173-k8s-test--pod--1-eth0" Sep 5 23:55:54.569872 containerd[2120]: 2025-09-05 23:55:54.542 [INFO][4492] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5f8a9ab3669740c4829d53d05102a5de35930f9362e28d4d94c0811f56be556d" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.22.173-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.22.173-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"fbe07fe6-d514-4a36-b274-05fc1b5b0910", ResourceVersion:"1434", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 55, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.22.173", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.13.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:55:54.577642 containerd[2120]: 2025-09-05 23:55:54.542 [INFO][4492] 
cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.13.4/32] ContainerID="5f8a9ab3669740c4829d53d05102a5de35930f9362e28d4d94c0811f56be556d" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.22.173-k8s-test--pod--1-eth0" Sep 5 23:55:54.577642 containerd[2120]: 2025-09-05 23:55:54.542 [INFO][4492] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="5f8a9ab3669740c4829d53d05102a5de35930f9362e28d4d94c0811f56be556d" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.22.173-k8s-test--pod--1-eth0" Sep 5 23:55:54.577642 containerd[2120]: 2025-09-05 23:55:54.551 [INFO][4492] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5f8a9ab3669740c4829d53d05102a5de35930f9362e28d4d94c0811f56be556d" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.22.173-k8s-test--pod--1-eth0" Sep 5 23:55:54.577642 containerd[2120]: 2025-09-05 23:55:54.552 [INFO][4492] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5f8a9ab3669740c4829d53d05102a5de35930f9362e28d4d94c0811f56be556d" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.22.173-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.22.173-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"fbe07fe6-d514-4a36-b274-05fc1b5b0910", ResourceVersion:"1434", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 55, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, 
Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.22.173", ContainerID:"5f8a9ab3669740c4829d53d05102a5de35930f9362e28d4d94c0811f56be556d", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.13.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"7a:8f:49:0c:1f:cd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:55:54.577642 containerd[2120]: 2025-09-05 23:55:54.566 [INFO][4492] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5f8a9ab3669740c4829d53d05102a5de35930f9362e28d4d94c0811f56be556d" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.22.173-k8s-test--pod--1-eth0" Sep 5 23:55:54.622289 containerd[2120]: time="2025-09-05T23:55:54.622008545Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 23:55:54.622289 containerd[2120]: time="2025-09-05T23:55:54.622107305Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 23:55:54.623237 containerd[2120]: time="2025-09-05T23:55:54.622420985Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:55:54.626215 containerd[2120]: time="2025-09-05T23:55:54.626112929Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:55:54.684509 kubelet[2617]: E0905 23:55:54.683235 2617 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 5 23:55:54.739357 containerd[2120]: time="2025-09-05T23:55:54.739167833Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:fbe07fe6-d514-4a36-b274-05fc1b5b0910,Namespace:default,Attempt:0,} returns sandbox id \"5f8a9ab3669740c4829d53d05102a5de35930f9362e28d4d94c0811f56be556d\"" Sep 5 23:55:54.742252 containerd[2120]: time="2025-09-05T23:55:54.742040273Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Sep 5 23:55:55.032531 containerd[2120]: time="2025-09-05T23:55:55.032224155Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:55:55.034499 containerd[2120]: time="2025-09-05T23:55:55.034452255Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Sep 5 23:55:55.040095 containerd[2120]: time="2025-09-05T23:55:55.040021527Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:9fddf21fd9c2634e7bf6e633e36b0fb227f6cd5fbe1b3334a16de3ab50f31e5e\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:883ca821a91fc20bcde818eeee4e1ed55ef63a020d6198ecd5a03af5a4eac530\", size \"69986400\" in 297.917954ms" Sep 5 23:55:55.040095 containerd[2120]: time="2025-09-05T23:55:55.040082907Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:9fddf21fd9c2634e7bf6e633e36b0fb227f6cd5fbe1b3334a16de3ab50f31e5e\"" Sep 5 23:55:55.043476 containerd[2120]: time="2025-09-05T23:55:55.043251003Z" level=info msg="CreateContainer within sandbox \"5f8a9ab3669740c4829d53d05102a5de35930f9362e28d4d94c0811f56be556d\" for container &ContainerMetadata{Name:test,Attempt:0,}" Sep 5 
23:55:55.068024 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3873874364.mount: Deactivated successfully. Sep 5 23:55:55.073667 containerd[2120]: time="2025-09-05T23:55:55.073584267Z" level=info msg="CreateContainer within sandbox \"5f8a9ab3669740c4829d53d05102a5de35930f9362e28d4d94c0811f56be556d\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"da541e2c000819986349612b5a5e2bbc68b3f1177ce1cba2f2ea794e1b19b3cb\"" Sep 5 23:55:55.075000 containerd[2120]: time="2025-09-05T23:55:55.074949867Z" level=info msg="StartContainer for \"da541e2c000819986349612b5a5e2bbc68b3f1177ce1cba2f2ea794e1b19b3cb\"" Sep 5 23:55:55.170630 containerd[2120]: time="2025-09-05T23:55:55.170571099Z" level=info msg="StartContainer for \"da541e2c000819986349612b5a5e2bbc68b3f1177ce1cba2f2ea794e1b19b3cb\" returns successfully" Sep 5 23:55:55.683989 kubelet[2617]: E0905 23:55:55.683929 2617 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 5 23:55:55.723991 systemd-networkd[1689]: cali5ec59c6bf6e: Gained IPv6LL Sep 5 23:55:56.108580 kubelet[2617]: I0905 23:55:56.108338 2617 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=32.808720806 podStartE2EDuration="33.108282484s" podCreationTimestamp="2025-09-05 23:55:23 +0000 UTC" firstStartedPulling="2025-09-05 23:55:54.741549545 +0000 UTC m=+76.850315590" lastFinishedPulling="2025-09-05 23:55:55.041111235 +0000 UTC m=+77.149877268" observedRunningTime="2025-09-05 23:55:56.107963704 +0000 UTC m=+78.216729773" watchObservedRunningTime="2025-09-05 23:55:56.108282484 +0000 UTC m=+78.217048529" Sep 5 23:55:56.684410 kubelet[2617]: E0905 23:55:56.684337 2617 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 5 23:55:57.684916 kubelet[2617]: E0905 23:55:57.684849 2617 file_linux.go:61] "Unable to read config path" err="path does not 
exist, ignoring" path="/etc/kubernetes/manifests" Sep 5 23:55:58.264668 ntpd[2081]: Listen normally on 11 cali5ec59c6bf6e [fe80::ecee:eeff:feee:eeee%9]:123 Sep 5 23:55:58.265387 ntpd[2081]: 5 Sep 23:55:58 ntpd[2081]: Listen normally on 11 cali5ec59c6bf6e [fe80::ecee:eeff:feee:eeee%9]:123 Sep 5 23:55:58.685337 kubelet[2617]: E0905 23:55:58.685254 2617 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 5 23:55:59.622189 kubelet[2617]: E0905 23:55:59.622132 2617 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 5 23:55:59.685879 kubelet[2617]: E0905 23:55:59.685804 2617 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 5 23:56:00.686276 kubelet[2617]: E0905 23:56:00.686210 2617 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 5 23:56:01.687034 kubelet[2617]: E0905 23:56:01.686977 2617 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 5 23:56:02.687810 kubelet[2617]: E0905 23:56:02.687753 2617 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 5 23:56:03.688854 kubelet[2617]: E0905 23:56:03.688787 2617 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 5 23:56:04.689210 kubelet[2617]: E0905 23:56:04.689141 2617 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 5 23:56:05.689854 kubelet[2617]: E0905 23:56:05.689767 2617 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 5 23:56:06.690848 kubelet[2617]: E0905 23:56:06.690780 2617 file_linux.go:61] "Unable to 
read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 5 23:56:07.691210 kubelet[2617]: E0905 23:56:07.691146 2617 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 5 23:56:08.692228 kubelet[2617]: E0905 23:56:08.692166 2617 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 5 23:56:09.693230 kubelet[2617]: E0905 23:56:09.693165 2617 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 5 23:56:10.693983 kubelet[2617]: E0905 23:56:10.693910 2617 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 5 23:56:11.694678 kubelet[2617]: E0905 23:56:11.694604 2617 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 5 23:56:12.695551 kubelet[2617]: E0905 23:56:12.695478 2617 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 5 23:56:13.696584 kubelet[2617]: E0905 23:56:13.696508 2617 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 5 23:56:14.697112 kubelet[2617]: E0905 23:56:14.697052 2617 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 5 23:56:15.698079 kubelet[2617]: E0905 23:56:15.698031 2617 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 5 23:56:16.698589 kubelet[2617]: E0905 23:56:16.698524 2617 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 5 23:56:17.699393 kubelet[2617]: E0905 23:56:17.699335 2617 file_linux.go:61] "Unable to read config path" 
err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 5 23:56:18.699952 kubelet[2617]: E0905 23:56:18.699886 2617 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 5 23:56:19.622114 kubelet[2617]: E0905 23:56:19.622052 2617 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 5 23:56:19.700907 kubelet[2617]: E0905 23:56:19.700852 2617 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 5 23:56:20.701088 kubelet[2617]: E0905 23:56:20.701004 2617 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 5 23:56:21.678491 kubelet[2617]: E0905 23:56:21.678393 2617 controller.go:195] "Failed to update lease" err="Put \"https://172.31.22.93:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.22.173?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Sep 5 23:56:21.702143 kubelet[2617]: E0905 23:56:21.702094 2617 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 5 23:56:22.702300 kubelet[2617]: E0905 23:56:22.702224 2617 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 5 23:56:23.703469 kubelet[2617]: E0905 23:56:23.703407 2617 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 5 23:56:24.704264 kubelet[2617]: E0905 23:56:24.704194 2617 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 5 23:56:25.705424 kubelet[2617]: E0905 23:56:25.705364 2617 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" 
Sep 5 23:56:26.705551 kubelet[2617]: E0905 23:56:26.705497 2617 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 5 23:56:27.706443 kubelet[2617]: E0905 23:56:27.706368 2617 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 5 23:56:28.707622 kubelet[2617]: E0905 23:56:28.707528 2617 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 5 23:56:29.708549 kubelet[2617]: E0905 23:56:29.708481 2617 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 5 23:56:30.708956 kubelet[2617]: E0905 23:56:30.708878 2617 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 5 23:56:31.679476 kubelet[2617]: E0905 23:56:31.679379 2617 controller.go:195] "Failed to update lease" err="Put \"https://172.31.22.93:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.22.173?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Sep 5 23:56:31.710053 kubelet[2617]: E0905 23:56:31.709994 2617 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 5 23:56:32.710703 kubelet[2617]: E0905 23:56:32.710627 2617 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 5 23:56:33.711326 kubelet[2617]: E0905 23:56:33.711255 2617 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 5 23:56:34.711937 kubelet[2617]: E0905 23:56:34.711868 2617 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 5 23:56:35.712658 kubelet[2617]: E0905 23:56:35.712597 2617 
file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 5 23:56:36.713373 kubelet[2617]: E0905 23:56:36.713281 2617 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 5 23:56:37.713932 kubelet[2617]: E0905 23:56:37.713869 2617 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 5 23:56:38.714782 kubelet[2617]: E0905 23:56:38.714722 2617 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 5 23:56:39.622177 kubelet[2617]: E0905 23:56:39.622126 2617 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 5 23:56:39.715087 kubelet[2617]: E0905 23:56:39.715021 2617 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 5 23:56:40.715556 kubelet[2617]: E0905 23:56:40.715500 2617 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 5 23:56:41.680516 kubelet[2617]: E0905 23:56:41.680156 2617 controller.go:195] "Failed to update lease" err="Put \"https://172.31.22.93:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.22.173?timeout=10s\": context deadline exceeded" Sep 5 23:56:41.716699 kubelet[2617]: E0905 23:56:41.716649 2617 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 5 23:56:42.717265 kubelet[2617]: E0905 23:56:42.717199 2617 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 5 23:56:43.718092 kubelet[2617]: E0905 23:56:43.718019 2617 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 5 
Sep 5 23:56:44.271786 kubelet[2617]: E0905 23:56:44.271707 2617 controller.go:195] "Failed to update lease" err="Put \"https://172.31.22.93:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.22.173?timeout=10s\": unexpected EOF"
Sep 5 23:56:44.276351 kubelet[2617]: E0905 23:56:44.275004 2617 controller.go:195] "Failed to update lease" err="Put \"https://172.31.22.93:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.22.173?timeout=10s\": dial tcp 172.31.22.93:6443: connect: connection refused"
Sep 5 23:56:44.276351 kubelet[2617]: I0905 23:56:44.275084 2617 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
Sep 5 23:56:44.278707 kubelet[2617]: E0905 23:56:44.278634 2617 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.22.93:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.22.173?timeout=10s\": dial tcp 172.31.22.93:6443: connect: connection refused" interval="200ms"
Sep 5 23:56:44.479583 kubelet[2617]: E0905 23:56:44.479498 2617 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.22.93:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.22.173?timeout=10s\": dial tcp 172.31.22.93:6443: connect: connection refused" interval="400ms"
Sep 5 23:56:44.719097 kubelet[2617]: E0905 23:56:44.719030 2617 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 5 23:56:44.880961 kubelet[2617]: E0905 23:56:44.880888 2617 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.22.93:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.22.173?timeout=10s\": dial tcp 172.31.22.93:6443: connect: connection refused" interval="800ms"
Sep 5 23:56:45.683010 kubelet[2617]: E0905 23:56:45.682668 2617 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.22.93:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.22.173?timeout=10s\": dial tcp 172.31.22.93:6443: connect: connection refused" interval="1.6s"
Sep 5 23:56:45.720249 kubelet[2617]: E0905 23:56:45.720174 2617 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 5 23:56:45.813078 kubelet[2617]: E0905 23:56:45.812626 2617 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.22.93:6443/api/v1/namespaces/calico-system/events\": dial tcp 172.31.22.93:6443: connect: connection refused" event=<
Sep 5 23:56:45.813078 kubelet[2617]: &Event{ObjectMeta:{calico-node-wdvzh.1862883e4e5aa0af calico-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:calico-system,Name:calico-node-wdvzh,UID:194bea03-e9a9-4677-b488-2fde364ba650,APIVersion:v1,ResourceVersion:973,FieldPath:spec.containers{calico-node},},Reason:Unhealthy,Message:Readiness probe failed: 2025-09-05 23:56:45.804 [INFO][373] node/health.go 202: Number of node(s) with BGP peering established = 0
Sep 5 23:56:45.813078 kubelet[2617]: calico/node is not ready: BIRD is not ready: BGP not established with 172.31.22.93
Sep 5 23:56:45.813078 kubelet[2617]: ,Source:EventSource{Component:kubelet,Host:172.31.22.173,},FirstTimestamp:2025-09-05 23:56:45.811826863 +0000 UTC m=+127.920592920,LastTimestamp:2025-09-05 23:56:45.811826863 +0000 UTC m=+127.920592920,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.22.173,}
Sep 5 23:56:45.813078 kubelet[2617]: >
Sep 5 23:56:46.723458 kubelet[2617]: E0905 23:56:46.723366 2617 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 5 23:56:47.283824 kubelet[2617]: E0905 23:56:47.283745 2617 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.22.93:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.22.173?timeout=10s\": dial tcp 172.31.22.93:6443: connect: connection refused" interval="3.2s"
Sep 5 23:56:47.724454 kubelet[2617]: E0905 23:56:47.724386 2617 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 5 23:56:48.724786 kubelet[2617]: E0905 23:56:48.724723 2617 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 5 23:56:49.725823 kubelet[2617]: E0905 23:56:49.725756 2617 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 5 23:56:50.726668 kubelet[2617]: E0905 23:56:50.726600 2617 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 5 23:56:51.727680 kubelet[2617]: E0905 23:56:51.727621 2617 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 5 23:56:52.728353 kubelet[2617]: E0905 23:56:52.728281 2617 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 5 23:56:53.728761 kubelet[2617]: E0905 23:56:53.728703 2617 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 5 23:56:54.729052 kubelet[2617]: E0905 23:56:54.728996 2617 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 5 23:56:55.729716 kubelet[2617]: E0905 23:56:55.729656 2617 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 5 23:56:56.730687 kubelet[2617]: E0905 23:56:56.730621 2617 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 5 23:56:57.731716 kubelet[2617]: E0905 23:56:57.731658 2617 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 5 23:56:58.732110 kubelet[2617]: E0905 23:56:58.731998 2617 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 5 23:56:59.622110 kubelet[2617]: E0905 23:56:59.622051 2617 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 5 23:56:59.733167 kubelet[2617]: E0905 23:56:59.733111 2617 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 5 23:57:00.484986 kubelet[2617]: E0905 23:57:00.484911 2617 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.22.93:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.22.173?timeout=10s\": context deadline exceeded" interval="6.4s"
Sep 5 23:57:00.733778 kubelet[2617]: E0905 23:57:00.733720 2617 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 5 23:57:01.734530 kubelet[2617]: E0905 23:57:01.734475 2617 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 5 23:57:02.734993 kubelet[2617]: E0905 23:57:02.734929 2617 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 5 23:57:03.736130 kubelet[2617]: E0905 23:57:03.736066 2617 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 5 23:57:04.736907 kubelet[2617]: E0905 23:57:04.736847 2617 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 5 23:57:05.737877 kubelet[2617]: E0905 23:57:05.737814 2617 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"