Sep 12 17:11:39.209400 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Sep 12 17:11:39.209448 kernel: Linux version 6.6.106-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Sep 12 15:59:19 -00 2025
Sep 12 17:11:39.209473 kernel: KASLR disabled due to lack of seed
Sep 12 17:11:39.211559 kernel: efi: EFI v2.7 by EDK II
Sep 12 17:11:39.211631 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7affea98 MEMRESERVE=0x7852ee18
Sep 12 17:11:39.211671 kernel: ACPI: Early table checksum verification disabled
Sep 12 17:11:39.211714 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Sep 12 17:11:39.211740 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Sep 12 17:11:39.211761 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Sep 12 17:11:39.211778 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Sep 12 17:11:39.211807 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Sep 12 17:11:39.211823 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Sep 12 17:11:39.211839 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Sep 12 17:11:39.211855 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Sep 12 17:11:39.211875 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Sep 12 17:11:39.211896 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Sep 12 17:11:39.211914 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Sep 12 17:11:39.211931 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Sep 12 17:11:39.211948 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Sep 12 17:11:39.211965 kernel: printk: bootconsole [uart0] enabled
Sep 12 17:11:39.211981 kernel: NUMA: Failed to initialise from firmware
Sep 12 17:11:39.211998 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Sep 12 17:11:39.212015 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Sep 12 17:11:39.212031 kernel: Zone ranges:
Sep 12 17:11:39.212048 kernel:   DMA    [mem 0x0000000040000000-0x00000000ffffffff]
Sep 12 17:11:39.212066 kernel:   DMA32  empty
Sep 12 17:11:39.212096 kernel:   Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Sep 12 17:11:39.212113 kernel: Movable zone start for each node
Sep 12 17:11:39.212159 kernel: Early memory node ranges
Sep 12 17:11:39.212179 kernel:   node 0: [mem 0x0000000040000000-0x000000007862ffff]
Sep 12 17:11:39.212197 kernel:   node 0: [mem 0x0000000078630000-0x000000007863ffff]
Sep 12 17:11:39.212213 kernel:   node 0: [mem 0x0000000078640000-0x00000000786effff]
Sep 12 17:11:39.212230 kernel:   node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Sep 12 17:11:39.212246 kernel:   node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Sep 12 17:11:39.212263 kernel:   node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Sep 12 17:11:39.212279 kernel:   node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Sep 12 17:11:39.212295 kernel:   node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Sep 12 17:11:39.212312 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Sep 12 17:11:39.212337 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Sep 12 17:11:39.212355 kernel: psci: probing for conduit method from ACPI.
Sep 12 17:11:39.212379 kernel: psci: PSCIv1.0 detected in firmware.
Sep 12 17:11:39.212397 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 12 17:11:39.212416 kernel: psci: Trusted OS migration not required
Sep 12 17:11:39.212438 kernel: psci: SMC Calling Convention v1.1
Sep 12 17:11:39.212457 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001)
Sep 12 17:11:39.212475 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
Sep 12 17:11:39.212535 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
Sep 12 17:11:39.212557 kernel: pcpu-alloc: [0] 0 [0] 1
Sep 12 17:11:39.212575 kernel: Detected PIPT I-cache on CPU0
Sep 12 17:11:39.212593 kernel: CPU features: detected: GIC system register CPU interface
Sep 12 17:11:39.212610 kernel: CPU features: detected: Spectre-v2
Sep 12 17:11:39.212628 kernel: CPU features: detected: Spectre-v3a
Sep 12 17:11:39.212646 kernel: CPU features: detected: Spectre-BHB
Sep 12 17:11:39.212663 kernel: CPU features: detected: ARM erratum 1742098
Sep 12 17:11:39.212687 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Sep 12 17:11:39.212705 kernel: alternatives: applying boot alternatives
Sep 12 17:11:39.212725 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=1e63d3057914877efa0eb5f75703bd3a3d4c120bdf4a7ab97f41083e29183e56
Sep 12 17:11:39.212745 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 12 17:11:39.212762 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 12 17:11:39.212781 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 12 17:11:39.212799 kernel: Fallback order for Node 0: 0
Sep 12 17:11:39.212816 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Sep 12 17:11:39.212833 kernel: Policy zone: Normal
Sep 12 17:11:39.212851 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 12 17:11:39.212868 kernel: software IO TLB: area num 2.
Sep 12 17:11:39.212891 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Sep 12 17:11:39.212909 kernel: Memory: 3820024K/4030464K available (10304K kernel code, 2186K rwdata, 8108K rodata, 39488K init, 897K bss, 210440K reserved, 0K cma-reserved)
Sep 12 17:11:39.212927 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 12 17:11:39.212945 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 12 17:11:39.212964 kernel: rcu: RCU event tracing is enabled.
Sep 12 17:11:39.212982 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 12 17:11:39.213000 kernel: Trampoline variant of Tasks RCU enabled.
Sep 12 17:11:39.213018 kernel: Tracing variant of Tasks RCU enabled.
Sep 12 17:11:39.213036 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 12 17:11:39.213053 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 12 17:11:39.213070 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 12 17:11:39.213093 kernel: GICv3: 96 SPIs implemented
Sep 12 17:11:39.213111 kernel: GICv3: 0 Extended SPIs implemented
Sep 12 17:11:39.213129 kernel: Root IRQ handler: gic_handle_irq
Sep 12 17:11:39.213149 kernel: GICv3: GICv3 features: 16 PPIs
Sep 12 17:11:39.213166 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Sep 12 17:11:39.213184 kernel: ITS [mem 0x10080000-0x1009ffff]
Sep 12 17:11:39.213207 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1)
Sep 12 17:11:39.213227 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1)
Sep 12 17:11:39.213244 kernel: GICv3: using LPI property table @0x00000004000d0000
Sep 12 17:11:39.213262 kernel: ITS: Using hypervisor restricted LPI range [128]
Sep 12 17:11:39.213279 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000
Sep 12 17:11:39.213297 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 12 17:11:39.213319 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Sep 12 17:11:39.213337 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Sep 12 17:11:39.213354 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Sep 12 17:11:39.213372 kernel: Console: colour dummy device 80x25
Sep 12 17:11:39.213390 kernel: printk: console [tty1] enabled
Sep 12 17:11:39.213408 kernel: ACPI: Core revision 20230628
Sep 12 17:11:39.213427 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Sep 12 17:11:39.213445 kernel: pid_max: default: 32768 minimum: 301
Sep 12 17:11:39.213463 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Sep 12 17:11:39.213505 kernel: landlock: Up and running.
Sep 12 17:11:39.213530 kernel: SELinux:  Initializing.
Sep 12 17:11:39.213549 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 12 17:11:39.213567 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 12 17:11:39.213585 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 12 17:11:39.213603 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 12 17:11:39.213621 kernel: rcu: Hierarchical SRCU implementation.
Sep 12 17:11:39.213639 kernel: rcu: 	Max phase no-delay instances is 400.
Sep 12 17:11:39.213658 kernel: Platform MSI: ITS@0x10080000 domain created
Sep 12 17:11:39.213683 kernel: PCI/MSI: ITS@0x10080000 domain created
Sep 12 17:11:39.213701 kernel: Remapping and enabling EFI services.
Sep 12 17:11:39.213719 kernel: smp: Bringing up secondary CPUs ...
Sep 12 17:11:39.213737 kernel: Detected PIPT I-cache on CPU1
Sep 12 17:11:39.213755 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Sep 12 17:11:39.213774 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000
Sep 12 17:11:39.213792 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Sep 12 17:11:39.213810 kernel: smp: Brought up 1 node, 2 CPUs
Sep 12 17:11:39.213828 kernel: SMP: Total of 2 processors activated.
Sep 12 17:11:39.213846 kernel: CPU features: detected: 32-bit EL0 Support
Sep 12 17:11:39.213869 kernel: CPU features: detected: 32-bit EL1 Support
Sep 12 17:11:39.213888 kernel: CPU features: detected: CRC32 instructions
Sep 12 17:11:39.213918 kernel: CPU: All CPU(s) started at EL1
Sep 12 17:11:39.213944 kernel: alternatives: applying system-wide alternatives
Sep 12 17:11:39.213963 kernel: devtmpfs: initialized
Sep 12 17:11:39.213982 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 12 17:11:39.214003 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 12 17:11:39.214022 kernel: pinctrl core: initialized pinctrl subsystem
Sep 12 17:11:39.214042 kernel: SMBIOS 3.0.0 present.
Sep 12 17:11:39.214067 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Sep 12 17:11:39.214115 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 12 17:11:39.214137 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 12 17:11:39.214156 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 12 17:11:39.214176 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 12 17:11:39.214196 kernel: audit: initializing netlink subsys (disabled)
Sep 12 17:11:39.214216 kernel: audit: type=2000 audit(0.287:1): state=initialized audit_enabled=0 res=1
Sep 12 17:11:39.214242 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 12 17:11:39.214262 kernel: cpuidle: using governor menu
Sep 12 17:11:39.214284 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 12 17:11:39.214303 kernel: ASID allocator initialised with 65536 entries
Sep 12 17:11:39.214323 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 12 17:11:39.214342 kernel: Serial: AMBA PL011 UART driver
Sep 12 17:11:39.214361 kernel: Modules: 17472 pages in range for non-PLT usage
Sep 12 17:11:39.214381 kernel: Modules: 508992 pages in range for PLT usage
Sep 12 17:11:39.214400 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 12 17:11:39.214426 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Sep 12 17:11:39.214446 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Sep 12 17:11:39.214466 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Sep 12 17:11:39.214509 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 12 17:11:39.215651 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Sep 12 17:11:39.215675 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Sep 12 17:11:39.215694 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Sep 12 17:11:39.215713 kernel: ACPI: Added _OSI(Module Device)
Sep 12 17:11:39.215731 kernel: ACPI: Added _OSI(Processor Device)
Sep 12 17:11:39.215759 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 12 17:11:39.215778 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 12 17:11:39.215797 kernel: ACPI: Interpreter enabled
Sep 12 17:11:39.215815 kernel: ACPI: Using GIC for interrupt routing
Sep 12 17:11:39.215833 kernel: ACPI: MCFG table detected, 1 entries
Sep 12 17:11:39.215853 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Sep 12 17:11:39.216159 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 12 17:11:39.216450 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Sep 12 17:11:39.216760 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Sep 12 17:11:39.216988 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Sep 12 17:11:39.217216 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Sep 12 17:11:39.217244 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Sep 12 17:11:39.217264 kernel: acpiphp: Slot [1] registered
Sep 12 17:11:39.217283 kernel: acpiphp: Slot [2] registered
Sep 12 17:11:39.217302 kernel: acpiphp: Slot [3] registered
Sep 12 17:11:39.217321 kernel: acpiphp: Slot [4] registered
Sep 12 17:11:39.217350 kernel: acpiphp: Slot [5] registered
Sep 12 17:11:39.217369 kernel: acpiphp: Slot [6] registered
Sep 12 17:11:39.217388 kernel: acpiphp: Slot [7] registered
Sep 12 17:11:39.217406 kernel: acpiphp: Slot [8] registered
Sep 12 17:11:39.217425 kernel: acpiphp: Slot [9] registered
Sep 12 17:11:39.217443 kernel: acpiphp: Slot [10] registered
Sep 12 17:11:39.217462 kernel: acpiphp: Slot [11] registered
Sep 12 17:11:39.217481 kernel: acpiphp: Slot [12] registered
Sep 12 17:11:39.218597 kernel: acpiphp: Slot [13] registered
Sep 12 17:11:39.218620 kernel: acpiphp: Slot [14] registered
Sep 12 17:11:39.218652 kernel: acpiphp: Slot [15] registered
Sep 12 17:11:39.218672 kernel: acpiphp: Slot [16] registered
Sep 12 17:11:39.218691 kernel: acpiphp: Slot [17] registered
Sep 12 17:11:39.218710 kernel: acpiphp: Slot [18] registered
Sep 12 17:11:39.218728 kernel: acpiphp: Slot [19] registered
Sep 12 17:11:39.218747 kernel: acpiphp: Slot [20] registered
Sep 12 17:11:39.218765 kernel: acpiphp: Slot [21] registered
Sep 12 17:11:39.218784 kernel: acpiphp: Slot [22] registered
Sep 12 17:11:39.218802 kernel: acpiphp: Slot [23] registered
Sep 12 17:11:39.218826 kernel: acpiphp: Slot [24] registered
Sep 12 17:11:39.218845 kernel: acpiphp: Slot [25] registered
Sep 12 17:11:39.218866 kernel: acpiphp: Slot [26] registered
Sep 12 17:11:39.218885 kernel: acpiphp: Slot [27] registered
Sep 12 17:11:39.218904 kernel: acpiphp: Slot [28] registered
Sep 12 17:11:39.218922 kernel: acpiphp: Slot [29] registered
Sep 12 17:11:39.218941 kernel: acpiphp: Slot [30] registered
Sep 12 17:11:39.218960 kernel: acpiphp: Slot [31] registered
Sep 12 17:11:39.218978 kernel: PCI host bridge to bus 0000:00
Sep 12 17:11:39.219248 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Sep 12 17:11:39.219442 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Sep 12 17:11:39.221760 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Sep 12 17:11:39.221979 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Sep 12 17:11:39.222262 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Sep 12 17:11:39.222519 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Sep 12 17:11:39.222750 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Sep 12 17:11:39.223007 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Sep 12 17:11:39.223268 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Sep 12 17:11:39.223526 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Sep 12 17:11:39.223762 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Sep 12 17:11:39.223977 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Sep 12 17:11:39.224196 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Sep 12 17:11:39.224425 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Sep 12 17:11:39.224703 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Sep 12 17:11:39.224920 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Sep 12 17:11:39.225133 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Sep 12 17:11:39.225356 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Sep 12 17:11:39.225596 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Sep 12 17:11:39.225815 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Sep 12 17:11:39.226011 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Sep 12 17:11:39.226235 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Sep 12 17:11:39.226428 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Sep 12 17:11:39.226453 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Sep 12 17:11:39.226474 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Sep 12 17:11:39.230791 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Sep 12 17:11:39.230828 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Sep 12 17:11:39.230847 kernel: iommu: Default domain type: Translated
Sep 12 17:11:39.230867 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 12 17:11:39.230896 kernel: efivars: Registered efivars operations
Sep 12 17:11:39.230915 kernel: vgaarb: loaded
Sep 12 17:11:39.230934 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 12 17:11:39.230953 kernel: VFS: Disk quotas dquot_6.6.0
Sep 12 17:11:39.230971 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 12 17:11:39.230991 kernel: pnp: PnP ACPI init
Sep 12 17:11:39.231267 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Sep 12 17:11:39.231296 kernel: pnp: PnP ACPI: found 1 devices
Sep 12 17:11:39.231321 kernel: NET: Registered PF_INET protocol family
Sep 12 17:11:39.231341 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 12 17:11:39.231360 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 12 17:11:39.231379 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 12 17:11:39.231398 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 12 17:11:39.231417 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 12 17:11:39.231436 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 12 17:11:39.231455 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 12 17:11:39.231474 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 12 17:11:39.231518 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 12 17:11:39.231539 kernel: PCI: CLS 0 bytes, default 64
Sep 12 17:11:39.231558 kernel: kvm [1]: HYP mode not available
Sep 12 17:11:39.231577 kernel: Initialise system trusted keyrings
Sep 12 17:11:39.231596 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 12 17:11:39.231614 kernel: Key type asymmetric registered
Sep 12 17:11:39.231633 kernel: Asymmetric key parser 'x509' registered
Sep 12 17:11:39.231651 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 12 17:11:39.231670 kernel: io scheduler mq-deadline registered
Sep 12 17:11:39.231695 kernel: io scheduler kyber registered
Sep 12 17:11:39.231714 kernel: io scheduler bfq registered
Sep 12 17:11:39.231942 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Sep 12 17:11:39.231970 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Sep 12 17:11:39.231989 kernel: ACPI: button: Power Button [PWRB]
Sep 12 17:11:39.232009 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Sep 12 17:11:39.232027 kernel: ACPI: button: Sleep Button [SLPB]
Sep 12 17:11:39.232046 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 12 17:11:39.232071 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Sep 12 17:11:39.232284 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Sep 12 17:11:39.232311 kernel: printk: console [ttyS0] disabled
Sep 12 17:11:39.232330 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Sep 12 17:11:39.232351 kernel: printk: console [ttyS0] enabled
Sep 12 17:11:39.232369 kernel: printk: bootconsole [uart0] disabled
Sep 12 17:11:39.232388 kernel: thunder_xcv, ver 1.0
Sep 12 17:11:39.232406 kernel: thunder_bgx, ver 1.0
Sep 12 17:11:39.232425 kernel: nicpf, ver 1.0
Sep 12 17:11:39.232449 kernel: nicvf, ver 1.0
Sep 12 17:11:39.234781 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep 12 17:11:39.235002 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-12T17:11:38 UTC (1757697098)
Sep 12 17:11:39.235030 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 12 17:11:39.235050 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Sep 12 17:11:39.235069 kernel: watchdog: Delayed init of the lockup detector failed: -19
Sep 12 17:11:39.235088 kernel: watchdog: Hard watchdog permanently disabled
Sep 12 17:11:39.235106 kernel: NET: Registered PF_INET6 protocol family
Sep 12 17:11:39.235135 kernel: Segment Routing with IPv6
Sep 12 17:11:39.235155 kernel: In-situ OAM (IOAM) with IPv6
Sep 12 17:11:39.235174 kernel: NET: Registered PF_PACKET protocol family
Sep 12 17:11:39.235193 kernel: Key type dns_resolver registered
Sep 12 17:11:39.235211 kernel: registered taskstats version 1
Sep 12 17:11:39.235230 kernel: Loading compiled-in X.509 certificates
Sep 12 17:11:39.235249 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.106-flatcar: 2d576b5e69e6c5de2f731966fe8b55173c144d02'
Sep 12 17:11:39.235268 kernel: Key type .fscrypt registered
Sep 12 17:11:39.235286 kernel: Key type fscrypt-provisioning registered
Sep 12 17:11:39.235309 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 12 17:11:39.235329 kernel: ima: Allocated hash algorithm: sha1
Sep 12 17:11:39.235347 kernel: ima: No architecture policies found
Sep 12 17:11:39.235366 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep 12 17:11:39.235385 kernel: clk: Disabling unused clocks
Sep 12 17:11:39.235403 kernel: Freeing unused kernel memory: 39488K
Sep 12 17:11:39.235422 kernel: Run /init as init process
Sep 12 17:11:39.235440 kernel:   with arguments:
Sep 12 17:11:39.235458 kernel:     /init
Sep 12 17:11:39.235477 kernel:   with environment:
Sep 12 17:11:39.236108 kernel:     HOME=/
Sep 12 17:11:39.236129 kernel:     TERM=linux
Sep 12 17:11:39.236147 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 12 17:11:39.236171 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 12 17:11:39.236196 systemd[1]: Detected virtualization amazon.
Sep 12 17:11:39.236217 systemd[1]: Detected architecture arm64.
Sep 12 17:11:39.236237 systemd[1]: Running in initrd.
Sep 12 17:11:39.236262 systemd[1]: No hostname configured, using default hostname.
Sep 12 17:11:39.236283 systemd[1]: Hostname set to .
Sep 12 17:11:39.236304 systemd[1]: Initializing machine ID from VM UUID.
Sep 12 17:11:39.236324 systemd[1]: Queued start job for default target initrd.target.
Sep 12 17:11:39.236345 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 17:11:39.236366 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 17:11:39.236388 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 12 17:11:39.236409 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 12 17:11:39.236435 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 12 17:11:39.236457 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 12 17:11:39.236480 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 12 17:11:39.236632 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 12 17:11:39.236653 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 17:11:39.236674 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 12 17:11:39.236695 systemd[1]: Reached target paths.target - Path Units.
Sep 12 17:11:39.236722 systemd[1]: Reached target slices.target - Slice Units.
Sep 12 17:11:39.236743 systemd[1]: Reached target swap.target - Swaps.
Sep 12 17:11:39.236763 systemd[1]: Reached target timers.target - Timer Units.
Sep 12 17:11:39.236783 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 12 17:11:39.236804 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 12 17:11:39.236824 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 12 17:11:39.236845 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Sep 12 17:11:39.236865 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 17:11:39.236885 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 12 17:11:39.236911 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 17:11:39.236931 systemd[1]: Reached target sockets.target - Socket Units.
Sep 12 17:11:39.236952 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 12 17:11:39.236972 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 12 17:11:39.236993 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 12 17:11:39.237013 systemd[1]: Starting systemd-fsck-usr.service...
Sep 12 17:11:39.237034 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 12 17:11:39.237055 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 12 17:11:39.237080 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:11:39.237101 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 12 17:11:39.237122 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 17:11:39.237141 systemd[1]: Finished systemd-fsck-usr.service.
Sep 12 17:11:39.237203 systemd-journald[251]: Collecting audit messages is disabled.
Sep 12 17:11:39.237253 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 12 17:11:39.237275 systemd-journald[251]: Journal started
Sep 12 17:11:39.237318 systemd-journald[251]: Runtime Journal (/run/log/journal/ec2450957b7ab346adc1d62d247ec1b3) is 8.0M, max 75.3M, 67.3M free.
Sep 12 17:11:39.214584 systemd-modules-load[252]: Inserted module 'overlay'
Sep 12 17:11:39.249527 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 12 17:11:39.249591 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 12 17:11:39.252837 systemd-modules-load[252]: Inserted module 'br_netfilter'
Sep 12 17:11:39.255774 kernel: Bridge firewalling registered
Sep 12 17:11:39.258814 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 12 17:11:39.266590 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:11:39.274197 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 12 17:11:39.288906 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 12 17:11:39.302814 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 12 17:11:39.315856 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 12 17:11:39.327966 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 12 17:11:39.353206 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 12 17:11:39.371178 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 17:11:39.373554 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 17:11:39.385394 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 12 17:11:39.397816 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 12 17:11:39.407809 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 12 17:11:39.435316 dracut-cmdline[288]: dracut-dracut-053
Sep 12 17:11:39.444072 dracut-cmdline[288]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=1e63d3057914877efa0eb5f75703bd3a3d4c120bdf4a7ab97f41083e29183e56
Sep 12 17:11:39.500947 systemd-resolved[290]: Positive Trust Anchors:
Sep 12 17:11:39.500983 systemd-resolved[290]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 12 17:11:39.501045 systemd-resolved[290]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 12 17:11:39.604518 kernel: SCSI subsystem initialized
Sep 12 17:11:39.611529 kernel: Loading iSCSI transport class v2.0-870.
Sep 12 17:11:39.624533 kernel: iscsi: registered transport (tcp)
Sep 12 17:11:39.646927 kernel: iscsi: registered transport (qla4xxx)
Sep 12 17:11:39.646998 kernel: QLogic iSCSI HBA Driver
Sep 12 17:11:39.730819 kernel: random: crng init done
Sep 12 17:11:39.730788 systemd-resolved[290]: Defaulting to hostname 'linux'.
Sep 12 17:11:39.734825 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 12 17:11:39.739635 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 12 17:11:39.763463 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 12 17:11:39.775821 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 12 17:11:39.810323 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 12 17:11:39.810398 kernel: device-mapper: uevent: version 1.0.3
Sep 12 17:11:39.812519 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Sep 12 17:11:39.880541 kernel: raid6: neonx8 gen() 6783 MB/s
Sep 12 17:11:39.895524 kernel: raid6: neonx4 gen() 6587 MB/s
Sep 12 17:11:39.912521 kernel: raid6: neonx2 gen() 5479 MB/s
Sep 12 17:11:39.929521 kernel: raid6: neonx1 gen() 3958 MB/s
Sep 12 17:11:39.946521 kernel: raid6: int64x8 gen() 3812 MB/s
Sep 12 17:11:39.963521 kernel: raid6: int64x4 gen() 3719 MB/s
Sep 12 17:11:39.980520 kernel: raid6: int64x2 gen() 3603 MB/s
Sep 12 17:11:39.998502 kernel: raid6: int64x1 gen() 2759 MB/s
Sep 12 17:11:39.998538 kernel: raid6: using algorithm neonx8 gen() 6783 MB/s
Sep 12 17:11:40.017506 kernel: raid6: .... xor() 4757 MB/s, rmw enabled
Sep 12 17:11:40.017554 kernel: raid6: using neon recovery algorithm
Sep 12 17:11:40.025525 kernel: xor: measuring software checksum speed
Sep 12 17:11:40.026521 kernel: 8regs : 10261 MB/sec
Sep 12 17:11:40.028933 kernel: 32regs : 11034 MB/sec
Sep 12 17:11:40.028966 kernel: arm64_neon : 9565 MB/sec
Sep 12 17:11:40.028991 kernel: xor: using function: 32regs (11034 MB/sec)
Sep 12 17:11:40.114542 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 12 17:11:40.134384 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 12 17:11:40.144816 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 17:11:40.179220 systemd-udevd[472]: Using default interface naming scheme 'v255'.
Sep 12 17:11:40.187288 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 17:11:40.202878 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 12 17:11:40.234467 dracut-pre-trigger[474]: rd.md=0: removing MD RAID activation
Sep 12 17:11:40.290283 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 12 17:11:40.304262 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 12 17:11:40.418165 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 17:11:40.429784 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 12 17:11:40.485448 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 12 17:11:40.492423 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 12 17:11:40.496066 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 17:11:40.497392 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 12 17:11:40.511836 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 12 17:11:40.542022 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 12 17:11:40.615700 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Sep 12 17:11:40.615764 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Sep 12 17:11:40.629086 kernel: ena 0000:00:05.0: ENA device version: 0.10
Sep 12 17:11:40.629419 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Sep 12 17:11:40.636275 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 12 17:11:40.636516 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 17:11:40.647555 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:17:62:3a:8d:f3
Sep 12 17:11:40.647817 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 12 17:11:40.652444 (udev-worker)[524]: Network interface NamePolicy= disabled on kernel command line.
Sep 12 17:11:40.654239 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 12 17:11:40.654553 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:11:40.659948 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:11:40.678629 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:11:40.704514 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Sep 12 17:11:40.704579 kernel: nvme nvme0: pci function 0000:00:04.0
Sep 12 17:11:40.713569 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Sep 12 17:11:40.725519 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 12 17:11:40.725589 kernel: GPT:9289727 != 16777215
Sep 12 17:11:40.725615 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 12 17:11:40.725649 kernel: GPT:9289727 != 16777215
Sep 12 17:11:40.725674 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 12 17:11:40.725699 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 12 17:11:40.728365 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:11:40.741909 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 12 17:11:40.793333 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 17:11:40.837590 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/nvme0n1p6 scanned by (udev-worker) (528)
Sep 12 17:11:40.849543 kernel: BTRFS: device fsid 5a23a06a-00d4-4606-89bf-13e31a563129 devid 1 transid 36 /dev/nvme0n1p3 scanned by (udev-worker) (532)
Sep 12 17:11:40.887796 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Sep 12 17:11:40.930124 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Sep 12 17:11:40.972925 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Sep 12 17:11:40.978539 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Sep 12 17:11:40.993746 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Sep 12 17:11:41.004949 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 12 17:11:41.021374 disk-uuid[663]: Primary Header is updated.
Sep 12 17:11:41.021374 disk-uuid[663]: Secondary Entries is updated.
Sep 12 17:11:41.021374 disk-uuid[663]: Secondary Header is updated.
Sep 12 17:11:41.035528 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 12 17:11:41.046530 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 12 17:11:41.056604 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 12 17:11:42.058564 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 12 17:11:42.059412 disk-uuid[664]: The operation has completed successfully.
Sep 12 17:11:42.232728 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 12 17:11:42.235170 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 12 17:11:42.300740 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 12 17:11:42.310985 sh[1009]: Success
Sep 12 17:11:42.344882 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Sep 12 17:11:42.462760 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 12 17:11:42.483717 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 12 17:11:42.486227 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 12 17:11:42.531771 kernel: BTRFS info (device dm-0): first mount of filesystem 5a23a06a-00d4-4606-89bf-13e31a563129
Sep 12 17:11:42.531844 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Sep 12 17:11:42.533754 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep 12 17:11:42.535183 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 12 17:11:42.536380 kernel: BTRFS info (device dm-0): using free space tree
Sep 12 17:11:42.661524 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Sep 12 17:11:42.700416 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 12 17:11:42.704332 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 12 17:11:42.715846 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 12 17:11:42.722793 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 12 17:11:42.751117 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem daec7f45-8bde-44bd-bec0-4b8eac931d0c
Sep 12 17:11:42.751192 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Sep 12 17:11:42.753108 kernel: BTRFS info (device nvme0n1p6): using free space tree
Sep 12 17:11:42.775688 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep 12 17:11:42.795122 systemd[1]: mnt-oem.mount: Deactivated successfully.
Sep 12 17:11:42.799584 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem daec7f45-8bde-44bd-bec0-4b8eac931d0c
Sep 12 17:11:42.807989 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 12 17:11:42.817867 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 12 17:11:42.913845 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 12 17:11:42.928947 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 12 17:11:42.981328 systemd-networkd[1213]: lo: Link UP
Sep 12 17:11:42.981342 systemd-networkd[1213]: lo: Gained carrier
Sep 12 17:11:42.984686 systemd-networkd[1213]: Enumeration completed
Sep 12 17:11:42.984833 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 12 17:11:42.986381 systemd-networkd[1213]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 17:11:42.986389 systemd-networkd[1213]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 12 17:11:42.994448 systemd[1]: Reached target network.target - Network.
Sep 12 17:11:42.994780 systemd-networkd[1213]: eth0: Link UP
Sep 12 17:11:42.994788 systemd-networkd[1213]: eth0: Gained carrier
Sep 12 17:11:42.994806 systemd-networkd[1213]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 17:11:43.027582 systemd-networkd[1213]: eth0: DHCPv4 address 172.31.18.149/20, gateway 172.31.16.1 acquired from 172.31.16.1
Sep 12 17:11:43.228162 ignition[1138]: Ignition 2.19.0
Sep 12 17:11:43.228719 ignition[1138]: Stage: fetch-offline
Sep 12 17:11:43.230402 ignition[1138]: no configs at "/usr/lib/ignition/base.d"
Sep 12 17:11:43.230427 ignition[1138]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 12 17:11:43.231845 ignition[1138]: Ignition finished successfully
Sep 12 17:11:43.240856 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 12 17:11:43.254338 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Sep 12 17:11:43.279888 ignition[1224]: Ignition 2.19.0
Sep 12 17:11:43.280404 ignition[1224]: Stage: fetch
Sep 12 17:11:43.281096 ignition[1224]: no configs at "/usr/lib/ignition/base.d"
Sep 12 17:11:43.281122 ignition[1224]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 12 17:11:43.281275 ignition[1224]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 12 17:11:43.297446 ignition[1224]: PUT result: OK
Sep 12 17:11:43.301830 ignition[1224]: parsed url from cmdline: ""
Sep 12 17:11:43.301853 ignition[1224]: no config URL provided
Sep 12 17:11:43.301869 ignition[1224]: reading system config file "/usr/lib/ignition/user.ign"
Sep 12 17:11:43.301894 ignition[1224]: no config at "/usr/lib/ignition/user.ign"
Sep 12 17:11:43.301931 ignition[1224]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 12 17:11:43.306362 ignition[1224]: PUT result: OK
Sep 12 17:11:43.306437 ignition[1224]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Sep 12 17:11:43.311454 ignition[1224]: GET result: OK
Sep 12 17:11:43.316789 ignition[1224]: parsing config with SHA512: ec6f4db8bc4aaae64a35cc2530b79018007399741b447104c8b84a7cf8765e36c5f9d3583400b5cd63138799d5899055221ddb100c9ad6bb968236f27b9ae3d5
Sep 12 17:11:43.325878 unknown[1224]: fetched base config from "system"
Sep 12 17:11:43.326136 unknown[1224]: fetched base config from "system"
Sep 12 17:11:43.327275 ignition[1224]: fetch: fetch complete
Sep 12 17:11:43.326151 unknown[1224]: fetched user config from "aws"
Sep 12 17:11:43.327288 ignition[1224]: fetch: fetch passed
Sep 12 17:11:43.327377 ignition[1224]: Ignition finished successfully
Sep 12 17:11:43.338861 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Sep 12 17:11:43.351807 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 12 17:11:43.385184 ignition[1231]: Ignition 2.19.0
Sep 12 17:11:43.385217 ignition[1231]: Stage: kargs
Sep 12 17:11:43.386707 ignition[1231]: no configs at "/usr/lib/ignition/base.d"
Sep 12 17:11:43.386736 ignition[1231]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 12 17:11:43.386904 ignition[1231]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 12 17:11:43.388888 ignition[1231]: PUT result: OK
Sep 12 17:11:43.396777 ignition[1231]: kargs: kargs passed
Sep 12 17:11:43.401769 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 12 17:11:43.396950 ignition[1231]: Ignition finished successfully
Sep 12 17:11:43.416881 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 12 17:11:43.443809 ignition[1237]: Ignition 2.19.0
Sep 12 17:11:43.443828 ignition[1237]: Stage: disks
Sep 12 17:11:43.444469 ignition[1237]: no configs at "/usr/lib/ignition/base.d"
Sep 12 17:11:43.445030 ignition[1237]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 12 17:11:43.445190 ignition[1237]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 12 17:11:43.454290 ignition[1237]: PUT result: OK
Sep 12 17:11:43.461628 ignition[1237]: disks: disks passed
Sep 12 17:11:43.461805 ignition[1237]: Ignition finished successfully
Sep 12 17:11:43.465475 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 12 17:11:43.468865 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 12 17:11:43.471602 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 12 17:11:43.474407 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 12 17:11:43.483760 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 12 17:11:43.486800 systemd[1]: Reached target basic.target - Basic System.
Sep 12 17:11:43.505790 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 12 17:11:43.559330 systemd-fsck[1245]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Sep 12 17:11:43.563094 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 12 17:11:43.575704 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 12 17:11:43.663538 kernel: EXT4-fs (nvme0n1p9): mounted filesystem fc6c61a7-153d-4e7f-95c0-bffdb4824d71 r/w with ordered data mode. Quota mode: none.
Sep 12 17:11:43.663722 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 12 17:11:43.667748 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 12 17:11:43.688658 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 12 17:11:43.697838 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 12 17:11:43.701283 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 12 17:11:43.701368 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 12 17:11:43.701415 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 12 17:11:43.725938 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/nvme0n1p6 scanned by mount (1264)
Sep 12 17:11:43.727241 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 12 17:11:43.742393 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem daec7f45-8bde-44bd-bec0-4b8eac931d0c
Sep 12 17:11:43.742443 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Sep 12 17:11:43.742471 kernel: BTRFS info (device nvme0n1p6): using free space tree
Sep 12 17:11:43.746871 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 12 17:11:43.756600 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep 12 17:11:43.759631 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 12 17:11:44.231449 initrd-setup-root[1288]: cut: /sysroot/etc/passwd: No such file or directory
Sep 12 17:11:44.253813 initrd-setup-root[1295]: cut: /sysroot/etc/group: No such file or directory
Sep 12 17:11:44.263168 initrd-setup-root[1302]: cut: /sysroot/etc/shadow: No such file or directory
Sep 12 17:11:44.271754 initrd-setup-root[1309]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 12 17:11:44.642845 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 12 17:11:44.653921 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 12 17:11:44.663839 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 12 17:11:44.684538 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem daec7f45-8bde-44bd-bec0-4b8eac931d0c
Sep 12 17:11:44.684618 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 12 17:11:44.700652 systemd-networkd[1213]: eth0: Gained IPv6LL
Sep 12 17:11:44.728578 ignition[1377]: INFO : Ignition 2.19.0
Sep 12 17:11:44.728578 ignition[1377]: INFO : Stage: mount
Sep 12 17:11:44.734368 ignition[1377]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 17:11:44.734368 ignition[1377]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 12 17:11:44.734368 ignition[1377]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 12 17:11:44.744396 ignition[1377]: INFO : PUT result: OK
Sep 12 17:11:44.752364 ignition[1377]: INFO : mount: mount passed
Sep 12 17:11:44.754190 ignition[1377]: INFO : Ignition finished successfully
Sep 12 17:11:44.761532 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 12 17:11:44.764692 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 12 17:11:44.778706 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 12 17:11:44.806891 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 12 17:11:44.830523 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by mount (1388)
Sep 12 17:11:44.835611 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem daec7f45-8bde-44bd-bec0-4b8eac931d0c
Sep 12 17:11:44.835655 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Sep 12 17:11:44.835682 kernel: BTRFS info (device nvme0n1p6): using free space tree
Sep 12 17:11:44.841521 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep 12 17:11:44.845059 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 12 17:11:44.887136 ignition[1404]: INFO : Ignition 2.19.0
Sep 12 17:11:44.887136 ignition[1404]: INFO : Stage: files
Sep 12 17:11:44.887136 ignition[1404]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 17:11:44.887136 ignition[1404]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 12 17:11:44.887136 ignition[1404]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 12 17:11:44.898919 ignition[1404]: INFO : PUT result: OK
Sep 12 17:11:44.903781 ignition[1404]: DEBUG : files: compiled without relabeling support, skipping
Sep 12 17:11:44.907299 ignition[1404]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 12 17:11:44.907299 ignition[1404]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 12 17:11:44.965223 ignition[1404]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 12 17:11:44.968539 ignition[1404]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 12 17:11:44.971797 unknown[1404]: wrote ssh authorized keys file for user: core
Sep 12 17:11:44.974230 ignition[1404]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 12 17:11:44.980131 ignition[1404]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Sep 12 17:11:44.983975 ignition[1404]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Sep 12 17:11:44.983975 ignition[1404]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Sep 12 17:11:44.992751 ignition[1404]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Sep 12 17:11:45.082837 ignition[1404]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Sep 12 17:11:45.314405 ignition[1404]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Sep 12 17:11:45.314405 ignition[1404]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 12 17:11:45.322634 ignition[1404]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Sep 12 17:11:45.390282 ignition[1404]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Sep 12 17:11:45.521386 ignition[1404]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 12 17:11:45.525460 ignition[1404]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh"
Sep 12 17:11:45.525460 ignition[1404]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh"
Sep 12 17:11:45.525460 ignition[1404]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 12 17:11:45.525460 ignition[1404]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 12 17:11:45.525460 ignition[1404]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 12 17:11:45.525460 ignition[1404]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 12 17:11:45.525460 ignition[1404]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 12 17:11:45.525460 ignition[1404]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 12 17:11:45.525460 ignition[1404]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 12 17:11:45.525460 ignition[1404]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 12 17:11:45.525460 ignition[1404]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 12 17:11:45.525460 ignition[1404]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 12 17:11:45.525460 ignition[1404]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 12 17:11:45.525460 ignition[1404]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1
Sep 12 17:11:45.786947 ignition[1404]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK
Sep 12 17:11:46.114109 ignition[1404]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 12 17:11:46.114109 ignition[1404]: INFO : files: op(d): [started] processing unit "containerd.service"
Sep 12 17:11:46.122805 ignition[1404]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Sep 12 17:11:46.122805 ignition[1404]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Sep 12 17:11:46.122805 ignition[1404]: INFO : files: op(d): [finished] processing unit "containerd.service"
Sep 12 17:11:46.122805 ignition[1404]: INFO : files: op(f): [started] processing unit "prepare-helm.service"
Sep 12 17:11:46.122805 ignition[1404]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 12 17:11:46.122805 ignition[1404]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 12 17:11:46.122805 ignition[1404]: INFO : files: op(f): [finished] processing unit "prepare-helm.service"
Sep 12 17:11:46.122805 ignition[1404]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Sep 12 17:11:46.122805 ignition[1404]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Sep 12 17:11:46.122805 ignition[1404]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 12 17:11:46.122805 ignition[1404]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 12 17:11:46.122805 ignition[1404]: INFO : files: files passed
Sep 12 17:11:46.122805 ignition[1404]: INFO : Ignition finished successfully
Sep 12 17:11:46.167276 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 12 17:11:46.176750 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 12 17:11:46.190847 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 12 17:11:46.205802 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 12 17:11:46.206023 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 12 17:11:46.223323 initrd-setup-root-after-ignition[1433]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 17:11:46.223323 initrd-setup-root-after-ignition[1433]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 17:11:46.230630 initrd-setup-root-after-ignition[1437]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 17:11:46.236479 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 12 17:11:46.242301 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 12 17:11:46.257482 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 12 17:11:46.305914 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 12 17:11:46.306130 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 12 17:11:46.309212 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 12 17:11:46.313261 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 12 17:11:46.315634 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 12 17:11:46.330820 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 12 17:11:46.377573 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 12 17:11:46.388782 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 12 17:11:46.411475 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 12 17:11:46.419661 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 17:11:46.420291 systemd[1]: Stopped target timers.target - Timer Units.
Sep 12 17:11:46.421100 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 12 17:11:46.421403 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 12 17:11:46.425461 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 12 17:11:46.426194 systemd[1]: Stopped target basic.target - Basic System.
Sep 12 17:11:46.426572 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 12 17:11:46.426912 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 12 17:11:46.427267 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 12 17:11:46.427897 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 12 17:11:46.428464 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 12 17:11:46.429194 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 12 17:11:46.429585 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 12 17:11:46.429907 systemd[1]: Stopped target swap.target - Swaps.
Sep 12 17:11:46.432263 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 12 17:11:46.437063 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 12 17:11:46.439414 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 12 17:11:46.440266 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 17:11:46.440543 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 12 17:11:46.454228 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 17:11:46.459886 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 12 17:11:46.460206 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 12 17:11:46.478344 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 12 17:11:46.479018 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 12 17:11:46.484210 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 12 17:11:46.484840 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 12 17:11:46.506603 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 12 17:11:46.532324 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 12 17:11:46.532626 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 17:11:46.544963 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 12 17:11:46.546997 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 12 17:11:46.547467 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 17:11:46.558695 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 12 17:11:46.562750 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 12 17:11:46.579039 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 12 17:11:46.579263 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 12 17:11:46.598020 ignition[1458]: INFO : Ignition 2.19.0
Sep 12 17:11:46.598020 ignition[1458]: INFO : Stage: umount
Sep 12 17:11:46.603617 ignition[1458]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 17:11:46.603617 ignition[1458]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 12 17:11:46.603617 ignition[1458]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 12 17:11:46.614116 ignition[1458]: INFO : PUT result: OK
Sep 12 17:11:46.614943 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 12 17:11:46.625327 ignition[1458]: INFO : umount: umount passed
Sep 12 17:11:46.627416 ignition[1458]: INFO : Ignition finished successfully
Sep 12 17:11:46.629820 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 12 17:11:46.630042 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 12 17:11:46.637428 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 12 17:11:46.637839 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 12 17:11:46.642523 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 12 17:11:46.642687 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 12 17:11:46.646893 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 12 17:11:46.646978 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 12 17:11:46.649280 systemd[1]: ignition-fetch.service: Deactivated successfully.
Sep 12 17:11:46.649662 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Sep 12 17:11:46.656387 systemd[1]: Stopped target network.target - Network.
Sep 12 17:11:46.656989 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 12 17:11:46.657084 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 12 17:11:46.673199 systemd[1]: Stopped target paths.target - Path Units.
Sep 12 17:11:46.675285 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 12 17:11:46.679790 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 17:11:46.685857 systemd[1]: Stopped target slices.target - Slice Units.
Sep 12 17:11:46.688002 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 12 17:11:46.690263 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 12 17:11:46.690364 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 12 17:11:46.696981 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 12 17:11:46.697403 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 12 17:11:46.709755 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 12 17:11:46.709886 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 12 17:11:46.712124 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 12 17:11:46.712212 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 12 17:11:46.714604 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 12 17:11:46.714682 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 12 17:11:46.717334 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 12 17:11:46.720340 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 12 17:11:46.731864 systemd-networkd[1213]: eth0: DHCPv6 lease lost
Sep 12 17:11:46.739231 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 12 17:11:46.739441 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 12 17:11:46.750251 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 12 17:11:46.750382 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 17:11:46.762646 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 12 17:11:46.765196 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 12 17:11:46.765323 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 12 17:11:46.778132 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 17:11:46.793868 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 12 17:11:46.794108 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 12 17:11:46.798129 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 12 17:11:46.798259 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 12 17:11:46.801361 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 12 17:11:46.801468 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 12 17:11:46.813067 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 12 17:11:46.813178 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 12 17:11:46.832617 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 12 17:11:46.834285 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 17:11:46.839975 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 12 17:11:46.840115 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 12 17:11:46.843145 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 12 17:11:46.843228 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 17:11:46.856807 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 12 17:11:46.857091 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 12 17:11:46.864688 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 12 17:11:46.864792 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 12 17:11:46.871455 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 12 17:11:46.871571 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 17:11:46.887984 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 12 17:11:46.890761 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 12 17:11:46.890870 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 17:11:46.901677 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 12 17:11:46.901790 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:11:46.905086 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 12 17:11:46.906354 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 12 17:11:46.918131 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 12 17:11:46.919941 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 12 17:11:46.927777 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 12 17:11:46.941857 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 12 17:11:46.991306 systemd[1]: Switching root.
Sep 12 17:11:47.029575 systemd-journald[251]: Journal stopped
Sep 12 17:11:50.038247 systemd-journald[251]: Received SIGTERM from PID 1 (systemd).
Sep 12 17:11:50.038390 kernel: SELinux: policy capability network_peer_controls=1
Sep 12 17:11:50.038441 kernel: SELinux: policy capability open_perms=1
Sep 12 17:11:50.038481 kernel: SELinux: policy capability extended_socket_class=1
Sep 12 17:11:50.038536 kernel: SELinux: policy capability always_check_network=0
Sep 12 17:11:50.038568 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 12 17:11:50.038608 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 12 17:11:50.038639 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 12 17:11:50.038670 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 12 17:11:50.038702 kernel: audit: type=1403 audit(1757697107.902:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 12 17:11:50.038738 systemd[1]: Successfully loaded SELinux policy in 87.295ms.
Sep 12 17:11:50.038787 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 23.004ms.
Sep 12 17:11:50.038823 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 12 17:11:50.038855 systemd[1]: Detected virtualization amazon.
Sep 12 17:11:50.038886 systemd[1]: Detected architecture arm64.
Sep 12 17:11:50.038918 systemd[1]: Detected first boot.
Sep 12 17:11:50.038948 systemd[1]: Initializing machine ID from VM UUID.
Sep 12 17:11:50.038982 zram_generator::config[1518]: No configuration found.
Sep 12 17:11:50.039019 systemd[1]: Populated /etc with preset unit settings.
Sep 12 17:11:50.039055 systemd[1]: Queued start job for default target multi-user.target.
Sep 12 17:11:50.039089 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Sep 12 17:11:50.039125 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 12 17:11:50.039158 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 12 17:11:50.039191 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 12 17:11:50.039224 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 12 17:11:50.039257 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 12 17:11:50.039291 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 12 17:11:50.039323 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 12 17:11:50.039358 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 12 17:11:50.039390 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 17:11:50.039423 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 17:11:50.039453 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 12 17:11:50.041613 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 12 17:11:50.041677 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 12 17:11:50.041713 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 12 17:11:50.041744 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 12 17:11:50.041784 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 17:11:50.041816 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 12 17:11:50.041846 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 17:11:50.041878 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 12 17:11:50.041911 systemd[1]: Reached target slices.target - Slice Units.
Sep 12 17:11:50.041941 systemd[1]: Reached target swap.target - Swaps.
Sep 12 17:11:50.041970 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 12 17:11:50.042002 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 12 17:11:50.042039 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 12 17:11:50.042094 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Sep 12 17:11:50.042128 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 17:11:50.042160 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 12 17:11:50.042193 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 17:11:50.042223 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 12 17:11:50.042256 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 12 17:11:50.042293 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 12 17:11:50.042584 systemd[1]: Mounting media.mount - External Media Directory...
Sep 12 17:11:50.042629 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 12 17:11:50.042667 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 12 17:11:50.054002 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 12 17:11:50.054101 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 12 17:11:50.054140 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 17:11:50.054184 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 12 17:11:50.054223 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 12 17:11:50.054258 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 12 17:11:50.054291 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 12 17:11:50.054330 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 12 17:11:50.054373 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 12 17:11:50.054405 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 12 17:11:50.054436 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 12 17:11:50.054466 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Sep 12 17:11:50.054523 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Sep 12 17:11:50.054560 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 12 17:11:50.054591 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 12 17:11:50.054621 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 12 17:11:50.054657 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 12 17:11:50.054687 kernel: fuse: init (API version 7.39)
Sep 12 17:11:50.054718 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 12 17:11:50.054751 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 12 17:11:50.054782 kernel: ACPI: bus type drm_connector registered
Sep 12 17:11:50.054813 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 12 17:11:50.054844 kernel: loop: module loaded
Sep 12 17:11:50.054872 systemd[1]: Mounted media.mount - External Media Directory.
Sep 12 17:11:50.054902 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 12 17:11:50.054938 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 12 17:11:50.054968 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 12 17:11:50.054999 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 17:11:50.055029 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 12 17:11:50.055058 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 12 17:11:50.055088 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 12 17:11:50.055118 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 12 17:11:50.055198 systemd-journald[1615]: Collecting audit messages is disabled.
Sep 12 17:11:50.055256 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 12 17:11:50.055288 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 12 17:11:50.055318 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 12 17:11:50.055351 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 12 17:11:50.055386 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 12 17:11:50.055418 systemd-journald[1615]: Journal started
Sep 12 17:11:50.055466 systemd-journald[1615]: Runtime Journal (/run/log/journal/ec2450957b7ab346adc1d62d247ec1b3) is 8.0M, max 75.3M, 67.3M free.
Sep 12 17:11:50.065224 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 12 17:11:50.065296 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 12 17:11:50.068225 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 12 17:11:50.071876 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 12 17:11:50.076398 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 12 17:11:50.080369 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 12 17:11:50.084003 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 12 17:11:50.106901 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 12 17:11:50.122233 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 12 17:11:50.131686 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 12 17:11:50.143646 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 12 17:11:50.146261 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 12 17:11:50.168889 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 12 17:11:50.181871 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 12 17:11:50.184676 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 12 17:11:50.200797 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 12 17:11:50.204112 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 12 17:11:50.214164 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 12 17:11:50.228752 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 12 17:11:50.239682 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 12 17:11:50.246871 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 12 17:11:50.261707 systemd-journald[1615]: Time spent on flushing to /var/log/journal/ec2450957b7ab346adc1d62d247ec1b3 is 45.164ms for 898 entries.
Sep 12 17:11:50.261707 systemd-journald[1615]: System Journal (/var/log/journal/ec2450957b7ab346adc1d62d247ec1b3) is 8.0M, max 195.6M, 187.6M free.
Sep 12 17:11:50.328520 systemd-journald[1615]: Received client request to flush runtime journal.
Sep 12 17:11:50.260423 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 12 17:11:50.276708 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 12 17:11:50.341222 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 12 17:11:50.352439 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 17:11:50.367918 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Sep 12 17:11:50.371578 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 12 17:11:50.402143 udevadm[1683]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Sep 12 17:11:50.420013 systemd-tmpfiles[1670]: ACLs are not supported, ignoring.
Sep 12 17:11:50.420056 systemd-tmpfiles[1670]: ACLs are not supported, ignoring.
Sep 12 17:11:50.428271 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 12 17:11:50.442997 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 12 17:11:50.512982 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 12 17:11:50.528833 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 12 17:11:50.573944 systemd-tmpfiles[1692]: ACLs are not supported, ignoring.
Sep 12 17:11:50.574558 systemd-tmpfiles[1692]: ACLs are not supported, ignoring.
Sep 12 17:11:50.584059 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 17:11:51.328230 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 12 17:11:51.345866 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 17:11:51.393089 systemd-udevd[1698]: Using default interface naming scheme 'v255'.
Sep 12 17:11:51.496695 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 17:11:51.510893 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 12 17:11:51.543296 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 12 17:11:51.652692 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 12 17:11:51.706474 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
Sep 12 17:11:51.734217 (udev-worker)[1711]: Network interface NamePolicy= disabled on kernel command line.
Sep 12 17:11:51.868199 systemd-networkd[1702]: lo: Link UP
Sep 12 17:11:51.868226 systemd-networkd[1702]: lo: Gained carrier
Sep 12 17:11:51.871296 systemd-networkd[1702]: Enumeration completed
Sep 12 17:11:51.872468 systemd-networkd[1702]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 17:11:51.872680 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 12 17:11:51.877532 systemd-networkd[1702]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 12 17:11:51.888667 systemd-networkd[1702]: eth0: Link UP
Sep 12 17:11:51.889043 systemd-networkd[1702]: eth0: Gained carrier
Sep 12 17:11:51.889093 systemd-networkd[1702]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 17:11:51.897047 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 12 17:11:51.906720 systemd-networkd[1702]: eth0: DHCPv4 address 172.31.18.149/20, gateway 172.31.16.1 acquired from 172.31.16.1
Sep 12 17:11:51.957071 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:11:52.008545 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (1711)
Sep 12 17:11:52.148728 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:11:52.221835 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Sep 12 17:11:52.251101 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Sep 12 17:11:52.273747 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Sep 12 17:11:52.308569 lvm[1827]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 12 17:11:52.348051 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Sep 12 17:11:52.351112 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 12 17:11:52.361126 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Sep 12 17:11:52.377599 lvm[1830]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 12 17:11:52.421623 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Sep 12 17:11:52.428056 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 12 17:11:52.431365 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 12 17:11:52.431876 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 12 17:11:52.434571 systemd[1]: Reached target machines.target - Containers.
Sep 12 17:11:52.439334 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Sep 12 17:11:52.450823 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 12 17:11:52.461836 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 12 17:11:52.467403 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 17:11:52.470007 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 12 17:11:52.481975 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Sep 12 17:11:52.503825 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 12 17:11:52.517445 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 12 17:11:52.547987 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 12 17:11:52.559135 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 12 17:11:52.562843 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Sep 12 17:11:52.572571 kernel: loop0: detected capacity change from 0 to 114432
Sep 12 17:11:52.685551 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 12 17:11:52.715525 kernel: loop1: detected capacity change from 0 to 203944
Sep 12 17:11:52.777536 kernel: loop2: detected capacity change from 0 to 114328
Sep 12 17:11:52.867547 kernel: loop3: detected capacity change from 0 to 52536
Sep 12 17:11:52.970768 kernel: loop4: detected capacity change from 0 to 114432
Sep 12 17:11:52.983523 kernel: loop5: detected capacity change from 0 to 203944
Sep 12 17:11:53.008545 kernel: loop6: detected capacity change from 0 to 114328
Sep 12 17:11:53.024595 kernel: loop7: detected capacity change from 0 to 52536
Sep 12 17:11:53.039305 (sd-merge)[1851]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Sep 12 17:11:53.040308 (sd-merge)[1851]: Merged extensions into '/usr'.
Sep 12 17:11:53.049656 systemd[1]: Reloading requested from client PID 1838 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 12 17:11:53.049689 systemd[1]: Reloading...
Sep 12 17:11:53.184550 zram_generator::config[1879]: No configuration found.
Sep 12 17:11:53.480650 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 12 17:11:53.635433 systemd[1]: Reloading finished in 584 ms.
Sep 12 17:11:53.673233 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 12 17:11:53.691946 systemd[1]: Starting ensure-sysext.service...
Sep 12 17:11:53.698949 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 12 17:11:53.719640 systemd[1]: Reloading requested from client PID 1936 ('systemctl') (unit ensure-sysext.service)...
Sep 12 17:11:53.719672 systemd[1]: Reloading...
Sep 12 17:11:53.760443 systemd-tmpfiles[1937]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 12 17:11:53.761713 systemd-tmpfiles[1937]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 12 17:11:53.763656 systemd-tmpfiles[1937]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 12 17:11:53.764239 systemd-tmpfiles[1937]: ACLs are not supported, ignoring.
Sep 12 17:11:53.764377 systemd-tmpfiles[1937]: ACLs are not supported, ignoring.
Sep 12 17:11:53.770283 systemd-tmpfiles[1937]: Detected autofs mount point /boot during canonicalization of boot.
Sep 12 17:11:53.770309 systemd-tmpfiles[1937]: Skipping /boot
Sep 12 17:11:53.793455 systemd-networkd[1702]: eth0: Gained IPv6LL
Sep 12 17:11:53.803806 systemd-tmpfiles[1937]: Detected autofs mount point /boot during canonicalization of boot.
Sep 12 17:11:53.803827 systemd-tmpfiles[1937]: Skipping /boot
Sep 12 17:11:53.884553 ldconfig[1834]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 12 17:11:53.897541 zram_generator::config[1967]: No configuration found.
Sep 12 17:11:54.155761 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 12 17:11:54.310194 systemd[1]: Reloading finished in 589 ms.
Sep 12 17:11:54.345365 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Sep 12 17:11:54.350427 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 12 17:11:54.365502 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 12 17:11:54.393794 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Sep 12 17:11:54.399774 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 12 17:11:54.406119 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 12 17:11:54.423749 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 12 17:11:54.442749 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 12 17:11:54.466378 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 17:11:54.478814 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 12 17:11:54.490956 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 12 17:11:54.515417 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 12 17:11:54.518000 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 17:11:54.533235 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 17:11:54.533754 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 17:11:54.550648 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 12 17:11:54.561388 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 12 17:11:54.569139 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 12 17:11:54.572636 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 12 17:11:54.599280 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 12 17:11:54.602519 augenrules[2062]: No rules
Sep 12 17:11:54.607206 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 12 17:11:54.609263 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 12 17:11:54.619447 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Sep 12 17:11:54.625079 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 12 17:11:54.627074 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 12 17:11:54.652828 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 17:11:54.662907 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 12 17:11:54.677096 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 12 17:11:54.694166 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 12 17:11:54.705300 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 12 17:11:54.712949 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 17:11:54.715351 systemd[1]: Reached target time-set.target - System Time Set.
Sep 12 17:11:54.723453 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 12 17:11:54.729352 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 12 17:11:54.730293 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 12 17:11:54.743925 systemd[1]: Finished ensure-sysext.service.
Sep 12 17:11:54.748042 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 12 17:11:54.748668 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 12 17:11:54.752648 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 12 17:11:54.753001 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 12 17:11:54.776357 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 12 17:11:54.779049 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 12 17:11:54.783787 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 12 17:11:54.789662 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 12 17:11:54.809621 systemd-resolved[2037]: Positive Trust Anchors:
Sep 12 17:11:54.809657 systemd-resolved[2037]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 12 17:11:54.809722 systemd-resolved[2037]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 12 17:11:54.814500 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 12 17:11:54.817696 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 12 17:11:54.827244 systemd-resolved[2037]: Defaulting to hostname 'linux'.
Sep 12 17:11:54.830742 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 12 17:11:54.833539 systemd[1]: Reached target network.target - Network.
Sep 12 17:11:54.835589 systemd[1]: Reached target network-online.target - Network is Online.
Sep 12 17:11:54.838067 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 12 17:11:54.840769 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 12 17:11:54.843261 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 12 17:11:54.846086 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 12 17:11:54.849284 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 12 17:11:54.851848 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 12 17:11:54.854626 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 12 17:11:54.857317 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 12 17:11:54.857372 systemd[1]: Reached target paths.target - Path Units.
Sep 12 17:11:54.859627 systemd[1]: Reached target timers.target - Timer Units.
Sep 12 17:11:54.863132 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 12 17:11:54.868669 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 12 17:11:54.872880 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 12 17:11:54.877389 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 12 17:11:54.879864 systemd[1]: Reached target sockets.target - Socket Units.
Sep 12 17:11:54.882094 systemd[1]: Reached target basic.target - Basic System.
Sep 12 17:11:54.884514 systemd[1]: System is tainted: cgroupsv1
Sep 12 17:11:54.884586 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 12 17:11:54.884634 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 12 17:11:54.888733 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 12 17:11:54.903822 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Sep 12 17:11:54.910591 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 12 17:11:54.917740 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 12 17:11:54.925107 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 12 17:11:54.929693 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 12 17:11:54.940680 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 17:11:54.968764 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 12 17:11:54.985802 systemd[1]: Started ntpd.service - Network Time Service.
Sep 12 17:11:55.001840 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Sep 12 17:11:55.014691 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 12 17:11:55.020630 jq[2098]: false
Sep 12 17:11:55.027457 extend-filesystems[2099]: Found loop4
Sep 12 17:11:55.027457 extend-filesystems[2099]: Found loop5
Sep 12 17:11:55.027457 extend-filesystems[2099]: Found loop6
Sep 12 17:11:55.027457 extend-filesystems[2099]: Found loop7
Sep 12 17:11:55.027457 extend-filesystems[2099]: Found nvme0n1
Sep 12 17:11:55.027457 extend-filesystems[2099]: Found nvme0n1p1
Sep 12 17:11:55.027457 extend-filesystems[2099]: Found nvme0n1p2
Sep 12 17:11:55.096329 extend-filesystems[2099]: Found nvme0n1p3
Sep 12 17:11:55.096329 extend-filesystems[2099]: Found usr
Sep 12 17:11:55.096329 extend-filesystems[2099]: Found nvme0n1p4
Sep 12 17:11:55.096329 extend-filesystems[2099]: Found nvme0n1p6
Sep 12 17:11:55.096329 extend-filesystems[2099]: Found nvme0n1p7
Sep 12 17:11:55.096329 extend-filesystems[2099]: Found nvme0n1p9
Sep 12 17:11:55.096329 extend-filesystems[2099]: Checking size of /dev/nvme0n1p9
Sep 12 17:11:55.050315 systemd[1]: Starting setup-oem.service - Setup OEM...
Sep 12 17:11:55.083844 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 12 17:11:55.131672 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 12 17:11:55.146776 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 12 17:11:55.152866 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 12 17:11:55.190272 systemd[1]: Starting update-engine.service - Update Engine...
Sep 12 17:11:55.212528 extend-filesystems[2099]: Resized partition /dev/nvme0n1p9
Sep 12 17:11:55.200718 dbus-daemon[2096]: [system] SELinux support is enabled
Sep 12 17:11:55.215351 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 12 17:11:55.209935 dbus-daemon[2096]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1702 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Sep 12 17:11:55.228975 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 12 17:11:55.238006 extend-filesystems[2137]: resize2fs 1.47.1 (20-May-2024)
Sep 12 17:11:55.249201 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 12 17:11:55.249756 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 12 17:11:55.258298 systemd[1]: motdgen.service: Deactivated successfully.
Sep 12 17:11:55.258848 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 12 17:11:55.265162 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Sep 12 17:11:55.279638 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks
Sep 12 17:11:55.286383 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 12 17:11:55.290565 jq[2136]: true
Sep 12 17:11:55.286920 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 12 17:11:55.338953 coreos-metadata[2095]: Sep 12 17:11:55.338 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Sep 12 17:11:55.361927 coreos-metadata[2095]: Sep 12 17:11:55.361 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
Sep 12 17:11:55.374273 ntpd[2103]: ntpd 4.2.8p17@1.4004-o Fri Sep 12 15:26:25 UTC 2025 (1): Starting
Sep 12 17:11:55.380096 ntpd[2103]: 12 Sep 17:11:55 ntpd[2103]: ntpd 4.2.8p17@1.4004-o Fri Sep 12 15:26:25 UTC 2025 (1): Starting
Sep 12 17:11:55.380096 ntpd[2103]: 12 Sep 17:11:55 ntpd[2103]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Sep 12 17:11:55.380096 ntpd[2103]: 12 Sep 17:11:55 ntpd[2103]: ----------------------------------------------------
Sep 12 17:11:55.380096 ntpd[2103]: 12 Sep 17:11:55 ntpd[2103]: ntp-4 is maintained by Network Time Foundation,
Sep 12 17:11:55.380096 ntpd[2103]: 12 Sep 17:11:55 ntpd[2103]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Sep 12 17:11:55.380096 ntpd[2103]: 12 Sep 17:11:55 ntpd[2103]: corporation. Support and training for ntp-4 are
Sep 12 17:11:55.380096 ntpd[2103]: 12 Sep 17:11:55 ntpd[2103]: available at https://www.nwtime.org/support
Sep 12 17:11:55.380096 ntpd[2103]: 12 Sep 17:11:55 ntpd[2103]: ----------------------------------------------------
Sep 12 17:11:55.374894 ntpd[2103]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Sep 12 17:11:55.411132 coreos-metadata[2095]: Sep 12 17:11:55.382 INFO Fetch successful
Sep 12 17:11:55.411132 coreos-metadata[2095]: Sep 12 17:11:55.382 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
Sep 12 17:11:55.411132 coreos-metadata[2095]: Sep 12 17:11:55.387 INFO Fetch successful
Sep 12 17:11:55.411132 coreos-metadata[2095]: Sep 12 17:11:55.387 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
Sep 12 17:11:55.411132 coreos-metadata[2095]: Sep 12 17:11:55.397 INFO Fetch successful
Sep 12 17:11:55.411132 coreos-metadata[2095]: Sep 12 17:11:55.397 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
Sep 12 17:11:55.411132 coreos-metadata[2095]: Sep 12 17:11:55.403 INFO Fetch successful
Sep 12 17:11:55.411132 coreos-metadata[2095]: Sep 12 17:11:55.403 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
Sep 12 17:11:55.411132 coreos-metadata[2095]: Sep 12 17:11:55.407 INFO Fetch failed with 404: resource not found
Sep 12 17:11:55.411132 coreos-metadata[2095]: Sep 12 17:11:55.407 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
Sep 12 17:11:55.413908 ntpd[2103]: 12 Sep 17:11:55 ntpd[2103]: proto: precision = 0.096 usec (-23)
Sep 12 17:11:55.413908 ntpd[2103]: 12 Sep 17:11:55 ntpd[2103]: basedate set to 2025-08-31
Sep 12 17:11:55.413908 ntpd[2103]: 12 Sep 17:11:55 ntpd[2103]: gps base set to 2025-08-31 (week 2382)
Sep 12 17:11:55.374938 ntpd[2103]: ----------------------------------------------------
Sep 12 17:11:55.374959 ntpd[2103]: ntp-4 is maintained by Network Time Foundation,
Sep 12 17:11:55.418405 coreos-metadata[2095]: Sep 12 17:11:55.416 INFO Fetch successful
Sep 12 17:11:55.418405 coreos-metadata[2095]: Sep 12 17:11:55.416 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
Sep 12 17:11:55.374980 ntpd[2103]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Sep 12 17:11:55.375000 ntpd[2103]: corporation. Support and training for ntp-4 are
Sep 12 17:11:55.375021 ntpd[2103]: available at https://www.nwtime.org/support
Sep 12 17:11:55.375040 ntpd[2103]: ----------------------------------------------------
Sep 12 17:11:55.405540 ntpd[2103]: proto: precision = 0.096 usec (-23)
Sep 12 17:11:55.406147 ntpd[2103]: basedate set to 2025-08-31
Sep 12 17:11:55.406175 ntpd[2103]: gps base set to 2025-08-31 (week 2382)
Sep 12 17:11:55.421730 (ntainerd)[2157]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 12 17:11:55.424313 coreos-metadata[2095]: Sep 12 17:11:55.423 INFO Fetch successful
Sep 12 17:11:55.424313 coreos-metadata[2095]: Sep 12 17:11:55.423 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
Sep 12 17:11:55.427313 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 12 17:11:55.441077 coreos-metadata[2095]: Sep 12 17:11:55.432 INFO Fetch successful
Sep 12 17:11:55.441077 coreos-metadata[2095]: Sep 12 17:11:55.432 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
Sep 12 17:11:55.441193 jq[2149]: true
Sep 12 17:11:55.427384 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 12 17:11:55.452737 ntpd[2103]: 12 Sep 17:11:55 ntpd[2103]: Listen and drop on 0 v6wildcard [::]:123
Sep 12 17:11:55.452737 ntpd[2103]: 12 Sep 17:11:55 ntpd[2103]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Sep 12 17:11:55.452737 ntpd[2103]: 12 Sep 17:11:55 ntpd[2103]: Listen normally on 2 lo 127.0.0.1:123
Sep 12 17:11:55.452737 ntpd[2103]: 12 Sep 17:11:55 ntpd[2103]: Listen normally on 3 eth0 172.31.18.149:123
Sep 12 17:11:55.452737 ntpd[2103]: 12 Sep 17:11:55 ntpd[2103]: Listen normally on 4 lo [::1]:123
Sep 12 17:11:55.452737 ntpd[2103]: 12 Sep 17:11:55 ntpd[2103]: Listen normally on 5 eth0 [fe80::417:62ff:fe3a:8df3%2]:123
Sep 12 17:11:55.452737 ntpd[2103]: 12 Sep 17:11:55 ntpd[2103]: Listening on routing socket on fd #22 for interface updates
Sep 12 17:11:55.480291 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915
Sep 12 17:11:55.443675 ntpd[2103]: Listen and drop on 0 v6wildcard [::]:123
Sep 12 17:11:55.430463 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 12 17:11:55.481011 update_engine[2131]: I20250912 17:11:55.472465 2131 main.cc:92] Flatcar Update Engine starting
Sep 12 17:11:55.486633 coreos-metadata[2095]: Sep 12 17:11:55.453 INFO Fetch successful
Sep 12 17:11:55.486633 coreos-metadata[2095]: Sep 12 17:11:55.453 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
Sep 12 17:11:55.486633 coreos-metadata[2095]: Sep 12 17:11:55.471 INFO Fetch successful
Sep 12 17:11:55.486820 tar[2142]: linux-arm64/helm
Sep 12 17:11:55.443756 ntpd[2103]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Sep 12 17:11:55.430518 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 12 17:11:55.525460 extend-filesystems[2137]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Sep 12 17:11:55.525460 extend-filesystems[2137]: old_desc_blocks = 1, new_desc_blocks = 1
Sep 12 17:11:55.525460 extend-filesystems[2137]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long.
Sep 12 17:11:55.549421 ntpd[2103]: 12 Sep 17:11:55 ntpd[2103]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Sep 12 17:11:55.549421 ntpd[2103]: 12 Sep 17:11:55 ntpd[2103]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Sep 12 17:11:55.444369 ntpd[2103]: Listen normally on 2 lo 127.0.0.1:123
Sep 12 17:11:55.494876 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Sep 12 17:11:55.549815 update_engine[2131]: I20250912 17:11:55.530790 2131 update_check_scheduler.cc:74] Next update check in 6m18s
Sep 12 17:11:55.549873 extend-filesystems[2099]: Resized filesystem in /dev/nvme0n1p9
Sep 12 17:11:55.447956 ntpd[2103]: Listen normally on 3 eth0 172.31.18.149:123
Sep 12 17:11:55.501990 systemd[1]: Started update-engine.service - Update Engine.
Sep 12 17:11:55.448106 ntpd[2103]: Listen normally on 4 lo [::1]:123
Sep 12 17:11:55.505962 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 12 17:11:55.448184 ntpd[2103]: Listen normally on 5 eth0 [fe80::417:62ff:fe3a:8df3%2]:123
Sep 12 17:11:55.511077 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 12 17:11:55.448255 ntpd[2103]: Listening on routing socket on fd #22 for interface updates
Sep 12 17:11:55.532805 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 12 17:11:55.448668 dbus-daemon[2096]: [system] Successfully activated service 'org.freedesktop.systemd1'
Sep 12 17:11:55.533299 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 12 17:11:55.521573 ntpd[2103]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Sep 12 17:11:55.521629 ntpd[2103]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Sep 12 17:11:55.572841 systemd[1]: Finished setup-oem.service - Setup OEM.
Sep 12 17:11:55.580839 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent.
Sep 12 17:11:55.693301 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Sep 12 17:11:55.696223 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Sep 12 17:11:55.802128 bash[2208]: Updated "/home/core/.ssh/authorized_keys"
Sep 12 17:11:55.809521 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep 12 17:11:55.861116 systemd[1]: Starting sshkeys.service...
Sep 12 17:11:55.884165 systemd-logind[2128]: Watching system buttons on /dev/input/event0 (Power Button)
Sep 12 17:11:55.884220 systemd-logind[2128]: Watching system buttons on /dev/input/event1 (Sleep Button)
Sep 12 17:11:55.886180 systemd-logind[2128]: New seat seat0.
Sep 12 17:11:55.895106 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 12 17:11:55.918335 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Sep 12 17:11:55.935631 amazon-ssm-agent[2185]: Initializing new seelog logger
Sep 12 17:11:56.029249 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (2215)
Sep 12 17:11:56.024226 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Sep 12 17:11:56.029458 amazon-ssm-agent[2185]: New Seelog Logger Creation Complete
Sep 12 17:11:56.029458 amazon-ssm-agent[2185]: 2025/09/12 17:11:55 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 12 17:11:56.029458 amazon-ssm-agent[2185]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 12 17:11:56.029458 amazon-ssm-agent[2185]: 2025/09/12 17:11:55 processing appconfig overrides
Sep 12 17:11:56.029458 amazon-ssm-agent[2185]: 2025-09-12 17:11:55 INFO Proxy environment variables:
Sep 12 17:11:56.029458 amazon-ssm-agent[2185]: 2025/09/12 17:11:55 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 12 17:11:56.029458 amazon-ssm-agent[2185]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 12 17:11:56.029458 amazon-ssm-agent[2185]: 2025/09/12 17:11:55 processing appconfig overrides
Sep 12 17:11:56.029458 amazon-ssm-agent[2185]: 2025/09/12 17:11:55 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 12 17:11:56.029458 amazon-ssm-agent[2185]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 12 17:11:56.029458 amazon-ssm-agent[2185]: 2025/09/12 17:11:55 processing appconfig overrides
Sep 12 17:11:56.029458 amazon-ssm-agent[2185]: 2025/09/12 17:11:55 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 12 17:11:56.029458 amazon-ssm-agent[2185]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 12 17:11:56.029458 amazon-ssm-agent[2185]: 2025/09/12 17:11:55 processing appconfig overrides
Sep 12 17:11:56.061130 amazon-ssm-agent[2185]: 2025-09-12 17:11:55 INFO no_proxy:
Sep 12 17:11:56.053607 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Sep 12 17:11:56.053300 dbus-daemon[2096]: [system] Successfully activated service 'org.freedesktop.hostname1'
Sep 12 17:11:56.066680 dbus-daemon[2096]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=2173 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Sep 12 17:11:56.073301 systemd[1]: Starting polkit.service - Authorization Manager...
Sep 12 17:11:56.154598 amazon-ssm-agent[2185]: 2025-09-12 17:11:55 INFO https_proxy:
Sep 12 17:11:56.229178 polkitd[2238]: Started polkitd version 121
Sep 12 17:11:56.247165 containerd[2157]: time="2025-09-12T17:11:56.246128148Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Sep 12 17:11:56.264139 amazon-ssm-agent[2185]: 2025-09-12 17:11:55 INFO http_proxy:
Sep 12 17:11:56.307141 polkitd[2238]: Loading rules from directory /etc/polkit-1/rules.d
Sep 12 17:11:56.307270 polkitd[2238]: Loading rules from directory /usr/share/polkit-1/rules.d
Sep 12 17:11:56.311685 polkitd[2238]: Finished loading, compiling and executing 2 rules
Sep 12 17:11:56.331043 dbus-daemon[2096]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Sep 12 17:11:56.331356 systemd[1]: Started polkit.service - Authorization Manager.
Sep 12 17:11:56.338755 polkitd[2238]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Sep 12 17:11:56.349977 containerd[2157]: time="2025-09-12T17:11:56.348338976Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Sep 12 17:11:56.359250 amazon-ssm-agent[2185]: 2025-09-12 17:11:55 INFO Checking if agent identity type OnPrem can be assumed
Sep 12 17:11:56.366477 containerd[2157]: time="2025-09-12T17:11:56.366382740Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.106-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Sep 12 17:11:56.366477 containerd[2157]: time="2025-09-12T17:11:56.366463932Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Sep 12 17:11:56.367033 containerd[2157]: time="2025-09-12T17:11:56.366860400Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Sep 12 17:11:56.367226 containerd[2157]: time="2025-09-12T17:11:56.367180908Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Sep 12 17:11:56.367283 containerd[2157]: time="2025-09-12T17:11:56.367230228Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Sep 12 17:11:56.367428 containerd[2157]: time="2025-09-12T17:11:56.367376232Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Sep 12 17:11:56.367483 containerd[2157]: time="2025-09-12T17:11:56.367423416Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Sep 12 17:11:56.367905 containerd[2157]: time="2025-09-12T17:11:56.367854924Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 12 17:11:56.367986 containerd[2157]: time="2025-09-12T17:11:56.367901424Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Sep 12 17:11:56.367986 containerd[2157]: time="2025-09-12T17:11:56.367934400Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Sep 12 17:11:56.367986 containerd[2157]: time="2025-09-12T17:11:56.367959600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Sep 12 17:11:56.368476 containerd[2157]: time="2025-09-12T17:11:56.368124912Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Sep 12 17:11:56.372858 containerd[2157]: time="2025-09-12T17:11:56.372635676Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Sep 12 17:11:56.373906 containerd[2157]: time="2025-09-12T17:11:56.373838964Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 12 17:11:56.373906 containerd[2157]: time="2025-09-12T17:11:56.373899348Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Sep 12 17:11:56.376901 containerd[2157]: time="2025-09-12T17:11:56.374150004Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Sep 12 17:11:56.376901 containerd[2157]: time="2025-09-12T17:11:56.374268948Z" level=info msg="metadata content store policy set" policy=shared
Sep 12 17:11:56.387629 containerd[2157]: time="2025-09-12T17:11:56.387551904Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Sep 12 17:11:56.387810 containerd[2157]: time="2025-09-12T17:11:56.387683688Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Sep 12 17:11:56.387864 containerd[2157]: time="2025-09-12T17:11:56.387803832Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Sep 12 17:11:56.387864 containerd[2157]: time="2025-09-12T17:11:56.387848460Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Sep 12 17:11:56.387950 containerd[2157]: time="2025-09-12T17:11:56.387899904Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Sep 12 17:11:56.389950 containerd[2157]: time="2025-09-12T17:11:56.388157964Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Sep 12 17:11:56.389950 containerd[2157]: time="2025-09-12T17:11:56.388779684Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Sep 12 17:11:56.389950 containerd[2157]: time="2025-09-12T17:11:56.388959744Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Sep 12 17:11:56.389950 containerd[2157]: time="2025-09-12T17:11:56.388996548Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Sep 12 17:11:56.389950 containerd[2157]: time="2025-09-12T17:11:56.389028876Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Sep 12 17:11:56.389950 containerd[2157]: time="2025-09-12T17:11:56.389061840Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Sep 12 17:11:56.389950 containerd[2157]: time="2025-09-12T17:11:56.389094408Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Sep 12 17:11:56.389950 containerd[2157]: time="2025-09-12T17:11:56.389124468Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Sep 12 17:11:56.389950 containerd[2157]: time="2025-09-12T17:11:56.389158692Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Sep 12 17:11:56.389950 containerd[2157]: time="2025-09-12T17:11:56.389198832Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Sep 12 17:11:56.389950 containerd[2157]: time="2025-09-12T17:11:56.389236752Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Sep 12 17:11:56.389950 containerd[2157]: time="2025-09-12T17:11:56.389267916Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Sep 12 17:11:56.389950 containerd[2157]: time="2025-09-12T17:11:56.389295252Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Sep 12 17:11:56.389950 containerd[2157]: time="2025-09-12T17:11:56.389335320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Sep 12 17:11:56.393602 containerd[2157]: time="2025-09-12T17:11:56.389366388Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Sep 12 17:11:56.393602 containerd[2157]: time="2025-09-12T17:11:56.389394756Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Sep 12 17:11:56.393602 containerd[2157]: time="2025-09-12T17:11:56.389425980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Sep 12 17:11:56.393602 containerd[2157]: time="2025-09-12T17:11:56.389454852Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Sep 12 17:11:56.393602 containerd[2157]: time="2025-09-12T17:11:56.392098368Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Sep 12 17:11:56.393602 containerd[2157]: time="2025-09-12T17:11:56.392156904Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Sep 12 17:11:56.393602 containerd[2157]: time="2025-09-12T17:11:56.392191152Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Sep 12 17:11:56.393602 containerd[2157]: time="2025-09-12T17:11:56.392224176Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Sep 12 17:11:56.393602 containerd[2157]: time="2025-09-12T17:11:56.392261976Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Sep 12 17:11:56.393602 containerd[2157]: time="2025-09-12T17:11:56.392297724Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Sep 12 17:11:56.393602 containerd[2157]: time="2025-09-12T17:11:56.392329416Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Sep 12 17:11:56.393602 containerd[2157]: time="2025-09-12T17:11:56.392362536Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Sep 12 17:11:56.393602 containerd[2157]: time="2025-09-12T17:11:56.392400852Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Sep 12 17:11:56.393602 containerd[2157]: time="2025-09-12T17:11:56.392450772Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Sep 12 17:11:56.393602 containerd[2157]: time="2025-09-12T17:11:56.392483640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Sep 12 17:11:56.394319 containerd[2157]: time="2025-09-12T17:11:56.392555280Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Sep 12 17:11:56.394319 containerd[2157]: time="2025-09-12T17:11:56.392787696Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Sep 12 17:11:56.394319 containerd[2157]: time="2025-09-12T17:11:56.392827956Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Sep 12 17:11:56.394319 containerd[2157]: time="2025-09-12T17:11:56.392861856Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Sep 12 17:11:56.394319 containerd[2157]: time="2025-09-12T17:11:56.392891652Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Sep 12 17:11:56.394319 containerd[2157]: time="2025-09-12T17:11:56.392921304Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Sep 12 17:11:56.394319 containerd[2157]: time="2025-09-12T17:11:56.392952804Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Sep 12 17:11:56.394319 containerd[2157]: time="2025-09-12T17:11:56.392979528Z" level=info msg="NRI interface is disabled by configuration."
Sep 12 17:11:56.394319 containerd[2157]: time="2025-09-12T17:11:56.393020004Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Sep 12 17:11:56.400282 containerd[2157]: time="2025-09-12T17:11:56.397788912Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3
DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 12 17:11:56.400282 containerd[2157]: time="2025-09-12T17:11:56.397922868Z" level=info msg="Connect containerd service" Sep 12 17:11:56.400282 containerd[2157]: time="2025-09-12T17:11:56.397980912Z" level=info msg="using legacy CRI server" Sep 12 17:11:56.400282 containerd[2157]: time="2025-09-12T17:11:56.397998996Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 12 17:11:56.400282 containerd[2157]: time="2025-09-12T17:11:56.398181720Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 12 17:11:56.410765 containerd[2157]: time="2025-09-12T17:11:56.406449720Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 12 17:11:56.410765 containerd[2157]: time="2025-09-12T17:11:56.407070144Z" level=info msg="Start subscribing containerd event" Sep 12 17:11:56.410765 containerd[2157]: time="2025-09-12T17:11:56.407172240Z" level=info msg="Start recovering state" Sep 12 17:11:56.410765 containerd[2157]: time="2025-09-12T17:11:56.407298072Z" level=info msg="Start event monitor" Sep 12 17:11:56.410765 containerd[2157]: time="2025-09-12T17:11:56.407322420Z" 
level=info msg="Start snapshots syncer" Sep 12 17:11:56.410765 containerd[2157]: time="2025-09-12T17:11:56.407345244Z" level=info msg="Start cni network conf syncer for default" Sep 12 17:11:56.410765 containerd[2157]: time="2025-09-12T17:11:56.407364636Z" level=info msg="Start streaming server" Sep 12 17:11:56.409829 locksmithd[2177]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 12 17:11:56.424303 containerd[2157]: time="2025-09-12T17:11:56.413592072Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 12 17:11:56.424303 containerd[2157]: time="2025-09-12T17:11:56.413845116Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 12 17:11:56.424303 containerd[2157]: time="2025-09-12T17:11:56.422725333Z" level=info msg="containerd successfully booted in 0.183210s" Sep 12 17:11:56.416769 systemd[1]: Started containerd.service - containerd container runtime. Sep 12 17:11:56.427073 systemd-hostnamed[2173]: Hostname set to <ip-172-31-18-149> (transient) Sep 12 17:11:56.427254 systemd-resolved[2037]: System hostname changed to 'ip-172-31-18-149'. 
Sep 12 17:11:56.466778 amazon-ssm-agent[2185]: 2025-09-12 17:11:55 INFO Checking if agent identity type EC2 can be assumed Sep 12 17:11:56.478771 coreos-metadata[2226]: Sep 12 17:11:56.478 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Sep 12 17:11:56.483728 coreos-metadata[2226]: Sep 12 17:11:56.480 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Sep 12 17:11:56.487531 coreos-metadata[2226]: Sep 12 17:11:56.486 INFO Fetch successful Sep 12 17:11:56.487531 coreos-metadata[2226]: Sep 12 17:11:56.486 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Sep 12 17:11:56.489532 coreos-metadata[2226]: Sep 12 17:11:56.488 INFO Fetch successful Sep 12 17:11:56.492391 unknown[2226]: wrote ssh authorized keys file for user: core Sep 12 17:11:56.547592 update-ssh-keys[2311]: Updated "/home/core/.ssh/authorized_keys" Sep 12 17:11:56.553224 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Sep 12 17:11:56.569878 amazon-ssm-agent[2185]: 2025-09-12 17:11:56 INFO Agent will take identity from EC2 Sep 12 17:11:56.573259 systemd[1]: Finished sshkeys.service. Sep 12 17:11:56.666094 amazon-ssm-agent[2185]: 2025-09-12 17:11:56 INFO [amazon-ssm-agent] using named pipe channel for IPC Sep 12 17:11:56.765411 amazon-ssm-agent[2185]: 2025-09-12 17:11:56 INFO [amazon-ssm-agent] using named pipe channel for IPC Sep 12 17:11:56.864795 amazon-ssm-agent[2185]: 2025-09-12 17:11:56 INFO [amazon-ssm-agent] using named pipe channel for IPC Sep 12 17:11:56.963989 amazon-ssm-agent[2185]: 2025-09-12 17:11:56 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Sep 12 17:11:57.039610 sshd_keygen[2147]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 12 17:11:57.064315 amazon-ssm-agent[2185]: 2025-09-12 17:11:56 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Sep 12 17:11:57.135104 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. 
Sep 12 17:11:57.150274 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 12 17:11:57.164588 amazon-ssm-agent[2185]: 2025-09-12 17:11:56 INFO [amazon-ssm-agent] Starting Core Agent Sep 12 17:11:57.197264 systemd[1]: issuegen.service: Deactivated successfully. Sep 12 17:11:57.197869 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 12 17:11:57.212223 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 12 17:11:57.243981 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 12 17:11:57.260168 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 12 17:11:57.265775 amazon-ssm-agent[2185]: 2025-09-12 17:11:56 INFO [amazon-ssm-agent] registrar detected. Attempting registration Sep 12 17:11:57.275686 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 12 17:11:57.280624 systemd[1]: Reached target getty.target - Login Prompts. Sep 12 17:11:57.366511 amazon-ssm-agent[2185]: 2025-09-12 17:11:56 INFO [Registrar] Starting registrar module Sep 12 17:11:57.466742 amazon-ssm-agent[2185]: 2025-09-12 17:11:56 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Sep 12 17:11:57.509826 amazon-ssm-agent[2185]: 2025-09-12 17:11:57 INFO [EC2Identity] EC2 registration was successful. 
Sep 12 17:11:57.509826 amazon-ssm-agent[2185]: 2025-09-12 17:11:57 INFO [CredentialRefresher] credentialRefresher has started Sep 12 17:11:57.510812 amazon-ssm-agent[2185]: 2025-09-12 17:11:57 INFO [CredentialRefresher] Starting credentials refresher loop Sep 12 17:11:57.510812 amazon-ssm-agent[2185]: 2025-09-12 17:11:57 INFO EC2RoleProvider Successfully connected with instance profile role credentials Sep 12 17:11:57.567875 amazon-ssm-agent[2185]: 2025-09-12 17:11:57 INFO [CredentialRefresher] Next credential rotation will be in 31.391658631366667 minutes Sep 12 17:11:57.622167 tar[2142]: linux-arm64/LICENSE Sep 12 17:11:57.622167 tar[2142]: linux-arm64/README.md Sep 12 17:11:57.655951 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 12 17:11:58.149014 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:11:58.152669 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 12 17:11:58.155820 systemd[1]: Startup finished in 10.250s (kernel) + 10.340s (userspace) = 20.590s. 
Sep 12 17:11:58.165904 (kubelet)[2390]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:11:58.539076 amazon-ssm-agent[2185]: 2025-09-12 17:11:58 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Sep 12 17:11:58.640511 amazon-ssm-agent[2185]: 2025-09-12 17:11:58 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2400) started Sep 12 17:11:58.740233 amazon-ssm-agent[2185]: 2025-09-12 17:11:58 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Sep 12 17:11:59.202285 kubelet[2390]: E0912 17:11:59.202196 2390 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:11:59.207176 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:11:59.207673 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 17:12:01.945588 systemd-resolved[2037]: Clock change detected. Flushing caches. Sep 12 17:12:03.489439 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 12 17:12:03.501450 systemd[1]: Started sshd@0-172.31.18.149:22-147.75.109.163:58098.service - OpenSSH per-connection server daemon (147.75.109.163:58098). Sep 12 17:12:03.676265 sshd[2413]: Accepted publickey for core from 147.75.109.163 port 58098 ssh2: RSA SHA256:MtueCMCElgMFpvQGHABlOh1LdmyEE9d8eacHhUBhK34 Sep 12 17:12:03.680523 sshd[2413]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:12:03.696281 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
Sep 12 17:12:03.703434 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 12 17:12:03.707648 systemd-logind[2128]: New session 1 of user core. Sep 12 17:12:03.735626 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 12 17:12:03.749545 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 12 17:12:03.759493 (systemd)[2419]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 12 17:12:03.991194 systemd[2419]: Queued start job for default target default.target. Sep 12 17:12:03.992569 systemd[2419]: Created slice app.slice - User Application Slice. Sep 12 17:12:03.992620 systemd[2419]: Reached target paths.target - Paths. Sep 12 17:12:03.992652 systemd[2419]: Reached target timers.target - Timers. Sep 12 17:12:04.001163 systemd[2419]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 12 17:12:04.017333 systemd[2419]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 12 17:12:04.017609 systemd[2419]: Reached target sockets.target - Sockets. Sep 12 17:12:04.017760 systemd[2419]: Reached target basic.target - Basic System. Sep 12 17:12:04.018196 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 12 17:12:04.019012 systemd[2419]: Reached target default.target - Main User Target. Sep 12 17:12:04.019108 systemd[2419]: Startup finished in 247ms. Sep 12 17:12:04.030508 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 12 17:12:04.179472 systemd[1]: Started sshd@1-172.31.18.149:22-147.75.109.163:58114.service - OpenSSH per-connection server daemon (147.75.109.163:58114). 
Sep 12 17:12:04.365495 sshd[2431]: Accepted publickey for core from 147.75.109.163 port 58114 ssh2: RSA SHA256:MtueCMCElgMFpvQGHABlOh1LdmyEE9d8eacHhUBhK34 Sep 12 17:12:04.368173 sshd[2431]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:12:04.377153 systemd-logind[2128]: New session 2 of user core. Sep 12 17:12:04.384555 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 12 17:12:04.514309 sshd[2431]: pam_unix(sshd:session): session closed for user core Sep 12 17:12:04.520406 systemd-logind[2128]: Session 2 logged out. Waiting for processes to exit. Sep 12 17:12:04.521651 systemd[1]: sshd@1-172.31.18.149:22-147.75.109.163:58114.service: Deactivated successfully. Sep 12 17:12:04.527541 systemd[1]: session-2.scope: Deactivated successfully. Sep 12 17:12:04.529373 systemd-logind[2128]: Removed session 2. Sep 12 17:12:04.549458 systemd[1]: Started sshd@2-172.31.18.149:22-147.75.109.163:58116.service - OpenSSH per-connection server daemon (147.75.109.163:58116). Sep 12 17:12:04.714394 sshd[2439]: Accepted publickey for core from 147.75.109.163 port 58116 ssh2: RSA SHA256:MtueCMCElgMFpvQGHABlOh1LdmyEE9d8eacHhUBhK34 Sep 12 17:12:04.716904 sshd[2439]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:12:04.725920 systemd-logind[2128]: New session 3 of user core. Sep 12 17:12:04.732523 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 12 17:12:04.854347 sshd[2439]: pam_unix(sshd:session): session closed for user core Sep 12 17:12:04.859753 systemd[1]: sshd@2-172.31.18.149:22-147.75.109.163:58116.service: Deactivated successfully. Sep 12 17:12:04.867173 systemd[1]: session-3.scope: Deactivated successfully. Sep 12 17:12:04.867263 systemd-logind[2128]: Session 3 logged out. Waiting for processes to exit. Sep 12 17:12:04.869852 systemd-logind[2128]: Removed session 3. 
Sep 12 17:12:04.881470 systemd[1]: Started sshd@3-172.31.18.149:22-147.75.109.163:58122.service - OpenSSH per-connection server daemon (147.75.109.163:58122). Sep 12 17:12:05.057208 sshd[2447]: Accepted publickey for core from 147.75.109.163 port 58122 ssh2: RSA SHA256:MtueCMCElgMFpvQGHABlOh1LdmyEE9d8eacHhUBhK34 Sep 12 17:12:05.059552 sshd[2447]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:12:05.068444 systemd-logind[2128]: New session 4 of user core. Sep 12 17:12:05.076452 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 12 17:12:05.206342 sshd[2447]: pam_unix(sshd:session): session closed for user core Sep 12 17:12:05.213890 systemd[1]: sshd@3-172.31.18.149:22-147.75.109.163:58122.service: Deactivated successfully. Sep 12 17:12:05.218653 systemd[1]: session-4.scope: Deactivated successfully. Sep 12 17:12:05.221333 systemd-logind[2128]: Session 4 logged out. Waiting for processes to exit. Sep 12 17:12:05.223218 systemd-logind[2128]: Removed session 4. Sep 12 17:12:05.240411 systemd[1]: Started sshd@4-172.31.18.149:22-147.75.109.163:58126.service - OpenSSH per-connection server daemon (147.75.109.163:58126). Sep 12 17:12:05.403128 sshd[2455]: Accepted publickey for core from 147.75.109.163 port 58126 ssh2: RSA SHA256:MtueCMCElgMFpvQGHABlOh1LdmyEE9d8eacHhUBhK34 Sep 12 17:12:05.405164 sshd[2455]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:12:05.414865 systemd-logind[2128]: New session 5 of user core. Sep 12 17:12:05.425510 systemd[1]: Started session-5.scope - Session 5 of User core. 
Sep 12 17:12:05.578419 sudo[2459]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 12 17:12:05.579126 sudo[2459]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:12:05.607710 sudo[2459]: pam_unix(sudo:session): session closed for user root Sep 12 17:12:05.632351 sshd[2455]: pam_unix(sshd:session): session closed for user core Sep 12 17:12:05.638652 systemd-logind[2128]: Session 5 logged out. Waiting for processes to exit. Sep 12 17:12:05.641577 systemd[1]: sshd@4-172.31.18.149:22-147.75.109.163:58126.service: Deactivated successfully. Sep 12 17:12:05.646832 systemd[1]: session-5.scope: Deactivated successfully. Sep 12 17:12:05.648663 systemd-logind[2128]: Removed session 5. Sep 12 17:12:05.661516 systemd[1]: Started sshd@5-172.31.18.149:22-147.75.109.163:58130.service - OpenSSH per-connection server daemon (147.75.109.163:58130). Sep 12 17:12:05.839961 sshd[2464]: Accepted publickey for core from 147.75.109.163 port 58130 ssh2: RSA SHA256:MtueCMCElgMFpvQGHABlOh1LdmyEE9d8eacHhUBhK34 Sep 12 17:12:05.842615 sshd[2464]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:12:05.850247 systemd-logind[2128]: New session 6 of user core. Sep 12 17:12:05.859433 systemd[1]: Started session-6.scope - Session 6 of User core. 
Sep 12 17:12:05.967370 sudo[2469]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 12 17:12:05.968737 sudo[2469]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:12:05.974833 sudo[2469]: pam_unix(sudo:session): session closed for user root Sep 12 17:12:05.984895 sudo[2468]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 12 17:12:05.985573 sudo[2468]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:12:06.010517 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Sep 12 17:12:06.016548 auditctl[2472]: No rules Sep 12 17:12:06.017532 systemd[1]: audit-rules.service: Deactivated successfully. Sep 12 17:12:06.018119 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Sep 12 17:12:06.030732 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 12 17:12:06.076289 augenrules[2491]: No rules Sep 12 17:12:06.080151 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 12 17:12:06.084279 sudo[2468]: pam_unix(sudo:session): session closed for user root Sep 12 17:12:06.108310 sshd[2464]: pam_unix(sshd:session): session closed for user core Sep 12 17:12:06.114952 systemd[1]: sshd@5-172.31.18.149:22-147.75.109.163:58130.service: Deactivated successfully. Sep 12 17:12:06.120062 systemd-logind[2128]: Session 6 logged out. Waiting for processes to exit. Sep 12 17:12:06.120253 systemd[1]: session-6.scope: Deactivated successfully. Sep 12 17:12:06.123453 systemd-logind[2128]: Removed session 6. Sep 12 17:12:06.139527 systemd[1]: Started sshd@6-172.31.18.149:22-147.75.109.163:58132.service - OpenSSH per-connection server daemon (147.75.109.163:58132). 
Sep 12 17:12:06.312363 sshd[2500]: Accepted publickey for core from 147.75.109.163 port 58132 ssh2: RSA SHA256:MtueCMCElgMFpvQGHABlOh1LdmyEE9d8eacHhUBhK34 Sep 12 17:12:06.315672 sshd[2500]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:12:06.324167 systemd-logind[2128]: New session 7 of user core. Sep 12 17:12:06.331437 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 12 17:12:06.438537 sudo[2504]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 12 17:12:06.439203 sudo[2504]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:12:07.100435 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 12 17:12:07.100990 (dockerd)[2519]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 12 17:12:07.614677 dockerd[2519]: time="2025-09-12T17:12:07.614604324Z" level=info msg="Starting up" Sep 12 17:12:08.054179 dockerd[2519]: time="2025-09-12T17:12:08.054114682Z" level=info msg="Loading containers: start." Sep 12 17:12:08.262031 kernel: Initializing XFRM netlink socket Sep 12 17:12:08.326942 (udev-worker)[2543]: Network interface NamePolicy= disabled on kernel command line. Sep 12 17:12:08.422346 systemd-networkd[1702]: docker0: Link UP Sep 12 17:12:08.449443 dockerd[2519]: time="2025-09-12T17:12:08.449368356Z" level=info msg="Loading containers: done." 
Sep 12 17:12:08.475045 dockerd[2519]: time="2025-09-12T17:12:08.474576624Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 12 17:12:08.475045 dockerd[2519]: time="2025-09-12T17:12:08.474736776Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Sep 12 17:12:08.475045 dockerd[2519]: time="2025-09-12T17:12:08.474919416Z" level=info msg="Daemon has completed initialization" Sep 12 17:12:08.477870 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2782057770-merged.mount: Deactivated successfully. Sep 12 17:12:08.526676 dockerd[2519]: time="2025-09-12T17:12:08.526461444Z" level=info msg="API listen on /run/docker.sock" Sep 12 17:12:08.526847 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 12 17:12:08.906041 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 12 17:12:08.920340 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:12:09.346407 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 12 17:12:09.360353 (kubelet)[2673]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:12:09.463243 kubelet[2673]: E0912 17:12:09.463168 2673 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:12:09.472685 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:12:09.475269 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 17:12:09.834820 containerd[2157]: time="2025-09-12T17:12:09.834751767Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\"" Sep 12 17:12:10.454447 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount119725583.mount: Deactivated successfully. 
Sep 12 17:12:11.728059 containerd[2157]: time="2025-09-12T17:12:11.727670296Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:12:11.730929 containerd[2157]: time="2025-09-12T17:12:11.730861864Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.13: active requests=0, bytes read=25687325" Sep 12 17:12:11.733301 containerd[2157]: time="2025-09-12T17:12:11.733233280Z" level=info msg="ImageCreate event name:\"sha256:0b1c07d8fd4a3526d5c44502e682df3627a3b01c1e608e5e24c3519c8fb337b6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:12:11.742001 containerd[2157]: time="2025-09-12T17:12:11.740083540Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:12:11.742604 containerd[2157]: time="2025-09-12T17:12:11.742553584Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.13\" with image id \"sha256:0b1c07d8fd4a3526d5c44502e682df3627a3b01c1e608e5e24c3519c8fb337b6\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.13\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\", size \"25683924\" in 1.907738805s" Sep 12 17:12:11.742743 containerd[2157]: time="2025-09-12T17:12:11.742712824Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\" returns image reference \"sha256:0b1c07d8fd4a3526d5c44502e682df3627a3b01c1e608e5e24c3519c8fb337b6\"" Sep 12 17:12:11.745417 containerd[2157]: time="2025-09-12T17:12:11.745348696Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\"" Sep 12 17:12:13.113486 containerd[2157]: time="2025-09-12T17:12:13.113421963Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.13\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:12:13.115614 containerd[2157]: time="2025-09-12T17:12:13.115558779Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.13: active requests=0, bytes read=22459767" Sep 12 17:12:13.116897 containerd[2157]: time="2025-09-12T17:12:13.116040939Z" level=info msg="ImageCreate event name:\"sha256:c359cb88f3d2147f2cb4c5ada4fbdeadc4b1c009d66c8f33f3856efaf04ee6ef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:12:13.124439 containerd[2157]: time="2025-09-12T17:12:13.124373499Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:12:13.129510 containerd[2157]: time="2025-09-12T17:12:13.129434151Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.13\" with image id \"sha256:c359cb88f3d2147f2cb4c5ada4fbdeadc4b1c009d66c8f33f3856efaf04ee6ef\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.13\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\", size \"24028542\" in 1.384017079s" Sep 12 17:12:13.129510 containerd[2157]: time="2025-09-12T17:12:13.129505251Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\" returns image reference \"sha256:c359cb88f3d2147f2cb4c5ada4fbdeadc4b1c009d66c8f33f3856efaf04ee6ef\"" Sep 12 17:12:13.131306 containerd[2157]: time="2025-09-12T17:12:13.131254683Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\"" Sep 12 17:12:14.302524 containerd[2157]: time="2025-09-12T17:12:14.302457485Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:12:14.304260 containerd[2157]: 
time="2025-09-12T17:12:14.304046897Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.13: active requests=0, bytes read=17127506" Sep 12 17:12:14.307019 containerd[2157]: time="2025-09-12T17:12:14.305606465Z" level=info msg="ImageCreate event name:\"sha256:5e3cbe2ba7db787c6aebfcf4484156dd4ebd7ede811ef72e8929593e59a5fa27\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:12:14.311746 containerd[2157]: time="2025-09-12T17:12:14.311680289Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:12:14.318675 containerd[2157]: time="2025-09-12T17:12:14.318583829Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.13\" with image id \"sha256:5e3cbe2ba7db787c6aebfcf4484156dd4ebd7ede811ef72e8929593e59a5fa27\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.13\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\", size \"18696299\" in 1.187257458s" Sep 12 17:12:14.318675 containerd[2157]: time="2025-09-12T17:12:14.318665009Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\" returns image reference \"sha256:5e3cbe2ba7db787c6aebfcf4484156dd4ebd7ede811ef72e8929593e59a5fa27\"" Sep 12 17:12:14.320876 containerd[2157]: time="2025-09-12T17:12:14.320811029Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\"" Sep 12 17:12:15.558890 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1857078635.mount: Deactivated successfully. 
Sep 12 17:12:16.109690 containerd[2157]: time="2025-09-12T17:12:16.109598898Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:12:16.111820 containerd[2157]: time="2025-09-12T17:12:16.111494598Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.13: active requests=0, bytes read=26954907" Sep 12 17:12:16.113005 containerd[2157]: time="2025-09-12T17:12:16.112933086Z" level=info msg="ImageCreate event name:\"sha256:c15699f0b7002450249485b10f20211982dfd2bec4d61c86c35acebc659e794e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:12:16.118026 containerd[2157]: time="2025-09-12T17:12:16.117132978Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:12:16.122458 containerd[2157]: time="2025-09-12T17:12:16.122385114Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.13\" with image id \"sha256:c15699f0b7002450249485b10f20211982dfd2bec4d61c86c35acebc659e794e\", repo tag \"registry.k8s.io/kube-proxy:v1.31.13\", repo digest \"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\", size \"26953926\" in 1.801497525s" Sep 12 17:12:16.122677 containerd[2157]: time="2025-09-12T17:12:16.122644086Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\" returns image reference \"sha256:c15699f0b7002450249485b10f20211982dfd2bec4d61c86c35acebc659e794e\"" Sep 12 17:12:16.125532 containerd[2157]: time="2025-09-12T17:12:16.125466414Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 12 17:12:16.664512 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1762946941.mount: Deactivated successfully. 
Sep 12 17:12:18.037304 containerd[2157]: time="2025-09-12T17:12:18.037237075Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:12:18.040485 containerd[2157]: time="2025-09-12T17:12:18.040412035Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622"
Sep 12 17:12:18.042459 containerd[2157]: time="2025-09-12T17:12:18.042357331Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:12:18.050302 containerd[2157]: time="2025-09-12T17:12:18.049193383Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:12:18.052101 containerd[2157]: time="2025-09-12T17:12:18.052028360Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.926166234s"
Sep 12 17:12:18.052101 containerd[2157]: time="2025-09-12T17:12:18.052098344Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
Sep 12 17:12:18.053129 containerd[2157]: time="2025-09-12T17:12:18.052887080Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 12 17:12:18.648122 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1882564158.mount: Deactivated successfully.
Sep 12 17:12:18.662046 containerd[2157]: time="2025-09-12T17:12:18.660954599Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:12:18.663052 containerd[2157]: time="2025-09-12T17:12:18.662946815Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703"
Sep 12 17:12:18.665635 containerd[2157]: time="2025-09-12T17:12:18.665542535Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:12:18.673277 containerd[2157]: time="2025-09-12T17:12:18.670932407Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:12:18.673277 containerd[2157]: time="2025-09-12T17:12:18.672621611Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 619.666563ms"
Sep 12 17:12:18.673277 containerd[2157]: time="2025-09-12T17:12:18.672678815Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Sep 12 17:12:18.674048 containerd[2157]: time="2025-09-12T17:12:18.673844975Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Sep 12 17:12:19.248038 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4204644979.mount: Deactivated successfully.
Sep 12 17:12:19.656011 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Sep 12 17:12:19.667312 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 17:12:21.528377 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 17:12:21.545585 (kubelet)[2848]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 12 17:12:21.656292 kubelet[2848]: E0912 17:12:21.656233 2848 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 12 17:12:21.660637 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 12 17:12:21.662360 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 12 17:12:23.092031 containerd[2157]: time="2025-09-12T17:12:23.091892161Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:12:23.094344 containerd[2157]: time="2025-09-12T17:12:23.094264813Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66537161"
Sep 12 17:12:23.096704 containerd[2157]: time="2025-09-12T17:12:23.096610177Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:12:23.104662 containerd[2157]: time="2025-09-12T17:12:23.104607949Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:12:23.108710 containerd[2157]: time="2025-09-12T17:12:23.108498877Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 4.434289894s"
Sep 12 17:12:23.108710 containerd[2157]: time="2025-09-12T17:12:23.108559993Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\""
Sep 12 17:12:26.035516 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Sep 12 17:12:28.974439 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 17:12:28.988428 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 17:12:29.049128 systemd[1]: Reloading requested from client PID 2918 ('systemctl') (unit session-7.scope)...
Sep 12 17:12:29.049160 systemd[1]: Reloading...
Sep 12 17:12:29.259026 zram_generator::config[2961]: No configuration found.
Sep 12 17:12:29.525616 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 12 17:12:29.697225 systemd[1]: Reloading finished in 647 ms.
Sep 12 17:12:29.782406 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Sep 12 17:12:29.782624 systemd[1]: kubelet.service: Failed with result 'signal'.
Sep 12 17:12:29.783637 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 17:12:29.795591 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 17:12:30.121312 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 17:12:30.138674 (kubelet)[3033]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 12 17:12:30.219474 kubelet[3033]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 12 17:12:30.219474 kubelet[3033]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Sep 12 17:12:30.219474 kubelet[3033]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 12 17:12:30.220158 kubelet[3033]: I0912 17:12:30.219647 3033 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 12 17:12:33.091013 kubelet[3033]: I0912 17:12:33.089776 3033 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Sep 12 17:12:33.091013 kubelet[3033]: I0912 17:12:33.089830 3033 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 12 17:12:33.091013 kubelet[3033]: I0912 17:12:33.090273 3033 server.go:934] "Client rotation is on, will bootstrap in background"
Sep 12 17:12:33.150711 kubelet[3033]: E0912 17:12:33.150654 3033 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.18.149:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.18.149:6443: connect: connection refused" logger="UnhandledError"
Sep 12 17:12:33.152288 kubelet[3033]: I0912 17:12:33.152235 3033 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 12 17:12:33.163553 kubelet[3033]: E0912 17:12:33.163507 3033 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Sep 12 17:12:33.163822 kubelet[3033]: I0912 17:12:33.163778 3033 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Sep 12 17:12:33.170996 kubelet[3033]: I0912 17:12:33.170921 3033 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 12 17:12:33.173206 kubelet[3033]: I0912 17:12:33.173174 3033 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Sep 12 17:12:33.173623 kubelet[3033]: I0912 17:12:33.173577 3033 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 12 17:12:33.174022 kubelet[3033]: I0912 17:12:33.173727 3033 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-18-149","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1}
Sep 12 17:12:33.174472 kubelet[3033]: I0912 17:12:33.174449 3033 topology_manager.go:138] "Creating topology manager with none policy"
Sep 12 17:12:33.175352 kubelet[3033]: I0912 17:12:33.174555 3033 container_manager_linux.go:300] "Creating device plugin manager"
Sep 12 17:12:33.175352 kubelet[3033]: I0912 17:12:33.175015 3033 state_mem.go:36] "Initialized new in-memory state store"
Sep 12 17:12:33.180329 kubelet[3033]: I0912 17:12:33.180298 3033 kubelet.go:408] "Attempting to sync node with API server"
Sep 12 17:12:33.181188 kubelet[3033]: I0912 17:12:33.181152 3033 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 12 17:12:33.181316 kubelet[3033]: I0912 17:12:33.181299 3033 kubelet.go:314] "Adding apiserver pod source"
Sep 12 17:12:33.181591 kubelet[3033]: I0912 17:12:33.181570 3033 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 12 17:12:33.191263 kubelet[3033]: W0912 17:12:33.191135 3033 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.18.149:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-18-149&limit=500&resourceVersion=0": dial tcp 172.31.18.149:6443: connect: connection refused
Sep 12 17:12:33.191263 kubelet[3033]: E0912 17:12:33.191251 3033 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.18.149:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-18-149&limit=500&resourceVersion=0\": dial tcp 172.31.18.149:6443: connect: connection refused" logger="UnhandledError"
Sep 12 17:12:33.193841 kubelet[3033]: W0912 17:12:33.193727 3033 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.18.149:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.18.149:6443: connect: connection refused
Sep 12 17:12:33.193841 kubelet[3033]: E0912 17:12:33.193829 3033 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.18.149:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.18.149:6443: connect: connection refused" logger="UnhandledError"
Sep 12 17:12:33.194339 kubelet[3033]: I0912 17:12:33.194300 3033 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Sep 12 17:12:33.195721 kubelet[3033]: I0912 17:12:33.195675 3033 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 12 17:12:33.197618 kubelet[3033]: W0912 17:12:33.196065 3033 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 12 17:12:33.200238 kubelet[3033]: I0912 17:12:33.199088 3033 server.go:1274] "Started kubelet"
Sep 12 17:12:33.200238 kubelet[3033]: I0912 17:12:33.199586 3033 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Sep 12 17:12:33.202166 kubelet[3033]: I0912 17:12:33.202125 3033 server.go:449] "Adding debug handlers to kubelet server"
Sep 12 17:12:33.204952 kubelet[3033]: I0912 17:12:33.204860 3033 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 12 17:12:33.205384 kubelet[3033]: I0912 17:12:33.205337 3033 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 12 17:12:33.207770 kubelet[3033]: E0912 17:12:33.205628 3033 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.18.149:6443/api/v1/namespaces/default/events\": dial tcp 172.31.18.149:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-18-149.1864983f87250e6b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-18-149,UID:ip-172-31-18-149,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-18-149,},FirstTimestamp:2025-09-12 17:12:33.199050347 +0000 UTC m=+3.054120472,LastTimestamp:2025-09-12 17:12:33.199050347 +0000 UTC m=+3.054120472,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-18-149,}"
Sep 12 17:12:33.212198 kubelet[3033]: I0912 17:12:33.212144 3033 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 12 17:12:33.213285 kubelet[3033]: E0912 17:12:33.212484 3033 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 12 17:12:33.213285 kubelet[3033]: I0912 17:12:33.212748 3033 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 12 17:12:33.219648 kubelet[3033]: I0912 17:12:33.219612 3033 volume_manager.go:289] "Starting Kubelet Volume Manager"
Sep 12 17:12:33.220099 kubelet[3033]: I0912 17:12:33.220073 3033 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Sep 12 17:12:33.220277 kubelet[3033]: I0912 17:12:33.220258 3033 reconciler.go:26] "Reconciler: start to sync state"
Sep 12 17:12:33.221475 kubelet[3033]: W0912 17:12:33.221367 3033 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.18.149:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.18.149:6443: connect: connection refused
Sep 12 17:12:33.221475 kubelet[3033]: E0912 17:12:33.221560 3033 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.18.149:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.18.149:6443: connect: connection refused" logger="UnhandledError"
Sep 12 17:12:33.223108 kubelet[3033]: I0912 17:12:33.222144 3033 factory.go:221] Registration of the systemd container factory successfully
Sep 12 17:12:33.223108 kubelet[3033]: I0912 17:12:33.222308 3033 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 12 17:12:33.225927 kubelet[3033]: E0912 17:12:33.225874 3033 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-18-149\" not found"
Sep 12 17:12:33.228792 kubelet[3033]: I0912 17:12:33.228759 3033 factory.go:221] Registration of the containerd container factory successfully
Sep 12 17:12:33.239913 kubelet[3033]: E0912 17:12:33.239833 3033 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.149:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-149?timeout=10s\": dial tcp 172.31.18.149:6443: connect: connection refused" interval="200ms"
Sep 12 17:12:33.260656 kubelet[3033]: I0912 17:12:33.260184 3033 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 12 17:12:33.264006 kubelet[3033]: I0912 17:12:33.262464 3033 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 12 17:12:33.264006 kubelet[3033]: I0912 17:12:33.262516 3033 status_manager.go:217] "Starting to sync pod status with apiserver"
Sep 12 17:12:33.264006 kubelet[3033]: I0912 17:12:33.262550 3033 kubelet.go:2321] "Starting kubelet main sync loop"
Sep 12 17:12:33.264006 kubelet[3033]: E0912 17:12:33.262617 3033 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 12 17:12:33.277196 kubelet[3033]: W0912 17:12:33.277106 3033 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.18.149:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.18.149:6443: connect: connection refused
Sep 12 17:12:33.277321 kubelet[3033]: E0912 17:12:33.277209 3033 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.18.149:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.18.149:6443: connect: connection refused" logger="UnhandledError"
Sep 12 17:12:33.284407 kubelet[3033]: I0912 17:12:33.284367 3033 cpu_manager.go:214] "Starting CPU manager" policy="none"
Sep 12 17:12:33.284407 kubelet[3033]: I0912 17:12:33.284400 3033 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Sep 12 17:12:33.284613 kubelet[3033]: I0912 17:12:33.284433 3033 state_mem.go:36] "Initialized new in-memory state store"
Sep 12 17:12:33.288159 kubelet[3033]: I0912 17:12:33.288104 3033 policy_none.go:49] "None policy: Start"
Sep 12 17:12:33.289173 kubelet[3033]: I0912 17:12:33.289139 3033 memory_manager.go:170] "Starting memorymanager" policy="None"
Sep 12 17:12:33.289268 kubelet[3033]: I0912 17:12:33.289185 3033 state_mem.go:35] "Initializing new in-memory state store"
Sep 12 17:12:33.302008 kubelet[3033]: I0912 17:12:33.301090 3033 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 12 17:12:33.302008 kubelet[3033]: I0912 17:12:33.301383 3033 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 12 17:12:33.302008 kubelet[3033]: I0912 17:12:33.301403 3033 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 12 17:12:33.304852 kubelet[3033]: I0912 17:12:33.304790 3033 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 12 17:12:33.310541 kubelet[3033]: E0912 17:12:33.310422 3033 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-18-149\" not found"
Sep 12 17:12:33.404950 kubelet[3033]: I0912 17:12:33.404398 3033 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-18-149"
Sep 12 17:12:33.405484 kubelet[3033]: E0912 17:12:33.405244 3033 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.18.149:6443/api/v1/nodes\": dial tcp 172.31.18.149:6443: connect: connection refused" node="ip-172-31-18-149"
Sep 12 17:12:33.421224 kubelet[3033]: I0912 17:12:33.421164 3033 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/084a685c591c8b58266fb3bc35b2f9d2-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-18-149\" (UID: \"084a685c591c8b58266fb3bc35b2f9d2\") " pod="kube-system/kube-apiserver-ip-172-31-18-149"
Sep 12 17:12:33.421343 kubelet[3033]: I0912 17:12:33.421226 3033 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9b745265515a670e0de5e9a7a443c0a5-k8s-certs\") pod \"kube-controller-manager-ip-172-31-18-149\" (UID: \"9b745265515a670e0de5e9a7a443c0a5\") " pod="kube-system/kube-controller-manager-ip-172-31-18-149"
Sep 12 17:12:33.421343 kubelet[3033]: I0912 17:12:33.421273 3033 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9b745265515a670e0de5e9a7a443c0a5-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-18-149\" (UID: \"9b745265515a670e0de5e9a7a443c0a5\") " pod="kube-system/kube-controller-manager-ip-172-31-18-149"
Sep 12 17:12:33.421343 kubelet[3033]: I0912 17:12:33.421328 3033 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/084a685c591c8b58266fb3bc35b2f9d2-ca-certs\") pod \"kube-apiserver-ip-172-31-18-149\" (UID: \"084a685c591c8b58266fb3bc35b2f9d2\") " pod="kube-system/kube-apiserver-ip-172-31-18-149"
Sep 12 17:12:33.421536 kubelet[3033]: I0912 17:12:33.421363 3033 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/084a685c591c8b58266fb3bc35b2f9d2-k8s-certs\") pod \"kube-apiserver-ip-172-31-18-149\" (UID: \"084a685c591c8b58266fb3bc35b2f9d2\") " pod="kube-system/kube-apiserver-ip-172-31-18-149"
Sep 12 17:12:33.421536 kubelet[3033]: I0912 17:12:33.421397 3033 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9b745265515a670e0de5e9a7a443c0a5-ca-certs\") pod \"kube-controller-manager-ip-172-31-18-149\" (UID: \"9b745265515a670e0de5e9a7a443c0a5\") " pod="kube-system/kube-controller-manager-ip-172-31-18-149"
Sep 12 17:12:33.421536 kubelet[3033]: I0912 17:12:33.421432 3033 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9b745265515a670e0de5e9a7a443c0a5-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-18-149\" (UID: \"9b745265515a670e0de5e9a7a443c0a5\") " pod="kube-system/kube-controller-manager-ip-172-31-18-149"
Sep 12 17:12:33.421536 kubelet[3033]: I0912 17:12:33.421470 3033 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9b745265515a670e0de5e9a7a443c0a5-kubeconfig\") pod \"kube-controller-manager-ip-172-31-18-149\" (UID: \"9b745265515a670e0de5e9a7a443c0a5\") " pod="kube-system/kube-controller-manager-ip-172-31-18-149"
Sep 12 17:12:33.421536 kubelet[3033]: I0912 17:12:33.421508 3033 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dda0f06a5bd11459c7b95ca6955bbee1-kubeconfig\") pod \"kube-scheduler-ip-172-31-18-149\" (UID: \"dda0f06a5bd11459c7b95ca6955bbee1\") " pod="kube-system/kube-scheduler-ip-172-31-18-149"
Sep 12 17:12:33.440476 kubelet[3033]: E0912 17:12:33.440403 3033 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.149:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-149?timeout=10s\": dial tcp 172.31.18.149:6443: connect: connection refused" interval="400ms"
Sep 12 17:12:33.607371 kubelet[3033]: I0912 17:12:33.607317 3033 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-18-149"
Sep 12 17:12:33.608006 kubelet[3033]: E0912 17:12:33.607927 3033 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.18.149:6443/api/v1/nodes\": dial tcp 172.31.18.149:6443: connect: connection refused" node="ip-172-31-18-149"
Sep 12 17:12:33.676221 containerd[2157]: time="2025-09-12T17:12:33.675642817Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-18-149,Uid:084a685c591c8b58266fb3bc35b2f9d2,Namespace:kube-system,Attempt:0,}"
Sep 12 17:12:33.683392 containerd[2157]: time="2025-09-12T17:12:33.683323045Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-18-149,Uid:9b745265515a670e0de5e9a7a443c0a5,Namespace:kube-system,Attempt:0,}"
Sep 12 17:12:33.684477 containerd[2157]: time="2025-09-12T17:12:33.684139657Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-18-149,Uid:dda0f06a5bd11459c7b95ca6955bbee1,Namespace:kube-system,Attempt:0,}"
Sep 12 17:12:33.841679 kubelet[3033]: E0912 17:12:33.841608 3033 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.149:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-149?timeout=10s\": dial tcp 172.31.18.149:6443: connect: connection refused" interval="800ms"
Sep 12 17:12:34.010562 kubelet[3033]: I0912 17:12:34.010422 3033 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-18-149"
Sep 12 17:12:34.010913 kubelet[3033]: E0912 17:12:34.010873 3033 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.18.149:6443/api/v1/nodes\": dial tcp 172.31.18.149:6443: connect: connection refused" node="ip-172-31-18-149"
Sep 12 17:12:34.062078 kubelet[3033]: W0912 17:12:34.061952 3033 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.18.149:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.18.149:6443: connect: connection refused
Sep 12 17:12:34.062240 kubelet[3033]: E0912 17:12:34.062087 3033 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.18.149:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.18.149:6443: connect: connection refused" logger="UnhandledError"
Sep 12 17:12:34.180775 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1911784053.mount: Deactivated successfully.
Sep 12 17:12:34.198020 containerd[2157]: time="2025-09-12T17:12:34.197073756Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 12 17:12:34.199360 containerd[2157]: time="2025-09-12T17:12:34.199287420Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 12 17:12:34.201641 containerd[2157]: time="2025-09-12T17:12:34.201535740Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173"
Sep 12 17:12:34.203581 containerd[2157]: time="2025-09-12T17:12:34.203530560Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Sep 12 17:12:34.205704 containerd[2157]: time="2025-09-12T17:12:34.205636512Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 12 17:12:34.209004 containerd[2157]: time="2025-09-12T17:12:34.208441896Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 12 17:12:34.210081 containerd[2157]: time="2025-09-12T17:12:34.210021756Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Sep 12 17:12:34.214744 containerd[2157]: time="2025-09-12T17:12:34.214674552Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 12 17:12:34.218986 containerd[2157]: time="2025-09-12T17:12:34.218910168Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 535.472043ms"
Sep 12 17:12:34.223418 containerd[2157]: time="2025-09-12T17:12:34.223364364Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 547.613943ms"
Sep 12 17:12:34.227653 containerd[2157]: time="2025-09-12T17:12:34.227574456Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 543.333795ms"
Sep 12 17:12:34.303668 kubelet[3033]: W0912 17:12:34.303083 3033 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.18.149:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-18-149&limit=500&resourceVersion=0": dial tcp 172.31.18.149:6443: connect: connection refused
Sep 12 17:12:34.303668 kubelet[3033]: E0912 17:12:34.303352 3033 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.18.149:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-18-149&limit=500&resourceVersion=0\": dial tcp 172.31.18.149:6443: connect: connection refused" logger="UnhandledError"
Sep 12 17:12:34.316700 kubelet[3033]: W0912 17:12:34.316584 3033 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.18.149:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.18.149:6443: connect: connection refused
Sep 12 17:12:34.316700 kubelet[3033]: E0912 17:12:34.316651 3033 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.18.149:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.18.149:6443: connect: connection refused" logger="UnhandledError"
Sep 12 17:12:34.389690 kubelet[3033]: W0912 17:12:34.389589 3033 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.18.149:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.18.149:6443: connect: connection refused
Sep 12 17:12:34.389880 kubelet[3033]: E0912 17:12:34.389703 3033 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.18.149:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.18.149:6443: connect: connection refused" logger="UnhandledError"
Sep 12 17:12:34.532589 containerd[2157]: time="2025-09-12T17:12:34.532452949Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 12 17:12:34.533103 containerd[2157]: time="2025-09-12T17:12:34.533054725Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 12 17:12:34.533604 containerd[2157]: time="2025-09-12T17:12:34.533296609Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:12:34.536695 containerd[2157]: time="2025-09-12T17:12:34.536170741Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:12:34.539103 containerd[2157]: time="2025-09-12T17:12:34.538718137Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 12 17:12:34.540513 containerd[2157]: time="2025-09-12T17:12:34.540105997Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 12 17:12:34.542865 containerd[2157]: time="2025-09-12T17:12:34.541792705Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:12:34.542865 containerd[2157]: time="2025-09-12T17:12:34.541305025Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 12 17:12:34.542865 containerd[2157]: time="2025-09-12T17:12:34.541412737Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 12 17:12:34.542865 containerd[2157]: time="2025-09-12T17:12:34.541450573Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:12:34.542865 containerd[2157]: time="2025-09-12T17:12:34.541649857Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..."
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:12:34.543920 containerd[2157]: time="2025-09-12T17:12:34.542696989Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:12:34.643242 kubelet[3033]: E0912 17:12:34.643088 3033 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.149:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-149?timeout=10s\": dial tcp 172.31.18.149:6443: connect: connection refused" interval="1.6s" Sep 12 17:12:34.714147 containerd[2157]: time="2025-09-12T17:12:34.714087326Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-18-149,Uid:dda0f06a5bd11459c7b95ca6955bbee1,Namespace:kube-system,Attempt:0,} returns sandbox id \"90ed9765786c6fe7491c580bbab33dafb4c7aaf6ce35d88d53fca32355043a5d\"" Sep 12 17:12:34.716961 containerd[2157]: time="2025-09-12T17:12:34.716347730Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-18-149,Uid:9b745265515a670e0de5e9a7a443c0a5,Namespace:kube-system,Attempt:0,} returns sandbox id \"689dd00779522d26d5e7e7f870dcea370368d9cfea639f87d8228db919c59371\"" Sep 12 17:12:34.719568 containerd[2157]: time="2025-09-12T17:12:34.719344874Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-18-149,Uid:084a685c591c8b58266fb3bc35b2f9d2,Namespace:kube-system,Attempt:0,} returns sandbox id \"8e3f932120555497478767a1f1a2740dcf4ed5e8fc242d558d66f9495ce506ff\"" Sep 12 17:12:34.729173 containerd[2157]: time="2025-09-12T17:12:34.728934530Z" level=info msg="CreateContainer within sandbox \"90ed9765786c6fe7491c580bbab33dafb4c7aaf6ce35d88d53fca32355043a5d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 12 17:12:34.730248 containerd[2157]: time="2025-09-12T17:12:34.730193930Z" level=info msg="CreateContainer within sandbox 
\"689dd00779522d26d5e7e7f870dcea370368d9cfea639f87d8228db919c59371\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Sep 12 17:12:34.734449 containerd[2157]: time="2025-09-12T17:12:34.734179358Z" level=info msg="CreateContainer within sandbox \"8e3f932120555497478767a1f1a2740dcf4ed5e8fc242d558d66f9495ce506ff\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Sep 12 17:12:34.785073 containerd[2157]: time="2025-09-12T17:12:34.784930095Z" level=info msg="CreateContainer within sandbox \"689dd00779522d26d5e7e7f870dcea370368d9cfea639f87d8228db919c59371\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"617a257b555e4537b87fa8fc0df1277de95e6b8e2bbb900979dfd419f14f2a33\""
Sep 12 17:12:34.788224 containerd[2157]: time="2025-09-12T17:12:34.787744803Z" level=info msg="CreateContainer within sandbox \"8e3f932120555497478767a1f1a2740dcf4ed5e8fc242d558d66f9495ce506ff\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"be54db9a170866438bfa27fc46f95d909abf052027a8b5640296d704871fd4c3\""
Sep 12 17:12:34.788224 containerd[2157]: time="2025-09-12T17:12:34.788157627Z" level=info msg="StartContainer for \"617a257b555e4537b87fa8fc0df1277de95e6b8e2bbb900979dfd419f14f2a33\""
Sep 12 17:12:34.791184 containerd[2157]: time="2025-09-12T17:12:34.790226487Z" level=info msg="StartContainer for \"be54db9a170866438bfa27fc46f95d909abf052027a8b5640296d704871fd4c3\""
Sep 12 17:12:34.814692 kubelet[3033]: I0912 17:12:34.814635 3033 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-18-149"
Sep 12 17:12:34.815258 kubelet[3033]: E0912 17:12:34.815178 3033 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.18.149:6443/api/v1/nodes\": dial tcp 172.31.18.149:6443: connect: connection refused" node="ip-172-31-18-149"
Sep 12 17:12:34.816701 containerd[2157]: time="2025-09-12T17:12:34.816590607Z" level=info msg="CreateContainer within sandbox \"90ed9765786c6fe7491c580bbab33dafb4c7aaf6ce35d88d53fca32355043a5d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d01669c71468e8c52b9e4547e84b598b8e9d7cc9a514df3e7bce73a16664ca06\""
Sep 12 17:12:34.818828 containerd[2157]: time="2025-09-12T17:12:34.818759259Z" level=info msg="StartContainer for \"d01669c71468e8c52b9e4547e84b598b8e9d7cc9a514df3e7bce73a16664ca06\""
Sep 12 17:12:35.005032 containerd[2157]: time="2025-09-12T17:12:35.000613296Z" level=info msg="StartContainer for \"d01669c71468e8c52b9e4547e84b598b8e9d7cc9a514df3e7bce73a16664ca06\" returns successfully"
Sep 12 17:12:35.030486 containerd[2157]: time="2025-09-12T17:12:35.030412644Z" level=info msg="StartContainer for \"be54db9a170866438bfa27fc46f95d909abf052027a8b5640296d704871fd4c3\" returns successfully"
Sep 12 17:12:35.053385 containerd[2157]: time="2025-09-12T17:12:35.053291784Z" level=info msg="StartContainer for \"617a257b555e4537b87fa8fc0df1277de95e6b8e2bbb900979dfd419f14f2a33\" returns successfully"
Sep 12 17:12:36.418822 kubelet[3033]: I0912 17:12:36.418777 3033 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-18-149"
Sep 12 17:12:39.180550 kubelet[3033]: E0912 17:12:39.180460 3033 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-18-149\" not found" node="ip-172-31-18-149"
Sep 12 17:12:39.198170 kubelet[3033]: I0912 17:12:39.197839 3033 apiserver.go:52] "Watching apiserver"
Sep 12 17:12:39.220350 kubelet[3033]: I0912 17:12:39.220310 3033 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Sep 12 17:12:39.242895 kubelet[3033]: E0912 17:12:39.242520 3033 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-18-149.1864983f87250e6b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-18-149,UID:ip-172-31-18-149,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-18-149,},FirstTimestamp:2025-09-12 17:12:33.199050347 +0000 UTC m=+3.054120472,LastTimestamp:2025-09-12 17:12:33.199050347 +0000 UTC m=+3.054120472,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-18-149,}"
Sep 12 17:12:39.291091 kubelet[3033]: I0912 17:12:39.287615 3033 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-18-149"
Sep 12 17:12:39.291091 kubelet[3033]: E0912 17:12:39.287674 3033 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ip-172-31-18-149\": node \"ip-172-31-18-149\" not found"
Sep 12 17:12:39.300326 kubelet[3033]: E0912 17:12:39.300176 3033 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-18-149.1864983f87ed5a47 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-18-149,UID:ip-172-31-18-149,APIVersion:,ResourceVersion:,FieldPath:,},Reason:CgroupV1,Message:Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.,Source:EventSource{Component:kubelet,Host:ip-172-31-18-149,},FirstTimestamp:2025-09-12 17:12:33.212176967 +0000 UTC m=+3.067247128,LastTimestamp:2025-09-12 17:12:33.212176967 +0000 UTC m=+3.067247128,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-18-149,}"
Sep 12 17:12:40.150030 update_engine[2131]: I20250912 17:12:40.149664 2131 update_attempter.cc:509] Updating boot flags...
Sep 12 17:12:40.329028 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (3318)
Sep 12 17:12:40.937145 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (3322)
Sep 12 17:12:41.637950 systemd[1]: Reloading requested from client PID 3487 ('systemctl') (unit session-7.scope)...
Sep 12 17:12:41.638012 systemd[1]: Reloading...
Sep 12 17:12:41.824020 zram_generator::config[3530]: No configuration found.
Sep 12 17:12:42.091170 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 12 17:12:42.288077 systemd[1]: Reloading finished in 649 ms.
Sep 12 17:12:42.355313 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 17:12:42.375944 systemd[1]: kubelet.service: Deactivated successfully.
Sep 12 17:12:42.376887 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 17:12:42.389484 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 17:12:42.743447 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 17:12:42.753016 (kubelet)[3597]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 12 17:12:42.862156 kubelet[3597]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 12 17:12:42.862156 kubelet[3597]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Sep 12 17:12:42.862156 kubelet[3597]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 12 17:12:42.862156 kubelet[3597]: I0912 17:12:42.861532 3597 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 12 17:12:42.878134 kubelet[3597]: I0912 17:12:42.877959 3597 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Sep 12 17:12:42.879025 kubelet[3597]: I0912 17:12:42.878580 3597 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 12 17:12:42.879447 kubelet[3597]: I0912 17:12:42.879399 3597 server.go:934] "Client rotation is on, will bootstrap in background"
Sep 12 17:12:42.882875 kubelet[3597]: I0912 17:12:42.882836 3597 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Sep 12 17:12:42.887328 kubelet[3597]: I0912 17:12:42.887288 3597 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 12 17:12:42.904698 kubelet[3597]: E0912 17:12:42.904614 3597 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Sep 12 17:12:42.904870 kubelet[3597]: I0912 17:12:42.904846 3597 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Sep 12 17:12:42.907219 sudo[3611]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Sep 12 17:12:42.907948 sudo[3611]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Sep 12 17:12:42.913105 kubelet[3597]: I0912 17:12:42.912875 3597 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 12 17:12:42.915713 kubelet[3597]: I0912 17:12:42.915510 3597 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Sep 12 17:12:42.917494 kubelet[3597]: I0912 17:12:42.916865 3597 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 12 17:12:42.917494 kubelet[3597]: I0912 17:12:42.916927 3597 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-18-149","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1}
Sep 12 17:12:42.917494 kubelet[3597]: I0912 17:12:42.917242 3597 topology_manager.go:138] "Creating topology manager with none policy"
Sep 12 17:12:42.917494 kubelet[3597]: I0912 17:12:42.917262 3597 container_manager_linux.go:300] "Creating device plugin manager"
Sep 12 17:12:42.917858 kubelet[3597]: I0912 17:12:42.917332 3597 state_mem.go:36] "Initialized new in-memory state store"
Sep 12 17:12:42.919028 kubelet[3597]: I0912 17:12:42.918794 3597 kubelet.go:408] "Attempting to sync node with API server"
Sep 12 17:12:42.919028 kubelet[3597]: I0912 17:12:42.918844 3597 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 12 17:12:42.919028 kubelet[3597]: I0912 17:12:42.918878 3597 kubelet.go:314] "Adding apiserver pod source"
Sep 12 17:12:42.919028 kubelet[3597]: I0912 17:12:42.918905 3597 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 12 17:12:42.925990 kubelet[3597]: I0912 17:12:42.923994 3597 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Sep 12 17:12:42.925990 kubelet[3597]: I0912 17:12:42.924733 3597 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 12 17:12:42.929990 kubelet[3597]: I0912 17:12:42.928496 3597 server.go:1274] "Started kubelet"
Sep 12 17:12:42.945158 kubelet[3597]: I0912 17:12:42.944252 3597 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 12 17:12:42.954138 kubelet[3597]: I0912 17:12:42.954104 3597 volume_manager.go:289] "Starting Kubelet Volume Manager"
Sep 12 17:12:42.956553 kubelet[3597]: E0912 17:12:42.956293 3597 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-18-149\" not found"
Sep 12 17:12:42.959083 kubelet[3597]: I0912 17:12:42.955874 3597 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 12 17:12:42.959825 kubelet[3597]: I0912 17:12:42.959605 3597 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 12 17:12:42.961721 kubelet[3597]: I0912 17:12:42.956301 3597 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 12 17:12:42.965241 kubelet[3597]: I0912 17:12:42.962656 3597 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Sep 12 17:12:42.968667 kubelet[3597]: I0912 17:12:42.955800 3597 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Sep 12 17:12:42.974994 kubelet[3597]: I0912 17:12:42.971601 3597 server.go:449] "Adding debug handlers to kubelet server"
Sep 12 17:12:42.980935 kubelet[3597]: I0912 17:12:42.980859 3597 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 12 17:12:42.984024 kubelet[3597]: I0912 17:12:42.983875 3597 reconciler.go:26] "Reconciler: start to sync state"
Sep 12 17:12:42.990540 kubelet[3597]: I0912 17:12:42.990488 3597 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 12 17:12:42.993122 kubelet[3597]: I0912 17:12:42.993071 3597 status_manager.go:217] "Starting to sync pod status with apiserver"
Sep 12 17:12:42.993275 kubelet[3597]: I0912 17:12:42.993136 3597 kubelet.go:2321] "Starting kubelet main sync loop"
Sep 12 17:12:42.993275 kubelet[3597]: E0912 17:12:42.993222 3597 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 12 17:12:43.021472 kubelet[3597]: I0912 17:12:43.021243 3597 factory.go:221] Registration of the systemd container factory successfully
Sep 12 17:12:43.024615 kubelet[3597]: I0912 17:12:43.023216 3597 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 12 17:12:43.044908 kubelet[3597]: E0912 17:12:43.044634 3597 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 12 17:12:43.047507 kubelet[3597]: I0912 17:12:43.047368 3597 factory.go:221] Registration of the containerd container factory successfully
Sep 12 17:12:43.093993 kubelet[3597]: E0912 17:12:43.093597 3597 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Sep 12 17:12:43.207358 kubelet[3597]: I0912 17:12:43.207311 3597 cpu_manager.go:214] "Starting CPU manager" policy="none"
Sep 12 17:12:43.208713 kubelet[3597]: I0912 17:12:43.208552 3597 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Sep 12 17:12:43.209270 kubelet[3597]: I0912 17:12:43.209055 3597 state_mem.go:36] "Initialized new in-memory state store"
Sep 12 17:12:43.209665 kubelet[3597]: I0912 17:12:43.209592 3597 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Sep 12 17:12:43.209827 kubelet[3597]: I0912 17:12:43.209622 3597 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Sep 12 17:12:43.209827 kubelet[3597]: I0912 17:12:43.209761 3597 policy_none.go:49] "None policy: Start"
Sep 12 17:12:43.213421 kubelet[3597]: I0912 17:12:43.213202 3597 memory_manager.go:170] "Starting memorymanager" policy="None"
Sep 12 17:12:43.213421 kubelet[3597]: I0912 17:12:43.213266 3597 state_mem.go:35] "Initializing new in-memory state store"
Sep 12 17:12:43.214027 kubelet[3597]: I0912 17:12:43.213816 3597 state_mem.go:75] "Updated machine memory state"
Sep 12 17:12:43.220379 kubelet[3597]: I0912 17:12:43.220012 3597 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 12 17:12:43.220379 kubelet[3597]: I0912 17:12:43.220307 3597 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 12 17:12:43.221015 kubelet[3597]: I0912 17:12:43.220328 3597 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 12
17:12:43.222586 kubelet[3597]: I0912 17:12:43.222554 3597 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 12 17:12:43.348285 kubelet[3597]: I0912 17:12:43.347846 3597 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-18-149"
Sep 12 17:12:43.372525 kubelet[3597]: I0912 17:12:43.371664 3597 kubelet_node_status.go:111] "Node was previously registered" node="ip-172-31-18-149"
Sep 12 17:12:43.372525 kubelet[3597]: I0912 17:12:43.371774 3597 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-18-149"
Sep 12 17:12:43.387799 kubelet[3597]: I0912 17:12:43.386494 3597 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9b745265515a670e0de5e9a7a443c0a5-k8s-certs\") pod \"kube-controller-manager-ip-172-31-18-149\" (UID: \"9b745265515a670e0de5e9a7a443c0a5\") " pod="kube-system/kube-controller-manager-ip-172-31-18-149"
Sep 12 17:12:43.387799 kubelet[3597]: I0912 17:12:43.386555 3597 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9b745265515a670e0de5e9a7a443c0a5-kubeconfig\") pod \"kube-controller-manager-ip-172-31-18-149\" (UID: \"9b745265515a670e0de5e9a7a443c0a5\") " pod="kube-system/kube-controller-manager-ip-172-31-18-149"
Sep 12 17:12:43.387799 kubelet[3597]: I0912 17:12:43.386593 3597 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dda0f06a5bd11459c7b95ca6955bbee1-kubeconfig\") pod \"kube-scheduler-ip-172-31-18-149\" (UID: \"dda0f06a5bd11459c7b95ca6955bbee1\") " pod="kube-system/kube-scheduler-ip-172-31-18-149"
Sep 12 17:12:43.387799 kubelet[3597]: I0912 17:12:43.386627 3597 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/084a685c591c8b58266fb3bc35b2f9d2-ca-certs\") pod \"kube-apiserver-ip-172-31-18-149\" (UID: \"084a685c591c8b58266fb3bc35b2f9d2\") " pod="kube-system/kube-apiserver-ip-172-31-18-149"
Sep 12 17:12:43.390534 kubelet[3597]: I0912 17:12:43.390053 3597 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9b745265515a670e0de5e9a7a443c0a5-ca-certs\") pod \"kube-controller-manager-ip-172-31-18-149\" (UID: \"9b745265515a670e0de5e9a7a443c0a5\") " pod="kube-system/kube-controller-manager-ip-172-31-18-149"
Sep 12 17:12:43.390534 kubelet[3597]: I0912 17:12:43.390234 3597 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9b745265515a670e0de5e9a7a443c0a5-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-18-149\" (UID: \"9b745265515a670e0de5e9a7a443c0a5\") " pod="kube-system/kube-controller-manager-ip-172-31-18-149"
Sep 12 17:12:43.390534 kubelet[3597]: I0912 17:12:43.390368 3597 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9b745265515a670e0de5e9a7a443c0a5-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-18-149\" (UID: \"9b745265515a670e0de5e9a7a443c0a5\") " pod="kube-system/kube-controller-manager-ip-172-31-18-149"
Sep 12 17:12:43.390534 kubelet[3597]: I0912 17:12:43.390472 3597 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/084a685c591c8b58266fb3bc35b2f9d2-k8s-certs\") pod \"kube-apiserver-ip-172-31-18-149\" (UID: \"084a685c591c8b58266fb3bc35b2f9d2\") " pod="kube-system/kube-apiserver-ip-172-31-18-149"
Sep 12 17:12:43.391327 kubelet[3597]: I0912 17:12:43.391122 3597 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/084a685c591c8b58266fb3bc35b2f9d2-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-18-149\" (UID: \"084a685c591c8b58266fb3bc35b2f9d2\") " pod="kube-system/kube-apiserver-ip-172-31-18-149"
Sep 12 17:12:43.913891 sudo[3611]: pam_unix(sudo:session): session closed for user root
Sep 12 17:12:43.933182 kubelet[3597]: I0912 17:12:43.932225 3597 apiserver.go:52] "Watching apiserver"
Sep 12 17:12:43.967600 kubelet[3597]: I0912 17:12:43.967535 3597 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Sep 12 17:12:44.180313 kubelet[3597]: I0912 17:12:44.176453 3597 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-18-149" podStartSLOduration=1.176430201 podStartE2EDuration="1.176430201s" podCreationTimestamp="2025-09-12 17:12:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:12:44.154142793 +0000 UTC m=+1.392495788" watchObservedRunningTime="2025-09-12 17:12:44.176430201 +0000 UTC m=+1.414783172"
Sep 12 17:12:44.201049 kubelet[3597]: I0912 17:12:44.199873 3597 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-18-149" podStartSLOduration=1.199852617 podStartE2EDuration="1.199852617s" podCreationTimestamp="2025-09-12 17:12:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:12:44.198816681 +0000 UTC m=+1.437169760" watchObservedRunningTime="2025-09-12 17:12:44.199852617 +0000 UTC m=+1.438205612"
Sep 12 17:12:44.201049 kubelet[3597]: I0912 17:12:44.200096 3597 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-18-149" podStartSLOduration=1.200087661 podStartE2EDuration="1.200087661s" podCreationTimestamp="2025-09-12 17:12:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:12:44.176897853 +0000 UTC m=+1.415250848" watchObservedRunningTime="2025-09-12 17:12:44.200087661 +0000 UTC m=+1.438440644"
Sep 12 17:12:45.964237 kubelet[3597]: I0912 17:12:45.964160 3597 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Sep 12 17:12:45.967774 containerd[2157]: time="2025-09-12T17:12:45.967650422Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Sep 12 17:12:45.968853 kubelet[3597]: I0912 17:12:45.968587 3597 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Sep 12 17:12:47.019305 kubelet[3597]: I0912 17:12:47.019227 3597 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/766f04e6-8581-490f-84ed-4e041bd31b65-xtables-lock\") pod \"kube-proxy-94xwz\" (UID: \"766f04e6-8581-490f-84ed-4e041bd31b65\") " pod="kube-system/kube-proxy-94xwz"
Sep 12 17:12:47.019922 kubelet[3597]: I0912 17:12:47.019304 3597 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8fx6\" (UniqueName: \"kubernetes.io/projected/766f04e6-8581-490f-84ed-4e041bd31b65-kube-api-access-h8fx6\") pod \"kube-proxy-94xwz\" (UID: \"766f04e6-8581-490f-84ed-4e041bd31b65\") " pod="kube-system/kube-proxy-94xwz"
Sep 12 17:12:47.019922 kubelet[3597]: I0912 17:12:47.019362 3597 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/766f04e6-8581-490f-84ed-4e041bd31b65-kube-proxy\") pod \"kube-proxy-94xwz\" (UID: \"766f04e6-8581-490f-84ed-4e041bd31b65\") " pod="kube-system/kube-proxy-94xwz"
Sep 12 17:12:47.019922 kubelet[3597]: I0912 17:12:47.019403 3597 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/766f04e6-8581-490f-84ed-4e041bd31b65-lib-modules\") pod \"kube-proxy-94xwz\" (UID: \"766f04e6-8581-490f-84ed-4e041bd31b65\") " pod="kube-system/kube-proxy-94xwz"
Sep 12 17:12:47.124022 kubelet[3597]: I0912 17:12:47.120670 3597 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/55f3bbb9-6a08-44ff-a504-27ba8b1d382e-bpf-maps\") pod \"cilium-n29x8\" (UID: \"55f3bbb9-6a08-44ff-a504-27ba8b1d382e\") " pod="kube-system/cilium-n29x8"
Sep 12 17:12:47.124022 kubelet[3597]: I0912 17:12:47.120740 3597 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/55f3bbb9-6a08-44ff-a504-27ba8b1d382e-clustermesh-secrets\") pod \"cilium-n29x8\" (UID: \"55f3bbb9-6a08-44ff-a504-27ba8b1d382e\") " pod="kube-system/cilium-n29x8"
Sep 12 17:12:47.124022 kubelet[3597]: I0912 17:12:47.120782 3597 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/55f3bbb9-6a08-44ff-a504-27ba8b1d382e-xtables-lock\") pod \"cilium-n29x8\" (UID: \"55f3bbb9-6a08-44ff-a504-27ba8b1d382e\") " pod="kube-system/cilium-n29x8"
Sep 12 17:12:47.124022 kubelet[3597]: I0912 17:12:47.120847 3597 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9rqr\" (UniqueName: \"kubernetes.io/projected/55f3bbb9-6a08-44ff-a504-27ba8b1d382e-kube-api-access-z9rqr\") pod \"cilium-n29x8\" (UID: \"55f3bbb9-6a08-44ff-a504-27ba8b1d382e\") " pod="kube-system/cilium-n29x8"
Sep 12 17:12:47.124022 kubelet[3597]: I0912 17:12:47.120912 3597 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/55f3bbb9-6a08-44ff-a504-27ba8b1d382e-cilium-config-path\") pod \"cilium-n29x8\" (UID: \"55f3bbb9-6a08-44ff-a504-27ba8b1d382e\") " pod="kube-system/cilium-n29x8"
Sep 12 17:12:47.124386 kubelet[3597]: I0912 17:12:47.120989 3597 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/55f3bbb9-6a08-44ff-a504-27ba8b1d382e-host-proc-sys-kernel\") pod \"cilium-n29x8\" (UID: \"55f3bbb9-6a08-44ff-a504-27ba8b1d382e\") " pod="kube-system/cilium-n29x8"
Sep 12 17:12:47.124386 kubelet[3597]: I0912 17:12:47.121032 3597 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/55f3bbb9-6a08-44ff-a504-27ba8b1d382e-cilium-run\") pod \"cilium-n29x8\" (UID: \"55f3bbb9-6a08-44ff-a504-27ba8b1d382e\") " pod="kube-system/cilium-n29x8"
Sep 12 17:12:47.124386 kubelet[3597]: I0912 17:12:47.121067 3597 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/55f3bbb9-6a08-44ff-a504-27ba8b1d382e-hostproc\") pod \"cilium-n29x8\" (UID: \"55f3bbb9-6a08-44ff-a504-27ba8b1d382e\") " pod="kube-system/cilium-n29x8"
Sep 12 17:12:47.124386 kubelet[3597]: I0912 17:12:47.121106 3597 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/55f3bbb9-6a08-44ff-a504-27ba8b1d382e-hubble-tls\") pod \"cilium-n29x8\" (UID: \"55f3bbb9-6a08-44ff-a504-27ba8b1d382e\") " pod="kube-system/cilium-n29x8"
Sep 12 17:12:47.124386 kubelet[3597]: I0912 17:12:47.121146 3597 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/55f3bbb9-6a08-44ff-a504-27ba8b1d382e-host-proc-sys-net\") pod \"cilium-n29x8\" (UID: \"55f3bbb9-6a08-44ff-a504-27ba8b1d382e\") " pod="kube-system/cilium-n29x8"
Sep 12 17:12:47.124386 kubelet[3597]: I0912 17:12:47.121185 3597 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/55f3bbb9-6a08-44ff-a504-27ba8b1d382e-cilium-cgroup\") pod \"cilium-n29x8\" (UID: \"55f3bbb9-6a08-44ff-a504-27ba8b1d382e\") " pod="kube-system/cilium-n29x8"
Sep 12 17:12:47.124722 kubelet[3597]: I0912 17:12:47.121223 3597 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/55f3bbb9-6a08-44ff-a504-27ba8b1d382e-cni-path\") pod \"cilium-n29x8\" (UID: \"55f3bbb9-6a08-44ff-a504-27ba8b1d382e\") " pod="kube-system/cilium-n29x8"
Sep 12 17:12:47.124722 kubelet[3597]: I0912 17:12:47.121319 3597 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/55f3bbb9-6a08-44ff-a504-27ba8b1d382e-etc-cni-netd\") pod \"cilium-n29x8\" (UID: \"55f3bbb9-6a08-44ff-a504-27ba8b1d382e\") " pod="kube-system/cilium-n29x8"
Sep 12 17:12:47.124722 kubelet[3597]: I0912 17:12:47.121355 3597 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/55f3bbb9-6a08-44ff-a504-27ba8b1d382e-lib-modules\") pod \"cilium-n29x8\" (UID: \"55f3bbb9-6a08-44ff-a504-27ba8b1d382e\") " pod="kube-system/cilium-n29x8"
Sep 12 17:12:47.222599 kubelet[3597]: I0912 17:12:47.222525 3597 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kblt\" (UniqueName:
\"kubernetes.io/projected/ae8fc710-083c-4afe-80d5-9141f3c31bc0-kube-api-access-2kblt\") pod \"cilium-operator-5d85765b45-wntvh\" (UID: \"ae8fc710-083c-4afe-80d5-9141f3c31bc0\") " pod="kube-system/cilium-operator-5d85765b45-wntvh" Sep 12 17:12:47.222762 kubelet[3597]: I0912 17:12:47.222740 3597 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ae8fc710-083c-4afe-80d5-9141f3c31bc0-cilium-config-path\") pod \"cilium-operator-5d85765b45-wntvh\" (UID: \"ae8fc710-083c-4afe-80d5-9141f3c31bc0\") " pod="kube-system/cilium-operator-5d85765b45-wntvh" Sep 12 17:12:47.287624 containerd[2157]: time="2025-09-12T17:12:47.287466529Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-94xwz,Uid:766f04e6-8581-490f-84ed-4e041bd31b65,Namespace:kube-system,Attempt:0,}" Sep 12 17:12:47.332757 containerd[2157]: time="2025-09-12T17:12:47.330568633Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-n29x8,Uid:55f3bbb9-6a08-44ff-a504-27ba8b1d382e,Namespace:kube-system,Attempt:0,}" Sep 12 17:12:47.357606 containerd[2157]: time="2025-09-12T17:12:47.356187985Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:12:47.357606 containerd[2157]: time="2025-09-12T17:12:47.356305405Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:12:47.357606 containerd[2157]: time="2025-09-12T17:12:47.356343805Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:12:47.357606 containerd[2157]: time="2025-09-12T17:12:47.356545573Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:12:47.422556 containerd[2157]: time="2025-09-12T17:12:47.422063365Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:12:47.422889 containerd[2157]: time="2025-09-12T17:12:47.422683957Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:12:47.424779 containerd[2157]: time="2025-09-12T17:12:47.424352929Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:12:47.425860 containerd[2157]: time="2025-09-12T17:12:47.425627521Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:12:47.434794 containerd[2157]: time="2025-09-12T17:12:47.433585765Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-wntvh,Uid:ae8fc710-083c-4afe-80d5-9141f3c31bc0,Namespace:kube-system,Attempt:0,}" Sep 12 17:12:47.514400 containerd[2157]: time="2025-09-12T17:12:47.514101938Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-94xwz,Uid:766f04e6-8581-490f-84ed-4e041bd31b65,Namespace:kube-system,Attempt:0,} returns sandbox id \"3aa9c35962530bd76b693c5f2991cf604f2514d8f97d95c58371febf5557b6c8\"" Sep 12 17:12:47.528634 containerd[2157]: time="2025-09-12T17:12:47.528564566Z" level=info msg="CreateContainer within sandbox \"3aa9c35962530bd76b693c5f2991cf604f2514d8f97d95c58371febf5557b6c8\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 12 17:12:47.564059 containerd[2157]: time="2025-09-12T17:12:47.556760882Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:12:47.564059 containerd[2157]: time="2025-09-12T17:12:47.557080862Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:12:47.564059 containerd[2157]: time="2025-09-12T17:12:47.557148374Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:12:47.564059 containerd[2157]: time="2025-09-12T17:12:47.559092674Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:12:47.565615 containerd[2157]: time="2025-09-12T17:12:47.565524278Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-n29x8,Uid:55f3bbb9-6a08-44ff-a504-27ba8b1d382e,Namespace:kube-system,Attempt:0,} returns sandbox id \"6bdde67de39f306712a3cde1b5c08b035e80f5b5018f5eb9ecc8a3280135b83b\"" Sep 12 17:12:47.575507 containerd[2157]: time="2025-09-12T17:12:47.574258226Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 12 17:12:47.591840 containerd[2157]: time="2025-09-12T17:12:47.591545078Z" level=info msg="CreateContainer within sandbox \"3aa9c35962530bd76b693c5f2991cf604f2514d8f97d95c58371febf5557b6c8\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"220851c66d8789f9caffb328655fc38907c82d798ab894118cfa250b8da1b89b\"" Sep 12 17:12:47.597425 containerd[2157]: time="2025-09-12T17:12:47.596321246Z" level=info msg="StartContainer for \"220851c66d8789f9caffb328655fc38907c82d798ab894118cfa250b8da1b89b\"" Sep 12 17:12:47.727206 containerd[2157]: time="2025-09-12T17:12:47.726810939Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-wntvh,Uid:ae8fc710-083c-4afe-80d5-9141f3c31bc0,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"db6ccc7063cf2f3bc7b890ac98e51266df95de0dea0f82ec978d6bcaf457516e\"" Sep 12 17:12:47.772930 containerd[2157]: time="2025-09-12T17:12:47.770355675Z" level=info msg="StartContainer for \"220851c66d8789f9caffb328655fc38907c82d798ab894118cfa250b8da1b89b\" returns successfully" Sep 12 17:12:49.191438 kubelet[3597]: I0912 17:12:49.189729 3597 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-94xwz" podStartSLOduration=3.189608054 podStartE2EDuration="3.189608054s" podCreationTimestamp="2025-09-12 17:12:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:12:48.155817433 +0000 UTC m=+5.394170440" watchObservedRunningTime="2025-09-12 17:12:49.189608054 +0000 UTC m=+6.427961037" Sep 12 17:12:52.642272 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2004369645.mount: Deactivated successfully. Sep 12 17:12:55.376276 containerd[2157]: time="2025-09-12T17:12:55.375723957Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:12:55.378846 containerd[2157]: time="2025-09-12T17:12:55.378338253Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Sep 12 17:12:55.381941 containerd[2157]: time="2025-09-12T17:12:55.381820953Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:12:55.388429 containerd[2157]: time="2025-09-12T17:12:55.388345593Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id 
\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 7.811760327s" Sep 12 17:12:55.388429 containerd[2157]: time="2025-09-12T17:12:55.388420725Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Sep 12 17:12:55.390894 containerd[2157]: time="2025-09-12T17:12:55.390740613Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 12 17:12:55.396572 containerd[2157]: time="2025-09-12T17:12:55.396484365Z" level=info msg="CreateContainer within sandbox \"6bdde67de39f306712a3cde1b5c08b035e80f5b5018f5eb9ecc8a3280135b83b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 12 17:12:55.425581 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2048988785.mount: Deactivated successfully. 
Sep 12 17:12:55.426672 containerd[2157]: time="2025-09-12T17:12:55.426592713Z" level=info msg="CreateContainer within sandbox \"6bdde67de39f306712a3cde1b5c08b035e80f5b5018f5eb9ecc8a3280135b83b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b83239093be5b2c474d1c396f1a96d72f9f0207d976d6f18be2c60934e61b337\"" Sep 12 17:12:55.432477 containerd[2157]: time="2025-09-12T17:12:55.430555629Z" level=info msg="StartContainer for \"b83239093be5b2c474d1c396f1a96d72f9f0207d976d6f18be2c60934e61b337\"" Sep 12 17:12:55.550213 containerd[2157]: time="2025-09-12T17:12:55.550144786Z" level=info msg="StartContainer for \"b83239093be5b2c474d1c396f1a96d72f9f0207d976d6f18be2c60934e61b337\" returns successfully" Sep 12 17:12:56.415342 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b83239093be5b2c474d1c396f1a96d72f9f0207d976d6f18be2c60934e61b337-rootfs.mount: Deactivated successfully. Sep 12 17:12:56.843913 containerd[2157]: time="2025-09-12T17:12:56.843494496Z" level=info msg="shim disconnected" id=b83239093be5b2c474d1c396f1a96d72f9f0207d976d6f18be2c60934e61b337 namespace=k8s.io Sep 12 17:12:56.843913 containerd[2157]: time="2025-09-12T17:12:56.843568788Z" level=warning msg="cleaning up after shim disconnected" id=b83239093be5b2c474d1c396f1a96d72f9f0207d976d6f18be2c60934e61b337 namespace=k8s.io Sep 12 17:12:56.843913 containerd[2157]: time="2025-09-12T17:12:56.843588924Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:12:57.215554 containerd[2157]: time="2025-09-12T17:12:57.215454730Z" level=info msg="CreateContainer within sandbox \"6bdde67de39f306712a3cde1b5c08b035e80f5b5018f5eb9ecc8a3280135b83b\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 12 17:12:57.302712 containerd[2157]: time="2025-09-12T17:12:57.302544310Z" level=info msg="CreateContainer within sandbox \"6bdde67de39f306712a3cde1b5c08b035e80f5b5018f5eb9ecc8a3280135b83b\" for 
&ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"440b8fd00209707fa8bb365bf8afc461ee39bbb93cbdfb58075e027c3d88202e\"" Sep 12 17:12:57.306860 containerd[2157]: time="2025-09-12T17:12:57.305400922Z" level=info msg="StartContainer for \"440b8fd00209707fa8bb365bf8afc461ee39bbb93cbdfb58075e027c3d88202e\"" Sep 12 17:12:57.333039 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2923353231.mount: Deactivated successfully. Sep 12 17:12:57.463575 containerd[2157]: time="2025-09-12T17:12:57.463230407Z" level=info msg="StartContainer for \"440b8fd00209707fa8bb365bf8afc461ee39bbb93cbdfb58075e027c3d88202e\" returns successfully" Sep 12 17:12:57.484836 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 12 17:12:57.486882 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 12 17:12:57.487077 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Sep 12 17:12:57.497908 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 12 17:12:57.542125 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 12 17:12:57.579293 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-440b8fd00209707fa8bb365bf8afc461ee39bbb93cbdfb58075e027c3d88202e-rootfs.mount: Deactivated successfully. 
Sep 12 17:12:57.596743 containerd[2157]: time="2025-09-12T17:12:57.596597592Z" level=info msg="shim disconnected" id=440b8fd00209707fa8bb365bf8afc461ee39bbb93cbdfb58075e027c3d88202e namespace=k8s.io Sep 12 17:12:57.596743 containerd[2157]: time="2025-09-12T17:12:57.596680176Z" level=warning msg="cleaning up after shim disconnected" id=440b8fd00209707fa8bb365bf8afc461ee39bbb93cbdfb58075e027c3d88202e namespace=k8s.io Sep 12 17:12:57.596743 containerd[2157]: time="2025-09-12T17:12:57.596702100Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:12:58.140003 containerd[2157]: time="2025-09-12T17:12:58.138680879Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:12:58.141003 containerd[2157]: time="2025-09-12T17:12:58.140921147Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Sep 12 17:12:58.143499 containerd[2157]: time="2025-09-12T17:12:58.143440115Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:12:58.147325 containerd[2157]: time="2025-09-12T17:12:58.147242879Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 2.756212826s" Sep 12 17:12:58.147481 containerd[2157]: time="2025-09-12T17:12:58.147331607Z" level=info msg="PullImage 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Sep 12 17:12:58.153557 containerd[2157]: time="2025-09-12T17:12:58.153498851Z" level=info msg="CreateContainer within sandbox \"db6ccc7063cf2f3bc7b890ac98e51266df95de0dea0f82ec978d6bcaf457516e\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 12 17:12:58.177064 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount398356513.mount: Deactivated successfully. Sep 12 17:12:58.180332 containerd[2157]: time="2025-09-12T17:12:58.180274823Z" level=info msg="CreateContainer within sandbox \"db6ccc7063cf2f3bc7b890ac98e51266df95de0dea0f82ec978d6bcaf457516e\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"ed493dfb9229d97b6054a1901239b6703099e88e02a1a348d36cf73cd5fd7a9f\"" Sep 12 17:12:58.181638 containerd[2157]: time="2025-09-12T17:12:58.181567967Z" level=info msg="StartContainer for \"ed493dfb9229d97b6054a1901239b6703099e88e02a1a348d36cf73cd5fd7a9f\"" Sep 12 17:12:58.237292 containerd[2157]: time="2025-09-12T17:12:58.237222647Z" level=info msg="CreateContainer within sandbox \"6bdde67de39f306712a3cde1b5c08b035e80f5b5018f5eb9ecc8a3280135b83b\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 12 17:12:58.287354 containerd[2157]: time="2025-09-12T17:12:58.287256551Z" level=info msg="CreateContainer within sandbox \"6bdde67de39f306712a3cde1b5c08b035e80f5b5018f5eb9ecc8a3280135b83b\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"844c84a4d9fe20f45c675b323f8988805a7c8752f136e7333a375201f7c6cf9c\"" Sep 12 17:12:58.289737 containerd[2157]: time="2025-09-12T17:12:58.288703247Z" level=info msg="StartContainer for \"844c84a4d9fe20f45c675b323f8988805a7c8752f136e7333a375201f7c6cf9c\"" Sep 12 17:12:58.337408 containerd[2157]: time="2025-09-12T17:12:58.337332336Z" 
level=info msg="StartContainer for \"ed493dfb9229d97b6054a1901239b6703099e88e02a1a348d36cf73cd5fd7a9f\" returns successfully" Sep 12 17:12:58.446197 containerd[2157]: time="2025-09-12T17:12:58.446025660Z" level=info msg="StartContainer for \"844c84a4d9fe20f45c675b323f8988805a7c8752f136e7333a375201f7c6cf9c\" returns successfully" Sep 12 17:12:58.519180 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-844c84a4d9fe20f45c675b323f8988805a7c8752f136e7333a375201f7c6cf9c-rootfs.mount: Deactivated successfully. Sep 12 17:12:58.614857 containerd[2157]: time="2025-09-12T17:12:58.614745145Z" level=info msg="shim disconnected" id=844c84a4d9fe20f45c675b323f8988805a7c8752f136e7333a375201f7c6cf9c namespace=k8s.io Sep 12 17:12:58.614857 containerd[2157]: time="2025-09-12T17:12:58.614825005Z" level=warning msg="cleaning up after shim disconnected" id=844c84a4d9fe20f45c675b323f8988805a7c8752f136e7333a375201f7c6cf9c namespace=k8s.io Sep 12 17:12:58.614857 containerd[2157]: time="2025-09-12T17:12:58.614847205Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:12:59.268166 containerd[2157]: time="2025-09-12T17:12:59.266269116Z" level=info msg="CreateContainer within sandbox \"6bdde67de39f306712a3cde1b5c08b035e80f5b5018f5eb9ecc8a3280135b83b\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 12 17:12:59.301646 containerd[2157]: time="2025-09-12T17:12:59.301394436Z" level=info msg="CreateContainer within sandbox \"6bdde67de39f306712a3cde1b5c08b035e80f5b5018f5eb9ecc8a3280135b83b\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"a428fb8c84921ce50a99da29602f2ae9b16889e4ac9d96a86f430ede1884f801\"" Sep 12 17:12:59.302470 containerd[2157]: time="2025-09-12T17:12:59.302426748Z" level=info msg="StartContainer for \"a428fb8c84921ce50a99da29602f2ae9b16889e4ac9d96a86f430ede1884f801\"" Sep 12 17:12:59.436864 kubelet[3597]: I0912 17:12:59.436101 3597 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="kube-system/cilium-operator-5d85765b45-wntvh" podStartSLOduration=2.019945465 podStartE2EDuration="12.436051021s" podCreationTimestamp="2025-09-12 17:12:47 +0000 UTC" firstStartedPulling="2025-09-12 17:12:47.732315483 +0000 UTC m=+4.970668466" lastFinishedPulling="2025-09-12 17:12:58.148421051 +0000 UTC m=+15.386774022" observedRunningTime="2025-09-12 17:12:59.345321625 +0000 UTC m=+16.583674632" watchObservedRunningTime="2025-09-12 17:12:59.436051021 +0000 UTC m=+16.674404136" Sep 12 17:12:59.619058 containerd[2157]: time="2025-09-12T17:12:59.618846494Z" level=info msg="StartContainer for \"a428fb8c84921ce50a99da29602f2ae9b16889e4ac9d96a86f430ede1884f801\" returns successfully" Sep 12 17:12:59.702244 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a428fb8c84921ce50a99da29602f2ae9b16889e4ac9d96a86f430ede1884f801-rootfs.mount: Deactivated successfully. Sep 12 17:12:59.710732 containerd[2157]: time="2025-09-12T17:12:59.710607686Z" level=info msg="shim disconnected" id=a428fb8c84921ce50a99da29602f2ae9b16889e4ac9d96a86f430ede1884f801 namespace=k8s.io Sep 12 17:12:59.710732 containerd[2157]: time="2025-09-12T17:12:59.710722682Z" level=warning msg="cleaning up after shim disconnected" id=a428fb8c84921ce50a99da29602f2ae9b16889e4ac9d96a86f430ede1884f801 namespace=k8s.io Sep 12 17:12:59.710732 containerd[2157]: time="2025-09-12T17:12:59.710747654Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:13:00.270260 containerd[2157]: time="2025-09-12T17:13:00.270191821Z" level=info msg="CreateContainer within sandbox \"6bdde67de39f306712a3cde1b5c08b035e80f5b5018f5eb9ecc8a3280135b83b\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 12 17:13:00.310566 containerd[2157]: time="2025-09-12T17:13:00.310203181Z" level=info msg="CreateContainer within sandbox \"6bdde67de39f306712a3cde1b5c08b035e80f5b5018f5eb9ecc8a3280135b83b\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id 
\"f90adbef03be4ff04401791e310400e82cfeccfb5f7a075a8b8b7b2bdaf424d7\"" Sep 12 17:13:00.311435 containerd[2157]: time="2025-09-12T17:13:00.311232253Z" level=info msg="StartContainer for \"f90adbef03be4ff04401791e310400e82cfeccfb5f7a075a8b8b7b2bdaf424d7\"" Sep 12 17:13:00.458725 containerd[2157]: time="2025-09-12T17:13:00.458508914Z" level=info msg="StartContainer for \"f90adbef03be4ff04401791e310400e82cfeccfb5f7a075a8b8b7b2bdaf424d7\" returns successfully" Sep 12 17:13:00.773650 kubelet[3597]: I0912 17:13:00.773599 3597 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Sep 12 17:13:00.931922 kubelet[3597]: I0912 17:13:00.931795 3597 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/49ae6f41-dad9-4729-9295-108d09c0b438-config-volume\") pod \"coredns-7c65d6cfc9-vp4pp\" (UID: \"49ae6f41-dad9-4729-9295-108d09c0b438\") " pod="kube-system/coredns-7c65d6cfc9-vp4pp" Sep 12 17:13:00.931922 kubelet[3597]: I0912 17:13:00.931874 3597 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4501662a-11ff-4afd-be74-fc997a4bfb28-config-volume\") pod \"coredns-7c65d6cfc9-xjlbk\" (UID: \"4501662a-11ff-4afd-be74-fc997a4bfb28\") " pod="kube-system/coredns-7c65d6cfc9-xjlbk" Sep 12 17:13:00.931922 kubelet[3597]: I0912 17:13:00.931917 3597 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkz2v\" (UniqueName: \"kubernetes.io/projected/4501662a-11ff-4afd-be74-fc997a4bfb28-kube-api-access-fkz2v\") pod \"coredns-7c65d6cfc9-xjlbk\" (UID: \"4501662a-11ff-4afd-be74-fc997a4bfb28\") " pod="kube-system/coredns-7c65d6cfc9-xjlbk" Sep 12 17:13:00.932209 kubelet[3597]: I0912 17:13:00.931954 3597 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6f6f5\" 
(UniqueName: \"kubernetes.io/projected/49ae6f41-dad9-4729-9295-108d09c0b438-kube-api-access-6f6f5\") pod \"coredns-7c65d6cfc9-vp4pp\" (UID: \"49ae6f41-dad9-4729-9295-108d09c0b438\") " pod="kube-system/coredns-7c65d6cfc9-vp4pp" Sep 12 17:13:01.148848 containerd[2157]: time="2025-09-12T17:13:01.148674482Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-vp4pp,Uid:49ae6f41-dad9-4729-9295-108d09c0b438,Namespace:kube-system,Attempt:0,}" Sep 12 17:13:01.152058 containerd[2157]: time="2025-09-12T17:13:01.151991426Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-xjlbk,Uid:4501662a-11ff-4afd-be74-fc997a4bfb28,Namespace:kube-system,Attempt:0,}" Sep 12 17:13:03.647505 systemd-networkd[1702]: cilium_host: Link UP Sep 12 17:13:03.647757 (udev-worker)[4375]: Network interface NamePolicy= disabled on kernel command line. Sep 12 17:13:03.647862 systemd-networkd[1702]: cilium_net: Link UP Sep 12 17:13:03.649805 systemd-networkd[1702]: cilium_net: Gained carrier Sep 12 17:13:03.651898 systemd-networkd[1702]: cilium_host: Gained carrier Sep 12 17:13:03.653758 (udev-worker)[4410]: Network interface NamePolicy= disabled on kernel command line. Sep 12 17:13:03.874764 (udev-worker)[4423]: Network interface NamePolicy= disabled on kernel command line. Sep 12 17:13:03.884142 systemd-networkd[1702]: cilium_vxlan: Link UP Sep 12 17:13:03.884673 systemd-networkd[1702]: cilium_vxlan: Gained carrier Sep 12 17:13:03.900296 systemd[1]: run-containerd-runc-k8s.io-f90adbef03be4ff04401791e310400e82cfeccfb5f7a075a8b8b7b2bdaf424d7-runc.A6pqIg.mount: Deactivated successfully. 
Sep 12 17:13:03.958226 systemd-networkd[1702]: cilium_net: Gained IPv6LL Sep 12 17:13:04.049041 kubelet[3597]: E0912 17:13:04.047243 3597 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:53936->127.0.0.1:36839: write tcp 127.0.0.1:53936->127.0.0.1:36839: write: broken pipe Sep 12 17:13:04.270208 systemd-networkd[1702]: cilium_host: Gained IPv6LL Sep 12 17:13:04.481027 kernel: NET: Registered PF_ALG protocol family Sep 12 17:13:05.486326 systemd-networkd[1702]: cilium_vxlan: Gained IPv6LL Sep 12 17:13:05.825899 systemd-networkd[1702]: lxc_health: Link UP Sep 12 17:13:05.834474 systemd-networkd[1702]: lxc_health: Gained carrier Sep 12 17:13:06.267310 kubelet[3597]: E0912 17:13:06.267260 3597 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:53946->127.0.0.1:36839: write tcp 127.0.0.1:53946->127.0.0.1:36839: write: broken pipe Sep 12 17:13:06.337606 systemd-networkd[1702]: lxc02ec8715aac6: Link UP Sep 12 17:13:06.353173 kernel: eth0: renamed from tmp98bdf Sep 12 17:13:06.357826 systemd-networkd[1702]: lxc02ec8715aac6: Gained carrier Sep 12 17:13:06.381033 systemd-networkd[1702]: lxcfd6459989abe: Link UP Sep 12 17:13:06.389152 kernel: eth0: renamed from tmp35f14 Sep 12 17:13:06.392957 (udev-worker)[4421]: Network interface NamePolicy= disabled on kernel command line. 
Sep 12 17:13:06.395129 systemd-networkd[1702]: lxcfd6459989abe: Gained carrier Sep 12 17:13:07.278241 systemd-networkd[1702]: lxc_health: Gained IPv6LL Sep 12 17:13:07.369017 kubelet[3597]: I0912 17:13:07.367421 3597 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-n29x8" podStartSLOduration=13.548202649 podStartE2EDuration="21.36739028s" podCreationTimestamp="2025-09-12 17:12:46 +0000 UTC" firstStartedPulling="2025-09-12 17:12:47.570577082 +0000 UTC m=+4.808930065" lastFinishedPulling="2025-09-12 17:12:55.389764713 +0000 UTC m=+12.628117696" observedRunningTime="2025-09-12 17:13:01.352115091 +0000 UTC m=+18.590468170" watchObservedRunningTime="2025-09-12 17:13:07.36739028 +0000 UTC m=+24.605743275" Sep 12 17:13:07.662610 systemd-networkd[1702]: lxc02ec8715aac6: Gained IPv6LL Sep 12 17:13:07.792152 systemd-networkd[1702]: lxcfd6459989abe: Gained IPv6LL Sep 12 17:13:09.945589 ntpd[2103]: Listen normally on 6 cilium_host 192.168.0.35:123 Sep 12 17:13:09.946885 ntpd[2103]: 12 Sep 17:13:09 ntpd[2103]: Listen normally on 6 cilium_host 192.168.0.35:123 Sep 12 17:13:09.946885 ntpd[2103]: 12 Sep 17:13:09 ntpd[2103]: Listen normally on 7 cilium_net [fe80::9813:c7ff:fe8a:a34c%4]:123 Sep 12 17:13:09.946885 ntpd[2103]: 12 Sep 17:13:09 ntpd[2103]: Listen normally on 8 cilium_host [fe80::c0e:adff:fe57:b371%5]:123 Sep 12 17:13:09.946885 ntpd[2103]: 12 Sep 17:13:09 ntpd[2103]: Listen normally on 9 cilium_vxlan [fe80::44e3:9fff:fefd:aff0%6]:123 Sep 12 17:13:09.946885 ntpd[2103]: 12 Sep 17:13:09 ntpd[2103]: Listen normally on 10 lxc_health [fe80::10bb:6ff:fe1e:fae9%8]:123 Sep 12 17:13:09.946885 ntpd[2103]: 12 Sep 17:13:09 ntpd[2103]: Listen normally on 11 lxc02ec8715aac6 [fe80::7831:a9ff:fe25:239c%10]:123 Sep 12 17:13:09.946885 ntpd[2103]: 12 Sep 17:13:09 ntpd[2103]: Listen normally on 12 lxcfd6459989abe [fe80::843d:e2ff:fef0:ceda%12]:123 Sep 12 17:13:09.945721 ntpd[2103]: Listen normally on 7 cilium_net [fe80::9813:c7ff:fe8a:a34c%4]:123 
Sep 12 17:13:09.945802 ntpd[2103]: Listen normally on 8 cilium_host [fe80::c0e:adff:fe57:b371%5]:123
Sep 12 17:13:09.945870 ntpd[2103]: Listen normally on 9 cilium_vxlan [fe80::44e3:9fff:fefd:aff0%6]:123
Sep 12 17:13:09.945946 ntpd[2103]: Listen normally on 10 lxc_health [fe80::10bb:6ff:fe1e:fae9%8]:123
Sep 12 17:13:09.946065 ntpd[2103]: Listen normally on 11 lxc02ec8715aac6 [fe80::7831:a9ff:fe25:239c%10]:123
Sep 12 17:13:09.946137 ntpd[2103]: Listen normally on 12 lxcfd6459989abe [fe80::843d:e2ff:fef0:ceda%12]:123
Sep 12 17:13:10.821346 kubelet[3597]: E0912 17:13:10.821164 3597 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:58162->127.0.0.1:36839: write tcp 127.0.0.1:58162->127.0.0.1:36839: write: broken pipe
Sep 12 17:13:11.504663 kubelet[3597]: I0912 17:13:11.504597 3597 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 12 17:13:14.082548 sudo[2504]: pam_unix(sudo:session): session closed for user root
Sep 12 17:13:14.106833 sshd[2500]: pam_unix(sshd:session): session closed for user core
Sep 12 17:13:14.120765 systemd[1]: sshd@6-172.31.18.149:22-147.75.109.163:58132.service: Deactivated successfully.
Sep 12 17:13:14.121071 systemd-logind[2128]: Session 7 logged out. Waiting for processes to exit.
Sep 12 17:13:14.133227 systemd[1]: session-7.scope: Deactivated successfully.
Sep 12 17:13:14.137854 systemd-logind[2128]: Removed session 7.
Sep 12 17:13:15.371021 containerd[2157]: time="2025-09-12T17:13:15.369373888Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 12 17:13:15.371021 containerd[2157]: time="2025-09-12T17:13:15.369572644Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 12 17:13:15.371021 containerd[2157]: time="2025-09-12T17:13:15.369640348Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:13:15.371021 containerd[2157]: time="2025-09-12T17:13:15.369950752Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:13:15.391037 containerd[2157]: time="2025-09-12T17:13:15.385686964Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 12 17:13:15.391037 containerd[2157]: time="2025-09-12T17:13:15.386398024Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 12 17:13:15.391037 containerd[2157]: time="2025-09-12T17:13:15.386943928Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:13:15.391307 containerd[2157]: time="2025-09-12T17:13:15.390694096Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:13:15.605066 containerd[2157]: time="2025-09-12T17:13:15.604197809Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-vp4pp,Uid:49ae6f41-dad9-4729-9295-108d09c0b438,Namespace:kube-system,Attempt:0,} returns sandbox id \"35f1434a4a2ebb9476860f7360fa231c683a9bacdca1004d5e1d8dd91ea8b568\""
Sep 12 17:13:15.626910 containerd[2157]: time="2025-09-12T17:13:15.626124377Z" level=info msg="CreateContainer within sandbox \"35f1434a4a2ebb9476860f7360fa231c683a9bacdca1004d5e1d8dd91ea8b568\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 12 17:13:15.672396 containerd[2157]: time="2025-09-12T17:13:15.672330966Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-xjlbk,Uid:4501662a-11ff-4afd-be74-fc997a4bfb28,Namespace:kube-system,Attempt:0,} returns sandbox id \"98bdfdd35c4c75e79561c2c6332cb5789149c17261b0f85ab5e6e42c107fe520\""
Sep 12 17:13:15.687529 containerd[2157]: time="2025-09-12T17:13:15.687443394Z" level=info msg="CreateContainer within sandbox \"98bdfdd35c4c75e79561c2c6332cb5789149c17261b0f85ab5e6e42c107fe520\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 12 17:13:15.691332 containerd[2157]: time="2025-09-12T17:13:15.691034478Z" level=info msg="CreateContainer within sandbox \"35f1434a4a2ebb9476860f7360fa231c683a9bacdca1004d5e1d8dd91ea8b568\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"76f61dcc96ef149f5b9e91d340affac44e90f368a2179d42a018bb0c9a9abf57\""
Sep 12 17:13:15.696015 containerd[2157]: time="2025-09-12T17:13:15.694700946Z" level=info msg="StartContainer for \"76f61dcc96ef149f5b9e91d340affac44e90f368a2179d42a018bb0c9a9abf57\""
Sep 12 17:13:15.733678 containerd[2157]: time="2025-09-12T17:13:15.733610862Z" level=info msg="CreateContainer within sandbox \"98bdfdd35c4c75e79561c2c6332cb5789149c17261b0f85ab5e6e42c107fe520\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6d399c4713412934638a7dbd2840b95ff0ccd2cc1f0fb052d6e4ba893b819a56\""
Sep 12 17:13:15.739713 containerd[2157]: time="2025-09-12T17:13:15.739528722Z" level=info msg="StartContainer for \"6d399c4713412934638a7dbd2840b95ff0ccd2cc1f0fb052d6e4ba893b819a56\""
Sep 12 17:13:15.877712 containerd[2157]: time="2025-09-12T17:13:15.877516987Z" level=info msg="StartContainer for \"76f61dcc96ef149f5b9e91d340affac44e90f368a2179d42a018bb0c9a9abf57\" returns successfully"
Sep 12 17:13:15.915869 containerd[2157]: time="2025-09-12T17:13:15.915800107Z" level=info msg="StartContainer for \"6d399c4713412934638a7dbd2840b95ff0ccd2cc1f0fb052d6e4ba893b819a56\" returns successfully"
Sep 12 17:13:16.427869 kubelet[3597]: I0912 17:13:16.427522 3597 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-xjlbk" podStartSLOduration=29.427497461 podStartE2EDuration="29.427497461s" podCreationTimestamp="2025-09-12 17:12:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:13:16.423525725 +0000 UTC m=+33.661878720" watchObservedRunningTime="2025-09-12 17:13:16.427497461 +0000 UTC m=+33.665850444"
Sep 12 17:13:16.427869 kubelet[3597]: I0912 17:13:16.427697 3597 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-vp4pp" podStartSLOduration=29.427686041 podStartE2EDuration="29.427686041s" podCreationTimestamp="2025-09-12 17:12:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:13:16.387521117 +0000 UTC m=+33.625874112" watchObservedRunningTime="2025-09-12 17:13:16.427686041 +0000 UTC m=+33.666039048"
Sep 12 17:13:49.475504 systemd[1]: Started sshd@7-172.31.18.149:22-147.75.109.163:33500.service - OpenSSH per-connection server daemon (147.75.109.163:33500).
Sep 12 17:13:49.654253 sshd[5082]: Accepted publickey for core from 147.75.109.163 port 33500 ssh2: RSA SHA256:MtueCMCElgMFpvQGHABlOh1LdmyEE9d8eacHhUBhK34
Sep 12 17:13:49.658607 sshd[5082]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:13:49.668345 systemd-logind[2128]: New session 8 of user core.
Sep 12 17:13:49.677522 systemd[1]: Started session-8.scope - Session 8 of User core.
Sep 12 17:13:49.953360 sshd[5082]: pam_unix(sshd:session): session closed for user core
Sep 12 17:13:49.960677 systemd[1]: sshd@7-172.31.18.149:22-147.75.109.163:33500.service: Deactivated successfully.
Sep 12 17:13:49.961750 systemd-logind[2128]: Session 8 logged out. Waiting for processes to exit.
Sep 12 17:13:49.970757 systemd[1]: session-8.scope: Deactivated successfully.
Sep 12 17:13:49.978243 systemd-logind[2128]: Removed session 8.
Sep 12 17:13:54.991603 systemd[1]: Started sshd@8-172.31.18.149:22-147.75.109.163:36540.service - OpenSSH per-connection server daemon (147.75.109.163:36540).
Sep 12 17:13:55.158344 sshd[5097]: Accepted publickey for core from 147.75.109.163 port 36540 ssh2: RSA SHA256:MtueCMCElgMFpvQGHABlOh1LdmyEE9d8eacHhUBhK34
Sep 12 17:13:55.161095 sshd[5097]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:13:55.170012 systemd-logind[2128]: New session 9 of user core.
Sep 12 17:13:55.178592 systemd[1]: Started session-9.scope - Session 9 of User core.
Sep 12 17:13:55.427902 sshd[5097]: pam_unix(sshd:session): session closed for user core
Sep 12 17:13:55.436291 systemd[1]: sshd@8-172.31.18.149:22-147.75.109.163:36540.service: Deactivated successfully.
Sep 12 17:13:55.443119 systemd[1]: session-9.scope: Deactivated successfully.
Sep 12 17:13:55.446240 systemd-logind[2128]: Session 9 logged out. Waiting for processes to exit.
Sep 12 17:13:55.448308 systemd-logind[2128]: Removed session 9.
Sep 12 17:14:00.460510 systemd[1]: Started sshd@9-172.31.18.149:22-147.75.109.163:43784.service - OpenSSH per-connection server daemon (147.75.109.163:43784).
Sep 12 17:14:00.644141 sshd[5113]: Accepted publickey for core from 147.75.109.163 port 43784 ssh2: RSA SHA256:MtueCMCElgMFpvQGHABlOh1LdmyEE9d8eacHhUBhK34
Sep 12 17:14:00.647544 sshd[5113]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:14:00.658225 systemd-logind[2128]: New session 10 of user core.
Sep 12 17:14:00.669306 systemd[1]: Started session-10.scope - Session 10 of User core.
Sep 12 17:14:00.916371 sshd[5113]: pam_unix(sshd:session): session closed for user core
Sep 12 17:14:00.925402 systemd[1]: sshd@9-172.31.18.149:22-147.75.109.163:43784.service: Deactivated successfully.
Sep 12 17:14:00.932728 systemd[1]: session-10.scope: Deactivated successfully.
Sep 12 17:14:00.934605 systemd-logind[2128]: Session 10 logged out. Waiting for processes to exit.
Sep 12 17:14:00.937283 systemd-logind[2128]: Removed session 10.
Sep 12 17:14:05.947539 systemd[1]: Started sshd@10-172.31.18.149:22-147.75.109.163:43794.service - OpenSSH per-connection server daemon (147.75.109.163:43794).
Sep 12 17:14:06.129781 sshd[5127]: Accepted publickey for core from 147.75.109.163 port 43794 ssh2: RSA SHA256:MtueCMCElgMFpvQGHABlOh1LdmyEE9d8eacHhUBhK34
Sep 12 17:14:06.132519 sshd[5127]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:14:06.141469 systemd-logind[2128]: New session 11 of user core.
Sep 12 17:14:06.152858 systemd[1]: Started session-11.scope - Session 11 of User core.
Sep 12 17:14:06.407007 sshd[5127]: pam_unix(sshd:session): session closed for user core
Sep 12 17:14:06.414302 systemd[1]: sshd@10-172.31.18.149:22-147.75.109.163:43794.service: Deactivated successfully.
Sep 12 17:14:06.422542 systemd[1]: session-11.scope: Deactivated successfully.
Sep 12 17:14:06.424272 systemd-logind[2128]: Session 11 logged out. Waiting for processes to exit.
Sep 12 17:14:06.427077 systemd-logind[2128]: Removed session 11.
Sep 12 17:14:11.436434 systemd[1]: Started sshd@11-172.31.18.149:22-147.75.109.163:33308.service - OpenSSH per-connection server daemon (147.75.109.163:33308).
Sep 12 17:14:11.620889 sshd[5142]: Accepted publickey for core from 147.75.109.163 port 33308 ssh2: RSA SHA256:MtueCMCElgMFpvQGHABlOh1LdmyEE9d8eacHhUBhK34
Sep 12 17:14:11.624558 sshd[5142]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:14:11.633250 systemd-logind[2128]: New session 12 of user core.
Sep 12 17:14:11.645623 systemd[1]: Started session-12.scope - Session 12 of User core.
Sep 12 17:14:11.883194 sshd[5142]: pam_unix(sshd:session): session closed for user core
Sep 12 17:14:11.891471 systemd[1]: sshd@11-172.31.18.149:22-147.75.109.163:33308.service: Deactivated successfully.
Sep 12 17:14:11.899123 systemd[1]: session-12.scope: Deactivated successfully.
Sep 12 17:14:11.901315 systemd-logind[2128]: Session 12 logged out. Waiting for processes to exit.
Sep 12 17:14:11.903327 systemd-logind[2128]: Removed session 12.
Sep 12 17:14:11.914528 systemd[1]: Started sshd@12-172.31.18.149:22-147.75.109.163:33310.service - OpenSSH per-connection server daemon (147.75.109.163:33310).
Sep 12 17:14:12.091694 sshd[5157]: Accepted publickey for core from 147.75.109.163 port 33310 ssh2: RSA SHA256:MtueCMCElgMFpvQGHABlOh1LdmyEE9d8eacHhUBhK34
Sep 12 17:14:12.093530 sshd[5157]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:14:12.101121 systemd-logind[2128]: New session 13 of user core.
Sep 12 17:14:12.107465 systemd[1]: Started session-13.scope - Session 13 of User core.
Sep 12 17:14:12.424667 sshd[5157]: pam_unix(sshd:session): session closed for user core
Sep 12 17:14:12.441926 systemd[1]: sshd@12-172.31.18.149:22-147.75.109.163:33310.service: Deactivated successfully.
Sep 12 17:14:12.456550 systemd[1]: session-13.scope: Deactivated successfully.
Sep 12 17:14:12.465655 systemd-logind[2128]: Session 13 logged out. Waiting for processes to exit.
Sep 12 17:14:12.479469 systemd[1]: Started sshd@13-172.31.18.149:22-147.75.109.163:33322.service - OpenSSH per-connection server daemon (147.75.109.163:33322).
Sep 12 17:14:12.480704 systemd-logind[2128]: Removed session 13.
Sep 12 17:14:12.657878 sshd[5168]: Accepted publickey for core from 147.75.109.163 port 33322 ssh2: RSA SHA256:MtueCMCElgMFpvQGHABlOh1LdmyEE9d8eacHhUBhK34
Sep 12 17:14:12.660562 sshd[5168]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:14:12.669716 systemd-logind[2128]: New session 14 of user core.
Sep 12 17:14:12.679537 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep 12 17:14:12.921370 sshd[5168]: pam_unix(sshd:session): session closed for user core
Sep 12 17:14:12.929627 systemd[1]: sshd@13-172.31.18.149:22-147.75.109.163:33322.service: Deactivated successfully.
Sep 12 17:14:12.935382 systemd[1]: session-14.scope: Deactivated successfully.
Sep 12 17:14:12.937400 systemd-logind[2128]: Session 14 logged out. Waiting for processes to exit.
Sep 12 17:14:12.941107 systemd-logind[2128]: Removed session 14.
Sep 12 17:14:17.959626 systemd[1]: Started sshd@14-172.31.18.149:22-147.75.109.163:33330.service - OpenSSH per-connection server daemon (147.75.109.163:33330).
Sep 12 17:14:18.127040 sshd[5182]: Accepted publickey for core from 147.75.109.163 port 33330 ssh2: RSA SHA256:MtueCMCElgMFpvQGHABlOh1LdmyEE9d8eacHhUBhK34
Sep 12 17:14:18.129627 sshd[5182]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:14:18.137414 systemd-logind[2128]: New session 15 of user core.
Sep 12 17:14:18.148519 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 12 17:14:18.390366 sshd[5182]: pam_unix(sshd:session): session closed for user core
Sep 12 17:14:18.398332 systemd-logind[2128]: Session 15 logged out. Waiting for processes to exit.
Sep 12 17:14:18.399405 systemd[1]: sshd@14-172.31.18.149:22-147.75.109.163:33330.service: Deactivated successfully.
Sep 12 17:14:18.406148 systemd[1]: session-15.scope: Deactivated successfully.
Sep 12 17:14:18.408607 systemd-logind[2128]: Removed session 15.
Sep 12 17:14:23.421446 systemd[1]: Started sshd@15-172.31.18.149:22-147.75.109.163:36138.service - OpenSSH per-connection server daemon (147.75.109.163:36138).
Sep 12 17:14:23.596256 sshd[5198]: Accepted publickey for core from 147.75.109.163 port 36138 ssh2: RSA SHA256:MtueCMCElgMFpvQGHABlOh1LdmyEE9d8eacHhUBhK34
Sep 12 17:14:23.598705 sshd[5198]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:14:23.606601 systemd-logind[2128]: New session 16 of user core.
Sep 12 17:14:23.617483 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 12 17:14:23.893837 sshd[5198]: pam_unix(sshd:session): session closed for user core
Sep 12 17:14:23.902445 systemd[1]: sshd@15-172.31.18.149:22-147.75.109.163:36138.service: Deactivated successfully.
Sep 12 17:14:23.908017 systemd-logind[2128]: Session 16 logged out. Waiting for processes to exit.
Sep 12 17:14:23.909026 systemd[1]: session-16.scope: Deactivated successfully.
Sep 12 17:14:23.912179 systemd-logind[2128]: Removed session 16.
Sep 12 17:14:28.929469 systemd[1]: Started sshd@16-172.31.18.149:22-147.75.109.163:36154.service - OpenSSH per-connection server daemon (147.75.109.163:36154).
Sep 12 17:14:29.103583 sshd[5212]: Accepted publickey for core from 147.75.109.163 port 36154 ssh2: RSA SHA256:MtueCMCElgMFpvQGHABlOh1LdmyEE9d8eacHhUBhK34
Sep 12 17:14:29.106225 sshd[5212]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:14:29.114087 systemd-logind[2128]: New session 17 of user core.
Sep 12 17:14:29.121443 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 12 17:14:29.363840 sshd[5212]: pam_unix(sshd:session): session closed for user core
Sep 12 17:14:29.373347 systemd[1]: sshd@16-172.31.18.149:22-147.75.109.163:36154.service: Deactivated successfully.
Sep 12 17:14:29.375367 systemd-logind[2128]: Session 17 logged out. Waiting for processes to exit.
Sep 12 17:14:29.380741 systemd[1]: session-17.scope: Deactivated successfully.
Sep 12 17:14:29.383069 systemd-logind[2128]: Removed session 17.
Sep 12 17:14:29.394501 systemd[1]: Started sshd@17-172.31.18.149:22-147.75.109.163:36166.service - OpenSSH per-connection server daemon (147.75.109.163:36166).
Sep 12 17:14:29.574635 sshd[5226]: Accepted publickey for core from 147.75.109.163 port 36166 ssh2: RSA SHA256:MtueCMCElgMFpvQGHABlOh1LdmyEE9d8eacHhUBhK34
Sep 12 17:14:29.577442 sshd[5226]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:14:29.585896 systemd-logind[2128]: New session 18 of user core.
Sep 12 17:14:29.592698 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 12 17:14:29.937390 sshd[5226]: pam_unix(sshd:session): session closed for user core
Sep 12 17:14:29.944767 systemd[1]: sshd@17-172.31.18.149:22-147.75.109.163:36166.service: Deactivated successfully.
Sep 12 17:14:29.953177 systemd[1]: session-18.scope: Deactivated successfully.
Sep 12 17:14:29.955602 systemd-logind[2128]: Session 18 logged out. Waiting for processes to exit.
Sep 12 17:14:29.964369 systemd-logind[2128]: Removed session 18.
Sep 12 17:14:29.972517 systemd[1]: Started sshd@18-172.31.18.149:22-147.75.109.163:42562.service - OpenSSH per-connection server daemon (147.75.109.163:42562).
Sep 12 17:14:30.149692 sshd[5238]: Accepted publickey for core from 147.75.109.163 port 42562 ssh2: RSA SHA256:MtueCMCElgMFpvQGHABlOh1LdmyEE9d8eacHhUBhK34
Sep 12 17:14:30.152485 sshd[5238]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:14:30.161711 systemd-logind[2128]: New session 19 of user core.
Sep 12 17:14:30.171572 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 12 17:14:32.667618 sshd[5238]: pam_unix(sshd:session): session closed for user core
Sep 12 17:14:32.688698 systemd[1]: sshd@18-172.31.18.149:22-147.75.109.163:42562.service: Deactivated successfully.
Sep 12 17:14:32.689937 systemd-logind[2128]: Session 19 logged out. Waiting for processes to exit.
Sep 12 17:14:32.710126 systemd[1]: session-19.scope: Deactivated successfully.
Sep 12 17:14:32.721531 systemd[1]: Started sshd@19-172.31.18.149:22-147.75.109.163:42570.service - OpenSSH per-connection server daemon (147.75.109.163:42570).
Sep 12 17:14:32.727843 systemd-logind[2128]: Removed session 19.
Sep 12 17:14:32.903917 sshd[5257]: Accepted publickey for core from 147.75.109.163 port 42570 ssh2: RSA SHA256:MtueCMCElgMFpvQGHABlOh1LdmyEE9d8eacHhUBhK34
Sep 12 17:14:32.906732 sshd[5257]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:14:32.916722 systemd-logind[2128]: New session 20 of user core.
Sep 12 17:14:32.920984 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 12 17:14:33.432404 sshd[5257]: pam_unix(sshd:session): session closed for user core
Sep 12 17:14:33.448423 systemd-logind[2128]: Session 20 logged out. Waiting for processes to exit.
Sep 12 17:14:33.450206 systemd[1]: sshd@19-172.31.18.149:22-147.75.109.163:42570.service: Deactivated successfully.
Sep 12 17:14:33.458262 systemd[1]: session-20.scope: Deactivated successfully.
Sep 12 17:14:33.463868 systemd-logind[2128]: Removed session 20.
Sep 12 17:14:33.469477 systemd[1]: Started sshd@20-172.31.18.149:22-147.75.109.163:42582.service - OpenSSH per-connection server daemon (147.75.109.163:42582).
Sep 12 17:14:33.653445 sshd[5269]: Accepted publickey for core from 147.75.109.163 port 42582 ssh2: RSA SHA256:MtueCMCElgMFpvQGHABlOh1LdmyEE9d8eacHhUBhK34
Sep 12 17:14:33.656955 sshd[5269]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:14:33.666437 systemd-logind[2128]: New session 21 of user core.
Sep 12 17:14:33.678700 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 12 17:14:33.956320 sshd[5269]: pam_unix(sshd:session): session closed for user core
Sep 12 17:14:33.963500 systemd[1]: sshd@20-172.31.18.149:22-147.75.109.163:42582.service: Deactivated successfully.
Sep 12 17:14:33.971884 systemd[1]: session-21.scope: Deactivated successfully.
Sep 12 17:14:33.973363 systemd-logind[2128]: Session 21 logged out. Waiting for processes to exit.
Sep 12 17:14:33.975822 systemd-logind[2128]: Removed session 21.
Sep 12 17:14:38.986461 systemd[1]: Started sshd@21-172.31.18.149:22-147.75.109.163:42586.service - OpenSSH per-connection server daemon (147.75.109.163:42586).
Sep 12 17:14:39.168689 sshd[5282]: Accepted publickey for core from 147.75.109.163 port 42586 ssh2: RSA SHA256:MtueCMCElgMFpvQGHABlOh1LdmyEE9d8eacHhUBhK34
Sep 12 17:14:39.171908 sshd[5282]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:14:39.180416 systemd-logind[2128]: New session 22 of user core.
Sep 12 17:14:39.186525 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 12 17:14:39.433074 sshd[5282]: pam_unix(sshd:session): session closed for user core
Sep 12 17:14:39.441057 systemd[1]: sshd@21-172.31.18.149:22-147.75.109.163:42586.service: Deactivated successfully.
Sep 12 17:14:39.441882 systemd-logind[2128]: Session 22 logged out. Waiting for processes to exit.
Sep 12 17:14:39.447992 systemd[1]: session-22.scope: Deactivated successfully.
Sep 12 17:14:39.450269 systemd-logind[2128]: Removed session 22.
Sep 12 17:14:44.463660 systemd[1]: Started sshd@22-172.31.18.149:22-147.75.109.163:33278.service - OpenSSH per-connection server daemon (147.75.109.163:33278).
Sep 12 17:14:44.638532 sshd[5301]: Accepted publickey for core from 147.75.109.163 port 33278 ssh2: RSA SHA256:MtueCMCElgMFpvQGHABlOh1LdmyEE9d8eacHhUBhK34
Sep 12 17:14:44.641290 sshd[5301]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:14:44.650088 systemd-logind[2128]: New session 23 of user core.
Sep 12 17:14:44.664513 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 12 17:14:44.899324 sshd[5301]: pam_unix(sshd:session): session closed for user core
Sep 12 17:14:44.907407 systemd[1]: sshd@22-172.31.18.149:22-147.75.109.163:33278.service: Deactivated successfully.
Sep 12 17:14:44.913721 systemd[1]: session-23.scope: Deactivated successfully.
Sep 12 17:14:44.915716 systemd-logind[2128]: Session 23 logged out. Waiting for processes to exit.
Sep 12 17:14:44.918222 systemd-logind[2128]: Removed session 23.
Sep 12 17:14:49.931648 systemd[1]: Started sshd@23-172.31.18.149:22-147.75.109.163:33292.service - OpenSSH per-connection server daemon (147.75.109.163:33292).
Sep 12 17:14:50.099089 sshd[5317]: Accepted publickey for core from 147.75.109.163 port 33292 ssh2: RSA SHA256:MtueCMCElgMFpvQGHABlOh1LdmyEE9d8eacHhUBhK34
Sep 12 17:14:50.101680 sshd[5317]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:14:50.110490 systemd-logind[2128]: New session 24 of user core.
Sep 12 17:14:50.117591 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 12 17:14:50.352818 sshd[5317]: pam_unix(sshd:session): session closed for user core
Sep 12 17:14:50.361998 systemd-logind[2128]: Session 24 logged out. Waiting for processes to exit.
Sep 12 17:14:50.363081 systemd[1]: sshd@23-172.31.18.149:22-147.75.109.163:33292.service: Deactivated successfully.
Sep 12 17:14:50.368811 systemd[1]: session-24.scope: Deactivated successfully.
Sep 12 17:14:50.372141 systemd-logind[2128]: Removed session 24.
Sep 12 17:14:55.386457 systemd[1]: Started sshd@24-172.31.18.149:22-147.75.109.163:50944.service - OpenSSH per-connection server daemon (147.75.109.163:50944).
Sep 12 17:14:55.563894 sshd[5330]: Accepted publickey for core from 147.75.109.163 port 50944 ssh2: RSA SHA256:MtueCMCElgMFpvQGHABlOh1LdmyEE9d8eacHhUBhK34
Sep 12 17:14:55.566548 sshd[5330]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:14:55.574445 systemd-logind[2128]: New session 25 of user core.
Sep 12 17:14:55.581598 systemd[1]: Started session-25.scope - Session 25 of User core.
Sep 12 17:14:55.827367 sshd[5330]: pam_unix(sshd:session): session closed for user core
Sep 12 17:14:55.837430 systemd[1]: sshd@24-172.31.18.149:22-147.75.109.163:50944.service: Deactivated successfully.
Sep 12 17:14:55.843510 systemd-logind[2128]: Session 25 logged out. Waiting for processes to exit.
Sep 12 17:14:55.845583 systemd[1]: session-25.scope: Deactivated successfully.
Sep 12 17:14:55.851815 systemd-logind[2128]: Removed session 25.
Sep 12 17:14:55.859738 systemd[1]: Started sshd@25-172.31.18.149:22-147.75.109.163:50954.service - OpenSSH per-connection server daemon (147.75.109.163:50954).
Sep 12 17:14:56.037893 sshd[5344]: Accepted publickey for core from 147.75.109.163 port 50954 ssh2: RSA SHA256:MtueCMCElgMFpvQGHABlOh1LdmyEE9d8eacHhUBhK34
Sep 12 17:14:56.040566 sshd[5344]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:14:56.049071 systemd-logind[2128]: New session 26 of user core.
Sep 12 17:14:56.058533 systemd[1]: Started session-26.scope - Session 26 of User core.
Sep 12 17:14:59.073270 containerd[2157]: time="2025-09-12T17:14:59.073024951Z" level=info msg="StopContainer for \"ed493dfb9229d97b6054a1901239b6703099e88e02a1a348d36cf73cd5fd7a9f\" with timeout 30 (s)"
Sep 12 17:14:59.080750 containerd[2157]: time="2025-09-12T17:14:59.077075551Z" level=info msg="Stop container \"ed493dfb9229d97b6054a1901239b6703099e88e02a1a348d36cf73cd5fd7a9f\" with signal terminated"
Sep 12 17:14:59.140831 containerd[2157]: time="2025-09-12T17:14:59.140745824Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 12 17:14:59.158476 containerd[2157]: time="2025-09-12T17:14:59.158301536Z" level=info msg="StopContainer for \"f90adbef03be4ff04401791e310400e82cfeccfb5f7a075a8b8b7b2bdaf424d7\" with timeout 2 (s)"
Sep 12 17:14:59.163249 containerd[2157]: time="2025-09-12T17:14:59.163141952Z" level=info msg="Stop container \"f90adbef03be4ff04401791e310400e82cfeccfb5f7a075a8b8b7b2bdaf424d7\" with signal terminated"
Sep 12 17:14:59.168932 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ed493dfb9229d97b6054a1901239b6703099e88e02a1a348d36cf73cd5fd7a9f-rootfs.mount: Deactivated successfully.
Sep 12 17:14:59.181552 containerd[2157]: time="2025-09-12T17:14:59.181347812Z" level=info msg="shim disconnected" id=ed493dfb9229d97b6054a1901239b6703099e88e02a1a348d36cf73cd5fd7a9f namespace=k8s.io
Sep 12 17:14:59.181814 containerd[2157]: time="2025-09-12T17:14:59.181671776Z" level=warning msg="cleaning up after shim disconnected" id=ed493dfb9229d97b6054a1901239b6703099e88e02a1a348d36cf73cd5fd7a9f namespace=k8s.io
Sep 12 17:14:59.181814 containerd[2157]: time="2025-09-12T17:14:59.181699016Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 17:14:59.192542 systemd-networkd[1702]: lxc_health: Link DOWN
Sep 12 17:14:59.192562 systemd-networkd[1702]: lxc_health: Lost carrier
Sep 12 17:14:59.238721 containerd[2157]: time="2025-09-12T17:14:59.238342316Z" level=info msg="StopContainer for \"ed493dfb9229d97b6054a1901239b6703099e88e02a1a348d36cf73cd5fd7a9f\" returns successfully"
Sep 12 17:14:59.240095 containerd[2157]: time="2025-09-12T17:14:59.239239616Z" level=info msg="StopPodSandbox for \"db6ccc7063cf2f3bc7b890ac98e51266df95de0dea0f82ec978d6bcaf457516e\""
Sep 12 17:14:59.240095 containerd[2157]: time="2025-09-12T17:14:59.239311196Z" level=info msg="Container to stop \"ed493dfb9229d97b6054a1901239b6703099e88e02a1a348d36cf73cd5fd7a9f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 12 17:14:59.245032 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-db6ccc7063cf2f3bc7b890ac98e51266df95de0dea0f82ec978d6bcaf457516e-shm.mount: Deactivated successfully.
Sep 12 17:14:59.271324 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f90adbef03be4ff04401791e310400e82cfeccfb5f7a075a8b8b7b2bdaf424d7-rootfs.mount: Deactivated successfully.
Sep 12 17:14:59.281660 containerd[2157]: time="2025-09-12T17:14:59.281577092Z" level=info msg="shim disconnected" id=f90adbef03be4ff04401791e310400e82cfeccfb5f7a075a8b8b7b2bdaf424d7 namespace=k8s.io
Sep 12 17:14:59.281660 containerd[2157]: time="2025-09-12T17:14:59.281656148Z" level=warning msg="cleaning up after shim disconnected" id=f90adbef03be4ff04401791e310400e82cfeccfb5f7a075a8b8b7b2bdaf424d7 namespace=k8s.io
Sep 12 17:14:59.281660 containerd[2157]: time="2025-09-12T17:14:59.281679404Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 17:14:59.322415 containerd[2157]: time="2025-09-12T17:14:59.320243697Z" level=info msg="StopContainer for \"f90adbef03be4ff04401791e310400e82cfeccfb5f7a075a8b8b7b2bdaf424d7\" returns successfully"
Sep 12 17:14:59.325115 containerd[2157]: time="2025-09-12T17:14:59.323275593Z" level=info msg="StopPodSandbox for \"6bdde67de39f306712a3cde1b5c08b035e80f5b5018f5eb9ecc8a3280135b83b\""
Sep 12 17:14:59.325115 containerd[2157]: time="2025-09-12T17:14:59.323341401Z" level=info msg="Container to stop \"844c84a4d9fe20f45c675b323f8988805a7c8752f136e7333a375201f7c6cf9c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 12 17:14:59.325115 containerd[2157]: time="2025-09-12T17:14:59.323367489Z" level=info msg="Container to stop \"a428fb8c84921ce50a99da29602f2ae9b16889e4ac9d96a86f430ede1884f801\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 12 17:14:59.325115 containerd[2157]: time="2025-09-12T17:14:59.323396109Z" level=info msg="Container to stop \"b83239093be5b2c474d1c396f1a96d72f9f0207d976d6f18be2c60934e61b337\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 12 17:14:59.325115 containerd[2157]: time="2025-09-12T17:14:59.323680953Z" level=info msg="Container to stop \"440b8fd00209707fa8bb365bf8afc461ee39bbb93cbdfb58075e027c3d88202e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 12 17:14:59.325115 containerd[2157]: time="2025-09-12T17:14:59.323715009Z" level=info msg="Container to stop \"f90adbef03be4ff04401791e310400e82cfeccfb5f7a075a8b8b7b2bdaf424d7\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 12 17:14:59.324734 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-db6ccc7063cf2f3bc7b890ac98e51266df95de0dea0f82ec978d6bcaf457516e-rootfs.mount: Deactivated successfully.
Sep 12 17:14:59.334054 containerd[2157]: time="2025-09-12T17:14:59.332789685Z" level=info msg="shim disconnected" id=db6ccc7063cf2f3bc7b890ac98e51266df95de0dea0f82ec978d6bcaf457516e namespace=k8s.io
Sep 12 17:14:59.334054 containerd[2157]: time="2025-09-12T17:14:59.332872089Z" level=warning msg="cleaning up after shim disconnected" id=db6ccc7063cf2f3bc7b890ac98e51266df95de0dea0f82ec978d6bcaf457516e namespace=k8s.io
Sep 12 17:14:59.334054 containerd[2157]: time="2025-09-12T17:14:59.332893233Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 17:14:59.378285 containerd[2157]: time="2025-09-12T17:14:59.378193305Z" level=info msg="TearDown network for sandbox \"db6ccc7063cf2f3bc7b890ac98e51266df95de0dea0f82ec978d6bcaf457516e\" successfully"
Sep 12 17:14:59.378285 containerd[2157]: time="2025-09-12T17:14:59.378258117Z" level=info msg="StopPodSandbox for \"db6ccc7063cf2f3bc7b890ac98e51266df95de0dea0f82ec978d6bcaf457516e\" returns successfully"
Sep 12 17:14:59.408492 containerd[2157]: time="2025-09-12T17:14:59.406470309Z" level=info msg="shim disconnected" id=6bdde67de39f306712a3cde1b5c08b035e80f5b5018f5eb9ecc8a3280135b83b namespace=k8s.io
Sep 12 17:14:59.408492 containerd[2157]: time="2025-09-12T17:14:59.406560957Z" level=warning msg="cleaning up after shim disconnected" id=6bdde67de39f306712a3cde1b5c08b035e80f5b5018f5eb9ecc8a3280135b83b namespace=k8s.io
Sep 12 17:14:59.408492 containerd[2157]: time="2025-09-12T17:14:59.406595109Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 17:14:59.429394 containerd[2157]: time="2025-09-12T17:14:59.429325581Z" level=warning msg="cleanup warnings time=\"2025-09-12T17:14:59Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Sep 12 17:14:59.431513 containerd[2157]: time="2025-09-12T17:14:59.431458809Z" level=info msg="TearDown network for sandbox \"6bdde67de39f306712a3cde1b5c08b035e80f5b5018f5eb9ecc8a3280135b83b\" successfully"
Sep 12 17:14:59.431730 containerd[2157]: time="2025-09-12T17:14:59.431509461Z" level=info msg="StopPodSandbox for \"6bdde67de39f306712a3cde1b5c08b035e80f5b5018f5eb9ecc8a3280135b83b\" returns successfully"
Sep 12 17:14:59.438168 kubelet[3597]: I0912 17:14:59.438107 3597 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2kblt\" (UniqueName: \"kubernetes.io/projected/ae8fc710-083c-4afe-80d5-9141f3c31bc0-kube-api-access-2kblt\") pod \"ae8fc710-083c-4afe-80d5-9141f3c31bc0\" (UID: \"ae8fc710-083c-4afe-80d5-9141f3c31bc0\") "
Sep 12 17:14:59.438809 kubelet[3597]: I0912 17:14:59.438175 3597 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ae8fc710-083c-4afe-80d5-9141f3c31bc0-cilium-config-path\") pod \"ae8fc710-083c-4afe-80d5-9141f3c31bc0\" (UID: \"ae8fc710-083c-4afe-80d5-9141f3c31bc0\") "
Sep 12 17:14:59.453793 kubelet[3597]: I0912 17:14:59.452806 3597 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae8fc710-083c-4afe-80d5-9141f3c31bc0-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ae8fc710-083c-4afe-80d5-9141f3c31bc0" (UID: "ae8fc710-083c-4afe-80d5-9141f3c31bc0"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 12 17:14:59.453793 kubelet[3597]: I0912 17:14:59.453721 3597 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae8fc710-083c-4afe-80d5-9141f3c31bc0-kube-api-access-2kblt" (OuterVolumeSpecName: "kube-api-access-2kblt") pod "ae8fc710-083c-4afe-80d5-9141f3c31bc0" (UID: "ae8fc710-083c-4afe-80d5-9141f3c31bc0"). InnerVolumeSpecName "kube-api-access-2kblt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 12 17:14:59.540999 kubelet[3597]: I0912 17:14:59.539326 3597 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z9rqr\" (UniqueName: \"kubernetes.io/projected/55f3bbb9-6a08-44ff-a504-27ba8b1d382e-kube-api-access-z9rqr\") pod \"55f3bbb9-6a08-44ff-a504-27ba8b1d382e\" (UID: \"55f3bbb9-6a08-44ff-a504-27ba8b1d382e\") "
Sep 12 17:14:59.540999 kubelet[3597]: I0912 17:14:59.539386 3597 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/55f3bbb9-6a08-44ff-a504-27ba8b1d382e-lib-modules\") pod \"55f3bbb9-6a08-44ff-a504-27ba8b1d382e\" (UID: \"55f3bbb9-6a08-44ff-a504-27ba8b1d382e\") "
Sep 12 17:14:59.540999 kubelet[3597]: I0912 17:14:59.539425 3597 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/55f3bbb9-6a08-44ff-a504-27ba8b1d382e-host-proc-sys-net\") pod \"55f3bbb9-6a08-44ff-a504-27ba8b1d382e\" (UID: \"55f3bbb9-6a08-44ff-a504-27ba8b1d382e\") "
Sep 12 17:14:59.540999 kubelet[3597]: I0912 17:14:59.539457 3597 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/55f3bbb9-6a08-44ff-a504-27ba8b1d382e-cilium-run\") pod \"55f3bbb9-6a08-44ff-a504-27ba8b1d382e\" (UID: \"55f3bbb9-6a08-44ff-a504-27ba8b1d382e\") "
Sep 12 17:14:59.540999 kubelet[3597]: I0912 17:14:59.539496 3597 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/55f3bbb9-6a08-44ff-a504-27ba8b1d382e-hubble-tls\") pod \"55f3bbb9-6a08-44ff-a504-27ba8b1d382e\" (UID: \"55f3bbb9-6a08-44ff-a504-27ba8b1d382e\") "
Sep 12 17:14:59.540999 kubelet[3597]: I0912 17:14:59.539529 3597 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/55f3bbb9-6a08-44ff-a504-27ba8b1d382e-hostproc\") pod \"55f3bbb9-6a08-44ff-a504-27ba8b1d382e\" (UID: \"55f3bbb9-6a08-44ff-a504-27ba8b1d382e\") "
Sep 12 17:14:59.541433 kubelet[3597]: I0912 17:14:59.539565 3597 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/55f3bbb9-6a08-44ff-a504-27ba8b1d382e-host-proc-sys-kernel\") pod \"55f3bbb9-6a08-44ff-a504-27ba8b1d382e\" (UID: \"55f3bbb9-6a08-44ff-a504-27ba8b1d382e\") "
Sep 12 17:14:59.541433 kubelet[3597]: I0912 17:14:59.539599 3597 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/55f3bbb9-6a08-44ff-a504-27ba8b1d382e-cni-path\") pod \"55f3bbb9-6a08-44ff-a504-27ba8b1d382e\" (UID: \"55f3bbb9-6a08-44ff-a504-27ba8b1d382e\") "
Sep 12 17:14:59.541433 kubelet[3597]: I0912 17:14:59.539634 3597 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/55f3bbb9-6a08-44ff-a504-27ba8b1d382e-xtables-lock\") pod \"55f3bbb9-6a08-44ff-a504-27ba8b1d382e\" (UID: \"55f3bbb9-6a08-44ff-a504-27ba8b1d382e\") "
Sep 12 17:14:59.541433 kubelet[3597]: I0912 17:14:59.539672 3597 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/55f3bbb9-6a08-44ff-a504-27ba8b1d382e-cilium-config-path\") pod \"55f3bbb9-6a08-44ff-a504-27ba8b1d382e\" (UID:
\"55f3bbb9-6a08-44ff-a504-27ba8b1d382e\") " Sep 12 17:14:59.541433 kubelet[3597]: I0912 17:14:59.539708 3597 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/55f3bbb9-6a08-44ff-a504-27ba8b1d382e-bpf-maps\") pod \"55f3bbb9-6a08-44ff-a504-27ba8b1d382e\" (UID: \"55f3bbb9-6a08-44ff-a504-27ba8b1d382e\") " Sep 12 17:14:59.541433 kubelet[3597]: I0912 17:14:59.539738 3597 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/55f3bbb9-6a08-44ff-a504-27ba8b1d382e-cilium-cgroup\") pod \"55f3bbb9-6a08-44ff-a504-27ba8b1d382e\" (UID: \"55f3bbb9-6a08-44ff-a504-27ba8b1d382e\") " Sep 12 17:14:59.541737 kubelet[3597]: I0912 17:14:59.539776 3597 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/55f3bbb9-6a08-44ff-a504-27ba8b1d382e-clustermesh-secrets\") pod \"55f3bbb9-6a08-44ff-a504-27ba8b1d382e\" (UID: \"55f3bbb9-6a08-44ff-a504-27ba8b1d382e\") " Sep 12 17:14:59.541737 kubelet[3597]: I0912 17:14:59.539807 3597 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/55f3bbb9-6a08-44ff-a504-27ba8b1d382e-etc-cni-netd\") pod \"55f3bbb9-6a08-44ff-a504-27ba8b1d382e\" (UID: \"55f3bbb9-6a08-44ff-a504-27ba8b1d382e\") " Sep 12 17:14:59.541737 kubelet[3597]: I0912 17:14:59.539867 3597 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2kblt\" (UniqueName: \"kubernetes.io/projected/ae8fc710-083c-4afe-80d5-9141f3c31bc0-kube-api-access-2kblt\") on node \"ip-172-31-18-149\" DevicePath \"\"" Sep 12 17:14:59.541737 kubelet[3597]: I0912 17:14:59.539894 3597 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ae8fc710-083c-4afe-80d5-9141f3c31bc0-cilium-config-path\") on node \"ip-172-31-18-149\" 
DevicePath \"\"" Sep 12 17:14:59.541737 kubelet[3597]: I0912 17:14:59.539964 3597 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/55f3bbb9-6a08-44ff-a504-27ba8b1d382e-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "55f3bbb9-6a08-44ff-a504-27ba8b1d382e" (UID: "55f3bbb9-6a08-44ff-a504-27ba8b1d382e"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 12 17:14:59.541737 kubelet[3597]: I0912 17:14:59.540055 3597 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/55f3bbb9-6a08-44ff-a504-27ba8b1d382e-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "55f3bbb9-6a08-44ff-a504-27ba8b1d382e" (UID: "55f3bbb9-6a08-44ff-a504-27ba8b1d382e"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 12 17:14:59.542105 kubelet[3597]: I0912 17:14:59.540068 3597 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/55f3bbb9-6a08-44ff-a504-27ba8b1d382e-cni-path" (OuterVolumeSpecName: "cni-path") pod "55f3bbb9-6a08-44ff-a504-27ba8b1d382e" (UID: "55f3bbb9-6a08-44ff-a504-27ba8b1d382e"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 12 17:14:59.542105 kubelet[3597]: I0912 17:14:59.540105 3597 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/55f3bbb9-6a08-44ff-a504-27ba8b1d382e-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "55f3bbb9-6a08-44ff-a504-27ba8b1d382e" (UID: "55f3bbb9-6a08-44ff-a504-27ba8b1d382e"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 12 17:14:59.542748 kubelet[3597]: I0912 17:14:59.542163 3597 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/55f3bbb9-6a08-44ff-a504-27ba8b1d382e-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "55f3bbb9-6a08-44ff-a504-27ba8b1d382e" (UID: "55f3bbb9-6a08-44ff-a504-27ba8b1d382e"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 12 17:14:59.542748 kubelet[3597]: I0912 17:14:59.542249 3597 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/55f3bbb9-6a08-44ff-a504-27ba8b1d382e-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "55f3bbb9-6a08-44ff-a504-27ba8b1d382e" (UID: "55f3bbb9-6a08-44ff-a504-27ba8b1d382e"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 12 17:14:59.542910 kubelet[3597]: I0912 17:14:59.542277 3597 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/55f3bbb9-6a08-44ff-a504-27ba8b1d382e-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "55f3bbb9-6a08-44ff-a504-27ba8b1d382e" (UID: "55f3bbb9-6a08-44ff-a504-27ba8b1d382e"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 12 17:14:59.542910 kubelet[3597]: I0912 17:14:59.542321 3597 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/55f3bbb9-6a08-44ff-a504-27ba8b1d382e-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "55f3bbb9-6a08-44ff-a504-27ba8b1d382e" (UID: "55f3bbb9-6a08-44ff-a504-27ba8b1d382e"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 12 17:14:59.542910 kubelet[3597]: I0912 17:14:59.542853 3597 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/55f3bbb9-6a08-44ff-a504-27ba8b1d382e-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "55f3bbb9-6a08-44ff-a504-27ba8b1d382e" (UID: "55f3bbb9-6a08-44ff-a504-27ba8b1d382e"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 12 17:14:59.544400 kubelet[3597]: I0912 17:14:59.544307 3597 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/55f3bbb9-6a08-44ff-a504-27ba8b1d382e-hostproc" (OuterVolumeSpecName: "hostproc") pod "55f3bbb9-6a08-44ff-a504-27ba8b1d382e" (UID: "55f3bbb9-6a08-44ff-a504-27ba8b1d382e"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 12 17:14:59.549277 kubelet[3597]: I0912 17:14:59.549219 3597 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55f3bbb9-6a08-44ff-a504-27ba8b1d382e-kube-api-access-z9rqr" (OuterVolumeSpecName: "kube-api-access-z9rqr") pod "55f3bbb9-6a08-44ff-a504-27ba8b1d382e" (UID: "55f3bbb9-6a08-44ff-a504-27ba8b1d382e"). InnerVolumeSpecName "kube-api-access-z9rqr". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 12 17:14:59.550550 kubelet[3597]: I0912 17:14:59.550496 3597 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55f3bbb9-6a08-44ff-a504-27ba8b1d382e-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "55f3bbb9-6a08-44ff-a504-27ba8b1d382e" (UID: "55f3bbb9-6a08-44ff-a504-27ba8b1d382e"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 12 17:14:59.552444 kubelet[3597]: I0912 17:14:59.552395 3597 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55f3bbb9-6a08-44ff-a504-27ba8b1d382e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "55f3bbb9-6a08-44ff-a504-27ba8b1d382e" (UID: "55f3bbb9-6a08-44ff-a504-27ba8b1d382e"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 12 17:14:59.552678 kubelet[3597]: I0912 17:14:59.552580 3597 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55f3bbb9-6a08-44ff-a504-27ba8b1d382e-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "55f3bbb9-6a08-44ff-a504-27ba8b1d382e" (UID: "55f3bbb9-6a08-44ff-a504-27ba8b1d382e"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 12 17:14:59.640415 kubelet[3597]: I0912 17:14:59.640276 3597 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/55f3bbb9-6a08-44ff-a504-27ba8b1d382e-cilium-run\") on node \"ip-172-31-18-149\" DevicePath \"\"" Sep 12 17:14:59.640564 kubelet[3597]: I0912 17:14:59.640538 3597 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/55f3bbb9-6a08-44ff-a504-27ba8b1d382e-hubble-tls\") on node \"ip-172-31-18-149\" DevicePath \"\"" Sep 12 17:14:59.642928 kubelet[3597]: I0912 17:14:59.640569 3597 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/55f3bbb9-6a08-44ff-a504-27ba8b1d382e-host-proc-sys-net\") on node \"ip-172-31-18-149\" DevicePath \"\"" Sep 12 17:14:59.643064 kubelet[3597]: I0912 17:14:59.642939 3597 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/55f3bbb9-6a08-44ff-a504-27ba8b1d382e-hostproc\") on node 
\"ip-172-31-18-149\" DevicePath \"\"" Sep 12 17:14:59.643064 kubelet[3597]: I0912 17:14:59.643018 3597 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/55f3bbb9-6a08-44ff-a504-27ba8b1d382e-host-proc-sys-kernel\") on node \"ip-172-31-18-149\" DevicePath \"\"" Sep 12 17:14:59.643064 kubelet[3597]: I0912 17:14:59.643044 3597 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/55f3bbb9-6a08-44ff-a504-27ba8b1d382e-cni-path\") on node \"ip-172-31-18-149\" DevicePath \"\"" Sep 12 17:14:59.643256 kubelet[3597]: I0912 17:14:59.643088 3597 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/55f3bbb9-6a08-44ff-a504-27ba8b1d382e-xtables-lock\") on node \"ip-172-31-18-149\" DevicePath \"\"" Sep 12 17:14:59.643256 kubelet[3597]: I0912 17:14:59.643115 3597 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/55f3bbb9-6a08-44ff-a504-27ba8b1d382e-cilium-config-path\") on node \"ip-172-31-18-149\" DevicePath \"\"" Sep 12 17:14:59.643256 kubelet[3597]: I0912 17:14:59.643184 3597 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/55f3bbb9-6a08-44ff-a504-27ba8b1d382e-bpf-maps\") on node \"ip-172-31-18-149\" DevicePath \"\"" Sep 12 17:14:59.643256 kubelet[3597]: I0912 17:14:59.643208 3597 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/55f3bbb9-6a08-44ff-a504-27ba8b1d382e-cilium-cgroup\") on node \"ip-172-31-18-149\" DevicePath \"\"" Sep 12 17:14:59.643256 kubelet[3597]: I0912 17:14:59.643233 3597 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/55f3bbb9-6a08-44ff-a504-27ba8b1d382e-clustermesh-secrets\") on node \"ip-172-31-18-149\" DevicePath \"\"" Sep 12 
17:14:59.643511 kubelet[3597]: I0912 17:14:59.643281 3597 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/55f3bbb9-6a08-44ff-a504-27ba8b1d382e-etc-cni-netd\") on node \"ip-172-31-18-149\" DevicePath \"\"" Sep 12 17:14:59.643511 kubelet[3597]: I0912 17:14:59.643303 3597 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/55f3bbb9-6a08-44ff-a504-27ba8b1d382e-lib-modules\") on node \"ip-172-31-18-149\" DevicePath \"\"" Sep 12 17:14:59.643511 kubelet[3597]: I0912 17:14:59.643327 3597 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z9rqr\" (UniqueName: \"kubernetes.io/projected/55f3bbb9-6a08-44ff-a504-27ba8b1d382e-kube-api-access-z9rqr\") on node \"ip-172-31-18-149\" DevicePath \"\"" Sep 12 17:14:59.645769 kubelet[3597]: I0912 17:14:59.645709 3597 scope.go:117] "RemoveContainer" containerID="ed493dfb9229d97b6054a1901239b6703099e88e02a1a348d36cf73cd5fd7a9f" Sep 12 17:14:59.654282 containerd[2157]: time="2025-09-12T17:14:59.653024086Z" level=info msg="RemoveContainer for \"ed493dfb9229d97b6054a1901239b6703099e88e02a1a348d36cf73cd5fd7a9f\"" Sep 12 17:14:59.675616 containerd[2157]: time="2025-09-12T17:14:59.675534670Z" level=info msg="RemoveContainer for \"ed493dfb9229d97b6054a1901239b6703099e88e02a1a348d36cf73cd5fd7a9f\" returns successfully" Sep 12 17:14:59.676957 kubelet[3597]: I0912 17:14:59.676912 3597 scope.go:117] "RemoveContainer" containerID="ed493dfb9229d97b6054a1901239b6703099e88e02a1a348d36cf73cd5fd7a9f" Sep 12 17:14:59.678573 kubelet[3597]: E0912 17:14:59.677894 3597 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ed493dfb9229d97b6054a1901239b6703099e88e02a1a348d36cf73cd5fd7a9f\": not found" containerID="ed493dfb9229d97b6054a1901239b6703099e88e02a1a348d36cf73cd5fd7a9f" Sep 12 17:14:59.678703 containerd[2157]: 
time="2025-09-12T17:14:59.677384818Z" level=error msg="ContainerStatus for \"ed493dfb9229d97b6054a1901239b6703099e88e02a1a348d36cf73cd5fd7a9f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ed493dfb9229d97b6054a1901239b6703099e88e02a1a348d36cf73cd5fd7a9f\": not found" Sep 12 17:14:59.678776 kubelet[3597]: I0912 17:14:59.678318 3597 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ed493dfb9229d97b6054a1901239b6703099e88e02a1a348d36cf73cd5fd7a9f"} err="failed to get container status \"ed493dfb9229d97b6054a1901239b6703099e88e02a1a348d36cf73cd5fd7a9f\": rpc error: code = NotFound desc = an error occurred when try to find container \"ed493dfb9229d97b6054a1901239b6703099e88e02a1a348d36cf73cd5fd7a9f\": not found" Sep 12 17:14:59.678776 kubelet[3597]: I0912 17:14:59.678703 3597 scope.go:117] "RemoveContainer" containerID="f90adbef03be4ff04401791e310400e82cfeccfb5f7a075a8b8b7b2bdaf424d7" Sep 12 17:14:59.685873 containerd[2157]: time="2025-09-12T17:14:59.685415542Z" level=info msg="RemoveContainer for \"f90adbef03be4ff04401791e310400e82cfeccfb5f7a075a8b8b7b2bdaf424d7\"" Sep 12 17:14:59.695231 containerd[2157]: time="2025-09-12T17:14:59.693959410Z" level=info msg="RemoveContainer for \"f90adbef03be4ff04401791e310400e82cfeccfb5f7a075a8b8b7b2bdaf424d7\" returns successfully" Sep 12 17:14:59.695937 kubelet[3597]: I0912 17:14:59.695876 3597 scope.go:117] "RemoveContainer" containerID="a428fb8c84921ce50a99da29602f2ae9b16889e4ac9d96a86f430ede1884f801" Sep 12 17:14:59.707593 containerd[2157]: time="2025-09-12T17:14:59.706602406Z" level=info msg="RemoveContainer for \"a428fb8c84921ce50a99da29602f2ae9b16889e4ac9d96a86f430ede1884f801\"" Sep 12 17:14:59.729662 containerd[2157]: time="2025-09-12T17:14:59.729574655Z" level=info msg="RemoveContainer for \"a428fb8c84921ce50a99da29602f2ae9b16889e4ac9d96a86f430ede1884f801\" returns successfully" Sep 12 17:14:59.730216 kubelet[3597]: I0912 
17:14:59.730172 3597 scope.go:117] "RemoveContainer" containerID="844c84a4d9fe20f45c675b323f8988805a7c8752f136e7333a375201f7c6cf9c" Sep 12 17:14:59.737916 containerd[2157]: time="2025-09-12T17:14:59.737254295Z" level=info msg="RemoveContainer for \"844c84a4d9fe20f45c675b323f8988805a7c8752f136e7333a375201f7c6cf9c\"" Sep 12 17:14:59.743789 containerd[2157]: time="2025-09-12T17:14:59.743723135Z" level=info msg="RemoveContainer for \"844c84a4d9fe20f45c675b323f8988805a7c8752f136e7333a375201f7c6cf9c\" returns successfully" Sep 12 17:14:59.744235 kubelet[3597]: I0912 17:14:59.744178 3597 scope.go:117] "RemoveContainer" containerID="440b8fd00209707fa8bb365bf8afc461ee39bbb93cbdfb58075e027c3d88202e" Sep 12 17:14:59.746638 containerd[2157]: time="2025-09-12T17:14:59.746584379Z" level=info msg="RemoveContainer for \"440b8fd00209707fa8bb365bf8afc461ee39bbb93cbdfb58075e027c3d88202e\"" Sep 12 17:14:59.752879 containerd[2157]: time="2025-09-12T17:14:59.752801627Z" level=info msg="RemoveContainer for \"440b8fd00209707fa8bb365bf8afc461ee39bbb93cbdfb58075e027c3d88202e\" returns successfully" Sep 12 17:14:59.753371 kubelet[3597]: I0912 17:14:59.753270 3597 scope.go:117] "RemoveContainer" containerID="b83239093be5b2c474d1c396f1a96d72f9f0207d976d6f18be2c60934e61b337" Sep 12 17:14:59.755895 containerd[2157]: time="2025-09-12T17:14:59.755780519Z" level=info msg="RemoveContainer for \"b83239093be5b2c474d1c396f1a96d72f9f0207d976d6f18be2c60934e61b337\"" Sep 12 17:14:59.762292 containerd[2157]: time="2025-09-12T17:14:59.762233615Z" level=info msg="RemoveContainer for \"b83239093be5b2c474d1c396f1a96d72f9f0207d976d6f18be2c60934e61b337\" returns successfully" Sep 12 17:14:59.762793 kubelet[3597]: I0912 17:14:59.762679 3597 scope.go:117] "RemoveContainer" containerID="f90adbef03be4ff04401791e310400e82cfeccfb5f7a075a8b8b7b2bdaf424d7" Sep 12 17:14:59.763299 containerd[2157]: time="2025-09-12T17:14:59.763089023Z" level=error msg="ContainerStatus for 
\"f90adbef03be4ff04401791e310400e82cfeccfb5f7a075a8b8b7b2bdaf424d7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f90adbef03be4ff04401791e310400e82cfeccfb5f7a075a8b8b7b2bdaf424d7\": not found" Sep 12 17:14:59.763426 kubelet[3597]: E0912 17:14:59.763355 3597 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f90adbef03be4ff04401791e310400e82cfeccfb5f7a075a8b8b7b2bdaf424d7\": not found" containerID="f90adbef03be4ff04401791e310400e82cfeccfb5f7a075a8b8b7b2bdaf424d7" Sep 12 17:14:59.763517 kubelet[3597]: I0912 17:14:59.763440 3597 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f90adbef03be4ff04401791e310400e82cfeccfb5f7a075a8b8b7b2bdaf424d7"} err="failed to get container status \"f90adbef03be4ff04401791e310400e82cfeccfb5f7a075a8b8b7b2bdaf424d7\": rpc error: code = NotFound desc = an error occurred when try to find container \"f90adbef03be4ff04401791e310400e82cfeccfb5f7a075a8b8b7b2bdaf424d7\": not found" Sep 12 17:14:59.763517 kubelet[3597]: I0912 17:14:59.763477 3597 scope.go:117] "RemoveContainer" containerID="a428fb8c84921ce50a99da29602f2ae9b16889e4ac9d96a86f430ede1884f801" Sep 12 17:14:59.763867 containerd[2157]: time="2025-09-12T17:14:59.763814639Z" level=error msg="ContainerStatus for \"a428fb8c84921ce50a99da29602f2ae9b16889e4ac9d96a86f430ede1884f801\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a428fb8c84921ce50a99da29602f2ae9b16889e4ac9d96a86f430ede1884f801\": not found" Sep 12 17:14:59.764239 kubelet[3597]: E0912 17:14:59.764200 3597 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a428fb8c84921ce50a99da29602f2ae9b16889e4ac9d96a86f430ede1884f801\": not found" 
containerID="a428fb8c84921ce50a99da29602f2ae9b16889e4ac9d96a86f430ede1884f801" Sep 12 17:14:59.764348 kubelet[3597]: I0912 17:14:59.764266 3597 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a428fb8c84921ce50a99da29602f2ae9b16889e4ac9d96a86f430ede1884f801"} err="failed to get container status \"a428fb8c84921ce50a99da29602f2ae9b16889e4ac9d96a86f430ede1884f801\": rpc error: code = NotFound desc = an error occurred when try to find container \"a428fb8c84921ce50a99da29602f2ae9b16889e4ac9d96a86f430ede1884f801\": not found" Sep 12 17:14:59.764348 kubelet[3597]: I0912 17:14:59.764311 3597 scope.go:117] "RemoveContainer" containerID="844c84a4d9fe20f45c675b323f8988805a7c8752f136e7333a375201f7c6cf9c" Sep 12 17:14:59.764714 containerd[2157]: time="2025-09-12T17:14:59.764656427Z" level=error msg="ContainerStatus for \"844c84a4d9fe20f45c675b323f8988805a7c8752f136e7333a375201f7c6cf9c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"844c84a4d9fe20f45c675b323f8988805a7c8752f136e7333a375201f7c6cf9c\": not found" Sep 12 17:14:59.764948 kubelet[3597]: E0912 17:14:59.764908 3597 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"844c84a4d9fe20f45c675b323f8988805a7c8752f136e7333a375201f7c6cf9c\": not found" containerID="844c84a4d9fe20f45c675b323f8988805a7c8752f136e7333a375201f7c6cf9c" Sep 12 17:14:59.765061 kubelet[3597]: I0912 17:14:59.764958 3597 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"844c84a4d9fe20f45c675b323f8988805a7c8752f136e7333a375201f7c6cf9c"} err="failed to get container status \"844c84a4d9fe20f45c675b323f8988805a7c8752f136e7333a375201f7c6cf9c\": rpc error: code = NotFound desc = an error occurred when try to find container \"844c84a4d9fe20f45c675b323f8988805a7c8752f136e7333a375201f7c6cf9c\": not found" Sep 12 
17:14:59.765061 kubelet[3597]: I0912 17:14:59.765025 3597 scope.go:117] "RemoveContainer" containerID="440b8fd00209707fa8bb365bf8afc461ee39bbb93cbdfb58075e027c3d88202e" Sep 12 17:14:59.765493 containerd[2157]: time="2025-09-12T17:14:59.765438335Z" level=error msg="ContainerStatus for \"440b8fd00209707fa8bb365bf8afc461ee39bbb93cbdfb58075e027c3d88202e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"440b8fd00209707fa8bb365bf8afc461ee39bbb93cbdfb58075e027c3d88202e\": not found" Sep 12 17:14:59.765749 kubelet[3597]: E0912 17:14:59.765700 3597 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"440b8fd00209707fa8bb365bf8afc461ee39bbb93cbdfb58075e027c3d88202e\": not found" containerID="440b8fd00209707fa8bb365bf8afc461ee39bbb93cbdfb58075e027c3d88202e" Sep 12 17:14:59.765816 kubelet[3597]: I0912 17:14:59.765748 3597 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"440b8fd00209707fa8bb365bf8afc461ee39bbb93cbdfb58075e027c3d88202e"} err="failed to get container status \"440b8fd00209707fa8bb365bf8afc461ee39bbb93cbdfb58075e027c3d88202e\": rpc error: code = NotFound desc = an error occurred when try to find container \"440b8fd00209707fa8bb365bf8afc461ee39bbb93cbdfb58075e027c3d88202e\": not found" Sep 12 17:14:59.765816 kubelet[3597]: I0912 17:14:59.765784 3597 scope.go:117] "RemoveContainer" containerID="b83239093be5b2c474d1c396f1a96d72f9f0207d976d6f18be2c60934e61b337" Sep 12 17:14:59.766184 containerd[2157]: time="2025-09-12T17:14:59.766128791Z" level=error msg="ContainerStatus for \"b83239093be5b2c474d1c396f1a96d72f9f0207d976d6f18be2c60934e61b337\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b83239093be5b2c474d1c396f1a96d72f9f0207d976d6f18be2c60934e61b337\": not found" Sep 12 17:14:59.766416 kubelet[3597]: E0912 17:14:59.766339 3597 
log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b83239093be5b2c474d1c396f1a96d72f9f0207d976d6f18be2c60934e61b337\": not found" containerID="b83239093be5b2c474d1c396f1a96d72f9f0207d976d6f18be2c60934e61b337" Sep 12 17:14:59.766416 kubelet[3597]: I0912 17:14:59.766378 3597 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b83239093be5b2c474d1c396f1a96d72f9f0207d976d6f18be2c60934e61b337"} err="failed to get container status \"b83239093be5b2c474d1c396f1a96d72f9f0207d976d6f18be2c60934e61b337\": rpc error: code = NotFound desc = an error occurred when try to find container \"b83239093be5b2c474d1c396f1a96d72f9f0207d976d6f18be2c60934e61b337\": not found" Sep 12 17:15:00.096701 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6bdde67de39f306712a3cde1b5c08b035e80f5b5018f5eb9ecc8a3280135b83b-rootfs.mount: Deactivated successfully. Sep 12 17:15:00.097728 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6bdde67de39f306712a3cde1b5c08b035e80f5b5018f5eb9ecc8a3280135b83b-shm.mount: Deactivated successfully. Sep 12 17:15:00.098058 systemd[1]: var-lib-kubelet-pods-ae8fc710\x2d083c\x2d4afe\x2d80d5\x2d9141f3c31bc0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2kblt.mount: Deactivated successfully. Sep 12 17:15:00.098335 systemd[1]: var-lib-kubelet-pods-55f3bbb9\x2d6a08\x2d44ff\x2da504\x2d27ba8b1d382e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dz9rqr.mount: Deactivated successfully. Sep 12 17:15:00.098722 systemd[1]: var-lib-kubelet-pods-55f3bbb9\x2d6a08\x2d44ff\x2da504\x2d27ba8b1d382e-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 12 17:15:00.099102 systemd[1]: var-lib-kubelet-pods-55f3bbb9\x2d6a08\x2d44ff\x2da504\x2d27ba8b1d382e-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Sep 12 17:15:00.987710 sshd[5344]: pam_unix(sshd:session): session closed for user core
Sep 12 17:15:00.993758 systemd-logind[2128]: Session 26 logged out. Waiting for processes to exit.
Sep 12 17:15:00.994629 systemd[1]: sshd@25-172.31.18.149:22-147.75.109.163:50954.service: Deactivated successfully.
Sep 12 17:15:01.003088 kubelet[3597]: I0912 17:15:01.001372 3597 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55f3bbb9-6a08-44ff-a504-27ba8b1d382e" path="/var/lib/kubelet/pods/55f3bbb9-6a08-44ff-a504-27ba8b1d382e/volumes"
Sep 12 17:15:01.006893 systemd[1]: session-26.scope: Deactivated successfully.
Sep 12 17:15:01.009928 kubelet[3597]: I0912 17:15:01.008132 3597 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae8fc710-083c-4afe-80d5-9141f3c31bc0" path="/var/lib/kubelet/pods/ae8fc710-083c-4afe-80d5-9141f3c31bc0/volumes"
Sep 12 17:15:01.015199 systemd-logind[2128]: Removed session 26.
Sep 12 17:15:01.025598 systemd[1]: Started sshd@26-172.31.18.149:22-147.75.109.163:57140.service - OpenSSH per-connection server daemon (147.75.109.163:57140).
Sep 12 17:15:01.191773 sshd[5509]: Accepted publickey for core from 147.75.109.163 port 57140 ssh2: RSA SHA256:MtueCMCElgMFpvQGHABlOh1LdmyEE9d8eacHhUBhK34
Sep 12 17:15:01.194517 sshd[5509]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:15:01.209334 systemd-logind[2128]: New session 27 of user core.
Sep 12 17:15:01.218002 systemd[1]: Started session-27.scope - Session 27 of User core.
Sep 12 17:15:01.945581 ntpd[2103]: Deleting interface #10 lxc_health, fe80::10bb:6ff:fe1e:fae9%8#123, interface stats: received=0, sent=0, dropped=0, active_time=112 secs
Sep 12 17:15:01.948271 ntpd[2103]: 12 Sep 17:15:01 ntpd[2103]: Deleting interface #10 lxc_health, fe80::10bb:6ff:fe1e:fae9%8#123, interface stats: received=0, sent=0, dropped=0, active_time=112 secs
Sep 12 17:15:02.509411 sshd[5509]: pam_unix(sshd:session): session closed for user core
Sep 12 17:15:02.526726 systemd[1]: sshd@26-172.31.18.149:22-147.75.109.163:57140.service: Deactivated successfully.
Sep 12 17:15:02.540194 systemd[1]: session-27.scope: Deactivated successfully.
Sep 12 17:15:02.554042 systemd-logind[2128]: Session 27 logged out. Waiting for processes to exit.
Sep 12 17:15:02.568945 systemd[1]: Started sshd@27-172.31.18.149:22-147.75.109.163:57156.service - OpenSSH per-connection server daemon (147.75.109.163:57156).
Sep 12 17:15:02.578078 systemd-logind[2128]: Removed session 27.
Sep 12 17:15:02.587652 kubelet[3597]: E0912 17:15:02.587584 3597 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="55f3bbb9-6a08-44ff-a504-27ba8b1d382e" containerName="clean-cilium-state"
Sep 12 17:15:02.592327 kubelet[3597]: E0912 17:15:02.590156 3597 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="55f3bbb9-6a08-44ff-a504-27ba8b1d382e" containerName="cilium-agent"
Sep 12 17:15:02.596009 kubelet[3597]: E0912 17:15:02.592859 3597 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="55f3bbb9-6a08-44ff-a504-27ba8b1d382e" containerName="mount-cgroup"
Sep 12 17:15:02.596009 kubelet[3597]: E0912 17:15:02.592914 3597 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="55f3bbb9-6a08-44ff-a504-27ba8b1d382e" containerName="mount-bpf-fs"
Sep 12 17:15:02.596009 kubelet[3597]: E0912 17:15:02.592933 3597 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="55f3bbb9-6a08-44ff-a504-27ba8b1d382e" containerName="apply-sysctl-overwrites"
Sep 12 17:15:02.596009 kubelet[3597]: E0912 17:15:02.592949 3597 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ae8fc710-083c-4afe-80d5-9141f3c31bc0" containerName="cilium-operator"
Sep 12 17:15:02.596009 kubelet[3597]: I0912 17:15:02.593064 3597 memory_manager.go:354] "RemoveStaleState removing state" podUID="55f3bbb9-6a08-44ff-a504-27ba8b1d382e" containerName="cilium-agent"
Sep 12 17:15:02.596009 kubelet[3597]: I0912 17:15:02.593108 3597 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae8fc710-083c-4afe-80d5-9141f3c31bc0" containerName="cilium-operator"
Sep 12 17:15:02.605039 kubelet[3597]: W0912 17:15:02.604663 3597 reflector.go:561] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ip-172-31-18-149" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-18-149' and this object
Sep 12 17:15:02.620054 kubelet[3597]: E0912 17:15:02.611314 3597 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-clustermesh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:ip-172-31-18-149\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-18-149' and this object" logger="UnhandledError"
Sep 12 17:15:02.669513 kubelet[3597]: I0912 17:15:02.669465 3597 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/73e741c2-1b57-4b95-95e5-30c3eefa7176-hostproc\") pod \"cilium-z5fkh\" (UID: \"73e741c2-1b57-4b95-95e5-30c3eefa7176\") " pod="kube-system/cilium-z5fkh"
Sep 12 17:15:02.669808 kubelet[3597]: I0912 17:15:02.669716 3597 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/73e741c2-1b57-4b95-95e5-30c3eefa7176-cilium-cgroup\") pod \"cilium-z5fkh\" (UID: \"73e741c2-1b57-4b95-95e5-30c3eefa7176\") " pod="kube-system/cilium-z5fkh"
Sep 12 17:15:02.670192 kubelet[3597]: I0912 17:15:02.670089 3597 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/73e741c2-1b57-4b95-95e5-30c3eefa7176-bpf-maps\") pod \"cilium-z5fkh\" (UID: \"73e741c2-1b57-4b95-95e5-30c3eefa7176\") " pod="kube-system/cilium-z5fkh"
Sep 12 17:15:02.670363 kubelet[3597]: I0912 17:15:02.670335 3597 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/73e741c2-1b57-4b95-95e5-30c3eefa7176-cilium-run\") pod \"cilium-z5fkh\" (UID: \"73e741c2-1b57-4b95-95e5-30c3eefa7176\") " pod="kube-system/cilium-z5fkh"
Sep 12 17:15:02.670519 kubelet[3597]: I0912 17:15:02.670493 3597 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/73e741c2-1b57-4b95-95e5-30c3eefa7176-cilium-config-path\") pod \"cilium-z5fkh\" (UID: \"73e741c2-1b57-4b95-95e5-30c3eefa7176\") " pod="kube-system/cilium-z5fkh"
Sep 12 17:15:02.670644 kubelet[3597]: I0912 17:15:02.670620 3597 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/73e741c2-1b57-4b95-95e5-30c3eefa7176-cilium-ipsec-secrets\") pod \"cilium-z5fkh\" (UID: \"73e741c2-1b57-4b95-95e5-30c3eefa7176\") " pod="kube-system/cilium-z5fkh"
Sep 12 17:15:02.670769 kubelet[3597]: I0912 17:15:02.670746 3597 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/73e741c2-1b57-4b95-95e5-30c3eefa7176-host-proc-sys-kernel\") pod \"cilium-z5fkh\" (UID: \"73e741c2-1b57-4b95-95e5-30c3eefa7176\") " pod="kube-system/cilium-z5fkh"
Sep 12 17:15:02.670985 kubelet[3597]: I0912 17:15:02.670941 3597 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/73e741c2-1b57-4b95-95e5-30c3eefa7176-lib-modules\") pod \"cilium-z5fkh\" (UID: \"73e741c2-1b57-4b95-95e5-30c3eefa7176\") " pod="kube-system/cilium-z5fkh"
Sep 12 17:15:02.671640 kubelet[3597]: I0912 17:15:02.671171 3597 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/73e741c2-1b57-4b95-95e5-30c3eefa7176-hubble-tls\") pod \"cilium-z5fkh\" (UID: \"73e741c2-1b57-4b95-95e5-30c3eefa7176\") " pod="kube-system/cilium-z5fkh"
Sep 12 17:15:02.671640 kubelet[3597]: I0912 17:15:02.671331 3597 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/73e741c2-1b57-4b95-95e5-30c3eefa7176-host-proc-sys-net\") pod \"cilium-z5fkh\" (UID: \"73e741c2-1b57-4b95-95e5-30c3eefa7176\") " pod="kube-system/cilium-z5fkh"
Sep 12 17:15:02.671640 kubelet[3597]: I0912 17:15:02.671391 3597 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6m56m\" (UniqueName: \"kubernetes.io/projected/73e741c2-1b57-4b95-95e5-30c3eefa7176-kube-api-access-6m56m\") pod \"cilium-z5fkh\" (UID: \"73e741c2-1b57-4b95-95e5-30c3eefa7176\") " pod="kube-system/cilium-z5fkh"
Sep 12 17:15:02.671640 kubelet[3597]: I0912 17:15:02.671435 3597 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/73e741c2-1b57-4b95-95e5-30c3eefa7176-clustermesh-secrets\") pod \"cilium-z5fkh\" (UID: \"73e741c2-1b57-4b95-95e5-30c3eefa7176\") " pod="kube-system/cilium-z5fkh"
Sep 12 17:15:02.671640 kubelet[3597]: I0912 17:15:02.671472 3597 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/73e741c2-1b57-4b95-95e5-30c3eefa7176-cni-path\") pod \"cilium-z5fkh\" (UID: \"73e741c2-1b57-4b95-95e5-30c3eefa7176\") " pod="kube-system/cilium-z5fkh"
Sep 12 17:15:02.671640 kubelet[3597]: I0912 17:15:02.671509 3597 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/73e741c2-1b57-4b95-95e5-30c3eefa7176-etc-cni-netd\") pod \"cilium-z5fkh\" (UID: \"73e741c2-1b57-4b95-95e5-30c3eefa7176\") " pod="kube-system/cilium-z5fkh"
Sep 12 17:15:02.671955 kubelet[3597]: I0912 17:15:02.671550 3597 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/73e741c2-1b57-4b95-95e5-30c3eefa7176-xtables-lock\") pod \"cilium-z5fkh\" (UID: \"73e741c2-1b57-4b95-95e5-30c3eefa7176\") " pod="kube-system/cilium-z5fkh"
Sep 12 17:15:02.825150 sshd[5523]: Accepted publickey for core from 147.75.109.163 port 57156 ssh2: RSA SHA256:MtueCMCElgMFpvQGHABlOh1LdmyEE9d8eacHhUBhK34
Sep 12 17:15:02.833825 sshd[5523]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:15:02.845281 systemd-logind[2128]: New session 28 of user core.
Sep 12 17:15:02.854511 systemd[1]: Started session-28.scope - Session 28 of User core.
Sep 12 17:15:02.975421 sshd[5523]: pam_unix(sshd:session): session closed for user core
Sep 12 17:15:02.983317 systemd-logind[2128]: Session 28 logged out. Waiting for processes to exit.
Sep 12 17:15:02.984760 systemd[1]: sshd@27-172.31.18.149:22-147.75.109.163:57156.service: Deactivated successfully.
Sep 12 17:15:02.989897 systemd[1]: session-28.scope: Deactivated successfully.
Sep 12 17:15:02.992278 systemd-logind[2128]: Removed session 28.
Sep 12 17:15:03.006471 systemd[1]: Started sshd@28-172.31.18.149:22-147.75.109.163:57160.service - OpenSSH per-connection server daemon (147.75.109.163:57160).
Sep 12 17:15:03.187413 sshd[5537]: Accepted publickey for core from 147.75.109.163 port 57160 ssh2: RSA SHA256:MtueCMCElgMFpvQGHABlOh1LdmyEE9d8eacHhUBhK34
Sep 12 17:15:03.190050 sshd[5537]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:15:03.198626 systemd-logind[2128]: New session 29 of user core.
Sep 12 17:15:03.205472 systemd[1]: Started session-29.scope - Session 29 of User core.
Sep 12 17:15:03.283316 kubelet[3597]: E0912 17:15:03.283249 3597 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Sep 12 17:15:03.863471 containerd[2157]: time="2025-09-12T17:15:03.863348019Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-z5fkh,Uid:73e741c2-1b57-4b95-95e5-30c3eefa7176,Namespace:kube-system,Attempt:0,}"
Sep 12 17:15:03.912578 containerd[2157]: time="2025-09-12T17:15:03.912123675Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 12 17:15:03.913186 containerd[2157]: time="2025-09-12T17:15:03.912443079Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 12 17:15:03.913323 containerd[2157]: time="2025-09-12T17:15:03.913242063Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:15:03.913537 containerd[2157]: time="2025-09-12T17:15:03.913447563Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:15:03.981022 containerd[2157]: time="2025-09-12T17:15:03.980847628Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-z5fkh,Uid:73e741c2-1b57-4b95-95e5-30c3eefa7176,Namespace:kube-system,Attempt:0,} returns sandbox id \"16acd5da830dae4403805b716a0a621916076216dfaf4f977942e9ff168efd54\""
Sep 12 17:15:03.987488 containerd[2157]: time="2025-09-12T17:15:03.987431848Z" level=info msg="CreateContainer within sandbox \"16acd5da830dae4403805b716a0a621916076216dfaf4f977942e9ff168efd54\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 12 17:15:04.028886 containerd[2157]: time="2025-09-12T17:15:04.027428256Z" level=info msg="CreateContainer within sandbox \"16acd5da830dae4403805b716a0a621916076216dfaf4f977942e9ff168efd54\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"14915f04f8b34eb8ed04089d02fde6d73113c09605dc7e62751d470f51a4dd8d\""
Sep 12 17:15:04.028886 containerd[2157]: time="2025-09-12T17:15:04.028494672Z" level=info msg="StartContainer for \"14915f04f8b34eb8ed04089d02fde6d73113c09605dc7e62751d470f51a4dd8d\""
Sep 12 17:15:04.028401 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3137326207.mount: Deactivated successfully.
Sep 12 17:15:04.132615 containerd[2157]: time="2025-09-12T17:15:04.130957008Z" level=info msg="StartContainer for \"14915f04f8b34eb8ed04089d02fde6d73113c09605dc7e62751d470f51a4dd8d\" returns successfully"
Sep 12 17:15:04.205344 containerd[2157]: time="2025-09-12T17:15:04.205262149Z" level=info msg="shim disconnected" id=14915f04f8b34eb8ed04089d02fde6d73113c09605dc7e62751d470f51a4dd8d namespace=k8s.io
Sep 12 17:15:04.205608 containerd[2157]: time="2025-09-12T17:15:04.205343701Z" level=warning msg="cleaning up after shim disconnected" id=14915f04f8b34eb8ed04089d02fde6d73113c09605dc7e62751d470f51a4dd8d namespace=k8s.io
Sep 12 17:15:04.205608 containerd[2157]: time="2025-09-12T17:15:04.205366705Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 17:15:04.226792 containerd[2157]: time="2025-09-12T17:15:04.225697441Z" level=warning msg="cleanup warnings time=\"2025-09-12T17:15:04Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Sep 12 17:15:04.707011 containerd[2157]: time="2025-09-12T17:15:04.706533003Z" level=info msg="CreateContainer within sandbox \"16acd5da830dae4403805b716a0a621916076216dfaf4f977942e9ff168efd54\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 12 17:15:04.732100 containerd[2157]: time="2025-09-12T17:15:04.731354511Z" level=info msg="CreateContainer within sandbox \"16acd5da830dae4403805b716a0a621916076216dfaf4f977942e9ff168efd54\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"968b302ecbea4a477938369cffc4892c9435135d364ef7e02f57a5b0533e603f\""
Sep 12 17:15:04.733248 containerd[2157]: time="2025-09-12T17:15:04.733175139Z" level=info msg="StartContainer for \"968b302ecbea4a477938369cffc4892c9435135d364ef7e02f57a5b0533e603f\""
Sep 12 17:15:04.829744 containerd[2157]: time="2025-09-12T17:15:04.829674340Z" level=info msg="StartContainer for \"968b302ecbea4a477938369cffc4892c9435135d364ef7e02f57a5b0533e603f\" returns successfully"
Sep 12 17:15:04.885764 containerd[2157]: time="2025-09-12T17:15:04.885568096Z" level=info msg="shim disconnected" id=968b302ecbea4a477938369cffc4892c9435135d364ef7e02f57a5b0533e603f namespace=k8s.io
Sep 12 17:15:04.887163 containerd[2157]: time="2025-09-12T17:15:04.886361608Z" level=warning msg="cleaning up after shim disconnected" id=968b302ecbea4a477938369cffc4892c9435135d364ef7e02f57a5b0533e603f namespace=k8s.io
Sep 12 17:15:04.887163 containerd[2157]: time="2025-09-12T17:15:04.886445968Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 17:15:05.709465 containerd[2157]: time="2025-09-12T17:15:05.707760748Z" level=info msg="CreateContainer within sandbox \"16acd5da830dae4403805b716a0a621916076216dfaf4f977942e9ff168efd54\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 12 17:15:05.762830 containerd[2157]: time="2025-09-12T17:15:05.762509861Z" level=info msg="CreateContainer within sandbox \"16acd5da830dae4403805b716a0a621916076216dfaf4f977942e9ff168efd54\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"7e91f08ba6408b5c240e6271cb424331db611473efe38ce5a2dd8bf1db84e1c5\""
Sep 12 17:15:05.783015 containerd[2157]: time="2025-09-12T17:15:05.776259149Z" level=info msg="StartContainer for \"7e91f08ba6408b5c240e6271cb424331db611473efe38ce5a2dd8bf1db84e1c5\""
Sep 12 17:15:05.949575 containerd[2157]: time="2025-09-12T17:15:05.948808817Z" level=info msg="StartContainer for \"7e91f08ba6408b5c240e6271cb424331db611473efe38ce5a2dd8bf1db84e1c5\" returns successfully"
Sep 12 17:15:05.961223 kubelet[3597]: I0912 17:15:05.960664 3597 setters.go:600] "Node became not ready" node="ip-172-31-18-149" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-12T17:15:05Z","lastTransitionTime":"2025-09-12T17:15:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Sep 12 17:15:06.028276 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7e91f08ba6408b5c240e6271cb424331db611473efe38ce5a2dd8bf1db84e1c5-rootfs.mount: Deactivated successfully.
Sep 12 17:15:06.035284 containerd[2157]: time="2025-09-12T17:15:06.035176142Z" level=info msg="shim disconnected" id=7e91f08ba6408b5c240e6271cb424331db611473efe38ce5a2dd8bf1db84e1c5 namespace=k8s.io
Sep 12 17:15:06.035497 containerd[2157]: time="2025-09-12T17:15:06.035281262Z" level=warning msg="cleaning up after shim disconnected" id=7e91f08ba6408b5c240e6271cb424331db611473efe38ce5a2dd8bf1db84e1c5 namespace=k8s.io
Sep 12 17:15:06.035497 containerd[2157]: time="2025-09-12T17:15:06.035328938Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 17:15:06.718204 containerd[2157]: time="2025-09-12T17:15:06.718090853Z" level=info msg="CreateContainer within sandbox \"16acd5da830dae4403805b716a0a621916076216dfaf4f977942e9ff168efd54\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 12 17:15:06.752289 containerd[2157]: time="2025-09-12T17:15:06.751486157Z" level=info msg="CreateContainer within sandbox \"16acd5da830dae4403805b716a0a621916076216dfaf4f977942e9ff168efd54\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"73331f571e94542872b65dbe554f4be95f5bcaba55ad401d6b6913fa6a1aeb8e\""
Sep 12 17:15:06.754600 containerd[2157]: time="2025-09-12T17:15:06.754505681Z" level=info msg="StartContainer for \"73331f571e94542872b65dbe554f4be95f5bcaba55ad401d6b6913fa6a1aeb8e\""
Sep 12 17:15:06.861159 containerd[2157]: time="2025-09-12T17:15:06.859936698Z" level=info msg="StartContainer for \"73331f571e94542872b65dbe554f4be95f5bcaba55ad401d6b6913fa6a1aeb8e\" returns successfully"
Sep 12 17:15:06.903392 containerd[2157]: time="2025-09-12T17:15:06.903167502Z" level=info msg="shim disconnected" id=73331f571e94542872b65dbe554f4be95f5bcaba55ad401d6b6913fa6a1aeb8e namespace=k8s.io
Sep 12 17:15:06.903392 containerd[2157]: time="2025-09-12T17:15:06.903266346Z" level=warning msg="cleaning up after shim disconnected" id=73331f571e94542872b65dbe554f4be95f5bcaba55ad401d6b6913fa6a1aeb8e namespace=k8s.io
Sep 12 17:15:06.903392 containerd[2157]: time="2025-09-12T17:15:06.903311790Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 17:15:07.027086 systemd[1]: run-containerd-runc-k8s.io-73331f571e94542872b65dbe554f4be95f5bcaba55ad401d6b6913fa6a1aeb8e-runc.0GrAM5.mount: Deactivated successfully.
Sep 12 17:15:07.027384 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-73331f571e94542872b65dbe554f4be95f5bcaba55ad401d6b6913fa6a1aeb8e-rootfs.mount: Deactivated successfully.
Sep 12 17:15:07.723531 containerd[2157]: time="2025-09-12T17:15:07.723215946Z" level=info msg="CreateContainer within sandbox \"16acd5da830dae4403805b716a0a621916076216dfaf4f977942e9ff168efd54\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 12 17:15:07.767260 containerd[2157]: time="2025-09-12T17:15:07.763419690Z" level=info msg="CreateContainer within sandbox \"16acd5da830dae4403805b716a0a621916076216dfaf4f977942e9ff168efd54\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"9362dbc224141cfb822368a0722ba9f02c7bdbe52cc951c8e11bad8f1a875d68\""
Sep 12 17:15:07.772826 containerd[2157]: time="2025-09-12T17:15:07.772774771Z" level=info msg="StartContainer for \"9362dbc224141cfb822368a0722ba9f02c7bdbe52cc951c8e11bad8f1a875d68\""
Sep 12 17:15:07.963301 containerd[2157]: time="2025-09-12T17:15:07.963226735Z" level=info msg="StartContainer for \"9362dbc224141cfb822368a0722ba9f02c7bdbe52cc951c8e11bad8f1a875d68\" returns successfully"
Sep 12 17:15:08.779868 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Sep 12 17:15:12.942958 systemd-networkd[1702]: lxc_health: Link UP
Sep 12 17:15:12.959532 systemd-networkd[1702]: lxc_health: Gained carrier
Sep 12 17:15:12.962036 (udev-worker)[6383]: Network interface NamePolicy= disabled on kernel command line.
Sep 12 17:15:13.912846 kubelet[3597]: I0912 17:15:13.912737 3597 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-z5fkh" podStartSLOduration=11.912712849 podStartE2EDuration="11.912712849s" podCreationTimestamp="2025-09-12 17:15:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:15:08.771191491 +0000 UTC m=+146.009544498" watchObservedRunningTime="2025-09-12 17:15:13.912712849 +0000 UTC m=+151.151065820"
Sep 12 17:15:15.022154 systemd-networkd[1702]: lxc_health: Gained IPv6LL
Sep 12 17:15:16.679613 systemd[1]: run-containerd-runc-k8s.io-9362dbc224141cfb822368a0722ba9f02c7bdbe52cc951c8e11bad8f1a875d68-runc.pmygPJ.mount: Deactivated successfully.
Sep 12 17:15:17.947468 ntpd[2103]: Listen normally on 13 lxc_health [fe80::9c45:ecff:fef9:c93d%14]:123
Sep 12 17:15:17.948101 ntpd[2103]: 12 Sep 17:15:17 ntpd[2103]: Listen normally on 13 lxc_health [fe80::9c45:ecff:fef9:c93d%14]:123
Sep 12 17:15:19.090675 sshd[5537]: pam_unix(sshd:session): session closed for user core
Sep 12 17:15:19.100582 systemd[1]: sshd@28-172.31.18.149:22-147.75.109.163:57160.service: Deactivated successfully.
Sep 12 17:15:19.113260 systemd[1]: session-29.scope: Deactivated successfully.
Sep 12 17:15:19.115694 systemd-logind[2128]: Session 29 logged out. Waiting for processes to exit.
Sep 12 17:15:19.119619 systemd-logind[2128]: Removed session 29.
Sep 12 17:15:33.984630 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-617a257b555e4537b87fa8fc0df1277de95e6b8e2bbb900979dfd419f14f2a33-rootfs.mount: Deactivated successfully.
Sep 12 17:15:33.999223 containerd[2157]: time="2025-09-12T17:15:33.998676153Z" level=info msg="shim disconnected" id=617a257b555e4537b87fa8fc0df1277de95e6b8e2bbb900979dfd419f14f2a33 namespace=k8s.io
Sep 12 17:15:33.999223 containerd[2157]: time="2025-09-12T17:15:33.998757513Z" level=warning msg="cleaning up after shim disconnected" id=617a257b555e4537b87fa8fc0df1277de95e6b8e2bbb900979dfd419f14f2a33 namespace=k8s.io
Sep 12 17:15:33.999223 containerd[2157]: time="2025-09-12T17:15:33.998778141Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 17:15:34.814127 kubelet[3597]: I0912 17:15:34.813722 3597 scope.go:117] "RemoveContainer" containerID="617a257b555e4537b87fa8fc0df1277de95e6b8e2bbb900979dfd419f14f2a33"
Sep 12 17:15:34.817790 containerd[2157]: time="2025-09-12T17:15:34.817532181Z" level=info msg="CreateContainer within sandbox \"689dd00779522d26d5e7e7f870dcea370368d9cfea639f87d8228db919c59371\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Sep 12 17:15:34.846375 containerd[2157]: time="2025-09-12T17:15:34.846199377Z" level=info msg="CreateContainer within sandbox \"689dd00779522d26d5e7e7f870dcea370368d9cfea639f87d8228db919c59371\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"e2c7c763e4874cba4e26a44eddf9e171bf40e9d2f1dd23ec8f5f3dd5346a08aa\""
Sep 12 17:15:34.848844 containerd[2157]: time="2025-09-12T17:15:34.846909657Z" level=info msg="StartContainer for \"e2c7c763e4874cba4e26a44eddf9e171bf40e9d2f1dd23ec8f5f3dd5346a08aa\""
Sep 12 17:15:34.970536 containerd[2157]: time="2025-09-12T17:15:34.970458898Z" level=info msg="StartContainer for \"e2c7c763e4874cba4e26a44eddf9e171bf40e9d2f1dd23ec8f5f3dd5346a08aa\" returns successfully"
Sep 12 17:15:36.466030 kubelet[3597]: E0912 17:15:36.465762 3597 controller.go:195] "Failed to update lease" err="Put \"https://172.31.18.149:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-149?timeout=10s\": context deadline exceeded"
Sep 12 17:15:39.054691 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d01669c71468e8c52b9e4547e84b598b8e9d7cc9a514df3e7bce73a16664ca06-rootfs.mount: Deactivated successfully.
Sep 12 17:15:39.066041 containerd[2157]: time="2025-09-12T17:15:39.065821714Z" level=info msg="shim disconnected" id=d01669c71468e8c52b9e4547e84b598b8e9d7cc9a514df3e7bce73a16664ca06 namespace=k8s.io
Sep 12 17:15:39.066041 containerd[2157]: time="2025-09-12T17:15:39.065894362Z" level=warning msg="cleaning up after shim disconnected" id=d01669c71468e8c52b9e4547e84b598b8e9d7cc9a514df3e7bce73a16664ca06 namespace=k8s.io
Sep 12 17:15:39.066041 containerd[2157]: time="2025-09-12T17:15:39.065915686Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 17:15:39.831536 kubelet[3597]: I0912 17:15:39.831079 3597 scope.go:117] "RemoveContainer" containerID="d01669c71468e8c52b9e4547e84b598b8e9d7cc9a514df3e7bce73a16664ca06"
Sep 12 17:15:39.835380 containerd[2157]: time="2025-09-12T17:15:39.834957590Z" level=info msg="CreateContainer within sandbox \"90ed9765786c6fe7491c580bbab33dafb4c7aaf6ce35d88d53fca32355043a5d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Sep 12 17:15:39.865903 containerd[2157]: time="2025-09-12T17:15:39.865823954Z" level=info msg="CreateContainer within sandbox \"90ed9765786c6fe7491c580bbab33dafb4c7aaf6ce35d88d53fca32355043a5d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"a23e85103df8a9d14dc328a9716bcd10fc1a11b50b5e00c8023d94625ff1a0ec\""
Sep 12 17:15:39.868002 containerd[2157]: time="2025-09-12T17:15:39.866607614Z" level=info msg="StartContainer for \"a23e85103df8a9d14dc328a9716bcd10fc1a11b50b5e00c8023d94625ff1a0ec\""
Sep 12 17:15:39.999575 containerd[2157]: time="2025-09-12T17:15:39.998946279Z" level=info msg="StartContainer for \"a23e85103df8a9d14dc328a9716bcd10fc1a11b50b5e00c8023d94625ff1a0ec\" returns successfully"
Sep 12 17:15:42.959232 containerd[2157]: time="2025-09-12T17:15:42.959184341Z" level=info msg="StopPodSandbox for \"db6ccc7063cf2f3bc7b890ac98e51266df95de0dea0f82ec978d6bcaf457516e\""
Sep 12 17:15:42.960127 containerd[2157]: time="2025-09-12T17:15:42.959942633Z" level=info msg="TearDown network for sandbox \"db6ccc7063cf2f3bc7b890ac98e51266df95de0dea0f82ec978d6bcaf457516e\" successfully"
Sep 12 17:15:42.960127 containerd[2157]: time="2025-09-12T17:15:42.960005285Z" level=info msg="StopPodSandbox for \"db6ccc7063cf2f3bc7b890ac98e51266df95de0dea0f82ec978d6bcaf457516e\" returns successfully"
Sep 12 17:15:42.960833 containerd[2157]: time="2025-09-12T17:15:42.960766565Z" level=info msg="RemovePodSandbox for \"db6ccc7063cf2f3bc7b890ac98e51266df95de0dea0f82ec978d6bcaf457516e\""
Sep 12 17:15:42.960833 containerd[2157]: time="2025-09-12T17:15:42.960828581Z" level=info msg="Forcibly stopping sandbox \"db6ccc7063cf2f3bc7b890ac98e51266df95de0dea0f82ec978d6bcaf457516e\""
Sep 12 17:15:42.961023 containerd[2157]: time="2025-09-12T17:15:42.960930413Z" level=info msg="TearDown network for sandbox \"db6ccc7063cf2f3bc7b890ac98e51266df95de0dea0f82ec978d6bcaf457516e\" successfully"
Sep 12 17:15:42.967192 containerd[2157]: time="2025-09-12T17:15:42.967114073Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"db6ccc7063cf2f3bc7b890ac98e51266df95de0dea0f82ec978d6bcaf457516e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 12 17:15:42.967351 containerd[2157]: time="2025-09-12T17:15:42.967216613Z" level=info msg="RemovePodSandbox \"db6ccc7063cf2f3bc7b890ac98e51266df95de0dea0f82ec978d6bcaf457516e\" returns successfully"
Sep 12 17:15:42.968592 containerd[2157]: time="2025-09-12T17:15:42.968101169Z" level=info msg="StopPodSandbox for \"6bdde67de39f306712a3cde1b5c08b035e80f5b5018f5eb9ecc8a3280135b83b\""
Sep 12 17:15:42.968592 containerd[2157]: time="2025-09-12T17:15:42.968231285Z" level=info msg="TearDown network for sandbox \"6bdde67de39f306712a3cde1b5c08b035e80f5b5018f5eb9ecc8a3280135b83b\" successfully"
Sep 12 17:15:42.968592 containerd[2157]: time="2025-09-12T17:15:42.968254085Z" level=info msg="StopPodSandbox for \"6bdde67de39f306712a3cde1b5c08b035e80f5b5018f5eb9ecc8a3280135b83b\" returns successfully"
Sep 12 17:15:42.968858 containerd[2157]: time="2025-09-12T17:15:42.968762189Z" level=info msg="RemovePodSandbox for \"6bdde67de39f306712a3cde1b5c08b035e80f5b5018f5eb9ecc8a3280135b83b\""
Sep 12 17:15:42.968858 containerd[2157]: time="2025-09-12T17:15:42.968817041Z" level=info msg="Forcibly stopping sandbox \"6bdde67de39f306712a3cde1b5c08b035e80f5b5018f5eb9ecc8a3280135b83b\""
Sep 12 17:15:42.968987 containerd[2157]: time="2025-09-12T17:15:42.968920157Z" level=info msg="TearDown network for sandbox \"6bdde67de39f306712a3cde1b5c08b035e80f5b5018f5eb9ecc8a3280135b83b\" successfully"
Sep 12 17:15:42.974828 containerd[2157]: time="2025-09-12T17:15:42.974756537Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6bdde67de39f306712a3cde1b5c08b035e80f5b5018f5eb9ecc8a3280135b83b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 12 17:15:42.974996 containerd[2157]: time="2025-09-12T17:15:42.974842253Z" level=info msg="RemovePodSandbox \"6bdde67de39f306712a3cde1b5c08b035e80f5b5018f5eb9ecc8a3280135b83b\" returns successfully"